
Global CSPs to Spend $710B on AI Servers in 2026, TrendForce Says

February 26, 2026
3 min read
Author: Joyce Onyeagoro

Combined capital expenditures by the world’s eight leading CSPs—Google, AWS, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu—are projected to exceed $710 billion in 2026, representing roughly 61% year-over-year growth.

Global communication service providers (CSPs) are accelerating investment in AI servers and infrastructure to support expanding AI workloads, according to the latest TrendForce analysis of the AI server market.

In addition to continued procurement of NVIDIA and AMD GPU platforms, CSPs are increasingly investing in ASICs to optimize AI workloads and improve the cost efficiency of their data centers. Alphabet, the parent company of Google, is projected to see 2026 capital expenditures surpass $178.3 billion, up 95% YoY. Google’s early development of in-house ASICs, including its TPU roadmap advancing to the next-generation v8 platform, positions it ahead of peers. Driven by Google Cloud Platform and Gemini AI applications, TPUs are expected to account for nearly 78% of AI servers shipped to Google in 2026, making it the only CSP with more ASIC-based servers than GPU-based systems.

Amazon is scaling procurement of NVIDIA GB300 and VR200 rack-scale GPU systems to support AI training and inference workloads. GPUs are expected to represent nearly 60% of AWS’s AI server build-out in 2026. On the ASIC front, Amazon’s next-generation Trainium 3 will ramp starting 2Q26, following Trainium 2/2.5 deployment, with shipment momentum likely stronger in the second half of the year as software and system validation mature.

Meta’s projected CapEx for 2026 exceeds $124.5 billion, up 77% YoY, with AI server deployments relying primarily on NVIDIA and AMD GPUs, which are expected to account for over 80% of its build-out. While Meta seeks to advance its in-house MTIA ASIC platform to reduce unit compute costs and supplier dependence, software-hardware tuning challenges may limit shipment volumes relative to initial targets.
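The 2026 projections and growth rates cited above imply rough 2025 baselines. A minimal sketch of that back-of-the-envelope arithmetic (the 2026 figures and YoY percentages are from the article; the derived 2025 numbers are estimates, not TrendForce data):

```python
# Implied 2025 capex baselines from the article's 2026 projections
# and year-over-year growth figures. Derived 2025 values are rough
# estimates computed here, not figures reported by TrendForce.
projections_bn = {
    "Eight leading CSPs combined": (710.0, 0.61),  # $710B, ~61% YoY
    "Alphabet": (178.3, 0.95),                     # $178.3B, 95% YoY
    "Meta": (124.5, 0.77),                         # $124.5B, 77% YoY
}

for name, (capex_2026, yoy) in projections_bn.items():
    implied_2025 = capex_2026 / (1 + yoy)
    print(f"{name}: 2026 ≈ ${capex_2026:.1f}B → implied 2025 ≈ ${implied_2025:.1f}B")
```

By this arithmetic, the combined 2025 base works out to roughly $441 billion, with Alphabet near $91 billion and Meta near $70 billion.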

Microsoft remains focused on long-term demand for large-scale AI model training and inference, continuing procurement of NVIDIA rack-scale systems while introducing its in-house Maia 200 chip for high-efficiency AI inference. Oracle is expanding GPU rack-scale deployments to support AI data center projects related to initiatives like Stargate and OpenAI integration.

In China, ByteDance’s 2026 capital expenditure is estimated to allocate over half toward AI chip procurement, with NVIDIA H200 expected to play a key role, alongside expanding adoption of domestic AI chips such as Cambricon. Tencent continues procuring NVIDIA GPUs for cloud and generative AI services while collaborating with local partners to develop in-house ASICs for networking, data centers, and AI applications.

Alibaba and Baidu are advancing proprietary ASIC development to support large-scale AI workloads. Alibaba, through T-head and Alibaba Cloud, focuses on public cloud infrastructure and Qwen LLMs for enterprise and consumer applications. Baidu plans to roll out next-generation Kunlun chips post-2026, alongside its Tianchi AI server cluster platform, capable of linking hundreds of AI chips to enhance system-level computing power.
