The $7 Trillion Race: AI Infrastructure as a Decade-Long Investment Cycle
McKinsey's landmark demand analysis, combined with KKR's structural framework, yields a clear investment conclusion: the AI data center build-out is unlike prior technology bubbles — it is constrained by physics, contracted before it is built, and compounding at a rate that will exceed even optimistic projections. This paper maps the full investment landscape and identifies where the real money is made.
McKinsey projects $6.7 trillion in global data center capex by 2030, with 70% driven by AI workloads. Unlike the 1990s fiber overbuild, this cycle is contracts-first, power-constrained, and vacancy-tight at 2.3%. NVDA, VRT, EQIX, and CEG represent the four high-conviction positions across the Energizer, Technology Developer, and Operator archetypes.
By 2030, data centers will require $6.7 trillion worldwide. That number — from McKinsey's most rigorous technology infrastructure analysis to date — represents roughly the combined GDP of Japan and Germany. The AI data center cycle is regularly compared to the 1990s fiber overbuild. It is a seductive analogy and a fundamentally misleading one. Understanding why is the difference between capturing a decade-long compounding trade and being burned by a narrative that looked good on paper.
Demand Landscape
McKinsey's research shows global demand for data center capacity could almost triple by 2030, with approximately 70% of that demand driven by AI workloads. Total projected capital expenditure: $6.7 trillion, of which $5.2 trillion is attributable to AI processing loads and $1.5 trillion to traditional IT applications.
The hyperscalers are leading the investment wave. Amazon, Google, Microsoft, and Meta are expected to spend over $350 billion on capex in 2025 alone — a year-over-year increase in the mid-30% range. In aggregate, AI-related infrastructure spend in 2025 is estimated at approximately $500 billion, and in H1 2025 it contributed more to US GDP growth than consumer spending. As a share of GDP, AI-related capex now sits at approximately 5% — a level comparable to the late-1990s technology boom.
| Year | Total Capacity (GW) | AI Workloads (70%) | Non-AI Workloads (30%) | YoY Growth |
|---|---|---|---|---|
| 2025 | 82 GW | ~57 GW | ~25 GW | Baseline |
| 2026 | 105 GW | ~74 GW | ~32 GW | +28% |
| 2027 | 137 GW | ~96 GW | ~41 GW | +30% |
| 2028 | 163 GW | ~114 GW | ~49 GW | +19% |
| 2029 | 191 GW | ~134 GW | ~57 GW | +17% |
| 2030 | 207 GW | ~145 GW | ~62 GW | +8% |
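The table's columns are internally consistent: the AI and non-AI figures are a fixed 70/30 split of total capacity, and the growth column follows from the totals. A minimal Python sanity check (figures copied from the table above, not new data):

```python
# Verify the capacity projection table: AI/non-AI is a fixed 70/30 split
# of total GW, and YoY growth is computed from the totals.
totals_gw = {2025: 82, 2026: 105, 2027: 137, 2028: 163, 2029: 191, 2030: 207}

prev = None
for year, total in totals_gw.items():
    ai, non_ai = round(total * 0.70), round(total * 0.30)
    growth = f"+{(total / prev - 1):.0%}" if prev else "baseline"
    print(f"{year}: {total} GW total | ~{ai} GW AI | ~{non_ai} GW non-AI | {growth}")
    prev = total
```

Running this reproduces every cell in the table, including the deceleration from +30% growth in 2027 to +8% in 2030.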
McKinsey constructed three scenarios ranging from constrained to accelerated demand, shaped by semiconductor supply constraints, enterprise AI adoption rates, efficiency improvements, and regulatory challenges. The base case — $5.2 trillion in AI data center capex — assumes continued growth without runaway acceleration or structural constraints.
| Scenario | Drivers | Incremental GW | AI Capex | Total (AI + Non-AI) |
|---|---|---|---|---|
| Accelerated | Transformative AI adoption; enterprise integration across all sectors; no supply constraints | 205 GW | $7.9T | $9.4T est. |
| Base Case ★ | Continued growth; moderate enterprise adoption; some efficiency gains offset demand | 125 GW | $5.2T | $6.7T |
| Constrained | Supply chain bottlenecks; slower enterprise deployment; AI efficiency gains suppress demand | 78 GW | $3.7T | $5.2T est. |
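Each scenario's total is its AI capex plus the roughly $1.5 trillion of traditional (non-AI) IT capex from the demand-landscape figures, which is held approximately constant across scenarios. A quick arithmetic check:

```python
# Scenario totals = AI capex + ~$1.5T non-AI (traditional IT) capex,
# which McKinsey holds roughly constant across scenarios.
NON_AI_CAPEX_T = 1.5  # $ trillions

scenarios = {"Accelerated": 7.9, "Base Case": 5.2, "Constrained": 3.7}
for name, ai_capex in scenarios.items():
    total = ai_capex + NON_AI_CAPEX_T
    print(f"{name}: {ai_capex:.1f}T AI + {NON_AI_CAPEX_T}T non-AI = {total:.1f}T")
```

This reproduces the $9.4T, $6.7T, and $5.2T totals in the rightmost column.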
Structural Analysis
The analogy to the late-1990s telecommunications infrastructure bubble is compelling in one dimension — the scale of capital deployment — and misleading in every other. Fiber in the 1990s was built speculatively, with virtually unlimited capacity once laid and zero refresh requirement. Data centers are physically constrained, contractually committed before construction, and subject to accelerated depreciation cycles that naturally absorb any temporary excess. The evidence is visible in vacancy data: North American colocation vacancy has fallen from 9.8% in 2020 to 2.3% in H1 2025 (JLL Research), while the fiber glut post-2001 saw vacancy exceed 20%.
The key structural difference McKinsey identifies is the cost of carrying excess capacity. Fiber, once laid, is nearly free to maintain. Data centers are the opposite: power, cooling, and maintenance are ongoing high costs regardless of utilization. Crucially, AI accelerators run on 3–4 year refresh cycles, so any temporary overcapacity depreciates into obsolescence within a single cycle, and new workloads absorb spare capacity well before it becomes stranded.
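To make the absorption argument concrete, here is a toy model with assumed numbers (the 20% demand growth rate, 100 GW demand base, and 30 GW overhang are illustrative assumptions, not source figures) showing that even a large capacity overhang clears well inside one refresh cycle:

```python
# Illustrative sketch, assumed inputs: how fast compounding demand absorbs
# a spare-capacity overhang relative to a 3-4 year accelerator refresh cycle.
DEMAND_GROWTH = 0.20   # assumed annual demand growth
demand_gw = 100.0      # assumed utilised demand at t=0
overhang_gw = 30.0     # assumed spare (excess) capacity at t=0

years = 0
while overhang_gw > 0:
    years += 1
    added_demand = demand_gw * DEMAND_GROWTH
    # Assume incremental demand fills spare capacity before new builds.
    overhang_gw = max(0.0, overhang_gw - added_demand)
    demand_gw += added_demand

print(f"Overhang absorbed in {years} years")  # 2 years under these assumptions
```

Under these assumptions a 30% overhang is absorbed in two years, before the first refresh cycle would retire the hardware anyway; the fiber glut had no equivalent mechanism.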
Investment Architecture
McKinsey's analysis maps the $5.2 trillion AI capex envelope across five distinct investor archetypes. Understanding this architecture is essential: the investment case, risk profile, and return dynamics differ fundamentally across archetypes.
Signal vs. Noise
Signal: evidence the cycle is structurally sound
- Vacancy at 2.3% in N. America H1 2025 — no speculative overbuild visible (JLL)
- Contracts-first builds: hyperscalers require offtake agreements before construction begins
- Power is the ultimate physical constraint on overbuild — grid queues, transformer lead times, permits
- 3–4 year accelerator refresh cycles naturally absorb any temporary excess capacity
- AI is a horizontal productivity layer across all industries, not a niche connectivity play
- Lower unit costs drive accelerated adoption (Jevons Paradox — efficiency creates more demand)
- Both inference and training workloads growing; inference to dominate by 2030

Noise: risks that could derail the thesis
- AI use-case failure: enterprises building but not deploying at scale — ROI visibility remains limited
- Efficiency disruption: DeepSeek V3's 18× training cost reduction could suppress GPU demand
- Concentration risk: NVIDIA at ~8% of S&P 500 — single-stock exposure in any AI basket
- Geopolitical: US–China semiconductor export controls create supply chain and demand uncertainty
- Rising power costs squeeze operators without long-term power contracts
- Some business models (GPU rental, thin-margin operators, non-core markets) will not survive
"The stakes are high. Overinvesting in data center infrastructure risks stranding assets, while underinvesting means falling behind. The winners of the AI-driven computing era will be the companies that anticipate compute power demand and invest accordingly."
— McKinsey & Company, "The Cost of Compute," April 2025
Investor Framework
The $5.2–$6.7 trillion capex envelope flows through a defined set of public equities. But raw exposure to the AI theme is not sufficient — the archetype, moat, and balance sheet quality of each company determine whether they capture compounding returns or get crushed in the shake-out.
Projections & Outlook
| Asset / Sector | Phase 1: Build (2024–26) | Phase 2: Deploy (2026–28) | Phase 3: Compound (2028–30) | A.L.C. View |
|---|---|---|---|---|
| AI Semiconductors (NVDA, AMD) | ↑ Accelerating. Backlog extends 12–18 months. Pricing power at peak. | ► Elevated but normalising. Efficiency gains may compress unit economics. | ↑ Next-gen inference demand drives new cycle. Moat compounds. | High Conviction Long |
| Power & Cooling (VRT, CEG) | ↑ Rapid growth as rack density escalates. Power PPAs being locked in now. | ↑ Continued deployment of liquid cooling. Nuclear PPAs extending. | ↑ Structural beneficiary of all three phases. Most durable earnings quality. | High Conviction Long |
| Data Center REITs (EQIX, DLR) | ↑ Vacancy tightening. Premium pricing in core markets. Land value accruing. | ↑ Expansion of AI-optimised facilities. Interconnect moats widen. | ↑ Long-term lease revenue compounds. REIT dividend yield supported. | High Conviction Long |
| Hyperscalers (MSFT, GOOGL, AMZN) | ↓ Capex absorbs free cash flow. Market questions ROI discipline. | ► Cloud revenue inflection as AI workloads monetise. Watch margins. | ↑ AI-driven cloud revenue compounds. Capex declining as % of revenue. | Selective. Monitor capex. |
| Construction / Builders | ↑ Labour and materials in high demand. Early-cycle beneficiary. | ► Growth but margins compress as capacity builds. | ↓ Cycle matures. Commodity dynamics. No moat. | Tactical only. Not core. |
| GPU Rental / Thin-Margin Ops | ► Works during scarcity. Business model intact for now. | ↓ Hyperscalers self-build eliminates demand for rented compute. | ↓ Model collapses. Structural shake-out. Avoid. | Avoid |
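For readers who track this programmatically, the phase table can be encoded as simple data and filtered by view. The structure below is a hypothetical illustration, not an official framework; the tickers and views are taken directly from the table above:

```python
# Hypothetical encoding of the A.L.C. phase table as data, for screening.
from dataclasses import dataclass, field

@dataclass
class ArchetypeView:
    sector: str
    view: str                             # A.L.C. view from the table
    tickers: list = field(default_factory=list)

views = [
    ArchetypeView("AI Semiconductors", "High Conviction Long", ["NVDA", "AMD"]),
    ArchetypeView("Power & Cooling", "High Conviction Long", ["VRT", "CEG"]),
    ArchetypeView("Data Center REITs", "High Conviction Long", ["EQIX", "DLR"]),
    ArchetypeView("Hyperscalers", "Selective", ["MSFT", "GOOGL", "AMZN"]),
    ArchetypeView("Construction / Builders", "Tactical"),
    ArchetypeView("GPU Rental / Thin-Margin Ops", "Avoid"),
]

longs = [t for v in views if v.view == "High Conviction Long" for t in v.tickers]
print(longs)  # ['NVDA', 'AMD', 'VRT', 'CEG', 'EQIX', 'DLR']
```

Filtering on "High Conviction Long" recovers the six names flagged in the summary, with hyperscalers deliberately excluded pending capex discipline.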
Portfolio Construction Framework — five principles for building the AI infrastructure position without getting burned.
References
- 1. McKinsey & Company. "The cost of compute: A $7 trillion race to scale data centers." Jesse Noffsinger, Mark Patel, Pankaj Sachdeva. TMT Practice, April 2025.
- 2. JLL Research. North America Colocation Vacancy, H1 2025. Published June 2025.
- 3. KKR Global Infrastructure. "Beyond the Bubble: Why We Think AI Infrastructure Will Compound Long after the Hype." November 2025.
- 4. U.S. Bureau of Labor Statistics. GDP and capex share data. Bloomberg terminal data as of June 30, 2025 (cited via KKR GMAA).
- 5. DeepSeek V3 efficiency claims: TechCrunch January 27, 2025; Artificial Analysis January 27, 2025.
- 6. All stock-specific analysis and projections represent independent views of A.L. Capital Advisory. Not investment advice.
Translate research into portfolio decisions
The Strategic Session is where we take research like this and build concrete allocation decisions — position sizing, archetype exposure, phase timing — tailored to your risk profile.
Book a Strategic Session →