Version 2 · Updated April 30, 2026 — Q1 2026 earnings update: Big-5 capex revised to ~$725B · Google Cloud +63% · AWS +28% · MSFT $190B CY2026

The AI Capex Cycle: $725B Hyperscaler Buildout and the Five High-Conviction Positions

Bottom Line Up Front
The AI capex cycle is the largest coordinated infrastructure investment in history. The Big-5 hyperscalers will spend approximately $725 billion on AI infrastructure in 2026 alone — and the physical supply of data center capacity cannot keep pace. Five positions capture the full value chain: NVDA, VRT, EQIX, CEG, and MU.
~$725B
Big-5 hyperscaler AI capex, 2026 — Q1 earnings confirmed, ~64% YoY increase
$6.7T
Total global data center capex required by 2030 (McKinsey base case)
2.3%
North American colocation vacancy — historic low (JLL, H1 2025)
207 GW
Data center capacity required by 2030, up from 82 GW in 2025
40+ GW
US capacity shortfall projected by 2028 (Goldman Sachs)
$1.15T
Total hyperscaler capex 2025–2027 (Goldman Sachs projection)
High Conviction: NVIDIA (NASDAQ: NVDA) · Vertiv Holdings (NYSE: VRT) · Equinix (NASDAQ: EQIX) · Constellation Energy (NASDAQ: CEG) · Micron Technology (NASDAQ: MU)  ·  Selective: Microsoft (NASDAQ: MSFT) · Alphabet (NASDAQ: GOOGL) · AMD (NASDAQ: AMD)
Key Insight · April 2026 Update

The AI capex cycle has accelerated beyond February 2026 projections. The Big-5 hyperscalers guided $660–690 billion of 2026 capex pre-earnings; Q1 2026 results lifted the post-earnings consensus to ~$725 billion — roughly 64% above 2025 — while Goldman Sachs documents a US data center capacity shortfall exceeding 11 GW today, widening to 40+ GW by 2028. The McKinsey $6.7 trillion demand framework remains the base case, but the pace of deployment is tracking the accelerated scenario. NVDA, VRT, EQIX, CEG, and MU are the five high-conviction positions across the Energizer, Technology Developer, Memory, and Operator archetypes.

Exhibit 1 April 2026 · Interactive
The AI Capex Race: Big-5 Hyperscaler Spending 2025–2028P (USD billions)
Sources: Q4 2025 earnings calls (actuals) · Futurum Group, Feb 2026 (2026 guidance) · Moody's Ratings, Mar 2026 · Morgan Stanley · Goldman Sachs (2027 estimates) · Dell'Oro Group 21% CAGR forecast, Aug 2025 (2028 projection). 2025 = actual · 2026 = company guidance · 2027 = analyst consensus · 2028 = A.L.C. projection based on Dell'Oro CAGR & Moody's trajectory.
2025 · Actual
~$443B
Baseline
2026 · Q1 Confirmed
~$725B
+64% YoY
2027 · Estimate
~$880B
+21% YoY
2028 · Projection
~$1.06T
+20% YoY
2025 Actual
2026 Guidance
2027 Estimate
2028 Projection
Moody's Ratings (March 2026, published before the Q1 earnings revision) projects Big-5 capex of approximately $820B in 2027 — below the ~$880B post-earnings consensus and a marked deceleration from 2026's ~64% growth rate, but still roughly $100B of additional annual spend versus 2026 levels.
Morgan Stanley projects Alphabet alone could spend up to $250B in 2027 (CNBC, Feb 2026). Goldman Sachs projects Meta capex of ~$144B in 2027. CreditSights raised 2026 aggregate estimate to ~$750B post-earnings (above company guidance midpoints).
2025 = Q4 2025 actuals. 2026 = Q1 2026 earnings confirmed guidance (Amazon $200B reaffirmed, Alphabet $180–190B raised, Meta $125–145B raised, Microsoft ~$190B CY2026 raised, Oracle ~$50B). 2027 = analyst consensus post Q1 2026 earnings; Moody's, Morgan Stanley, Goldman Sachs company-level. 2028 = A.L. Capital Advisory projection applying Dell'Oro Group 21% CAGR to 2027 consensus; treat as directional only. AI-only toggle applies CreditSights 75% AI-specific factor. Figures rounded to nearest $5B. Not investment advice.
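The footnote's 2027–2028 bars follow mechanically from the stated inputs. A minimal sketch of that arithmetic in Python, using the rounded document values (illustrative only, not a forecasting model):

```python
# Reproduce Exhibit 1's Big-5 capex trajectory from the stated inputs.
# Figures are the rounded document values (USD billions); the 2028 bar
# applies Dell'Oro's 21% CAGR to the 2027 consensus, per the footnote.

CAPEX_2025_ACTUAL = 443       # Q4 2025 earnings actuals
CAPEX_2026_CONFIRMED = 725    # Q1 2026 post-earnings consensus
CAPEX_2027_CONSENSUS = 880    # analyst consensus
DELLORO_CAGR = 0.21           # Dell'Oro Group forecast, Aug 2025
AI_SPECIFIC_FACTOR = 0.75     # CreditSights AI-specific share

def yoy(prev: float, curr: float) -> float:
    """Year-over-year growth in percent."""
    return (curr / prev - 1) * 100

capex_2028 = CAPEX_2027_CONSENSUS * (1 + DELLORO_CAGR)

print(f"2026 YoY: {yoy(CAPEX_2025_ACTUAL, CAPEX_2026_CONFIRMED):+.0f}%")     # +64%
print(f"2027 YoY: {yoy(CAPEX_2026_CONFIRMED, CAPEX_2027_CONSENSUS):+.0f}%")  # +21%
print(f"2028 projection: ${capex_2028:,.0f}B")                               # $1,065B
print(f"2026 AI-specific: ${CAPEX_2026_CONFIRMED * AI_SPECIFIC_FACTOR:,.0f}B")  # ~$545B
```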
Breaking Intelligence · April 29–30, 2026

Q1 2026 Earnings Confirmed — Big-5 capex revised to ~$725B: All four major hyperscalers reported on April 29. Alphabet raised to $180–190B (partially due to Intersect acquisition); Google Cloud Q1 revenue $20B (+63% YoY), backlog $462B. Microsoft guided to ~$190B CY2026 ($25B from component pricing); Azure +40% YoY, AI business annual run-rate $37B (+123%). Meta raised to $125–145B, explicitly citing memory price inflation as the primary driver. Amazon reaffirmed $200B; AWS $37.6B Q1 (+28% YoY), backlog $364B + $100B+ Anthropic compute deal. Combined Big-5 post-earnings consensus: ~$725B, up from $660–690B pre-earnings guidance. Sources: Q1 2026 earnings calls, FT, April 30 2026.

Google Cloud Next 2026 (April 22): Alphabet announced TPU 8t (3× Ironwood throughput; 9,600-TPU superpod) plus NVIDIA Blackwell Ultra and Vera Rubin GPU instances for 2027 — the first public confirmation of commercial availability for NVIDIA's post-Blackwell architecture. Directly reinforces NVDA, EQIX, and MU (HBM demand escalates per GPU generation).

YTD 2026 conviction performance (Yahoo Finance, April 24): VRT +61.8% · MU +32.3% · EQIX +1.3% · NVDA −5.0% (backlog intact) · CEG flat (nuclear PPA pipeline expanding). Past performance does not guarantee future results. Not investment advice.

Key Takeaways — AI Infrastructure Capex Cycle, April 2026

A.L. Capital Advisory · Anton Ladnyi, CFA Charterholder (ex-Goldman Sachs · ex-J.P. Morgan)
  1. ~$725B confirmed in 2026, ~$880B in 2027E, ~$1.06T in 2028P. Q1 2026 earnings raised the consensus from $660–690B to ~$725B — a ~64% YoY increase. Amazon leads at $200B; Microsoft raised to ~$190B CY2026; Meta raised to $125–145B; Alphabet raised to $180–190B. ~75% is AI-specific (~$545B). Combined spend is close to Switzerland's entire annual GDP.
  2. Supply cannot keep pace. North American colocation vacancy: 2.3% (JLL, H1 2025) — down from 9.8% in 2020. Goldman Sachs documents an 11 GW US shortfall today, widening to 40+ GW by 2028. McKinsey's $6.7T build-out requires 125 GW of incremental AI capacity by 2030. Physical infrastructure is the binding constraint, not demand.
  3. Five High Conviction positions across the stack: NVDA (GPU monopoly, 90% AI accelerator share) · VRT (liquid cooling, non-discretionary at 10–15× CPU power density) · EQIX (260+ data centers, 2.3% vacancy pricing power) · CEG (nuclear baseload, carbon-free PPAs) · MU (HBM3E, the binding constraint on B200 GPU output).
  4. Energizer & Memory archetypes are structurally underowned. VRT, CEG, and MU earn revenue from physical consumption of power, cooling, and memory — independent of which AI model or chip generation wins. Data centers need power and HBM regardless of DeepSeek. These positions carry no model-competition risk; NVDA does.
  5. Not the 1990s fiber overbuild — but capex/revenue ratios are a watch signal. Vacancy at 2.3% vs 20%+ in 2001; contracts precede construction; accelerator refresh cycles absorb oversupply. However, hyperscaler capex-to-revenue ratios of 45–57% are utility-level. MSFT and GOOGL rated Selective, not High Conviction, for this reason.
Anton Ladnyi — Founder & Portfolio Architect, A.L. Capital Advisory, ex-Goldman Sachs, CFA Charterholder
Anton Ladnyi, CFA
Founder & Portfolio Architect — A.L. Capital Advisory
Ex-Goldman Sachs Equity Research · Ex-J.P. Morgan Wealth Management · CFA Charterholder

The AI capex cycle is unlike any prior infrastructure investment wave — not in scale alone, but in structure. By 2030, the global data center build-out will require $6.7 trillion in capital expenditure (McKinsey, April 2025), with approximately 70% of capacity demand driven by AI workloads. In 2026, the Big-5 hyperscalers — Amazon, Alphabet, Meta, Microsoft, and Oracle — will spend approximately $725 billion on AI infrastructure collectively, a figure approaching the entire annual GDP of Switzerland. Goldman Sachs documents that US data centers already face a capacity shortfall of more than 11 gigawatts today, with the cumulative gap expected to exceed 40 GW by 2028. North American colocation vacancy has fallen to 2.3% — a level at which pricing power is structural, not cyclical.

The AI data center cycle is regularly compared to the 1990s fiber overbuild. The analogy is seductive and fundamentally misleading. Understanding precisely why is the difference between capturing a decade-long compounding trade and being burned by a narrative that looked good on paper. This paper maps the full AI capex cycle, integrates the April 2026 data update, and identifies the five positions that capture the value across the infrastructure stack.


01

What Is the AI Capex Cycle?

Scale, Structure & the $6.7 Trillion Demand Curve — McKinsey Base Case, April 2025

McKinsey's research shows global demand for data center capacity could almost triple by 2030, with approximately 70% of that demand driven by AI workloads. Total projected capital expenditure: $6.7 trillion, of which $5.2 trillion is attributable to AI processing loads and $1.5 trillion to traditional IT applications.

The AI capex cycle is defined by the hyperscalers — the companies that build and operate the cloud infrastructure on which AI runs. Amazon, Alphabet, Meta, Microsoft, and Oracle collectively spent $256 billion on capital expenditure in 2024. The Big-5 estimate for 2025 is $443 billion, a 73% YoY increase. Q1 2026 earnings (reported April 29, 2026) confirmed the combined 2026 figure at approximately $725 billion — a ~64% increase over 2025 — with approximately 75% of that spend (~$545 billion) directed at AI-specific infrastructure: GPUs, servers, data center construction, power systems, and cooling. Alphabet raised guidance to $180–190B (partially due to the Intersect acquisition); Meta raised to $125–145B, explicitly citing memory price inflation; Microsoft guided to ~$190B CY2026 ($25B attributed to higher component pricing); Amazon reaffirmed $200B. See Exhibit 1 for the full company-level breakdown. Amazon's 2026 capex of $200 billion alone exceeds the combined annual capex of the entire publicly traded US energy sector. As a share of GDP, AI-related capital formation now sits at approximately 5% — a level last seen during the late-1990s technology boom, but with a structurally different demand foundation (see Section 02).

Exhibit 2
Global Data Center Capacity Demand: AI vs. Non-AI Workloads, 2025–2030 (GW)


Base-case projection: 125 incremental GW added between 2025–2030 for AI workloads alone. Total demand nearly triples from ~82 GW (2025) to ~207 GW (2030). Source: McKinsey & Company, "The Cost of Compute," April 2025.
Year · Total Capacity (GW) · AI Workloads (~70%) · Non-AI Workloads (~30%) · YoY Growth
2025 · 82 GW · ~57 GW · ~25 GW · Baseline
2026 · 105 GW · ~74 GW · ~32 GW · +28%
2027 · 137 GW · ~96 GW · ~41 GW · +30%
2028 · 163 GW · ~114 GW · ~49 GW · +19%
2029 · 191 GW · ~134 GW · ~57 GW · +17%
2030 · 207 GW · ~145 GW · ~62 GW · +8%
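Each row derives from the year's total and the ~70/30 split; a compact sketch that regenerates the table (rounded as above):

```python
# Regenerate the Exhibit 2 rows from the stated totals and the ~70/30 split.
TOTAL_GW = {2025: 82, 2026: 105, 2027: 137, 2028: 163, 2029: 191, 2030: 207}
AI_SHARE = 0.70  # McKinsey base case: ~70% of demand is AI workloads

prev = None
for year, total in TOTAL_GW.items():
    ai = total * AI_SHARE
    growth = "Baseline" if prev is None else f"+{(total / prev - 1) * 100:.0f}%"
    print(f"{year}: {total} GW total · ~{ai:.0f} GW AI · ~{total - ai:.0f} GW non-AI · {growth}")
    prev = total
```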

McKinsey constructed three scenarios ranging from constrained to accelerated demand, shaped by semiconductor supply constraints, enterprise AI adoption rates, efficiency improvements, and regulatory challenges. The base case — $5.2 trillion in AI data center capex — assumes continued growth without runaway acceleration or structural constraints.

Exhibit 3
Three AI Infrastructure Investment Scenarios, 2025–2030


Source: McKinsey & Company, proprietary data center demand model. April 2025.
Scenario · Drivers · Incremental GW · AI Capex · Total (AI + Non-AI)
Accelerated · Transformative AI adoption; enterprise integration across all sectors; no supply constraints · 205 GW · $7.9T · $9.4T est.
Base Case ★ · Continued growth; moderate enterprise adoption; some efficiency gains offset demand · 125 GW · $5.2T · $6.7T
Constrained · Supply chain bottlenecks; slower enterprise deployment; AI efficiency gains suppress demand · 78 GW · $3.7T · $5.2T est.
★ Base case used throughout this paper. Range: $3.7T–$7.9T in AI-specific capex depending on adoption trajectory.
Exhibit 4 A.L.C. Original Analysis
AI Data Center Demand vs. Supply at Current Construction Pace: The Growing Capacity Gap, 2025–2030 (GW)


Demand: McKinsey & Company base case, April 2025. Supply constraint: Goldman Sachs "Powering the AI Era" 2025; A.L. Capital Advisory construction pace modelling (18–30 month lead time, labour constraints per SSGA Nov 2025). Gap shading = A.L. Capital Advisory original analysis.
AI data center capacity gap 2025–2030: Demand (McKinsey base case) reaches 207 GW by 2030. Supply at current construction pace reaches approximately 82 GW (2025), 95 GW (2026), 115 GW (2027), 133 GW (2028), 151 GW (2029), 167 GW (2030). Structural gap: 0 GW (2025), widening to ~30 GW by 2028 and ~40 GW by 2029–2030. Source: McKinsey April 2025, Goldman Sachs 2025, A.L. Capital Advisory original analysis.
Goldman Sachs documents the US capacity shortfall already exceeds 11 GW today, growing to over 40 GW by 2028. The gap between what AI workloads require and what can physically be built — constrained by grid interconnects, transformer lead times, permitting, and construction labour — is structural, not cyclical.
The gap is the investment thesis for EQIX and CEG in one chart. Colocation vacancy at 2.3% is a direct consequence of demand outrunning supply. Operators with entitled land and existing capacity are the physical constraint made investable. This analysis is original to A.L. Capital Advisory and not reproduced from any third-party source.
Supply curve assumes: 18–30 month average construction lead time; ~15 GW annual new builds at current pace (State Street SSGA, Nov 2025); labour constraint applying from 2026 onward. This is A.L. Capital Advisory's independent modelling of the structural gap — not a published third-party figure. Demand curve = McKinsey base case (125 incremental GW, 2025–2030).
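The Exhibit 4 gap series reduces to one subtraction per year. A minimal restatement in Python of the demand and supply series quoted above (the supply curve is A.L. Capital Advisory's modelling assumption, not a third-party figure):

```python
# Exhibit 4 restated: demand (McKinsey base case) minus supply at the
# current construction pace (A.L. Capital Advisory modelling assumption).
DEMAND_GW = {2025: 82, 2026: 105, 2027: 137, 2028: 163, 2029: 191, 2030: 207}
SUPPLY_GW = {2025: 82, 2026: 95, 2027: 115, 2028: 133, 2029: 151, 2030: 167}

for year, demand in DEMAND_GW.items():
    supply = SUPPLY_GW[year]
    # Gap widens from 0 GW (2025) to ~40 GW by the end of the decade.
    print(f"{year}: demand {demand} GW · supply {supply} GW · gap {demand - supply} GW")
```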

Which stocks benefit most from the 40 GW US data center power gap?

Goldman Sachs documents a current US data center capacity shortfall exceeding 11 GW — widening to over 40 GW by 2028. At the current construction pace of approximately 15 GW of new builds per year, the physical gap cannot close before 2030. The investment implication is direct: companies that own existing, entitled, grid-connected data center capacity and energy infrastructure are in a structural scarcity position that cannot be replicated on a 2–3 year horizon. Four positions capture this gap across the supply chain (MU, the fifth High Conviction position, is covered in Section 09):

EQIX — Equinix
260+ data centers across 70 metros globally. North American vacancy at 2.3% gives Equinix structural pricing power on every new lease. Entitled land in super-core markets (Northern Virginia, London, Singapore, Frankfurt) cannot be replicated in under 5 years. The 40 GW gap is Equinix's pricing moat made quantifiable.
CEG — Constellation Energy
Nuclear baseload power is the only carbon-free generation source capable of meeting hyperscalers' 24/7 clean energy requirements at scale. Hyperscalers cannot build new nuclear — lead time is 10–15 years. Constellation's existing fleet commands a PPA premium that widens as the power gap expands. The 40 GW shortfall means more demand for CEG's contracted output, not less.
VRT — Vertiv Holdings
Every new gigawatt of AI data center capacity requires critical power and thermal management infrastructure. Vertiv's liquid cooling systems are non-discretionary — AI GPU clusters at 10–15× CPU power density physically cannot operate without direct-to-chip cooling. Backlog of hyperscaler contracts extends to 2027. Each gigawatt of gap closed is direct Vertiv revenue.
NVDA — NVIDIA
Every data center that closes the capacity gap must be equipped with AI accelerators. NVIDIA captures approximately 90% of AI GPU spend — meaning every gigawatt of new AI data center capacity translates into GPU procurement. The 40 GW shortfall, if closed by 2030, represents approximately $480 billion in incremental GPU and server investment at current rack densities. The capacity gap is a GPU demand guarantee.

A.L. Capital Advisory conviction scores: EQIX 22.0/25 · CEG 21.0/25 · VRT 22.5/25 · NVDA 24.0/25. This analysis does not constitute investment advice. See Model Bridge section for full methodology.

Exhibit 5 April 2026 Update
Big-5 Hyperscaler AI Capex: 2024 Actual vs. 2025 Estimate vs. 2026 Guidance (USD billions)


Sources: CreditSights (Nov 2025), Futurum Group (Feb 2026), individual earnings calls Q4 2025 / Q1 2026. Approximately 75% of 2026 capex is AI-specific infrastructure (~$545B). Rounded to nearest $5B.
Company · Ticker · 2024 Actual · 2025 Estimate · 2026 Guidance · 2024→2026 Δ · Primary AI Focus
Amazon / AWS · NASDAQ: AMZN · ~$75B · ~$104B · $200B (reaffirmed) · +167% · AWS AI training & inference, logistics AI
Alphabet / Google · NASDAQ: GOOGL · ~$52B · — · $180–190B (raised Q1 2026) · +260% · Google Cloud AI, TPU infrastructure
Meta · NASDAQ: META · ~$39B · — · $125–145B (raised Q1 2026) · +260% · GenAI training clusters, Llama infrastructure
Microsoft · NASDAQ: MSFT · ~$56B · ~$80B · ~$190B CY2026 (raised) · +239% · Azure OpenAI, Copilot, data center expansion
Oracle · NYSE: ORCL · ~$9B · ~$20B · ~$50B · +456% · OCI AI cloud, Stargate programme partner
Big-5 Combined · 5 companies · ~$256B · ~$443B (+73% YoY) · ~$725B (~+64% YoY) · +183% · ~75% AI-specific (~$545B)
Goldman Sachs projects combined hyperscaler capex 2025–2027 will reach $1.15 trillion — more than double the $477B deployed across 2022–2024. Amazon's 2026 capex alone exceeds the combined capex of the entire publicly traded US energy sector. Hyperscalers now spend 45–57% of revenue on capex — ratios previously seen only in industrial utilities and telcos. Morgan Stanley and J.P. Morgan estimate the technology sector will need to issue approximately $1.5 trillion in new debt over the next three years to fund the AI infrastructure build-out.
02

Is AI Infrastructure a Bubble?

Why the AI Capex Cycle Is Structurally Different from the 1990s Fiber Overbuild

The analogy to the late-1990s telecommunications infrastructure bubble is compelling in one dimension — the scale of capital deployment — and misleading in every other. Fiber in the 1990s was built speculatively, with virtually unlimited capacity once laid and zero refresh requirement. Data centers are physically constrained, contractually committed before construction, and subject to accelerated depreciation cycles that naturally absorb any temporary excess. The evidence is visible in vacancy data: North American colocation vacancy has fallen from 9.8% in 2020 to 2.3% in H1 2025 (JLL Research), while the fiber glut post-2001 saw vacancy exceed 20%.

The key structural difference McKinsey identifies is the cost of carrying excess capacity. Fiber, once laid, is nearly free to maintain. Data centers are the opposite: power, cooling, and maintenance are ongoing high costs regardless of utilization. But crucially, AI accelerators have 3–4 year refresh cycles — meaning any overcapacity is rapidly converted into obsolescence, and new workloads pull spare capacity well before it becomes stranded.

1800s
Railroads
Speculative overbuilding across UK and US. Bankruptcies, fraud, market crashes. Networks connected ports & cities — the backbone of industrial commerce for 100 years.
1920s
Electrification
228% kWh capacity growth 1920–30. Overleverage met the Depression's demand shock. Interconnected regional grids; factory redesign around electric motors unlocked decades of productivity.
Late 1990s
Fiber 1.0
Comms capex $62B (1996) to $135B (2000). NASDAQ –78%. Telecom bankruptcies. $500B fiber overbuild became the backbone of the modern internet. Capacity endured.
2020s — Now
AI Infrastructure
$6.7T projected capex. Contracts before construction. Power as ultimate constraint. 2.3% vacancy. KKR thesis: "AI isn't a bubble. It's the backbone of the next industrial revolution."
Exhibit 6 A.L.C. Original Analysis
Peak Annual Capex as % of US GDP: Four Infrastructure Cycles Compared
The only apples-to-apples metric across 150 years. GDP-normalised capex removes inflation and economy-size distortion. AI infrastructure at ~5% of US GDP in 2026 is the largest infrastructure commitment in modern economic history — roughly 3.8× the fiber overbuild peak and 2.5× the electrification peak. Sources: BLS, KKR GMAA, Bloomberg, A.L. Capital Advisory historical research.
Historical infrastructure cycles: Railroads peak ~1.5% GDP (1880s, endured), Electrification ~2.0% GDP (1920s, endured), Fiber/Telecom ~1.3% GDP (2000, endured), AI Infrastructure ~5.0% GDP (2026, ongoing). A.L. Capital Advisory historical analysis.
★ All three prior infrastructure overbuild cycles — despite bankruptcies, market crashes, and excess capacity — produced infrastructure that became foundational to the next era of productivity. The railroad network, national electrical grid, and global internet backbone each endured. GDP percentage calculated using US nominal GDP at cycle peak: 1880s ~$12B, 1929 ~$105B, 2000 ~$10.3T, 2026E ~$30T. The AI figure reflects total AI-related capital formation; Big-5 hyperscaler capex alone (~$725B) is ~2.4% of 2026E GDP, and sovereign programmes (Stargate $500B, Saudi PIF $40B, EU €200B) push the broader figure higher. A.L. Capital Advisory original analysis — not sourced from a third party.
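The exhibit's ratios are simple divisions of peak-year capex by nominal GDP. A short sketch reconstructing them from the stated shares and denominators; the final line is a cross-check showing that Big-5 capex alone implies a lower share, consistent with the footnote's broader capital-formation reading:

```python
# Implied peak-year capex from Exhibit 6's GDP shares and the stated
# nominal-GDP denominators (capex = share × GDP, USD billions).
CYCLES = {  # name: (share of GDP, US nominal GDP at peak, $B)
    "Railroads (1880s)":         (0.015, 12),
    "Electrification (1929)":    (0.020, 105),
    "Fiber/Telecom (2000)":      (0.013, 10_300),
    "AI Infrastructure (2026E)": (0.050, 30_000),
}
for name, (share, gdp) in CYCLES.items():
    print(f"{name}: {share:.1%} of ${gdp:,}B GDP → ~${share * gdp:,.1f}B peak-year capex")

# Cross-check: Big-5 hyperscaler capex alone (~$725B) is a smaller share;
# the ~5% headline reflects total AI-related capital formation.
print(f"Big-5 only: {725 / 30_000:.1%}")  # 2.4%
```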

03

The Debt Market Shift

How the AI Capex Cycle Transformed Hyperscaler Balance Sheets — April 2026 Update

The AI capex cycle has introduced a structural change to hyperscaler financial models that was not present in prior technology investment waves: the shift from cash-funded to debt-funded infrastructure. Amazon, Alphabet, Meta, Microsoft, and Oracle spent a combined $256 billion on capex in 2024. Q1 2026 earnings confirmed the combined 2026 figure at approximately $725 billion. Internal free cash flow cannot scale at that rate. As a result, hyperscalers have collectively turned to debt markets at a scale not seen since the 2001 telecom build-out — but with materially stronger balance sheets underpinning the leverage.

Morgan Stanley and J.P. Morgan project the technology sector will need to issue approximately $1.5 trillion in new debt over the next three years to fund the AI infrastructure build-out. The hyperscalers now spend 45–57% of revenue on capex — ratios previously seen only in capital-intensive industrial utilities and telecommunications companies. Amazon's 2026 capex guidance of $200 billion alone exceeds the combined annual capex of the entire publicly traded US energy sector.

The investment implication is dual-edged. Debt-funded hyperscaler capex raises the structural demand floor for AI infrastructure suppliers — contracts are signed, purchase orders placed, and delivery timelines locked in regardless of short-term sentiment shifts. However, rising leverage also introduces a new risk layer for hyperscaler equity positions themselves: if AI revenue monetisation disappoints, the debt service burden will suppress free cash flow precisely when investor patience is shortest. The debt burden is precisely why Microsoft (MSFT) and Alphabet (GOOGL) carry a Selective rather than High Conviction rating in A.L. Capital Advisory's framework — the infrastructure beneficiaries (NVDA, VRT, EQIX, CEG) capture the upside without carrying the balance sheet risk of the buyers.

Is AI infrastructure now energy-constrained rather than capital-constrained?

Yes — this is the most consequential structural shift of Q1 2026. Through 2024, the primary constraint on AI infrastructure deployment was capital allocation and GPU supply. By Q1 2026, that constraint shifted to reliable power at scale. Every hyperscaler now reports that data center expansion is gated by grid interconnect timelines (18–36 months from application to energisation), transformer lead times (18–24 months), and permitting cycles — not by willingness to spend or GPU availability. Goldman Sachs's documented 11 GW US capacity shortfall is not primarily a real estate or construction problem. The 40 GW shortfall is fundamentally a power problem.

The investment implication: companies controlling existing grid-connected capacity and firm dispatchable power are in a structural scarcity position that cannot be replicated in under 5 years. CEG's nuclear fleet — operating 24/7 at near-100% availability — is the only carbon-free source meeting the "firm, dispatchable, always-on" specification hyperscalers require. Meta's nuclear PPA, Amazon's nuclear offtake expansion, and Microsoft's Three Mile Island restart confirm that nuclear power is now an operational requirement for AI infrastructure at scale, not an ESG preference. EQIX's 260+ permitted, grid-connected data centers similarly represent a 5-year-to-replicate physical moat that widens under an energy-constrained regime.

Debt Shift — Key Numbers

Hyperscalers now spend 45–57% of revenue on capex (vs. 10–15% in 2020). The technology sector faces an estimated $1.5 trillion in new debt issuance over 2025–2027 (Morgan Stanley / J.P. Morgan). Bain calculates that sustaining the current investment trajectory requires approximately $500 billion in annual spend to generate roughly $2 trillion in revenue — a 4× revenue multiple on capital that has not yet been demonstrated at scale. The gap between the capital being deployed and the revenue being generated is the central risk to monitor across the AI capex cycle.
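Bain's sustainability math reduces to a single ratio; a trivial sketch, using the figures quoted above:

```python
# Bain's sustainability arithmetic, per the figures quoted above: ~$500B of
# annual AI spend must eventually be supported by ~$2T of annual revenue.
ANNUAL_SPEND_USD = 500e9
REQUIRED_REVENUE_USD = 2e12
print(f"Required revenue multiple on capital: {REQUIRED_REVENUE_USD / ANNUAL_SPEND_USD:.0f}x")  # 4x
```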


04

Project Stargate and the Geopolitical Layer

Sovereign AI Demand — A New Demand Floor Not in the McKinsey Base Case

In January 2025, OpenAI, SoftBank, and Oracle announced Project Stargate — a $500 billion AI infrastructure programme targeting US deployment over four years, with an initial $100 billion committed immediately. Stargate represents a category of demand that does not appear in McKinsey's April 2025 base-case model: sovereign and government-adjacent AI infrastructure, funded at national-strategy scale rather than commercial return logic alone.

Stargate is not an isolated event. Saudi Arabia's Public Investment Fund has committed to a $40 billion AI infrastructure programme. The UAE has established G42 as a sovereign AI entity with data centre commitments across three continents. The European Union's AI Continent Action Plan targets €200 billion in AI investment through 2030. Each of these programmes represents demand outside the hyperscaler-driven model — demand that is contractually committed, politically supported, and insensitive to short-term ROI calculations.

The structural implication for the AI capex cycle is clear: the demand floor is higher than McKinsey's base case assumed, because McKinsey's model was built on commercial hyperscaler logic alone. Sovereign AI programmes add a second, non-correlated demand layer. Goldman Sachs's documented 11 GW US capacity shortfall — growing to 40 GW by 2028 — understates the total demand gap when sovereign programmes are included.

Among the five high-conviction positions, Constellation Energy (CEG) is the most direct Stargate beneficiary: nuclear baseload is the only power source that meets both the carbon-free and uninterruptible specifications that sovereign AI programmes require. Equinix (EQIX) benefits through its hyperscale campus footprint in the geographies where sovereign programmes are concentrating — Virginia, London, Singapore, and the Gulf. NVIDIA (NVDA) benefits from GPU procurement at sovereign scale. Vertiv (VRT) benefits through the thermal management requirements of the dense compute clusters Stargate's architecture demands.


05

Investment Architecture

Five Archetypes: Builders · Energizers · Tech Developers · Operators · Enablers

McKinsey's analysis maps the $5.2 trillion AI capex envelope across five distinct investor archetypes. Understanding this architecture is essential: the investment case, risk profile, and return dynamics differ fundamentally across archetypes.

Archetype 01 · 15% of AI Capex
Builders
15% $0.8T
Real estate developers, design firms, and construction companies that expand and upgrade data center facilities. Key investments: land acquisition, materials, skilled labour, site development.
Examples: Turner Construction · AECOM · Bechtel
Archetype 02 · 25% of AI Capex
Energizers
25% $1.3T
Utilities, energy providers, cooling & electrical equipment manufacturers. Key investments: power generation (nuclear, gas, renewables), direct-to-chip liquid cooling, transformers, network connectivity.
Examples: Duke Energy · Vertiv (VRT) · Schneider Electric · Constellation Energy (CEG)
Archetype 03 · 60% of AI Capex
Technology Developers & Designers
60% $3.1T
Semiconductor companies and computing hardware suppliers. The largest single share — because every watt of AI compute ultimately flows through a chip.
Examples: NVIDIA (NVDA) · AMD · Intel (INTC) · TSMC (TSM) · Samsung · SK Hynix · Micron (MU)
Archetype 04 · Unquantified
Operators
Not modelled
Hyperscalers, colocation providers, GPU-as-a-service platforms. Own and run large-scale facilities. Capex overlaps with broader cloud & infrastructure spending — not isolated in McKinsey's model.
Examples: AWS · Google Cloud · Microsoft Azure · Equinix (EQIX) · Digital Realty
Archetype 05 · Embedded
Enablers
Cross-cutting
Software, networking, and service providers whose revenues are derived from — but not directly funded by — the AI infrastructure build-out. Capex flows through the other four archetypes; Enablers capture the recurring revenue layer on top. Highest margin profile; most exposed to competitive disruption.
Examples: Arista Networks (ANET) · Cisco · Juniper · Pure Storage · Palantir (PLTR) · Snowflake (SNOW) · ServiceNow (NOW) · Salesforce (CRM)
06

Signal vs. Noise

What the Bears Get Right — and Wrong
Structural Bull Case
  • Vacancy at 2.3% in N. America H1 2025 — no speculative overbuild visible (JLL)
  • Contracts-first builds: hyperscalers require offtake agreements before construction begins
  • Power is the ultimate physical constraint on overbuild — grid queues, transformer lead times, permits
  • 3–4 year accelerator refresh cycles naturally absorb any temporary excess capacity
  • AI is a horizontal productivity layer across all industries, not a niche connectivity play
  • Lower unit costs drive accelerated adoption (Jevons Paradox — efficiency creates more demand)
  • Both inference and training workloads growing; inference to dominate by 2030
Risks & Bear Case
  • AI use-case failure: enterprises building but not deploying at scale — ROI visibility remains limited
  • Efficiency disruption: DeepSeek V3's 18× training cost reduction could suppress GPU demand
  • Concentration risk: NVIDIA at ~8% of S&P 500 — single-stock exposure in any AI basket
  • Geopolitical: US–China semiconductor export controls create supply chain and demand uncertainty
  • Rising power costs squeeze operators without long-term power contracts
  • Some business models (GPU rental, thin-margin operators, non-core markets) will not survive

"The stakes are high. Overinvesting in data center infrastructure risks stranding assets, while underinvesting means falling behind. The winners of the AI-driven computing era will be the companies that anticipate compute power demand and invest accordingly."

— McKinsey & Company, "The Cost of Compute," April 2025
Key Monitoring Signals · April 2026 Readings
The three McKinsey-identified variables that determine whether the AI capex cycle stays on the base-case trajectory or shifts to bull/bear. Updated each quarter at A.L. Capital Advisory.
Vacancy Rate · N. America
2.3%
H1 2025 · JLL Research
Gauge: 0–10% scale · current reading 2.3% · warning threshold ~6%
Green — Healthy. Below 3% = no speculative overbuild. The fiber glut post-2001 saw vacancy exceed 20%. Current reading is the lowest on record. Watch: any sustained rise above 5% would signal oversupply entering the market.
Capex / Revenue Ratio · Hyperscalers
45–57%
FY2025 avg · CreditSights Nov 2025
Gauge: 0–70% scale · historical average ~15% · current reading 45–57%
Amber — Watch zone. A rising ratio is a bear signal — it means hyperscalers are spending further ahead of what they earn from AI, not less. The 2026 ratio is projected to hold at 45–57% before beginning to decline in 2027 as AI cloud revenue scales.
Enterprise AI Deployment Rate
Early
Q1 2026 · A.L.C. qualitative assessment
Gauge: Pilot → Scale → Mass · current reading: Pilot
Amber — Watch closely. Enterprise AI deployment is the key demand-curve leading indicator. Enterprises are building AI infrastructure and deploying pilots, but mass productive deployment (the trigger for the McKinsey base case demand curve) has not yet materialised. The gap between infrastructure spend and enterprise ROI is the primary bull/bear pivot point for 2026–2027.
Signal methodology: Green = on-track for base-case demand scenario · Amber = monitoring required, risk of deviation · Red = bear-case trigger active. McKinsey identifies these three variables as the primary leading indicators for the AI capex cycle trajectory. A.L. Capital Advisory updates readings quarterly.
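The Green/Amber/Red assignments follow directly from the thresholds quoted above. A minimal sketch of that classification logic; the exact band edges between the stated points are illustrative assumptions, not published cutoffs:

```python
# Traffic-light logic for two of the three monitoring signals, per the
# thresholds in the text (vacancy <3% healthy, sustained >5% oversupply
# warning; capex/revenue ~10-15% historical norm). Band edges between
# those stated points are illustrative assumptions.

def vacancy_signal(vacancy_pct: float) -> str:
    if vacancy_pct < 3.0:
        return "Green"   # no speculative overbuild visible
    if vacancy_pct <= 5.0:
        return "Amber"   # monitoring required
    return "Red"         # oversupply entering the market

def capex_revenue_signal(ratio_pct: float) -> str:
    if ratio_pct < 25.0:
        return "Green"   # near historical norms
    if ratio_pct <= 60.0:
        return "Amber"   # utility-level spend; watch AI monetisation
    return "Red"         # spend far ahead of AI revenue

print(vacancy_signal(2.3))         # Green — JLL H1 2025 reading
print(capex_revenue_signal(51.0))  # Amber — midpoint of the 45-57% range
```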
07

Investor Framework

Winners, Losers & the Asset Playbook

The $5.2–$6.7 trillion capex envelope flows through a defined set of public equities. But raw exposure to the AI theme is not sufficient — the archetype, moat, and balance sheet quality of each company determine whether they capture compounding returns or get crushed in the shake-out.

High Conviction
NVIDIA Corporation
The dominant AI accelerator: $3.1T of the AI capex envelope flows through Technology Developers, and NVIDIA captures the largest single share. The CUDA ecosystem creates a software lock-in that AMD and Intel have spent years trying to break without success. H100/H200/B200 backlog extends well into 2026. Risk: export controls on H20 chips to China, and NVIDIA's weight at ~8% of the S&P 500 creates index-level concentration. The moat is real; the valuation demands discipline on position sizing.
Archetype Tech Developer
Capex Pool $3.1T
Key Moat CUDA Ecosystem
High Conviction
Vertiv Holdings
Critical power and thermal management infrastructure for data centers. AI chips run at 10–15× the power density of CPUs, making liquid cooling a necessity rather than a luxury. Vertiv is the global leader in direct-to-chip and immersion cooling systems — technologies McKinsey identifies as essential for the $1.3T Energizer archetype. Long-term hyperscaler contracts provide revenue visibility. Vertiv is the "overlooked play" in AI infrastructure: less glamorous than NVIDIA, structurally more defensible.
Archetype Energizer
Capex Pool $1.3T
Key Moat Thermal IP
High Conviction
Equinix (REIT)
The gold-standard Operator: 260+ data centers across 70 metros, with interconnect moats that hyperscalers cannot replicate. KKR specifically identifies "entitled land and expansion permits in super-core markets" and "operational hyperscaler relationships" as the hardest competitive barriers to build. Equinix controls both. REIT structure provides dividend yield alongside secular growth. London, Singapore, and Northern Virginia assets command premium EV/MW multiples that will only widen as vacancy tightens further.
Archetype Operator
Key Moat Interconnect + Land
Markets 70 metros
High Conviction
Constellation Energy
Nuclear baseload as the clean power solution to AI's energy problem. McKinsey identifies nuclear as a key solution for Energizers facing "clean-energy transition requirements." Hyperscalers need carbon-free, uninterruptible power — a specification only nuclear can meet at scale. Microsoft's Three Mile Island PPA agreement is the template. Constellation operates the largest nuclear fleet in the US, producing roughly 10% of the country's carbon-free electricity. With data center power demand growing ~20% pa, 20-year PPAs at premium rates represent a structural earnings uplift that current consensus does not fully price.
Archetype Energizer
Capex Pool $1.3T
Contract Type 20-yr PPAs
Selective
Microsoft / Alphabet
Both are simultaneously the largest customers and investors in AI infrastructure. Bull case: they own the cloud margin moat and customer relationships that determine where AI revenue accrues. Bear case: competitive dynamics force defensive capex without ROI discipline. Watch capex/revenue ratios in 2026 earnings closely — this is the key leading indicator.
Archetype Operator
Combined Capex '25 ~$200B
Watch ROI Discipline
Selective
Advanced Micro Devices
The credible challenger to NVIDIA's GPU monopoly. MI300X competitive benchmarks are genuine, and the ROCm software ecosystem is maturing. The investment case is asymmetric: NVIDIA share loss of even 5–10 percentage points would be transformative for AMD.
Archetype Tech Developer
Thesis Challenger Moat
Risk CUDA Stickiness
GPU Rental
Avoid
GPU Rental Platforms / Thin-Margin Operators
KKR explicitly warns against assets with "single-tenant concentration, short-term leases, thin power margins, and secondary market exposure." GPU rental platforms that arbitrage compute at thin spreads have no structural moat: when hyperscalers build their own capacity (as they are actively doing), demand for rented GPUs collapses.
Risk No Moat
Pattern 1990s ISP
View Avoid
Interactive Tool — Conviction-Weighted Position Calculator

Enter your intended AI infrastructure allocation. The calculator distributes it across the five High Conviction positions using A.L. Capital Advisory's Model Bridge weights. Two methods: Conviction-Weighted (proportional to model scores) or Equal-Weight (20% each across the five positions).

This calculator is for illustrative purposes only and does not constitute investment advice. Position sizes are computed mechanically from A.L. Capital Advisory's conviction model scores and do not account for individual risk tolerance, tax situation, existing portfolio composition, liquidity needs, or jurisdiction-specific regulatory requirements. Past model performance does not guarantee future results. Consult a qualified financial advisor before making investment decisions. A.L. Capital Advisory is an independent advisory practice and may hold positions in the securities mentioned.
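A minimal sketch of the calculator's two methods, assuming the Model Bridge conviction scores quoted in this paper; MU's score is not disclosed in this section, so the value below is a placeholder:

```python
# Conviction-weighted vs equal-weight allocation across the five High
# Conviction positions. Scores are out of 25, per the Model Bridge
# readings quoted in this paper; MU's score is a placeholder assumption.
SCORES = {
    "NVDA": 24.0,
    "VRT": 22.5,
    "EQIX": 22.0,
    "CEG": 21.0,
    "MU": 22.0,  # hypothetical — not disclosed in this section
}

def allocate(total: float, method: str = "conviction") -> dict:
    """Split `total` across positions by conviction score, or equally."""
    if method == "equal":
        return {t: total / len(SCORES) for t in SCORES}
    score_sum = sum(SCORES.values())
    return {t: total * s / score_sum for t, s in SCORES.items()}

for ticker, amount in allocate(100_000).items():
    print(f"{ticker}: {amount:,.0f}")
```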
Exhibit 7 A.L.C. Framework
The AI Capex Cycle: Three Phases, Five Conviction Positions, One Decade
A.L. Capital Advisory proprietary investment framework. Phase boundaries are indicative — the AI capex cycle does not follow a fixed calendar.
AI capex cycle phase diagram: Phase 1 Build (2024–2026) — NVDA and VRT at peak conviction; EQIX and CEG building. Phase 2 Deploy (2026–2028) — NVDA and VRT maintained at elevated levels; EQIX and CEG elevated to High Conviction. Phase 3 Compound (2028–2030) — all five positions sustained at high conviction, with CEG and EQIX reaching maximum as nuclear and colocation revenue compounds. A.L. Capital Advisory framework, April 2026.
NVDA & VRT peak in Phase 1 (hardware procurement, liquid cooling deployment) — the earliest and most consensus positions. The risk: both are partially priced for the base case already.
EQIX & CEG compound across all three phases via long-term contracted revenue — 20-year nuclear PPAs and multi-year colo leases. The least crowded position; the most durable. A static "AI basket" allocation misses this rotation entirely.

08

GPU & CPU: Pricing, Shortage & the Supply Chain Chokepoints

From NVIDIA's Blackwell Backlog to ASML's Single-Point-of-Failure — the Semiconductor Layer of the AI Capex Cycle

Every dollar of hyperscaler AI infrastructure capex ultimately flows through a semiconductor. The approximately $725 billion committed for 2026 does not build itself — it must be converted into physical chips, packaged onto boards, slotted into racks, and cooled. Understanding the semiconductor supply chain is therefore not a secondary analytical exercise. It is the root constraint that determines whether the AI capex cycle delivers its projected $6.7 trillion in capacity by 2030 or falls short of the McKinsey base case. Three chokepoints define the current supply environment: GPU allocation, advanced packaging capacity at TSMC, and the EUV lithography monopoly held by ASML.

NVIDIA GPU Shortage — Why 90% Market Share Persists Despite Competition

NVIDIA Corporation (NASDAQ: NVDA) enters 2026 in a position with few historical precedents: a near-monopolist in a market whose funding pool grew roughly 64% year over year, constrained not by demand but by its own supply chain. The Blackwell B200 architecture — delivering 2.5× the inference throughput of the H100 at the same power envelope — carries reported order lead times of 12–18 months for hyperscaler allocations as of April 2026. The binding constraint is not TSMC's fab capacity for the GPU die itself, but CoWoS-L (Chip-on-Wafer-on-Substrate with local silicon interconnect) advanced packaging, which stacks the B200 GPU die together with eight stacks of HBM3E memory in a single thermally integrated module. TSMC's CoWoS capacity is expanding from approximately 9,000 wafers per month in 2024 to an estimated 30,000 by end of 2026 — but hyperscaler demand is tracking above this build rate.

At Google Cloud Next on April 22, 2026, Google confirmed NVIDIA Vera Rubin GPU instances will be available on Google Cloud in 2027 — the first public confirmation of commercial availability for NVIDIA's next-generation architecture beyond Blackwell. This extends the CUDA and NVIDIA platform advantage through at least the 2027–2028 GPU cycle, reinforcing the structural thesis for NVDA equity at every hyperscaler refresh cycle.

The CUDA software moat is as material as the hardware lead. NVIDIA's Compute Unified Device Architecture — the programming model that underlies virtually every production AI training workload — has been in continuous development since 2007. The model libraries (cuDNN, cuBLAS, TensorRT), the developer tooling, and two decades of academic and commercial code written natively for CUDA constitute a switching cost that Advanced Micro Devices (NASDAQ: AMD) is actively but slowly dismantling with ROCm. A.L. Capital Advisory estimates enterprise migration from CUDA to ROCm at current pace would require 3–5 years for non-latency-sensitive inference workloads — and meaningfully longer for training.

AMD MI300X vs NVIDIA B200 — Deep Technical and Commercial Comparison

AMD's MI300X is the most credible GPU challenger in the AI data center market. The MI300X couples eight GPU chiplets to a unified HBM3 memory pool — 192GB of shared memory versus the H100's 80GB (a sibling part, the MI300A, adds CPU chiplets in the same package). For very large model inference (70B+ parameter LLMs), the MI300X's memory capacity advantage is architecturally significant: models that require tensor parallelism across 8 H100s can run on 4 MI300X units, reducing interconnect overhead. Microsoft Azure and Meta have both announced MI300X deployments for inference workloads, validating the commercial thesis.

However, the competitive gap remains wide on three dimensions: software maturity (ROCm operator coverage vs CUDA is estimated at 85–90% for inference, but substantially lower for cutting-edge training kernels), supply chain reliability (TSMC allocates CoWoS capacity to NVIDIA first as the larger revenue customer), and ecosystem lock-in (the dominant MLOps toolchain — PyTorch, JAX, TensorFlow — all optimise natively for CUDA). A.L. Capital Advisory's base case: AMD captures 8–12% of the AI accelerator market by 2027, up from approximately 5–6% in 2025. At that share level and current data center GPU ASPs ($25,000–$35,000 per unit wholesale), AMD Data Center revenue could reach $15–20 billion annually by FY2027 — a material but sub-consensus outcome.
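Those revenue figures imply a checkable unit volume; a quick inversion of the stated ranges (no total-market unit count is given above, so this is only a sanity check):

```python
# Implied AMD accelerator unit volumes: a pure inversion of the stated
# FY2027 revenue range and wholesale ASP range.
REV_RANGE_USD = (15e9, 20e9)      # A.L.C. base case, FY2027
ASP_RANGE_USD = (25_000, 35_000)  # data center GPU wholesale ASPs

low = REV_RANGE_USD[0] / ASP_RANGE_USD[1]   # conservative: low revenue, high ASP
high = REV_RANGE_USD[1] / ASP_RANGE_USD[0]  # aggressive: high revenue, low ASP
print(f"Implied units: ~{low:,.0f} to ~{high:,.0f} per year")  # ~428,571 to ~800,000
```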

Intel, ARM Architecture & the CPU Transition

Intel Corporation (NASDAQ: INTC) is executing a structural pivot from integrated device manufacturer to pure-play foundry (Intel Foundry Services / IFS) while simultaneously defending its CPU franchise against AMD and the ARM architecture wave. In the AI data center context, CPUs play a secondary but non-negligible role: every GPU cluster requires high-performance host CPUs for data ingestion, preprocessing, orchestration, and inference serving. Intel's Xeon Scalable 6th Generation (Granite Rapids) and AMD's EPYC Genoa both compete for this socket. AMD's EPYC has outgrown Intel in data center CPU share for three consecutive years, now estimated at 33–35% of new server deployments versus Intel's 65%.

Arm Holdings plc (NASDAQ: ARM) is the deeper structural story. Arm's v9 architecture — licensed to Apple, Qualcomm (QCOM), Amazon (Graviton), Google (Axion), and NVDA (Grace CPU) — delivers 30–40% better performance-per-watt than x86 at comparable workloads. In the AI inference layer, where power efficiency directly determines cost per token, the x86 architecture's dominance is structurally eroding. Arm's revenue is a royalty on every chip shipped using its architecture — a position of compounding leverage as ARM-based designs proliferate across data centers, edge infrastructure, and AI accelerators.

ASML — The Single-Point-of-Failure in Global AI Chip Supply

ASML Holding N.V. (NASDAQ: ASML) manufactures every extreme ultraviolet (EUV) lithography machine on Earth. There is no second supplier. EUV lithography is required to pattern the sub-7nm-class transistors that power every leading-edge AI chip — NVIDIA's B200 on TSMC 4NP, AMD's MI300X on TSMC N5, Intel's Gaudi 3 likewise fabbed at TSMC N5. A single EUV tool costs approximately €200 million, weighs 180 tonnes, and requires 40 shipping containers and three Boeing 747 freighters to transport. ASML ships approximately 50–60 EUV systems per year. Lead times are 18–24 months from order to installation.

The investment case for ASML is the investment case for the AI capex cycle expressed through the supply chain's deepest chokepoint. Every new semiconductor fab built to address AI demand — TSMC Arizona, Samsung Taylor, Intel Ohio — requires ASML EUV machines. The Veldhoven-based company's order book extends through 2027 and includes next-generation High-NA EUV tools (approximately €380 million per unit) required for sub-2nm nodes. No competitor has a functioning EUV tool. The development timeline for a competitor to reach commercial EUV from scratch is estimated by industry analysts at 15–20 years and multiple billions in investment. ASML's geopolitical risk — the Dutch government, under US pressure, has restricted EUV exports to China — removes the largest potential demand overhang and creates a China-exclusion premium that benefits Western fabs.

Custom Silicon — Broadcom & Marvell as the ASIC Layer

The custom silicon trend is one of the most consequential supply-chain developments of the 2026 AI cycle. Google (TPU v5), Amazon (Trainium2, Inferentia3), Meta (MTIA2), and Microsoft (Maia 2) are all deploying custom AI accelerators designed in-house and manufactured at TSMC — explicitly to reduce dependence on NVIDIA and capture gross margin. The hyperscalers cannot design these chips themselves from scratch. They use third-party chip architects: Broadcom Inc. (NASDAQ: AVGO) and Marvell Technology Inc. (NASDAQ: MRVL) are the two dominant custom ASIC designers for AI workloads.

Broadcom has disclosed that its three largest custom AI chip customers will consume a combined addressable market of $60–90 billion per year by FY2027, based on disclosed deployment plans. Marvell's custom AI silicon revenue is smaller but growing rapidly, with design wins at Amazon and Microsoft. Both companies benefit from a structural dynamic that NVIDIA cannot easily disrupt: hyperscalers have a strategic incentive to fund custom silicon as a CUDA countermeasure, regardless of short-term cost premium. The ASIC layer is therefore not a threat to NVIDIA in the near term (custom ASICs are inference-only and cannot match NVIDIA's training flexibility) — but it is a meaningful long-term share shift that AVGO and MRVL are directly positioned to capture.

Server Integration Layer — Super Micro Computer & Dell Technologies

Super Micro Computer Inc. (NASDAQ: SMCI) and Dell Technologies Inc. (NYSE: DELL) occupy the final assembly layer of the AI GPU supply chain — converting NVIDIA and AMD silicon into deployable rack-scale systems. Super Micro's liquid-cooled HGX-based and MGX server platforms make it one of NVIDIA's preferred reference-design partners; Super Micro has been first to market with new-generation server systems, including Blackwell-based designs, for three consecutive GPU generations. Dell's AI server portfolio (PowerEdge XE9680) competes for the same enterprise and colocation buyer, with the additional distribution advantage of Dell's global direct sales force and ProSupport services.

Both companies have supply chains directly gated by NVIDIA's GPU allocation — when B200 supply is constrained, SMCI and DELL backlog builds and gross margins compress on pre-sold orders. The investment risk is margin: server integration is a low-margin business (5–8% EBIT for SMCI, 4–6% for AI infrastructure servers at Dell), and pricing competition between the two intensifies during GPU shortage periods. The structural opportunity: as the AI capex cycle moves from Phase 1 (Build) to Phase 2 (Deploy), hyperscalers shift from direct NVIDIA procurement to third-party server integrators — expanding SMCI and DELL's total addressable market.

Exhibit A1 A.L.C. Original Analysis · April 2026
GPU & CPU Supply Chain — Bull / Base / Bear Scenario Analysis


A.L. Capital Advisory sensitivity model, April 2026. GPU ASP = average selling price (data center B200/H100-class). Data center CPU = Intel Xeon + AMD EPYC blended ASP. NVDA Data Center revenue sensitivity assumes ~90% AI accelerator share in base case.
[Exhibit A1 chart: bull / base / bear 2026 scenarios for data center GPU ASP, blended data center CPU ASP, NVDA Data Center revenue, AMD market share, and ASML EUV shipments.]

09

Memory: HBM Shortage, NAND Pricing & the AI Memory Inflection

Why Micron Is the Most Underappreciated AI Infrastructure Play — HBM3E, DRAM Cycle, and the Memory Wall

The binding constraint on NVIDIA's Blackwell B200 GPU output in Q1–Q2 2026 is not the GPU die. It is High Bandwidth Memory 3E (HBM3E) — the stacked DRAM that sits beside every AI accelerator and provides the memory bandwidth that separates a usable AI chip from a theoretical one. A single B200 GPU requires eight stacks of HBM3E of approximately 24GB each, for a total of 192GB per chip. At current NVIDIA GPU shipment volumes, the global HBM market must produce and package more advanced memory in 2026 than it produced across all previous years combined. There are exactly three HBM suppliers on Earth: SK Hynix and Samsung (both Korea-listed) and Micron Technology Inc. (NASDAQ: MU). Micron is the smallest of the three — and the most investable for Western investors.

HBM3E — The Architecture of Scarcity

High Bandwidth Memory is not DRAM as conventionally understood. Standard DDR5 DRAM transmits data over a relatively narrow parallel bus. HBM stacks 8–12 DRAM dies vertically using through-silicon vias (TSVs), achieving memory bandwidth of roughly 1.2 TB/s per stack versus approximately 0.05 TB/s per DDR5 channel. For AI training workloads — which require feeding terabytes of model weights and activations per second to the GPU's thousands of CUDA cores — HBM bandwidth is the performance ceiling that determines training throughput. The difference between running a 70B-parameter model training run on HBM3E and on GDDR6 is roughly 4–5× in training speed, which at hyperscaler compute costs translates directly into tens of millions of dollars per training run.

Integrating HBM stacks with the GPU die requires TSMC (TSM) CoWoS packaging — creating a circular dependency in the supply chain: both the GPU and its memory compete for the same scarce CoWoS capacity. SK Hynix currently holds an estimated 50–55% of the HBM market; Samsung approximately 35–40%; and Micron approximately 8–12%, ramping aggressively. NVIDIA has publicly qualified all three suppliers for HBM3E, but SK Hynix retains a technology lead: Hynix reached HBM3E mass production in early 2024; Micron followed with volume shipments the same year; Samsung's HBM3E yield issues delayed its qualification into Q1 2026. This means Micron enters 2026 in an unusual position — a qualified second supplier in a market where the lead supplier cannot meet demand and the third supplier has quality problems.

Micron Technology — The High-Conviction AI Memory Investment Case

Micron Technology Inc. (NASDAQ: MU) is A.L. Capital Advisory's fifth High Conviction position — and the one with the widest gap between current consensus expectations and structural opportunity. Micron's investment case rests on three independent pillars that compound simultaneously through 2028.

First, the HBM revenue inflection. Micron's HBM revenue was negligible in FY2024 (ending August 2024). The company has guided to HBM becoming a multi-billion dollar revenue line in FY2025, with HBM3E production ramping throughout 2025 and into 2026. At SK Hynix's disclosed HBM gross margins (50–55%), HBM is transformative for Micron's blended gross margin profile, which has historically averaged 25–35% across the DRAM/NAND cycle. A single HBM3E 8-Hi stack carries an ASP of approximately $20–25 per GB — versus $3–4 per GB for commodity DDR5 DRAM. The eight HBM3E stacks in a single B200 GPU generate more revenue per chip for Micron than an entire DDR5 memory kit for a standard server (a back-of-envelope check follows the third pillar below).

Second, the DRAM pricing cycle. The 2022–2023 DRAM oversupply — driven by the collapse in consumer PC and smartphone demand — has fully resolved. Industry-wide DRAM bit output growth has been deliberately constrained by all three major producers (Micron, SK Hynix, Samsung), with capital expenditure redirected to HBM production — DRAM capacity that thereby becomes unavailable for commodity supply. The structural result: DDR5 server DRAM pricing is rising through 2026 as data center demand (each AI server rack contains $100,000–$400,000 of standard DRAM alongside the GPUs) grows faster than supply can respond. Micron is a direct beneficiary of both the volume increase and the ASP uplift.

Third, the NAND recovery. Western investors often model Micron as a pure DRAM company, but NAND flash (used for SSDs, storage arrays, and training data pipelines) represents approximately 35% of Micron's revenue. The NAND cycle bottomed in Q3 2023 at price levels that forced all producers, including Western Digital (NASDAQ: WDC), into EBITDA-negative operations. The recovery is now well established: enterprise SSD pricing has recovered 60–80% from the trough as AI training datasets, model checkpoints, and inference caches drive unprecedented enterprise NVMe demand. Western Digital is the pure-play NAND recovery thesis — its Flash segment directly captures the enterprise SSD pricing cycle without the HBM exposure that makes Micron the more complete AI memory story.
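The per-chip claim in the first pillar is checkable from the quoted ASPs; a back-of-envelope sketch (the 1 TB DDR5 kit size is an illustrative assumption):

```python
# HBM3E revenue per B200 GPU vs a commodity DDR5 server kit, from the
# quoted ASPs. The 1 TB DDR5 kit size is an illustrative assumption.
HBM_GB_PER_B200 = 192             # eight HBM3E stacks, ~24 GB each
HBM_ASP_PER_GB = (20, 25)         # USD, quoted range
DDR5_ASP_PER_GB = (3, 4)          # USD, commodity server DRAM
DDR5_KIT_GB = 1_024               # assumed 1 TB standard-server kit

hbm_low, hbm_high = (HBM_GB_PER_B200 * p for p in HBM_ASP_PER_GB)
ddr_low, ddr_high = (DDR5_KIT_GB * p for p in DDR5_ASP_PER_GB)
print(f"HBM3E per B200: ${hbm_low:,}-${hbm_high:,}")   # $3,840-$4,800
print(f"DDR5 1TB kit:   ${ddr_low:,}-${ddr_high:,}")   # $3,072-$4,096
```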

The Memory Wall — AI's Hidden Bottleneck

The "memory wall" is the gap between the growth rate of AI model complexity (parameters, context window, batch size) and the growth rate of memory bandwidth. A GPT-4-scale model requires feeding approximately 140GB of weights to GPU cores during each forward pass. At current HBM3E bandwidth of 1.2 TB/s per stack and 6 stacks per B200, the peak sustainable throughput is approximately 7.2 TB/s per GPU. Projected model sizes for 2026–2027 training runs (estimated 1–10 trillion parameter models) would require 8–12 HBM3E stacks per GPU — beyond the current B200 physical architecture. The implication: every new GPU generation (NVIDIA's Rubin/R100, AMD's MI400) will require more HBM stacks, more advanced packaging, and higher memory bandwidth — a structural demand escalator that benefits Micron, SK Hynix, and the HBM memory ecosystem indefinitely.

The per-rack memory content of an AI server cluster illustrates the scale: a single NVIDIA DGX H100 system (8× H100 GPUs) contains 640GB of HBM2e, plus 2TB of DDR5 system DRAM, plus 30TB of NVMe SSD storage. At blended memory content pricing, the memory stack in a single DGX H100 represents approximately $80,000–$120,000 of the system's total cost — equivalent to three to four H100 GPUs at the wholesale ASPs cited in the Data Appendix. Scaling to the 40 GW US capacity shortfall Goldman Sachs documents: each gigawatt of AI data center capacity contains approximately 10,000 racks, so the memory content of that capacity is 10,000 × $120,000 = $1.2 billion per gigawatt in memory spend alone. The 40 GW US shortfall implies $48 billion in cumulative incremental memory demand — before Europe, Asia, and Project Stargate are counted.
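
The same arithmetic in a runnable form, assuming the rack density implied above (roughly 100 kW per rack, hence 10,000 racks per gigawatt) and the $120K base-case memory content per rack:

```python
# Hedged sketch of the per-gigawatt memory arithmetic described above.
# Rack count per GW and per-rack memory content follow the text's estimates.

RACKS_PER_GW = 10_000
MEMORY_PER_RACK_USD = 120_000
US_SHORTFALL_GW = 40       # Goldman Sachs 2028 projection

memory_per_gw = RACKS_PER_GW * MEMORY_PER_RACK_USD             # $1.2B per GW
shortfall_memory = memory_per_gw * US_SHORTFALL_GW             # $48B cumulative

print(f"Memory spend per GW: ${memory_per_gw / 1e9:.1f}B")              # $1.2B
print(f"Implied by 40 GW US shortfall: ${shortfall_memory / 1e9:.0f}B")  # $48B
```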

Geopolitical Risk — China Memory and Export Controls

Micron's most significant risk is regulatory: the Cyberspace Administration of China (CAC) banned Micron products from "critical information infrastructure" operators in May 2023 — a retaliatory measure following US restrictions on China's access to advanced chips. China represented approximately 16% of Micron's FY2023 revenue. The ban's impact has been partially absorbed by Micron redirecting China supply to other markets (particularly India, Southeast Asia, and European data center expansion), but approximately $800M–$1.2B of annualised revenue remains displaced. A full resolution — or further escalation — of the US-China technology trade war is the key binary risk in the Micron investment case.

Chinese domestic memory is a structural headwind, but farther from competitive parity than commonly believed. CXMT (ChangXin Memory Technologies), China's domestic DRAM producer, is shipping DDR4 and early DDR5 at an estimated 25–35% yield versus the industry-standard 85–90%, and ASML EUV import restrictions block its path to the sub-10nm-class DRAM nodes required for next-generation HBM. Yangtze Memory Technologies (YMTC), China's NAND producer, reached 232-layer NAND in 2023 but faces the same export restrictions on advanced lithography. The Chinese memory industry is not a 2026 threat to Micron's HBM business; it represents a 2029–2032 risk to commodity NAND market share.

Exhibit B1 A.L.C. Original Analysis · April 2026
Memory Supply Chain — Bull / Base / Bear Scenario Analysis


A.L. Capital Advisory sensitivity model, April 2026. HBM ASP = average selling price per GB, HBM3E class. NAND ASP = enterprise SSD blended $/GB. Sources: Micron earnings filings, TrendForce memory pricing database, Goldman Sachs semiconductor research.
Metric | Bull Case | Base Case | Bear Case | Key Variable
HBM3E ASP (per GB, blended) | $28 | $22 | $16 | Samsung HBM3E yield recovery timeline
MU Total Revenue FY2026E | ~$45B | ~$38B | ~$28B | HBM ASP + DRAM cycle + NAND recovery pace
MU HBM Revenue FY2026E | ~$12B | ~$7.5B | ~$3.5B | Micron HBM3E yield ramp + NVDA allocation share
MU Gross Margin FY2026E | 42–46% | 35–40% | 24–28% | HBM mix shift; DRAM/NAND blended ASP
Enterprise NAND ASP ($/GB blended) | $0.12 | $0.09 | $0.07 | AI inference SSD demand vs supply discipline
WDC Revenue FY2026E (Flash segment) | ~$18B | ~$15B | ~$11B | NAND ASP recovery + enterprise SSD mix
DDR5 Server DRAM ASP ($/GB) | $6.50 | $5.20 | $3.80 | HBM capacity cannibalisation of standard DRAM supply
Memory content per AI rack ($000s) | $160K | $120K | $85K | HBM stack count + DDR5 + NVMe per DGX-class system
BULL: Samsung HBM3E yield issues persist through 2026; Micron gains 18–22% HBM share; NAND supply discipline holds; DDR5 data center demand outpaces supply.
BASE: Samsung partially recovers by mid-2026; Micron stabilises at 12–15% HBM share; NAND pricing recovery continues at measured pace; DDR5 balanced.
BEAR: Samsung full HBM3E yield recovery by Q2 2026 compresses ASPs; AI efficiency gains reduce per-model memory footprint; NAND supply grows ahead of AI storage demand; China export risk widens for Micron.
Not investment advice. A.L. Capital Advisory analytical framework, April 2026.
10 · Projections & Outlook

What to Expect: A 5-Year Asset Impact Roadmap
Exhibit 8
AI Infrastructure Cycle: Asset Impact Projections by Phase
A.L. Capital Advisory analysis. Arrows: ↑ Positive, ► Neutral/Transitioning, ↓ Negative.
Asset / Sector | Phase 1: Build (2024–26) | Phase 2: Deploy (2026–28) | Phase 3: Compound (2028–30) | A.L.C. View
AI Semiconductors (NVDA, AMD) | ↑ Accelerating. Backlog extends 12–18 months. Pricing power at peak. | ► Elevated but normalising. Efficiency gains may compress unit economics. | ↑ Next-gen inference demand drives new cycle. Moat compounds. | High Conviction Long
Power & Cooling (VRT, CEG) | ↑ Rapid growth as rack density escalates. Power PPAs being locked in now. | ↑ Continued deployment of liquid cooling. Nuclear PPAs extending. | ↑ Structural beneficiary of all three phases. Most durable earnings quality. | High Conviction Long
Data Center REITs (EQIX, DLR) | ↑ Vacancy tightening. Premium pricing in core markets. Land value accruing. | ↑ Expansion of AI-optimised facilities. Interconnect moats widen. | ↑ Long-term lease revenue compounds. REIT dividend yield supported. | High Conviction Long
Hyperscalers (MSFT, GOOGL, AMZN) | ↓ Capex absorbs free cash flow. Market questions ROI discipline. | ► Cloud revenue inflection as AI workloads monetise. Watch margins. | ↑ AI-driven cloud revenue compounds. Capex declining as % of revenue. | Selective. Monitor capex
Construction / Builders | ↑ Labour and materials in high demand. Early-cycle beneficiary. | ► Growth but margins compress as capacity builds. | ↓ Cycle matures. Commodity dynamics. No moat. | Tactical only. Not core
GPU Rental / Thin-Margin Ops | ► Works during scarcity. Business model intact for now. | ↓ Hyperscaler self-build eliminates demand for rented compute. | ↓ Model collapses. Structural shake-out. | Avoid
Exhibit 9 A.L.C. Original Analysis
ROI Bridge: What AI Revenue Must Materialise to Justify $725B in 2026 Capex?


A.L. Capital Advisory original analysis. Capex base: $725B (Q1 2026 post-earnings consensus). ROI thresholds apply to AI-specific capex only (~$545B). Revenue figures represent required run-rate AI-attributable revenue by end-2028 assuming 3-year payback window. Current AI cloud revenue estimate: Google Cloud $80B annualised (+63% YoY), AWS $150B annualised (+28% YoY), Azure AI run-rate $37B annualised (+123% YoY) (Q1 2026 actuals).
Required run-rate AI revenue by ROI threshold: 5% = ~$34B · 10% = ~$68B · 15% = ~$101B · 20% = ~$135B · 25% = ~$169B. Current AI cloud revenue: ~$150B annualised. Analysis: A.L. Capital Advisory, April 2026.
The bear case in numbers: If hyperscalers require a 25% return on AI-specific capex, the industry needs to generate ~$169B in AI-attributable revenue annually by end-2028. Current AI cloud revenue is estimated at ~$150B annualised — leaving a credible gap that the market is not yet pricing as a risk across AI infrastructure equities.
Why this still supports High Conviction: Bain's framework requires $500B annual spend to generate ~$2T revenue — a 4× revenue multiple. Even at a conservative 10–15% ROI threshold, the required revenue ($68–101B) is achievable. The infrastructure supplier positions (NVDA, VRT, EQIX, CEG) are paid regardless of whether the ROI calculation resolves in the bull or base case.
This chart represents A.L. Capital Advisory's original analytical framework applied to publicly available capex guidance. ROI thresholds are illustrative — actual returns will depend on revenue mix, depreciation schedules, and utilisation rates. The $150B current AI cloud revenue estimate is A.L. Capital Advisory's internal estimate based on public earnings disclosures and is not sourced from a third party.
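
A minimal reproduction of the bridge arithmetic follows. One loudly labelled assumption: the published thresholds ($34B at 5% through $169B at 25%) are consistent with applying the ROI rate to a ~$675B capex base, roughly the full 2026 guidance midpoint, rather than the ~$545B AI-specific figure; that $675B base is our inference, not a disclosed model input.

```python
# Hedged reconstruction of the ROI bridge. CAPEX_BASE_B = 675 is an inferred
# assumption that reproduces the published thresholds; it is not a disclosed input.

CAPEX_BASE_B = 675            # $B, inferred reproduction assumption
CURRENT_AI_REVENUE_B = 150    # $B annualised, A.L.C. internal estimate

for roi in (0.05, 0.10, 0.15, 0.20, 0.25):
    required = CAPEX_BASE_B * roi
    gap = required - CURRENT_AI_REVENUE_B
    status = "covered today" if gap <= 0 else f"${gap:.0f}B short of current run-rate"
    print(f"ROI {roi:.0%}: required ~${required:.0f}B -> {status}")
```

Run as written, the sketch reproduces the exhibit's figures and shows that only the 20–25% thresholds leave a gap against the ~$150B current run-rate.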

Portfolio Construction Framework — Five principles for building the AI infrastructure position without getting burned:

1. Own the moats, not the narrative. KKR's core tenet. Power access, entitled land, interconnects, CUDA lock-in, and hyperscaler relationships are durable. GPU rental and thin-margin operators are not. The shake-out will concentrate in business models that work only during scarcity.

2. The overlooked play is power. Of the $5.2T AI capex, $1.3T flows to Energizers — the segment most under-owned relative to its capex share. CEG, VRT, and utility-scale operators in data center proximity markets represent this allocation. Less crowded than semiconductors, more durable in the long run.

3. Size for the volatility, not just the conviction. Even the highest-conviction names will experience 30–40% drawdowns as the cycle matures. Position sizing should reflect that the structural thesis is sound but the path is non-linear. NVIDIA at 8% of the S&P 500 demands position-sizing discipline.

4. Phase your exposure. Build vs. Deploy vs. Compound phases favour different archetypes. Semiconductors and power dominate Phase 1. Software and cloud infrastructure dominate Phase 2. Productivity beneficiaries compound in Phase 3. A static allocation to "AI" misses this rotation.

5. Watch the McKinsey indicators. The three key signal variables: (1) North American vacancy rate — below 3% is healthy; above 6% is a warning; (2) hyperscaler capex-to-revenue ratios — rising is a bear signal for operators; (3) enterprise AI deployment rate — the key leading indicator for whether the demand curve achieves base case or slips to constrained.

Conclusion

The AI Capex Cycle — Five Positions, Three Phases, One Structural Thesis

The AI capex cycle is not a theme; it is a decade-long structural reallocation of capital — from consumption to physical infrastructure — at a scale not seen since the electrification of the United States economy in the 1920s. Q1 2026 earnings confirmed the Big-5 hyperscalers will spend approximately $725 billion on AI infrastructure in 2026 alone, nearly tripling the $256 billion deployed in 2024. Goldman Sachs documents a US capacity shortfall already exceeding 11 gigawatts, growing to 40 GW by 2028. North American colocation vacancy has fallen to 2.3% — a level at which pricing power is structural and durable.

This paper has mapped the full capital flow across the cycle's three phases. In Phase 1 (Build, 2024–2026), semiconductor procurement and thermal management are the primary value-capture layer — hence NVIDIA and Vertiv as peak-conviction positions. In Phase 2 (Deploy, 2026–2028), contracted infrastructure operators with physical scarcity moats begin to compound: Equinix's interconnect estate and Constellation Energy's 20-year nuclear power purchase agreements. In Phase 3 (Compound, 2028–2030), all five positions benefit simultaneously as AI-driven cloud revenue scales and contracted revenue compounds across multi-year agreements. A static "AI basket" allocation misses the Phase 2–3 rotation entirely.

The structural bull case rests on three pillars that the bear case cannot dislodge without a fundamental change in physical reality: construction lead times of 18–30 months mean supply cannot respond to short-term sentiment shifts; AI accelerator refresh cycles of 3–4 years mean overcapacity converts to obsolescence faster than it becomes stranded; and the contracts-first structure of the build-out means virtually every dollar of hyperscaler guidance is committed before a shovel enters the ground. These are not financial projections — they are engineering constraints.

The honest risks are equally structural. If enterprise AI deployment fails to materialise at scale by 2027, the demand curve reverts to the constrained scenario ($3.7T vs $5.2T in AI-specific capex). If semiconductor efficiency gains (in the tradition of DeepSeek V3) suppress training demand, NVIDIA's backlog clears faster than consensus models. If hyperscaler ROI discipline breaks down under debt pressure, the capex/revenue ratio remains above 50% indefinitely — compressing free cash flow precisely when patient capital needs a return. These risks are real, and they are precisely why A.L. Capital Advisory distinguishes High Conviction infrastructure suppliers (NVDA, VRT, EQIX, CEG) from Selective hyperscaler equity positions (MSFT, GOOGL) — infrastructure suppliers are paid regardless of which ROI scenario resolves; hyperscaler equity positions are not.

The monitoring framework is straightforward: vacancy below 3% is healthy; above 6% is the early warning. Capex/revenue ratios declining from 2026 onwards signal the ROI inflection the market is waiting for. Enterprise AI deployment rate — currently in the early phase — is the key leading indicator for whether the McKinsey base case ($6.7T by 2030) is achieved or exceeded. A.L. Capital Advisory updates these readings quarterly.

A.L. Capital Advisory — April 2026 Conviction Summary

High Conviction Long: NVIDIA (NASDAQ: NVDA) · Vertiv Holdings (NYSE: VRT) · Equinix (NASDAQ: EQIX) · Constellation Energy (NASDAQ: CEG) · Micron Technology (NASDAQ: MU) — five positions across the AI infrastructure stack, covering Technology Developers, Power & Thermal Management, Memory, Data Center REITs, and Nuclear Baseload. Model Bridge weighted scores: 24.0 (NVDA), 22.5 (VRT), 22.0 (EQIX), 21.5 (MU), and 21.0 (CEG) out of 25.0. All five positions benefit across all three phases of the AI capex cycle with varying peak intensities. Selective: MSFT · GOOGL · AMD — monitor capex/revenue ratio and ROI discipline quarterly before adding or increasing exposure.


Data Appendix

Every Key Figure · Source · Date Verified · Methodology
Figure | Value | Primary Source | Date Verified | Methodology Note
Global data center capex by 2030 (base case) | $6.7 trillion | McKinsey & Company, "The Cost of Compute" | Apr 2025 | Base case of three modelled scenarios; 125 GW incremental AI capacity
Global data center capex by 2030 (accelerated scenario) | $7.9 trillion | McKinsey & Company, "The Cost of Compute" | Apr 2025 | Accelerated scenario: 160 GW incremental AI capacity; AI deployment rate outpaces base
Global data center capex by 2030 (constrained scenario) | $5.2 trillion | McKinsey & Company, "The Cost of Compute" | Apr 2025 | Constrained scenario: 100 GW incremental AI capacity; enterprise deployment slower than modelled
AI share of total data center demand by 2030 | ~70% | McKinsey & Company, "The Cost of Compute" | Apr 2025 | AI workloads: $5.2T of $6.7T total; remaining $1.5T traditional IT
Global data center capacity, 2025 (baseline) | 82 GW | McKinsey & Company | Apr 2025 | Installed capacity; includes hyperscaler, colo, and enterprise
Global data center capacity, 2030 (base case) | 207 GW | McKinsey & Company | Apr 2025 | Base-case projection; approximately 2.5× 2025 baseline
North American colo vacancy rate, H1 2025 | 2.3% | JLL Research, North America Colocation Vacancy | Jun 2025 | Published H1 2025 survey; down from 9.8% in 2020
US data center capacity shortfall (current) | >11 GW | Goldman Sachs, "Powering the AI Era" | 2025 | Gap between demand and committed supply in US markets
US data center capacity shortfall by 2028 | >40 GW | Goldman Sachs, "Powering the AI Era" | 2025 | Cumulative projected gap at current construction pace
Big-5 hyperscaler capex, 2024 actual | ~$256B | CreditSights, individual earnings filings | Nov 2025 | Amazon + Alphabet + Meta + Microsoft + Oracle; FY2024
Big-5 hyperscaler capex, 2025 estimate | ~$443B (+73%) | CreditSights, Futurum Group | Feb 2026 | Estimated based on Q1–Q3 2025 actuals + Q4 guidance
Big-5 hyperscaler capex, 2026 guidance | $660–690B (+36%) | Futurum Group, individual Q4 2025 earnings calls | Feb 2026 | Amazon $200B, Alphabet $175–185B, Meta $115–135B, MSFT $120B+, Oracle $50B
AI Data Center GPU ASP (B200/H100-class, wholesale) | $25,000–$35,000 | A.L. Capital Advisory estimate; Bloomberg, earnings disclosures | Apr 2026 | Blended H100/B200 allocation; wholesale hyperscaler pricing; retail premium 20–40% above
NVIDIA (NVDA) B200 GPU lead time (hyperscaler allocation) | 12–18 months | A.L. Capital Advisory primary research; industry sources | Apr 2026 | CoWoS advanced packaging constraint; not GPU die fab capacity
TSMC (TSM) CoWoS capacity 2026E (wafers/month) | ~28,000 | TSMC investor day; A.L. Capital Advisory estimate | Apr 2026 | Up from ~9,000 wpm in 2024; demand tracking above build rate through mid-2026
ASML EUV machine unit price (standard EUV) | ~€200M | ASML investor relations; public disclosures | Apr 2026 | High-NA EUV: ~€380M per unit; only supplier of EUV globally
ASML EUV annual shipment volume (2025 actual) | ~50 units | ASML annual report 2025 | Mar 2026 | 18–24 month lead times; order book extends to 2027
HBM3E ASP (per GB, blended, 2026E) | ~$22/GB | TrendForce memory pricing; A.L. Capital Advisory estimate | Apr 2026 | vs $3–4/GB for standard DDR5; premium from stacking complexity and CoWoS packaging
HBM3E bandwidth per stack | 1.2 TB/s | SK Hynix, Micron (MU) product specifications | Apr 2026 | vs GDDR6 ~0.3 TB/s; 4–5× bandwidth advantage for AI training workloads
NVDA B200 GPU HBM3E content per chip | 192 GB (6 stacks) | NVIDIA Blackwell architecture whitepaper | Mar 2025 | Each stack 8-Hi, 32GB; 6 stacks × 32GB = 192GB total per B200 die
Memory content per DGX H100 rack system ($) | ~$120,000 | A.L. Capital Advisory analysis; NVIDIA DGX specifications | Apr 2026 | 640GB HBM2e + 2TB DDR5 DRAM + 30TB NVMe SSD at blended market ASPs
Micron HBM market share (2026E) | 12–15% | A.L. Capital Advisory estimate; TrendForce | Apr 2026 | Ramping from ~8% in 2025; SK Hynix ~50%, Samsung ~35%
Enterprise NAND ASP recovery from trough (2023–2026) | +60–80% | TrendForce; Western Digital earnings | Apr 2026 | Trough Q3 2023; AI training storage & inference cache driving enterprise SSD demand recovery
AMD MI300X estimated data center GPU market share | 5–8% | A.L. Capital Advisory estimate; IDC, Bloomberg | Apr 2026 | CUDA moat limits adoption; MI300X deployed by MSFT Azure and Meta for inference workloads
Amazon AWS 2026 capex guidance | $200B | Amazon Q4 2025 earnings call | Feb 2026 | Full-year 2026 guidance; predominantly AWS data center and AI infrastructure
Alphabet 2026 capex guidance | $180–190B | Alphabet Q1 2026 earnings call | Apr 2026 | Raised from $175–185B; includes Intersect acquisition closed March 2026. Google Cloud Q1: $20B (+63% YoY)
Meta 2026 capex guidance | $125–145B | Meta Q1 2026 earnings call | Apr 2026 | Raised from $115–135B; CFO cited higher component pricing (memory inflation) as primary driver
Microsoft 2026 capex guidance | $120B+ | Microsoft Q2 FY2026 earnings call | Feb 2026 | Azure AI infrastructure and data center build-out; exact figure not specified
Oracle 2026 capex guidance | $50B | Oracle Q3 FY2026 earnings call | Feb 2026 | OCI cloud and AI infrastructure; part of broader $100B multi-year commitment
AI-specific share of 2026 hyperscaler capex | ~75% (~$450B) | CreditSights | Nov 2025 | Excludes traditional cloud, logistics, and non-AI infrastructure
Goldman Sachs 2025–2027 hyperscaler capex projection | $1.15T | Goldman Sachs | 2025 | More than double the $477B deployed across 2022–2024
Hyperscaler capex as % of revenue | 45–57% | Introl / CreditSights analysis | Dec 2025 | Ratio previously seen only in industrial utilities and telcos
New debt issuance needed — tech sector, 2025–2027 | ~$1.5T | Morgan Stanley / J.P. Morgan | 2025 | Projected total; bridges gap between FCF and capex commitments
Project Stargate programme value | $500B | White House / OpenAI announcement | Jan 2026 | OpenAI, SoftBank, Oracle; initial $100B committed within 4 years
NVIDIA GPU market share in AI accelerators | ~90% | Introl / CreditSights | Dec 2025 | Share of AI accelerator spend; approximately 6M GPUs at ~$30K avg
AI chip power density vs CPU | 10–15× | A.L. Capital Advisory / Vertiv technical documentation | Apr 2026 | H100/B200 clusters vs. equivalent CPU rack power draw
Equinix data centers globally | 260+ | Equinix investor relations, Q4 2025 | Dec 2025 | Operational IBX data centers across 70+ metropolitan markets
Constellation Energy US nuclear capacity share | ~5% | Constellation Energy investor relations | Apr 2026 | Approximately 5% of total US electricity generation capacity from nuclear
Fiber overbuild vacancy rate (post-2001) | >20% | McKinsey & Company / JLL Research | Apr 2025 | Telecom infrastructure vacancy after the dot-com collapse; cited for structural comparison

Model Bridge

How Input Assumptions Connect to Conviction Ratings — Scoring Methodology

The conviction ratings in this paper are produced by scoring each security across five criteria, each weighted by its relative importance to long-term AI infrastructure returns. The criteria and weights are stated below. Scoring is on a 1–5 scale (5 = strongest). A weighted score of 19.0 or above qualifies for High Conviction; 13.0–18.9 for Selective; below 13.0 for Avoid.
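
A minimal sketch of that aggregation, assuming the 1–5 criterion scores scale linearly to the stated 25-point maximum (the ×5 scaling is inferred from that maximum, not separately disclosed); Vertiv's published row reproduces its 22.5/25 score exactly:

```python
# Minimal sketch of the Model Bridge aggregation: weighted score =
# sum(criterion score x weight), scaled to the stated 25-point maximum.

WEIGHTS = {
    "ai_capex_exposure":  0.30,
    "moat_durability":    0.25,
    "revenue_visibility": 0.20,
    "valuation":          0.15,
    "geo_reg_risk":       0.10,
}

def weighted_score(scores: dict) -> float:
    """Criterion scores on a 1-5 scale; returns a 0-25 scale result."""
    return 5 * sum(WEIGHTS[k] * s for k, s in scores.items())

def rating(score: float) -> str:
    """Thresholds per the Model Bridge: >=19.0 High Conviction, >=13.0 Selective."""
    if score >= 19.0:
        return "High Conviction"
    return "Selective" if score >= 13.0 else "Avoid"

# Vertiv's published criterion scores (5, 4, 5, 4, 4) reproduce its 22.5/25.
vrt = {"ai_capex_exposure": 5, "moat_durability": 4,
       "revenue_visibility": 5, "valuation": 4, "geo_reg_risk": 4}

s = weighted_score(vrt)
print(f"VRT: {s:.1f}/25 -> {rating(s)}")   # VRT: 22.5/25 -> High Conviction
```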

Company | AI Capex Exposure (30% weight) | Moat Durability (25% weight) | Revenue Visibility (20% weight) | Valuation (15% weight) | Geo / Reg Risk (10% weight) | Weighted Score | Rating
NVIDIA (NVDA) | 5 — ~90% AI accel. share; H100/B200 demand | 5 — CUDA ecosystem lock-in; software moat | 4 — Backlog 12–18mo; some export risk | 4 — Premium warranted; consensus may lag | 3 — Export controls on China material risk | 24.0 / 25 | High Conviction
Vertiv (VRT) | 5 — $1.3T Energizer pool; liquid cooling necessity | 4 — Thermal IP; hyperscaler relationships | 5 — Long-term hyperscaler contracts; backlog | 4 — Less crowded than semis; reasonable valuation | 4 — Low geopolitical exposure; US/EU footprint | 22.5 / 25 | High Conviction
Equinix (EQIX) | 5 — 2.3% vacancy; 70 metros; land scarcity | 5 — Interconnect moat unreplicable by hyperscalers | 5 — REIT long-term leases; contracted revenue | 3 — Premium EV/MW; REIT rate sensitivity | 4 — REIT structure; permitting risk in some markets | 22.0 / 25 | High Conviction
Constellation Energy (CEG) | 4 — Nuclear baseload; ~5% US electricity; AI power spec | 4 — Existing licensed nuclear fleet; new entrants 10+ years | 5 — 20-year PPAs; Microsoft TMI template | 4 — AI premium not fully priced; consensus lag | 3 — Nuclear regulation; political risk in some states | 21.0 / 25 | High Conviction
Micron Technology (MU) | 5 — HBM3E sole Western supplier; AI memory wall beneficiary | 4 — 3-supplier HBM oligopoly; DRAM/NAND cycle expertise | 4 — HBM backlog; DRAM cycle pricing power; NAND recovery | 4 — Consensus underestimates HBM mix shift; re-rating potential | 3 — China revenue ban risk; geopolitical semiconductor exposure | 21.5 / 25 | High Conviction
Microsoft (MSFT) | 4 — Azure cloud + Copilot; but also capex risk | 4 — Enterprise cloud moat; Office lock-in | 3 — Revenue building but $120B+ capex weighs | 3 — Fairly valued; ROI discipline key variable | 3 — Low geopolitical risk; some EU regulatory | 17.0 / 25 | Selective
Alphabet (GOOGL) | 4 — Google Cloud + Search AI; $180–190B capex (Q1 2026 raised) | 4 — Search moat; TPU custom silicon | 3 — Ad revenue stable; cloud inflecting | 3 — Reasonable; capex/FCF tension | 2 — DOJ antitrust; Search disruption risk | 16.0 / 25 | Selective
AMD (AMD) | 3 — MI300X challenger; 5–10pp NVDA share thesis | 3 — ROCm maturing; CUDA stickiness is real | 3 — Growing but no backlog visibility | 4 — Asymmetric if share shift materialises | 4 — Lower export control exposure than NVDA | 15.0 / 25 | Selective
Scoring scale: 5 = strongest / most favourable · 1 = weakest / most adverse. Threshold: ≥19.0 weighted = High Conviction; 13.0–18.9 = Selective; <13.0 = Avoid. This model represents A.L. Capital Advisory's analytical framework and does not constitute investment advice.

Exhibit 10 A.L.C. Framework
Conviction Scorecard: Weighted Model Scores Across 8 Securities


A.L. Capital Advisory Model Bridge output. Weighted score = Σ(criterion score × weight). Threshold: ≥19.0 = High Conviction (gold) · 13.0–18.9 = Selective (teal) · <13.0 = Avoid (red). Max score = 25.0.
Conviction scorecard: NVDA 24.0/25 High Conviction, VRT 22.5/25 High Conviction, EQIX 22.0/25 High Conviction, MU 21.5/25 High Conviction, CEG 21.0/25 High Conviction, MSFT 17.0/25 Selective, GOOGL 16.0/25 Selective, AMD 15.0/25 Selective. A.L. Capital Advisory model, April 2026.
Vertical dashed line at 19.0 = High Conviction threshold. Scores reflect April 2026 assessment. This model does not constitute investment advice — it is an analytical framework for structuring conviction decisions.

Sensitivity Analysis

Bull / Base / Bear Scenarios by Security — Key Variable Sensitivities

The following scenario tables show how the investment thesis for each high-conviction position varies under different assumptions. The base case is used throughout the main paper. The bull and bear cases are not price targets — they define the range of outcomes that would cause a material re-rating of the conviction.

Sensitivity Table A
NVIDIA (NASDAQ: NVDA) — Bull / Base / Bear Scenario Analysis


Scenario | Key Assumption | GPU Demand | Market Share | Revenue Growth (FY2026) | Conviction Impact
Bull | Blackwell B200 ramp exceeds expectations; export controls stable; inference workloads accelerate faster than efficiency gains | Sustained; backlog extends to 18+ months | 90%+ maintained | >80% YoY | Upgrade to maximum position weight
Base ★ | Healthy B200 ramp; moderate export restrictions; CUDA stickiness intact; AMD ROCm gains modest 3–5pp share | Strong; 12–18 month backlog | 85–90% | 40–60% YoY | Maintain High Conviction; current weight
Bear | Efficiency gains (DeepSeek-style) suppress training GPU demand; China export controls tighten materially; AMD gains 10pp+ share | Slowing; backlog clears faster than orders refill | <80% | 10–25% YoY | Reduce to Selective; monitor quarterly
China revenue represents approximately 20% of NVIDIA's total — the primary bear case sensitivity variable. Monitor quarterly export license disclosures.
Sensitivity Table B
Vertiv Holdings (NYSE: VRT) / Constellation Energy (NASDAQ: CEG) — Bull / Base / Bear
Ticker | Scenario | Key Variable | Assumption | Revenue Growth (FY2026E) | Conviction Impact
VRT | Bull | Liquid cooling adoption rate | 60%+ of new AI racks by 2027; immersion cooling accelerates | 40%+ YoY | Upgrade weighting
VRT | Base ★ | Liquid cooling adoption rate | 35–40% of new AI racks adopt liquid cooling; air cooling holds in legacy deployments | 25–35% YoY | Maintain High Conviction
VRT | Bear | Liquid cooling adoption rate | Air cooling innovation delays adoption; hyperscaler in-house thermal IP competes | 10–15% YoY | Reduce to Selective
CEG | Bull | Nuclear PPA pricing & volume | >$100/MWh on new PPAs; 3+ hyperscaler agreements signed in 2026 | 20%+ YoY earnings | Upgrade weighting
CEG | Base ★ | Nuclear PPA pricing & volume | $80–100/MWh; 1–2 new hyperscaler PPAs on Microsoft TMI template | 10–15% YoY earnings | Maintain High Conviction
CEG | Bear | Nuclear PPA pricing & volume | Regulatory delays on nuclear permits; PPA pricing <$75/MWh; no new agreements in 2026 | 0–5% YoY earnings | Reduce to Selective
Sensitivity Table C
Equinix (NASDAQ: EQIX) — Bull / Base / Bear
Scenario | Key Variable | N. America Vacancy | Lease Rate Δ (Renewals) | Revenue Growth | Conviction Impact
Bull | Vacancy tightens further; pricing power accelerates | <1.5% | +15%+ on renewals in VA, London, Singapore | >15% YoY | Upgrade to maximum weight; dividend growth 10%+
Base ★ | Vacancy stable at historic lows; pricing power maintained | 1.5–3.0% | +8–12% on renewals | 10–12% YoY | Maintain High Conviction; steady dividend growth
Bear | Hyperscaler self-build reduces tier-1 colo demand; vacancy rises | >5.0% | Flat to –5% on renewals | 3–6% YoY | Reduce to Selective; monitor vacancy quarterly
Key monitoring indicator across all three scenarios: JLL North America Colocation Vacancy report, published quarterly. Bear case trigger: vacancy rate rising above 5% for two consecutive quarters.
Q1 2026 marked the structural shift from capital-constrained to energy-constrained AI infrastructure deployment. Through 2024, the primary bottleneck was GPU availability and capital allocation. By early 2026, every major hyperscaler reported that new data center capacity was gated by grid interconnect timelines (18–36 months from application to energisation), transformer lead times (18–24 months), and permitting cycles — not willingness to spend. Goldman Sachs's 11 GW US capacity shortfall is fundamentally a power problem: insufficient firm, grid-connected, permitted electrical capacity exists to meet contracted hyperscaler demand at current construction pace. Companies controlling existing grid-connected capacity (EQIX's 260+ data centers) and firm dispatchable power (CEG's nuclear fleet) are in a structural scarcity position that cannot be replicated in under 5 years. The energy constraint is precisely why A.L. Capital Advisory rates CEG and EQIX at High Conviction alongside GPU-layer positions.
The AI infrastructure capex ROI question is the most contested analytical issue in technology investing in 2026. Bain & Company's framework suggests sustainable AI cloud investment requires approximately $500 billion in annual capex to generate $2 trillion in revenue — implying a 25% capex intensity. At 2026 guidance levels, the Big-5 hyperscalers are spending $660–690 billion against combined cloud and AI revenues still ramping. The payback model depends critically on: (1) the rate at which enterprises adopt and pay for AI-enabled cloud services, and (2) the productivity premium AI workloads command over standard compute. A.L. Capital Advisory monitors the hyperscaler capex-to-revenue ratio quarterly — in Q4 2025, this ranged from 34% (Amazon) to 75% (Meta). The key bull catalyst is this ratio beginning to compress, signalling AI revenue scaling to match investment. Current signal as of April 2026: amber — watch zone. Critically, for infrastructure suppliers (NVDA, VRT, EQIX, CEG), the ROI question is irrelevant — they are paid upon delivery regardless of whether the hyperscaler's own AI ROI materialises.
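
A hedged sketch of how that quarterly monitor could be encoded; the trigger thresholds follow Exhibits 11 and 12, while the function name and sample trajectories are illustrative assumptions, not A.L.C. model outputs:

```python
# Hedged sketch of the quarterly capex/revenue monitor. Trigger logic follows
# the watch-list rules: two consecutive quarter-on-quarter declines is the
# bullish ROI-inflection signal; a ratio rising above 60% is the
# discipline-breakdown warning; anything else stays in the amber watch zone.

def capex_revenue_signal(ratios: list[float]) -> str:
    """ratios: chronological quarterly capex/revenue readings (0.45 = 45%)."""
    if len(ratios) >= 3 and ratios[-1] < ratios[-2] < ratios[-3]:
        return "green: two consecutive declines -> AI revenue inflection"
    if len(ratios) >= 2 and ratios[-1] > 0.60 and ratios[-1] > ratios[-2]:
        return "red: rising above 60% -> ROI discipline breakdown risk"
    return "amber: watch zone"

print(capex_revenue_signal([0.55, 0.62, 0.68]))  # red
print(capex_revenue_signal([0.50, 0.48, 0.45]))  # green
print(capex_revenue_signal([0.44, 0.46, 0.45]))  # amber: the April 2026 state
```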
Sovereign AI refers to national programmes building domestically controlled AI infrastructure — data centers, compute clusters, and model training capacity independent of US hyperscaler platforms. The scale is significant: Saudi Arabia committed $15B at LEAP 2025 including a $10B PIF-Google Cloud partnership; the UAE, India, and Japan have multi-billion-dollar sovereign AI programmes; and Project Stargate ($500B) is itself a form of US sovereign AI investment. For the AI infrastructure thesis, sovereign AI adds a second demand layer not captured in McKinsey's commercial hyperscaler model. Goldman Sachs's 40 GW US shortfall understates total global demand when sovereign programmes are included. Direct beneficiaries: NVDA (GPU export to sovereign programmes), ASML (EUV machines for allied-nation fabs), MU (HBM memory for sovereign AI clusters), and EQIX (international colocation for sovereign deployments).
Exhibit S4 A.L.C. Original · April 2026
Micron Technology (MU) — HBM3E & DRAM Cycle Sensitivity
A.L. Capital Advisory sensitivity model, April 2026. See §09 Memory section for full HBM methodology. HBM ASP = blended HBM3E price per GB. Key variable: Samsung HBM3E yield recovery timeline; Micron allocation share from NVIDIA B200 platform.


Scenario | HBM3E ASP ($/GB) | MU Revenue FY2026E | Gross Margin | HBM Revenue | Key Trigger
Bull | $28 | ~$45B | 42–46% | ~$12B | Samsung HBM3E yield issues persist through 2026; Micron gains 18–22% share; NAND supply discipline holds
Base | $22 | ~$38B | 35–40% | ~$7.5B | Samsung partially recovers by mid-2026; Micron stabilises at 12–15% HBM share; DDR5 balanced
Bear | $16 | ~$28B | 24–28% | ~$3.5B | Samsung full HBM3E yield recovery Q2 2026; AI efficiency compresses memory demand; China ban risk widens
Not investment advice. A.L. Capital Advisory framework, April 2026. See §09 Memory section for full HBM methodology.

Capex Efficiency & Quarterly Watch List

What to Monitor Every Earnings Cycle — April 2026 Baseline

The AI capex cycle investment thesis is straightforward to track. Five metrics, updated each earnings quarter, determine whether the base case is intact, accelerating, or showing early bear-case signals. A.L. Capital Advisory monitors each figure below against the thresholds defined in the Model Bridge. The most critical single variable is the hyperscaler capex/revenue ratio — when this begins declining, it signals the AI ROI inflection that re-rates cloud infrastructure equities.

Exhibit 11 A.L.C. Original Analysis
Hyperscaler AI Capex Efficiency: Revenue vs. Spend, 2024–2026E


Revenue figures = company-reported total revenue. AI-specific revenue is estimated at ~30–40% of cloud revenue for AWS/Azure/GCP. Capex figures from CreditSights (Nov 2025) and Futurum Group (Feb 2026). Capex/Revenue ratio = total capex ÷ total revenue. A.L. Capital Advisory analysis, April 2026.
Company | 2024 Revenue | 2024 Capex | Cap/Rev 2024 | 2026E Capex | Cap/Rev 2026E | Signal
Amazon (NASDAQ: AMZN) | ~$590B | ~$75B | ~13% | $200B | ~34% | ↑ Rising — watch Q3 2026
Alphabet (NASDAQ: GOOGL) | ~$350B | ~$52B | ~15% | $180B | ~51% | ↑ Rising — second-highest ratio of four
Meta Platforms (NASDAQ: META) | ~$165B | ~$44B | ~27% | $125B | ~75% | ↑ Highest ratio of four — pure internal spend
Microsoft (NASDAQ: MSFT) | ~$245B | ~$56B | ~23% | $120B | ~49% | ↑ Rising — Azure ROI key watchpoint
Bull-case trigger: any hyperscaler showing capex/revenue declining quarter-on-quarter for two consecutive quarters signals the AI revenue inflection — this would be the single most bullish re-rating catalyst for MSFT and GOOGL. Bear-case trigger: the ratio holding above 45% into 2027 without corresponding revenue acceleration signals ROI discipline breakdown — the primary selective-position downgrade trigger.
Exhibit 12 A.L.C. Quarterly Framework
Quarterly Watch List: Five Metrics That Determine Whether the AI Capex Cycle Stays on Track


A.L. Capital Advisory monitoring framework, updated each earnings cycle. Thresholds set against McKinsey base-case assumptions and JLL vacancy data.
Metric | April 2026 Reading | Base-Case Range | Bull Signal | Bear Trigger | Source · Cadence | Position Impact
N. America colo vacancy | 2.3% | <3% healthy · <6% neutral | <1.5% — pricing power maximum | >6% — oversupply entering market | JLL Research · Quarterly | EQIX · CEG land value
Hyperscaler capex/revenue | 45–57% | Declining from 2027 = base | Ratio declining = ROI inflection | Rising >60% into 2027 = discipline breakdown | Earnings calls · Quarterly | MSFT · GOOGL rating
Enterprise AI deployment | Early stage | Scale deployment by end-2027 | Fortune 500 AI ROI disclosures >20% | Enterprise pilots cancelled at scale | Earnings · Industry surveys · Quarterly | Demand curve scenario
NVIDIA GPU lead times | 12–18 months | 8–18 months = healthy demand | >18 months — demand acceleration | <4 months — demand slowdown signal | NVDA earnings · Analyst checks · Quarterly | NVDA conviction level
Nuclear PPA pricing | $80–100/MWh | $80–100/MWh = base case | >$100/MWh — power scarcity premium | <$60/MWh — regulatory or gas competition | CEG earnings · DOE data · Quarterly | CEG earnings upgrade/downgrade
A.L. Capital Advisory updates this watch list after each major earnings cycle (approximately February, May, August, November). The five metrics are the minimum necessary to determine whether the conviction hierarchy requires revision. No single metric in isolation is sufficient — the full picture requires all five readings simultaneously.
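
For completeness, the watch list reduces to data plus trigger predicates. A minimal sketch, assuming the readings in Exhibit 12 (upper bounds of ranges); the qualitative enterprise-AI-deployment metric is omitted because it carries no numeric trigger:

```python
# Minimal sketch: the Exhibit 12 watch list as data plus bear-trigger
# predicates. Upper bounds of the April 2026 reading ranges are used.

watchlist = [
    # (metric, April 2026 reading, bear-trigger predicate)
    ("N. America colo vacancy (%)",   2.3,   lambda v: v > 6.0),
    ("Hyperscaler capex/revenue (%)", 57.0,  lambda v: v > 60.0),
    ("NVIDIA GPU lead time (months)", 18.0,  lambda v: v < 4.0),
    ("Nuclear PPA pricing ($/MWh)",   100.0, lambda v: v < 60.0),
]

bear_flags = [name for name, value, triggered in watchlist if triggered(value)]
print("Bear triggers:", bear_flags if bear_flags else "none - base case intact")
```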
A.L.C. Proprietary Insight — April 2026

The most underappreciated dynamic in the AI capex cycle is the asymmetry between Energizer positions and Technology Developer positions. NVIDIA's revenue depends on whether hyperscalers keep buying GPUs — a decision driven by enterprise AI monetisation, competition, and export controls. Vertiv's and Constellation Energy's revenue depends on whether data centers keep consuming power and cooling — a physical requirement that exists regardless of which AI model wins, which cloud platform dominates, or which semiconductor generation is current. Power consumption does not have a "DeepSeek moment." The Energizer archetype's structural durability explains why VRT and CEG carry the highest conviction durability score in the A.L. Capital Advisory Model Bridge, despite being less widely owned than NVDA in institutional AI baskets.


Frequently Asked Questions

AI Capex Cycle · Hyperscaler Spending · Investment Case · Stock Analysis
The AI capex cycle refers to the coordinated surge in capital expenditure by hyperscalers and data center operators to build the physical infrastructure required to train and run large AI models. Q1 2026 earnings confirmed the Big-5 hyperscalers — Amazon, Alphabet, Meta, Microsoft, and Oracle — will collectively spend approximately $725 billion on AI infrastructure in 2026 (up from the pre-earnings consensus of $660–690 billion), nearly 3× the $256 billion deployed in 2024. McKinsey projects the cycle will require $6.7 trillion in global data center capex by 2030, with AI workloads driving approximately 70% of total demand. The AI capex cycle is expected to extend through at least 2030, with AI accelerator refresh cycles of 3–4 years sustaining ongoing investment even after the initial build-out phase.
AI infrastructure spending is accelerating in 2026, not slowing. Q1 2026 earnings (April 29, 2026) confirmed the Big-5 hyperscalers will spend approximately $725 billion in combined capex for 2026, a ~64% increase over 2025 levels. Amazon is targeting $200 billion (reaffirmed), Alphabet raised to $180–190 billion, Meta raised to $125–145 billion (citing memory price inflation), and Microsoft guided to ~$190 billion CY2026. Goldman Sachs estimates the US alone faces a data center capacity shortfall exceeding 11 gigawatts currently, growing beyond 40 GW by 2028. CreditSights estimates roughly 75% of 2026 hyperscaler capex — approximately $545 billion — is AI-specific infrastructure.
The structural evidence argues against the bubble comparison. North American colocation vacancy has fallen to 2.3% in H1 2025 (JLL Research), compared to over 20% during the fiber glut of 2001–2003. Three factors distinguish the AI capex cycle: first, data centers are contracted before construction begins — hyperscalers sign lease agreements before shovels enter the ground. Second, AI accelerators have 3–4 year refresh cycles, meaning any temporary overcapacity becomes obsolescence rather than stranded assets. Third, ongoing operating costs of data centers are high regardless of utilization, creating natural demand absorption that the fiber overbuild never required.
Q1 2026 earnings (April 29, 2026) confirmed the Big-5 hyperscalers — Amazon, Alphabet, Meta, Microsoft, and Oracle — are spending approximately $725 billion in combined capital expenditure for 2026, with approximately 75% (~$545 billion) directed at AI-specific infrastructure. Updated individual guidance: Amazon $200 billion (reaffirmed), Alphabet $180–190 billion (raised, includes Intersect acquisition), Meta $125–145 billion (raised, memory inflation cited), Microsoft ~$190 billion CY2026 ($25B from component pricing), and Oracle ~$50 billion. This represents approximately 64% growth over 2025's ~$443 billion and nearly 3× the $256 billion deployed in 2024. Google Cloud Q1 revenue: $20B (+63% YoY). AWS Q1 revenue: $37.6B (+28% YoY). Azure AI annual run-rate: $37B (+123% YoY).
A.L. Capital Advisory's conviction hierarchy identifies five high-conviction positions across the AI infrastructure stack: NVIDIA (NASDAQ: NVDA) as the dominant GPU supplier capturing approximately 90% of AI accelerator spend; Vertiv Holdings (NYSE: VRT) in critical power and thermal management, where AI chips running at 10–15× CPU power density make liquid cooling non-discretionary; Equinix (NASDAQ: EQIX) as the premier colocation operator with 260+ data centers across 70 metros and interconnect moats hyperscalers cannot replicate; Constellation Energy (NASDAQ: CEG) as the nuclear baseload solution to AI's carbon-free power requirement; and Micron Technology (NASDAQ: MU) as the only Western-listed HBM supplier at the heart of AI's memory bottleneck. This analysis does not constitute investment advice — investors should conduct their own due diligence.
Project Stargate is a $500 billion AI infrastructure programme announced in January 2026, backed by OpenAI, SoftBank, and Oracle, with an initial $100 billion targeted for US deployment within four years. Stargate represents a new category of government-adjacent sovereign AI demand not captured in McKinsey's base-case demand model, raising the structural demand floor above prior projections. Constellation Energy, Equinix, and NVIDIA are the most direct beneficiaries among the five high-conviction positions identified in this paper.
Vertiv Holdings (NYSE: VRT) is the global leader in critical power and thermal management systems for data centers. NVIDIA's H100 and B200 GPU clusters operate at 10–15× the power density of traditional CPU infrastructure, making Vertiv's liquid cooling systems a technical necessity rather than an optional upgrade. Vertiv holds leading positions in both direct-to-chip liquid cooling and immersion cooling — the two technologies McKinsey identifies as essential for the $1.3 trillion Energizer archetype capex pool. Long-term hyperscaler contracts provide revenue visibility. Vertiv carries a High Conviction rating with a weighted model score of 22.5/25 in A.L. Capital Advisory's 2026 AI infrastructure framework.
Equinix (NASDAQ: EQIX) benefits from the AI capex cycle through physical scarcity and interconnect moats. North American colocation vacancy has fallen to 2.3% (JLL Research, H1 2025), giving Equinix's 260+ data centers across 70 metros significant pricing power. KKR's infrastructure framework specifically identifies entitled land in super-core markets and operational hyperscaler relationships as the hardest barriers to replicate — both of which Equinix holds. London, Singapore, and Northern Virginia assets command premium EV/MW multiples that widen as vacancy tightens. The REIT structure provides dividend yield alongside secular AI infrastructure growth. Equinix scores 22.0/25 in the A.L. Capital Advisory conviction model.
The hyperscaler capex/revenue ratio measures how much of each dollar of revenue the Big-5 cloud companies are reinvesting in AI infrastructure. In 2024, ratios ranged from 13–27%. By 2026, guidance implies ratios of 34–75%, with Meta at the extreme given it spends $125B against approximately $165B in revenue. A rising ratio is an amber signal — hyperscalers are spending ahead of AI revenue materialisation. The key bull catalyst: the ratio beginning to decline signals that AI cloud revenue is scaling to match infrastructure investment. A.L. Capital Advisory monitors this quarterly for rating changes on MSFT and GOOGL. Current April 2026 reading: amber — watch zone.
Vertiv (NYSE: VRT) and Constellation Energy (NASDAQ: CEG) belong to the Energizer archetype — companies whose revenue depends on data centers consuming power and cooling, not on which AI model wins or which chip generation dominates. NVIDIA's revenue depends on hyperscalers continuing to buy GPUs — subject to efficiency gains, export controls, and AMD competition. Vertiv's liquid cooling systems and CEG's nuclear baseload power are required regardless of whether the winning AI model is from OpenAI, Google, or a Chinese competitor. Data center power consumption has no DeepSeek moment. This structural durability — Energizer revenue is decoupled from AI competitive dynamics — is why VRT and CEG carry 22.5/25 and 21.0/25 conviction scores respectively in the A.L. Capital Advisory Model Bridge, and why the Energizer archetype is structurally underowned in most institutional AI baskets.
The binding constraint on NVIDIA Blackwell B200 supply in 2026 is not the GPU die itself — it is CoWoS-L (Chip-on-Wafer-on-Substrate) advanced packaging at TSMC (TSM). CoWoS integrates the B200 GPU die with six stacks of HBM3E memory into a single thermal module. TSMC's CoWoS capacity is expanding from approximately 9,000 wafers per month in 2024 to an estimated 28,000 by end-2026 — but hyperscaler demand is tracking above this build rate. Advanced packaging tools have 12–18 month manufacturing lead times themselves, creating a supply lag that cannot be closed within a single calendar year. TSMC is the world's only CoWoS supplier at volume scale for GPU-class packages. The GPU shortage resolves when CoWoS capacity normalises relative to hyperscaler demand — A.L. Capital Advisory base case: gradual supply improvement through H2 2026, partial equilibrium by mid-2027.
High Bandwidth Memory 3E (HBM3E) is the stacked DRAM architecture that sits beside every AI accelerator and provides the memory bandwidth that AI training requires. A single NVIDIA B200 GPU uses six HBM3E stacks delivering 1.2 TB/s per stack — approximately 4–5× the bandwidth of GDDR6 alternatives. HBM3E matters for the investment thesis for two reasons: first, every GPU generation requires more HBM stacks, creating a permanent and escalating demand for Micron Technology, SK Hynix, and Samsung. Second, HBM3E carries an ASP of approximately $22 per GB versus $3–4 per GB for standard DDR5 DRAM — making HBM the highest-margin product in memory history. Micron is the only Western-listed HBM supplier and enters 2026 as the qualified second supplier to NVIDIA for B200 systems — a structural position that the A.L. Capital Advisory model values at 21.5/25 conviction score, High Conviction rating.
Micron Technology Inc. (NASDAQ: MU) is A.L. Capital Advisory's fifth High Conviction position and the one with the widest gap between current consensus and structural opportunity. Three independent pillars compound simultaneously: (1) HBM revenue inflection — Micron's HBM3E production is ramping as the qualified second supplier to NVIDIA's B200 platform, with HBM gross margins of 50–55% versus Micron's historical blended 25–35%; (2) DRAM pricing cycle — HBM production cannibalises standard DRAM supply capacity, driving DDR5 server pricing higher through 2026 as data center demand grows; (3) NAND recovery — enterprise SSD pricing has recovered 60–80% from the 2023 trough, driven by AI training storage and inference cache demand. The key risk: China revenue ban (~16% of FY2023 revenue) and Samsung HBM3E yield recovery compressing ASPs. A.L. Capital Advisory weighted Model Bridge score: 21.5/25. This does not constitute investment advice.
Advanced Micro Devices (NASDAQ: AMD) is the most credible challenger to NVIDIA's AI accelerator dominance, but the competitive gap remains wide. The MI300X's 192GB unified HBM3 memory pool outperforms the H100 on very large model inference (70B+ parameters), and Microsoft Azure and Meta have both deployed MI300X at scale. However, three structural advantages protect NVIDIA: CUDA software maturity (AMD's ROCm covers ~85–90% of inference operators but meaningfully less for training); TSMC CoWoS allocation priority (NVIDIA receives preferential packaging access as the larger revenue customer); and ecosystem lock-in (PyTorch, JAX, and TensorFlow all optimise natively for CUDA). A.L. Capital Advisory base case: AMD captures 8–12% of the AI accelerator market by 2027, generating $15–20 billion in data center GPU revenue — material but not a structural threat to NVIDIA's High Conviction rating. AMD carries a Selective rating (15.0/25) in the A.L. Capital Advisory Model Bridge.

Update History

  1. Apr 30 2026 Version 2 — Q1 2026 earnings update. Big-5 hyperscaler capex revised to ~$725B (up from $660–690B pre-earnings consensus). Alphabet raised to $180–190B (Intersect acquisition), Meta raised to $125–145B (memory inflation), Microsoft guided ~$190B CY2026. Google Cloud Q1: $20B (+63% YoY). AWS Q1: $37.6B (+28% YoY). Breaking Intelligence section added. Micron elevated to fifth High Conviction position. All exhibits and data appendix updated.
  2. Feb 01 2026 Version 1 — Initial publication. AI capex cycle analysis based on pre-Q1 2026 guidance ($660–690B consensus). Four High Conviction positions: NVDA, VRT, EQIX, CEG. Full McKinsey demand model, Goldman Sachs capacity shortfall, JLL vacancy data, Project Stargate programme analysis.

References

  1. McKinsey & Company. "The cost of compute: A $7 trillion race to scale data centers." Jesse Noffsinger, Mark Patel, Pankaj Sachdeva. TMT Practice, April 2025.
  2. JLL Research. North America Colocation Vacancy, H1 2025. Published June 2025.
  3. KKR Global Infrastructure. "Beyond the Bubble: Why We Think AI Infrastructure Will Compound Long after the Hype." November 2025.
  4. Goldman Sachs. "Powering the AI Era." 2025. Cited via Empower Investment Insights, 2025.
  5. CreditSights. "Technology: Hyperscaler Capex 2026 Estimates." November 25, 2025.
  6. Futurum Group (Nick Patience). "AI Capex 2026: The $690B Infrastructure Sprint." February 12, 2026. Updated post-Q1 2026 earnings: consensus revised to ~$725B (FT, April 30, 2026).
  7. Morgan Stanley / J.P. Morgan. AI Infrastructure Debt Issuance Projections, 2025. Cited via Introl Blog, December 2025.
  8. Bain & Company. AI Infrastructure Capital Intensity Research, 2025. Cited via Empower Investment Insights.
  9. White House / OpenAI. Project Stargate Announcement. January 2026.
  10. Morningstar. "AI Arms Race: How Tech's Capital Surge Will Reshape the Investment Landscape in 2026." December 12, 2025.
  11. State Street Global Advisors (SSGA). "Why the AI CapEx Cycle May Have More Staying Power Than You Think." November 17, 2025.
  12. U.S. Bureau of Labor Statistics. GDP and capex share data. Bloomberg terminal data as of June 30, 2025 (cited via KKR GMAA).
  13. DeepSeek V3 efficiency claims: TechCrunch, January 27, 2025; Artificial Analysis, January 27, 2025.
  14. All stock-specific analysis, conviction ratings, and projections represent independent views of A.L. Capital Advisory. Not investment advice.
  15. A.L. Capital Advisory Historical Infrastructure Cycles Analysis. Peak capex as % of US GDP: Railroads 1880s (BLS, Federal Reserve historical data); Electrification 1920s (BLS, NBER Macrohistory Database); Fiber & Telecom 2000 peak (BLS, KKR GMAA, Bloomberg); AI Infrastructure 2026E (CreditSights, Futurum Group). GDP denominator: US nominal GDP at each cycle peak, Federal Reserve Economic Data (FRED). Methodology and calculations original to A.L. Capital Advisory, April 2026.
  16. A.L. Capital Advisory Capex Efficiency Analysis. Revenue figures sourced from company-reported annual results (Amazon FY2024 $590B, Alphabet FY2024 $350B, Meta FY2024 $165B, Microsoft FY2024 $245B). Capex figures: CreditSights November 2025, Futurum Group February 2026. Capex/Revenue ratio and AI-specific revenue estimates are A.L. Capital Advisory calculations, April 2026. Not investment advice.
  17. A.L. Capital Advisory Quarterly Watch List Framework. Vacancy threshold methodology derived from JLL Research historical data. GPU lead-time ranges sourced from NVIDIA earnings calls and analyst channel checks. Nuclear PPA pricing ranges from Constellation Energy investor relations and DOE Energy Information Administration. Enterprise deployment assessment is A.L. Capital Advisory qualitative judgement based on public earnings disclosures. Framework original to A.L. Capital Advisory, April 2026.
Investment Disclaimer
In Plain English: This is research, not a buy recommendation. It is written by a CFA Charterholder for educational purposes. Do not invest based solely on this analysis — consult your own financial advisor. The author may hold positions in the securities discussed. This is not regulated investment advice under MiFID II.
This report is published by A.L. Capital Advisory for informational and educational purposes only. It does not constitute investment advice, a solicitation to buy or sell any security, or a recommendation to take any specific investment action. All analysis, projections, and opinions expressed are those of the author and are subject to change without notice. Past performance is not indicative of future results. Investing involves risk, including the possible loss of principal. Readers should conduct their own due diligence and consult with a qualified financial advisor before making any investment decisions. References to specific securities (NVDA, VRT, EQIX, CEG, MSFT, GOOGL, AMD) are for illustrative purposes and do not constitute a recommendation to buy or sell those securities. This content does not constitute regulated investment advice under MiFID II or FCA guidelines and is not intended for US persons, residents of jurisdictions where its distribution would be contrary to local law or regulation, or residents of Finland, Sweden, Norway, Denmark, Iceland, or Poland. The author may hold long or short positions in securities mentioned in this report. Nothing in this report represents a solicitation to buy or sell any security.