The AI Capex Cycle: $725B Hyperscaler Buildout and the Five High-Conviction Positions
The AI capex cycle has accelerated beyond February 2026 projections. Following Q1 2026 earnings, the Big-5 hyperscalers now guide approximately $725 billion in 2026 capex — roughly 64% above 2025 — while Goldman Sachs documents a US data center capacity shortfall exceeding 11 GW today, widening to 40 GW by 2028. The McKinsey $6.7 trillion demand framework remains the base case, but the pace of deployment is tracking the accelerated scenario. NVDA, VRT, EQIX, CEG, and MU are the five high-conviction positions across the Energizer, Technology Developer, Memory, and Operator archetypes.
Q1 2026 Earnings Confirmed — Big-5 capex revised to ~$725B: Four of the Big-5 hyperscalers reported on April 29. Alphabet raised to $180–190B (partially due to Intersect acquisition); Google Cloud Q1 revenue $20B (+63% YoY), backlog $462B. Microsoft guided to ~$190B CY2026 ($25B from component pricing); Azure +40% YoY, AI business annual run-rate $37B (+123%). Meta raised to $125–145B, explicitly citing memory price inflation as the primary driver. Amazon reaffirmed $200B; AWS $37.6B Q1 (+28% YoY), backlog $364B + $100B+ Anthropic compute deal. Combined Big-5 post-earnings consensus: ~$725B, up from $660–690B pre-earnings guidance. Sources: Q1 2026 earnings calls, FT, April 30 2026.
Google Cloud Next 2026 (April 22): Alphabet announced TPU 8t (3× Ironwood throughput; 9,600-TPU superpod) plus NVIDIA Blackwell Ultra and Vera Rubin GPU instances for 2027 — the first public confirmation of commercial availability for NVIDIA's post-Blackwell architecture. Directly reinforces NVDA, EQIX, and MU (HBM demand escalates per GPU generation).
YTD 2026 conviction performance (Yahoo Finance, April 24): VRT +61.8% · MU +32.3% · EQIX +1.3% · NVDA −5.0% (backlog intact) · CEG flat (nuclear PPA pipeline expanding). Past performance does not guarantee future results. Not investment advice.
Key Takeaways — AI Infrastructure Capex Cycle, April 2026
- ~$725B confirmed in 2026, ~$880B in 2027E, ~$1.06T in 2028P. Q1 2026 earnings raised the consensus from $660–690B to ~$725B — a ~64% YoY increase. Amazon leads at $200B; Microsoft raised to ~$190B CY2026; Meta raised to $125–145B; Alphabet raised to $180–190B. ~75% is AI-specific (~$545B). Combined spend is close to Switzerland's entire annual GDP.
- Supply cannot keep pace. North American colocation vacancy: 2.3% (JLL, H1 2025) — down from 9.8% in 2020. Goldman Sachs documents an 11 GW US shortfall today, widening to 40+ GW by 2028. McKinsey's $6.7T build-out requires 125 GW of incremental AI capacity by 2030. Physical infrastructure is the binding constraint, not demand.
- Five High Conviction positions across the stack: NVDA (GPU monopoly, 90% AI accelerator share) · VRT (liquid cooling, non-discretionary at 10–15× CPU power density) · EQIX (260+ data centers, 2.3% vacancy pricing power) · CEG (nuclear baseload, carbon-free PPAs) · MU (HBM3E, the binding constraint on B200 GPU output).
- Energizer & Memory archetypes are structurally underowned. VRT, CEG, and MU earn revenue from physical consumption of power, cooling, and memory — independent of which AI model or chip generation wins. Data centers need power and HBM regardless of DeepSeek. These positions carry no model-competition risk; NVDA does.
- Not the 1990s fiber overbuild — but capex/revenue ratios are a watch signal. Vacancy at 2.3% vs 20%+ in 2001; contracts precede construction; accelerator refresh cycles absorb oversupply. However, hyperscaler capex-to-revenue ratios of 34–75% are utility-level. MSFT and GOOGL rated Selective, not High Conviction, for this reason.
The AI capex cycle is unlike any prior infrastructure investment wave — not in scale alone, but in structure. By 2030, the global data center build-out will require $6.7 trillion in capital expenditure (McKinsey, April 2025), with approximately 70% attributable to AI workloads. In 2026, the Big-5 hyperscalers — Amazon, Alphabet, Meta, Microsoft, and Oracle — will spend approximately $725 billion on AI infrastructure collectively, a figure approaching the entire annual GDP of Switzerland. Goldman Sachs documents that US data centers already face a capacity shortfall of more than 11 gigawatts today, with the cumulative gap expected to exceed 40 GW by 2028. North American colocation vacancy has fallen to 2.3% — a level at which pricing power is structural, not cyclical.
The AI data center cycle is regularly compared to the 1990s fiber overbuild. The analogy is seductive and fundamentally misleading. Understanding precisely why is the difference between capturing a decade-long compounding trade and being burned by a narrative that looked good on paper. This paper maps the full AI capex cycle, integrates the April 2026 data update, and identifies the five positions that capture the value across the infrastructure stack.
What Is the AI Capex Cycle?
McKinsey's research shows global demand for data center capacity could almost triple by 2030, with approximately 70% of that demand driven by AI workloads. Total projected capital expenditure: $6.7 trillion, of which $5.2 trillion is attributable to AI processing loads and $1.5 trillion to traditional IT applications.
The AI capex cycle is defined by the hyperscalers — the companies that build and operate the cloud infrastructure on which AI runs. Amazon, Alphabet, Meta, Microsoft, and Oracle collectively spent $256 billion on capital expenditure in 2024. The Big-5 estimate for 2025 is $443 billion, a 73% YoY increase. Q1 2026 earnings (reported April 29, 2026) confirmed the combined 2026 figure at approximately $725 billion — a ~64% increase over 2025 — with approximately 75% of that spend (~$545 billion) directed at AI-specific infrastructure: GPUs, servers, data center construction, power systems, and cooling. Alphabet raised guidance to $180–190B (partially due to the Intersect acquisition); Meta raised to $125–145B, explicitly citing memory price inflation; Microsoft guided to ~$190B CY2026 ($25B attributed to higher component pricing); Amazon reaffirmed $200B. See Exhibit 1 for the full company-level breakdown. Amazon's 2026 capex of $200 billion alone exceeds the combined annual capex of the entire publicly traded US energy sector. As a share of GDP, AI-related capital formation now sits at approximately 5% — a level last seen during the late-1990s technology boom, but with a structurally different demand foundation (see Section 02).
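The headline growth rates follow directly from the combined capex totals; a minimal arithmetic check, using only the figures cited above:

```python
# Sanity-check the YoY growth rates implied by the combined Big-5 capex
# figures cited above (USD billions): 2024 actual, 2025 estimate,
# post-earnings 2026 consensus.
capex = {2024: 256, 2025: 443, 2026: 725}

def yoy_growth(series: dict[int, float], year: int) -> float:
    """Year-over-year growth rate as a percentage."""
    return (series[year] / series[year - 1] - 1) * 100

print(f"2025 vs 2024: {yoy_growth(capex, 2025):.0f}%")            # ~73%
print(f"2026 vs 2025: {yoy_growth(capex, 2026):.0f}%")            # ~64%
print(f"AI-specific share of 2026 spend: ~${0.75 * capex[2026]:.0f}B")  # ~$544B
```

The ~73% and ~64% figures quoted in the text fall straight out of the ratios, as does the ~$545B AI-specific figure at a 75% share.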
Exhibit 2 · Global Data Center Capacity Demand: AI vs. Non-AI Workloads, 2025–2030 (GW)
| Year | Total Capacity (GW) | AI Workloads (70%) | Non-AI Workloads (30%) | YoY Growth |
|---|---|---|---|---|
| 2025 | 82 GW | ~57 GW | ~25 GW | Baseline |
| 2026 | 105 GW | ~74 GW | ~32 GW | +28% |
| 2027 | 137 GW | ~96 GW | ~41 GW | +30% |
| 2028 | 163 GW | ~114 GW | ~49 GW | +19% |
| 2029 | 191 GW | ~134 GW | ~57 GW | +17% |
| 2030 | 207 GW | ~145 GW | ~62 GW | +8% |
McKinsey constructed three scenarios ranging from constrained to accelerated demand, shaped by semiconductor supply constraints, enterprise AI adoption rates, efficiency improvements, and regulatory challenges. The base case — $5.2 trillion in AI data center capex — assumes continued growth without runaway acceleration or structural constraints.
Exhibit 3 · Three AI Infrastructure Investment Scenarios, 2025–2030
| Scenario | Drivers | Incremental GW | AI Capex | Total (AI + Non-AI) |
|---|---|---|---|---|
| Accelerated | Transformative AI adoption; enterprise integration across all sectors; no supply constraints | 205 GW | $7.9T | $9.4T est. |
| Base Case ★ BASE | Continued growth; moderate enterprise adoption; some efficiency gains offset demand | 125 GW | $5.2T | $6.7T |
| Constrained | Supply chain bottlenecks; slower enterprise deployment; AI efficiency gains suppress demand | 78 GW | $3.7T | $5.2T est. |
Exhibit 4 A.L.C. Original Analysis · AI Data Center Demand vs. Supply at Current Construction Pace: The Growing Capacity Gap, 2025–2030 (GW)
Which stocks benefit most from the 40 GW US data center power gap?
Goldman Sachs documents a current US data center capacity shortfall exceeding 11 GW — widening to over 40 GW by 2028. At the current construction pace of approximately 15 GW of new builds per year, the physical gap cannot close before 2030. The investment implication is direct: companies that own existing, entitled, grid-connected data center capacity and energy infrastructure are in a structural scarcity position that cannot be replicated on a 2–3 year horizon. Four positions capture this gap across the supply chain:
- EQIX — Equinix
- 260+ data centers across 70 metros globally. North American vacancy at 2.3% gives Equinix structural pricing power on every new lease. Entitled land in super-core markets (Northern Virginia, London, Singapore, Frankfurt) cannot be replicated in under 5 years. The 40 GW gap is Equinix's pricing moat made quantifiable.
- CEG — Constellation Energy
- Nuclear baseload power is the only carbon-free generation source capable of meeting hyperscalers' 24/7 clean energy requirements at scale. Hyperscalers cannot build new nuclear — lead time is 10–15 years. Constellation's existing fleet commands a PPA premium that widens as the power gap expands. The 40 GW shortfall means more demand for CEG's contracted output, not less.
- VRT — Vertiv Holdings
- Every new gigawatt of AI data center capacity requires critical power and thermal management infrastructure. Vertiv's liquid cooling systems are non-discretionary — AI GPU clusters at 10–15× CPU power density physically cannot operate without direct-to-chip cooling. Backlog of hyperscaler contracts extends to 2027. Each gigawatt of gap closed is direct Vertiv revenue.
- NVDA — NVIDIA
- Every data center that closes the capacity gap must be equipped with AI accelerators. NVIDIA captures approximately 90% of AI GPU spend — meaning every gigawatt of new AI data center capacity translates into GPU procurement. The 40 GW shortfall, if closed by 2030, represents approximately $480 billion in incremental GPU and server investment at current rack densities. The capacity gap is a GPU demand guarantee.
A.L. Capital Advisory conviction scores: EQIX 22.0/25 · CEG 21.0/25 · VRT 22.5/25 · NVDA 24.0/25. This analysis does not constitute investment advice. See Model Bridge section for full methodology.
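The gap arithmetic above can be reproduced in a few lines; note that the per-gigawatt capital intensity is inferred from the cited figures ($480B over 40 GW), not a disclosed constant:

```python
# Back-of-envelope: translate the documented US capacity gap into implied
# GPU/server spend. The ~$12B-per-GW intensity is inferred from the figures
# above ($480B / 40 GW); it is an illustrative assumption, not a disclosed rate.
GAP_GW = 40                   # Goldman Sachs projected 2028 shortfall
CAPEX_PER_GW_B = 480 / 40     # implied $B of GPU + server investment per GW

implied_spend_b = GAP_GW * CAPEX_PER_GW_B
print(f"Implied incremental GPU/server spend: ${implied_spend_b:.0f}B")  # $480B

# At the current construction pace (~15 GW/yr), years of building to add 40 GW:
years_to_close = GAP_GW / 15
print(f"Years of building needed at current pace: {years_to_close:.1f}")  # ~2.7
```

The ~2.7-year figure assumes all new builds go toward closing the gap; since demand keeps growing over the same period, the actual close date slips past 2030, as the text notes.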
Exhibit 5 April 2026 Update · Big-5 Hyperscaler AI Capex: 2024 Actual vs. 2025 Estimate vs. 2026 Guidance (USD billions)
| Company | Ticker | 2024 Actual | 2025 Estimate | 2026 Guidance | 2024→2026 Δ | Primary AI Focus |
|---|---|---|---|---|---|---|
| Amazon / AWS | NASDAQ: AMZN | ~$75B | ~$104B | $200B | +167% | AWS AI training & inference, logistics AI |
| Microsoft | NASDAQ: MSFT | ~$56B | ~$80B | $120B+ | +114% | Azure OpenAI, Copilot, data center expansion |
| Oracle | NYSE: ORCL | ~$9B | ~$20B | $50B | +456% | OCI AI cloud, Stargate programme partner |
| Big-5 Combined | 5 companies | ~$256B | ~$443B (+73%) | ~$725B | +183% | ~75% AI-specific (~$545B) |
Is AI Infrastructure a Bubble?
The analogy to the late-1990s telecommunications infrastructure bubble is compelling in one dimension — the scale of capital deployment — and misleading in every other. Fiber in the 1990s was built speculatively, with virtually unlimited capacity once laid and zero refresh requirement. Data centers are physically constrained, contractually committed before construction, and subject to accelerated depreciation cycles that naturally absorb any temporary excess. The evidence is visible in vacancy data: North American colocation vacancy has fallen from 9.8% in 2020 to 2.3% in H1 2025 (JLL Research), while the fiber glut post-2001 saw vacancy exceed 20%.
The key structural difference McKinsey identifies is the cost of carrying excess capacity. Fiber, once laid, is nearly free to maintain. Data centers are the opposite: power, cooling, and maintenance are ongoing high costs regardless of utilization. But crucially, AI accelerators have 3–4 year refresh cycles — meaning any overcapacity is rapidly converted into obsolescence, and new workloads pull spare capacity well before it becomes stranded.
The Debt Market Shift
The AI capex cycle has introduced a structural change to hyperscaler financial models that was not present in prior technology investment waves: the shift from cash-funded to debt-funded infrastructure. Amazon, Alphabet, Meta, Microsoft, and Oracle spent a combined $256 billion on capex in 2024. Q1 2026 earnings confirmed the combined 2026 figure at approximately $725 billion. Internal free cash flow cannot scale at that rate. As a result, hyperscalers have collectively turned to debt markets at a scale not seen since the 2001 telecom build-out — but with materially stronger balance sheets underpinning the leverage.
Morgan Stanley and J.P. Morgan project the technology sector will need to issue approximately $1.5 trillion in new debt over the next three years to fund the AI infrastructure build-out. The hyperscalers now spend 45–57% of revenue on capex — ratios previously seen only in capital-intensive industrial utilities and telecommunications companies. Amazon's 2026 capex guidance of $200 billion alone exceeds the combined annual capex of the entire publicly traded US energy sector.
The investment implication is dual-edged. Debt-funded hyperscaler capex raises the structural demand floor for AI infrastructure suppliers — contracts are signed, purchase orders placed, and delivery timelines locked in regardless of short-term sentiment shifts. However, rising leverage also introduces a new risk layer for hyperscaler equity positions themselves: if AI revenue monetisation disappoints, the debt service burden will suppress free cash flow precisely when investor patience is shortest. The debt burden is precisely why Microsoft (MSFT) and Alphabet (GOOGL) carry a Selective rather than High Conviction rating in A.L. Capital Advisory's framework — the infrastructure beneficiaries (NVDA, VRT, EQIX, CEG) capture the upside without carrying the balance sheet risk of the buyers.
Is AI infrastructure now energy-constrained rather than capital-constrained?
Yes — this is the most consequential structural shift of Q1 2026. Through 2024, the primary constraint on AI infrastructure deployment was capital allocation and GPU supply. By Q1 2026, that constraint shifted to reliable power at scale. Every hyperscaler now reports that data center expansion is gated by grid interconnect timelines (18–36 months from application to energisation), transformer lead times (18–24 months), and permitting cycles — not by willingness to spend or GPU availability. Goldman Sachs's documented 11 GW US capacity shortfall is not primarily a real estate or construction problem. The 40 GW shortfall is fundamentally a power problem.
The investment implication: companies controlling existing grid-connected capacity and firm dispatchable power are in a structural scarcity position that cannot be replicated in under 5 years. CEG's nuclear fleet — operating 24/7 at near-100% availability — is the only carbon-free source meeting the "firm, dispatchable, always-on" specification hyperscalers require. Meta's nuclear PPA, Amazon's nuclear offtake expansion, and Microsoft's Three Mile Island restart confirm that nuclear power is now an operational requirement for AI infrastructure at scale, not an ESG preference. EQIX's 260+ permitted, grid-connected data centers similarly represent a 5-year-to-replicate physical moat that widens under an energy-constrained regime.
Hyperscalers now spend 45–57% of revenue on capex (vs. 10–15% in 2020). The technology sector faces an estimated $1.5 trillion in new debt issuance over 2025–2027 (Morgan Stanley / J.P. Morgan). Bain calculates that sustaining the current investment trajectory requires approximately $500 billion in annual spend to generate roughly $2 trillion in revenue — a 4× revenue multiple on capital that has not yet been demonstrated at scale. The gap between the capital being deployed and the revenue being generated is the central risk to monitor across the AI capex cycle.
Project Stargate and the Geopolitical Layer
In January 2025, OpenAI, SoftBank, and Oracle announced Project Stargate — a $500 billion AI infrastructure programme targeting US deployment over four years, with an initial $100 billion committed immediately. Stargate represents a category of demand that does not appear in McKinsey's April 2025 base-case model: sovereign and government-adjacent AI infrastructure, funded at national-strategy scale rather than commercial return logic alone.
Stargate is not an isolated event. Saudi Arabia's Public Investment Fund has committed to a $40 billion AI infrastructure programme. The UAE has established G42 as a sovereign AI entity with data centre commitments across three continents. The European Union's AI Continent Action Plan targets €200 billion in AI investment through 2030. Each of these programmes represents demand outside the hyperscaler-driven model — demand that is contractually committed, politically supported, and insensitive to short-term ROI calculations.
The structural implication for the AI capex cycle is clear: the demand floor is higher than McKinsey's base case assumed, because McKinsey's model was built on commercial hyperscaler logic alone. Sovereign AI programmes add a second, non-correlated demand layer. Goldman Sachs's documented 11 GW US capacity shortfall — growing to 40 GW by 2028 — understates the total demand gap when sovereign programmes are included.
Among the five high-conviction positions, Constellation Energy (CEG) is the most direct Stargate beneficiary: nuclear baseload is the only power source that meets both the carbon-free and uninterruptible specifications that sovereign AI programmes require. Equinix (EQIX) benefits through its hyperscale campus footprint in the geographies where sovereign programmes are concentrating — Virginia, London, Singapore, and the Gulf. NVIDIA (NVDA) benefits from GPU procurement at sovereign scale. Vertiv (VRT) benefits through the thermal management requirements of the dense compute clusters Stargate's architecture demands.
Investment Architecture
McKinsey's analysis maps the $5.2 trillion AI capex envelope across five distinct investor archetypes. Understanding this architecture is essential: the investment case, risk profile, and return dynamics differ fundamentally across archetypes.
Signal vs. Noise
Signal: evidence against a speculative overbuild
- Vacancy at 2.3% in N. America H1 2025 — no speculative overbuild visible (JLL)
- Contracts-first builds: hyperscalers require offtake agreements before construction begins
- Power is the ultimate physical constraint on overbuild — grid queues, transformer lead times, permits
- 3–4 year accelerator refresh cycles naturally absorb any temporary excess capacity
- AI is a horizontal productivity layer across all industries, not a niche connectivity play
- Lower unit costs drive accelerated adoption (Jevons Paradox — efficiency creates more demand)
- Both inference and training workloads growing; inference to dominate by 2030
Noise: risk factors to monitor
- AI use-case failure: enterprises building but not deploying at scale — ROI visibility remains limited
- Efficiency disruption: DeepSeek V3's 18× training cost reduction could suppress GPU demand
- Concentration risk: NVIDIA at ~8% of S&P 500 — single-stock exposure in any AI basket
- Geopolitical: US–China semiconductor export controls create supply chain and demand uncertainty
- Rising power costs squeeze operators without long-term power contracts
- Some business models (GPU rental, thin-margin operators, non-core markets) will not survive
"The stakes are high. Overinvesting in data center infrastructure risks stranding assets, while underinvesting means falling behind. The winners of the AI-driven computing era will be the companies that anticipate compute power demand and invest accordingly."
— McKinsey & Company, "The Cost of Compute," April 2025
Investor Framework
The $5.2–$6.7 trillion capex envelope flows through a defined set of public equities. But raw exposure to the AI theme is not sufficient — the archetype, moat, and balance sheet quality of each company determine whether they capture compounding returns or get crushed in the shake-out.
A.L. Capital Advisory's Model Bridge distributes an intended AI infrastructure allocation across the five High Conviction positions using one of two methods: Conviction-Weighted (proportional to model scores) or Equal-Weight (20% each).
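A minimal sketch of the two weighting methods. The scores for EQIX, CEG, VRT, and NVDA are the conviction scores cited in the capacity-gap section; the MU score used here is a placeholder, since the document does not disclose it:

```python
# Sketch of the two Model Bridge weighting methods described above.
# EQIX, CEG, VRT, NVDA scores are from the conviction scores cited earlier;
# the MU score (23.0) is a PLACEHOLDER — not disclosed in the document.
SCORES = {"NVDA": 24.0, "VRT": 22.5, "EQIX": 22.0, "CEG": 21.0, "MU": 23.0}

def conviction_weighted(allocation: float) -> dict[str, float]:
    """Distribute an allocation proportionally to model scores."""
    total = sum(SCORES.values())
    return {t: allocation * s / total for t, s in SCORES.items()}

def equal_weight(allocation: float) -> dict[str, float]:
    """Distribute an allocation equally across the five positions."""
    return {t: allocation / len(SCORES) for t in SCORES}

print({t: round(v) for t, v in conviction_weighted(100_000).items()})
print(equal_weight(100_000))  # 20,000 per position across five names
```

With five positions, equal weight is 20% each; the conviction-weighted method tilts toward NVDA's higher score while keeping the full allocation invested.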
GPU & CPU: Pricing, Shortage & the Supply Chain Chokepoints
Every dollar of hyperscaler AI infrastructure capex ultimately flows through a semiconductor. The approximately $725 billion committed for 2026 does not build itself — it must be converted into physical chips, packaged onto boards, slotted into racks, and cooled. Understanding the semiconductor supply chain is therefore not a secondary analytical exercise. It is the root constraint that determines whether the AI capex cycle delivers its projected $6.7 trillion in capacity by 2030 or falls short of the McKinsey base case. Three chokepoints define the current supply environment: GPU allocation, advanced packaging capacity at TSMC, and the EUV lithography monopoly held by ASML.
NVIDIA GPU Shortage — Why 90% Market Share Persists Despite Competition
NVIDIA Corporation (NASDAQ: NVDA) enters 2026 in a position with few historical precedents: a near-monopolist in a market growing at 36% annually, constrained not by demand but by its own supply chain. The Blackwell B200 architecture — delivering 2.5× the inference throughput of the H100 at the same power envelope — carries reported order lead times of 12–18 months for hyperscaler allocations as of April 2026. The binding constraint is not TSMC's fab capacity for the GPU die itself, but CoWoS-L (Chip-on-Wafer-on-Substrate with local silicon interconnect) advanced packaging, which stacks the B200 GPU die together with eight stacks of HBM3E memory in a single thermally integrated module. TSMC's CoWoS capacity is expanding from approximately 9,000 wafers per month in 2024 to an estimated 30,000 by end of 2026 — but hyperscaler demand is tracking above this build rate.
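The scale of the CoWoS ramp can be sketched as follows. The wafer-capacity figures are from the text above; the packaged-modules-per-wafer yield is a hypothetical assumption chosen purely for illustration:

```python
# Rough CoWoS packaging throughput model. Monthly wafer-capacity figures are
# from the text; the modules-per-wafer figure is a HYPOTHETICAL assumption.
WAFERS_PER_MONTH_2024 = 9_000
WAFERS_PER_MONTH_2026 = 30_000
MODULES_PER_WAFER = 16  # assumed packaged B200-class modules per CoWoS wafer

def annual_module_capacity(wafers_per_month: int) -> int:
    """Annualized packaged-module throughput at a given wafer pace."""
    return wafers_per_month * 12 * MODULES_PER_WAFER

print(f"2024 pace: ~{annual_module_capacity(WAFERS_PER_MONTH_2024):,} modules/yr")
print(f"2026 pace: ~{annual_module_capacity(WAFERS_PER_MONTH_2026):,} modules/yr")
# Capacity roughly triples over two years — yet, as the text notes,
# hyperscaler demand is tracking above even this expanded build rate.
```

Whatever the true yield per wafer, the ratio is the point: a ~3.3× capacity expansion that demand still outruns.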
At Google Cloud Next on April 22, 2026, Google confirmed NVIDIA Vera Rubin GPU instances will be available on Google Cloud in 2027 — the first public confirmation of commercial availability for NVIDIA's next-generation architecture beyond Blackwell. This extends the CUDA and NVIDIA platform advantage through at least the 2027–2028 GPU cycle, reinforcing the structural thesis for NVDA equity at every hyperscaler refresh cycle.
The CUDA software moat is as material as the hardware lead. NVIDIA's Compute Unified Device Architecture — the programming model that underlies virtually every production AI training workload — has been in continuous development since 2007. The model libraries (cuDNN, cuBLAS, TensorRT), the developer tooling, and nearly two decades of academic and commercial code written natively for CUDA constitute a switching cost that Advanced Micro Devices (NASDAQ: AMD) is actively but slowly dismantling with ROCm. A.L. Capital Advisory estimates enterprise migration from CUDA to ROCm at current pace would require 3–5 years for non-latency-sensitive inference workloads — and meaningfully longer for training.
AMD MI300X vs NVIDIA B200 — Deep Technical and Commercial Comparison
AMD's MI300X is the most credible GPU challenger in the AI data center market. The MI300X packages eight GPU chiplets around a unified 192GB HBM3 memory pool — versus the H100's 80GB — while the companion MI300A variant adds CPU chiplets for HPC workloads. For very large model inference (70B+ parameter LLMs), the MI300X's memory capacity advantage is architecturally significant: models that require tensor parallelism across 8 H100s can run on 4 MI300X units, reducing interconnect overhead. Microsoft Azure and Meta have both announced MI300X deployments for inference workloads, validating the commercial thesis.
However, the competitive gap remains wide on three dimensions: software maturity (ROCm operator coverage vs CUDA is estimated at 85–90% for inference, but substantially lower for cutting-edge training kernels), supply chain reliability (TSMC allocates CoWoS capacity to NVIDIA first as the larger revenue customer), and ecosystem lock-in (the dominant MLOps toolchain — PyTorch, JAX, TensorFlow — all optimise natively for CUDA). A.L. Capital Advisory's base case: AMD captures 8–12% of the AI accelerator market by 2027, up from approximately 5–6% in 2025. At that share level and current data center GPU ASPs ($25,000–$35,000 per unit wholesale), AMD Data Center revenue could reach $15–20 billion annually by FY2027 — a material but sub-consensus outcome.
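The revenue band above is straightforward scenario arithmetic. The share and ASP ranges are from the text; the total 2027 accelerator unit volume is a hypothetical assumption chosen for illustration:

```python
# Scenario arithmetic behind the AMD Data Center revenue range above.
# Share and ASP bands are from the text; the total 2027 accelerator
# unit-volume figure is a HYPOTHETICAL assumption for illustration.
TOTAL_UNITS_2027 = 6_000_000  # assumed AI accelerator units shipped in 2027

def amd_revenue_b(share: float, asp_usd: float) -> float:
    """AMD AI accelerator revenue in $B at a given share and ASP."""
    return TOTAL_UNITS_2027 * share * asp_usd / 1e9

low = amd_revenue_b(0.08, 25_000)   # low share x low ASP
high = amd_revenue_b(0.12, 35_000)  # high share x high ASP
print(f"AMD AI accelerator revenue band: ${low:.0f}B to ${high:.1f}B")
```

Under this unit-volume assumption the band brackets the $15–20 billion FY2027 base case cited above; the true outcome depends on actual market units, share capture, and realized ASPs.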
Intel, ARM Architecture & the CPU Transition
Intel Corporation (NASDAQ: INTC) is executing a structural pivot from integrated device manufacturer to pure-play foundry (Intel Foundry Services / IFS) while simultaneously defending its CPU franchise against AMD and the ARM architecture wave. In the AI data center context, CPUs play a secondary but non-negligible role: every GPU cluster requires high-performance host CPUs for data ingestion, preprocessing, orchestration, and inference serving. Intel's Xeon Scalable 6th Generation (Granite Rapids) and AMD's EPYC Genoa both compete for this socket. AMD's EPYC has outgrown Intel in data center CPU share for three consecutive years, now estimated at 33–35% of new server deployments versus Intel's 65%.
Arm Holdings plc (NASDAQ: ARM) is the deeper structural story. Arm's v9 architecture — licensed to Apple, Qualcomm (QCOM), Amazon (Graviton), Google (Axion), and NVIDIA (Grace CPU) — delivers 30–40% better performance-per-watt than x86 at comparable workloads. In the AI inference layer, where power efficiency directly determines cost per token, the x86 architecture's dominance is structurally eroding. Arm's revenue is a royalty on every chip shipped using its architecture — a position of compounding leverage as ARM-based designs proliferate across data centers, edge infrastructure, and AI accelerators.
ASML — The Single-Point-of-Failure in Global AI Chip Supply
ASML Holding N.V. (NASDAQ: ASML) manufactures every extreme ultraviolet (EUV) lithography machine on Earth. There is no second supplier. EUV lithography is required to pattern the sub-7nm transistors that power every leading-edge AI chip — NVIDIA's B200 on TSMC N3E, AMD's MI300X on TSMC N5, Intel's Granite Rapids on the EUV-patterned Intel 3 node. A single EUV tool costs approximately €200 million, weighs 180 tonnes, and requires roughly 40 shipping containers and three Boeing 747 freighter flights to transport. ASML ships approximately 50–60 EUV systems per year. Lead times are 18–24 months from order to installation.
The investment case for ASML is the investment case for the AI capex cycle expressed through the supply chain's deepest chokepoint. Every new semiconductor fab built to address AI demand — TSMC Arizona, Samsung Taylor, Intel Ohio — requires ASML EUV machines. The Veldhoven-based company's order book extends through 2027 and includes next-generation High-NA EUV tools (approximately €380 million per unit) required for sub-2nm nodes. No competitor has a functioning EUV tool. The development timeline for a competitor to reach commercial EUV from scratch is estimated by industry analysts at 15–20 years and multiple billions in investment. ASML's geopolitical risk — the Dutch government, under US pressure, has restricted EUV exports to China — removes the largest potential demand overhang and creates a China-exclusion premium that benefits Western fabs.
Custom Silicon — Broadcom & Marvell as the ASIC Layer
The custom silicon trend is one of the most consequential supply-chain developments of the 2026 AI cycle. Google (TPU v5), Amazon (Trainium2, Inferentia3), Meta (MTIA2), and Microsoft (Maia 2) are all deploying custom AI accelerators designed in-house and manufactured at TSMC — explicitly to reduce dependence on NVIDIA and capture gross margin. The hyperscalers cannot design these chips themselves from scratch. They use third-party chip architects: Broadcom Inc. (NASDAQ: AVGO) and Marvell Technology Inc. (NASDAQ: MRVL) are the two dominant custom ASIC designers for AI workloads.
Broadcom has disclosed that its three largest custom AI chip customers represent a combined serviceable addressable market of $60–90 billion per year by FY2027, based on disclosed deployment plans. Marvell's custom AI silicon revenue is smaller but growing rapidly, with design wins at Amazon and Microsoft. Both companies benefit from a structural dynamic that NVIDIA cannot easily disrupt: hyperscalers have a strategic incentive to fund custom silicon as a CUDA countermeasure, regardless of short-term cost premium. The ASIC layer is therefore not a threat to NVIDIA in the near term (custom ASICs are inference-only and cannot match NVIDIA's training flexibility) — but it is a meaningful long-term share shift that AVGO and MRVL are directly positioned to capture.
Server Integration Layer — Super Micro Computer & Dell Technologies
Super Micro Computer Inc. (NASDAQ: SMCI) and Dell Technologies Inc. (NYSE: DELL) occupy the final assembly layer of the AI GPU supply chain — converting NVIDIA and AMD silicon into deployable rack-scale systems. Super Micro's liquid-cooled HGX- and MGX-based server platforms make it one of NVIDIA's preferred reference-design partners; Super Micro has been first to market with new NVIDIA-based server systems for three consecutive GPU generations, most recently Blackwell. Dell's AI server portfolio (PowerEdge XE9680) competes for the same enterprise and colocation buyer, with the additional distribution advantage of Dell's global direct sales force and ProSupport services.
Both companies have supply chains directly gated by NVIDIA's GPU allocation — when B200 supply is constrained, SMCI and DELL backlogs build and gross margins compress on pre-sold orders. The investment risk is margin: server integration is a low-margin business (5–8% EBIT for SMCI, 4–6% for AI infrastructure servers at Dell), and pricing competition between the two intensifies during GPU shortage periods. The structural opportunity: as the AI capex cycle moves from Phase 1 (Build) to Phase 2 (Deploy), hyperscalers shift from direct NVIDIA procurement to third-party server integrators — expanding SMCI and DELL's total addressable market.
Exhibit A1 A.L.C. Original Analysis · April 2026 · GPU & CPU Supply Chain — Bull / Base / Bear Scenario Analysis
| Metric | Bull Case | Base Case | Bear Case | Key Variable |
|---|---|---|---|---|
| AI Data Center GPU ASP (blended B200/H100) | $38,000 | $30,000 | $22,000 | CoWoS supply ramp vs AMD competition intensity |
| NVDA Data Center Revenue FY2027E | ~$180B | ~$145B | ~$100B | GPU ASP × unit volume × China export regime |
| AMD AI Accelerator Market Share (2027E) | 15–18% | 8–12% | 4–6% | ROCm software maturity & hyperscaler ASIC shift pace |
| ASML EUV Shipments (units, 2026) | 60 | 52 | 42 | Fab investment cycles; High-NA ramp timing |
| ASML Revenue FY2026E | €38B | €33B | €27B | EUV + DUV mix; High-NA ASP premium |
| AVGO Custom AI Silicon Revenue FY2027E | $25–30B | $18–22B | $10–14B | Hyperscaler ASIC programme deployment pace |
| TSMC (TSM) CoWoS Capacity (wafers/month, end-2026) | 35,000 | 28,000 | 22,000 | Advanced packaging tool delivery lead times |
Scenario notes — BULL: CoWoS supply fully ramps to meet demand; AMD ROCm progress slower than consensus; ASML High-NA shipments begin in volume; AVGO custom silicon programme acceleration. BASE: CoWoS constrained through mid-2026; AMD captures inference share incrementally; ASML EUV at guidance midpoint; custom silicon 15–20% of incremental hyperscaler AI spend. BEAR: Efficiency gains (DeepSeek-class) compress inference GPU demand; AMD ROCm reaches parity for inference earlier than expected; TSMC CoWoS investment cycle delayed; NVDA export controls materially widened. Not investment advice. A.L. Capital Advisory analytical framework, April 2026.
Memory: HBM Shortage, NAND Pricing & the AI Memory Inflection
The binding constraint on NVIDIA's Blackwell B200 GPU output in Q1–Q2 2026 is not the GPU die. It is High Bandwidth Memory 3E (HBM3E) — the stacked DRAM that sits beside every AI accelerator and provides the memory bandwidth that separates a usable AI chip from a theoretical one. A single B200 GPU die requires six stacks of HBM3E, each an 8-Hi stack carrying 32GB, for a total of 192GB per chip. At current NVIDIA GPU shipment volumes, the global HBM market must produce and package more advanced memory in 2026 than it produced across all previous years combined. There are exactly three HBM suppliers on Earth: SK Hynix (unlisted in this directory), Samsung (unlisted), and Micron Technology Inc. (NASDAQ: MU). Micron is the smallest of the three — and the most investable for Western investors.
HBM3E — The Architecture of Scarcity
High Bandwidth Memory is not DRAM as conventionally packaged. Standard DDR5 DRAM moves data over a narrow 64-bit bus per module. HBM stacks 8–12 DRAM dies vertically using through-silicon vias (TSVs) and connects them over a 1,024-bit interface, achieving memory bandwidth of 1.2 TB/s per stack versus DDR5's ~0.1 TB/s per channel. For AI training workloads — which require feeding terabytes of model weights and activations per second to the GPU's thousands of CUDA cores — HBM bandwidth is the performance ceiling that determines training throughput. The difference between running a 70B-parameter training run on HBM3E versus GDDR6 is roughly 4–5× in training speed, which at hyperscaler compute costs translates directly into tens of millions of dollars per training run.
HBM production requires TSMC (TSM) CoWoS packaging to integrate the HBM stacks with the GPU die — creating a circular dependency in the supply chain: both the GPU and its memory require the same scarce CoWoS capacity. SK Hynix currently holds an estimated 50–55% of the HBM market; Samsung approximately 35–40%; and Micron approximately 8–12%, ramping aggressively. NVIDIA has publicly qualified all three suppliers for HBM3E, but SK Hynix retains a technology generation lead: Hynix began HBM3E production in Q3 2024; Micron qualified HBM3E in Q4 2024; Samsung's HBM3E yield issues delayed their qualification into Q1 2026. This means Micron enters 2026 in an unusual position — a qualified second supplier in a market where the lead supplier cannot meet demand and the third supplier has quality problems.
Micron Technology — The High-Conviction AI Memory Investment Case
Micron Technology Inc. (NASDAQ: MU) is A.L. Capital Advisory's fifth High Conviction position — and the one with the widest gap between current consensus expectations and structural opportunity. Micron's investment case rests on three independent pillars that compound simultaneously through 2028.
First, the HBM revenue inflection. Micron's HBM revenue was negligible in FY2024 (ending August 2024). The company has guided to HBM becoming a multi-billion dollar revenue line in FY2025, with HBM3E production ramping throughout 2025 and into 2026. At SK Hynix's disclosed HBM gross margins (50–55%), HBM is transformative for Micron's blended gross margin profile, which has historically averaged 25–35% across the DRAM/NAND cycle. A single HBM3E 8-Hi stack carries an ASP of approximately $20–25 per GB — versus $3–4 per GB for commodity DDR5 DRAM. The six HBM3E stacks in a single B200 GPU generate more revenue per chip for Micron than an entire DDR5 memory kit for a standard server.
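The per-chip comparison can be made concrete with a short sketch. The HBM ASP range and DDR5 price are from the paragraph above; the 1 TB server DDR5 kit size is an illustrative assumption, not a figure from this paper:

```python
# Per-chip memory revenue sketch using the ASPs quoted above.
HBM_GB_PER_B200 = 192                   # 6 stacks x 32GB
HBM_ASP_RANGE = (20, 25)                # $/GB, HBM3E blended
DDR5_ASP = 3.5                          # $/GB, midpoint of the $3-4 range
SERVER_DDR5_GB = 1024                   # assumed 1 TB server kit (illustrative)

hbm_revenue = tuple(HBM_GB_PER_B200 * asp for asp in HBM_ASP_RANGE)
ddr5_kit_revenue = SERVER_DDR5_GB * DDR5_ASP

print(f"HBM3E content per B200: ${hbm_revenue[0]:,}-${hbm_revenue[1]:,}")
print(f"Assumed 1 TB DDR5 kit:  ${ddr5_kit_revenue:,.0f}")
```

Even at the low end of the HBM ASP range, a single B200's memory content (~$3,840–$4,800) out-earns the assumed full-server DDR5 kit (~$3,584), which is the point the paragraph makes.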
Second, the DRAM pricing cycle. The 2022–2023 DRAM oversupply — driven by consumer PC and smartphone demand collapse — has fully resolved. Industry-wide DRAM bit output growth has been deliberately constrained by all three major producers (Micron, SK Hynix, Samsung), with capital expenditure redirected to HBM, wafer capacity that is therefore lost to standard DRAM supply. The structural result: DDR5 server DRAM pricing is rising through 2026 as data center demand (each AI server rack contains $100,000–$400,000 of standard DRAM alongside the GPU) grows faster than supply can respond. Micron is a direct beneficiary of both the volume increase and the ASP uplift.
Third, the NAND recovery. Western investors often model Micron as a pure DRAM company, but NAND flash (used for SSDs, storage arrays, and training data pipelines) represents approximately 35% of Micron's revenue. The NAND cycle bottomed in Q3 2023 at price levels that forced all producers, including Western Digital (NASDAQ: WDC), into EBITDA-negative operations. The recovery is now well established: enterprise SSD pricing has recovered 60–80% from the trough as AI training datasets, model checkpoints, and inference caches drive unprecedented enterprise NVMe demand. Western Digital is the pure-play NAND recovery thesis — its Flash segment directly captures the enterprise SSD pricing cycle without the HBM exposure that makes Micron the more complete AI memory story.
The Memory Wall — AI's Hidden Bottleneck
The "memory wall" is the gap between the growth rate of AI model complexity (parameters, context window, batch size) and the growth rate of memory bandwidth. A GPT-4-scale model requires feeding approximately 140GB of weights to GPU cores during each forward pass. At current HBM3E bandwidth of 1.2 TB/s per stack and 6 stacks per B200, the peak sustainable throughput is approximately 7.2 TB/s per GPU. Projected model sizes for 2026–2027 training runs (estimated 1–10 trillion parameter models) would require 8–12 HBM3E stacks per GPU — beyond the current B200 physical architecture. The implication: every new GPU generation (NVIDIA's Rubin/R100, AMD's MI400) will require more HBM stacks, more advanced packaging, and higher memory bandwidth — a structural demand escalator that benefits Micron, SK Hynix, and the HBM memory ecosystem indefinitely.
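A minimal sketch of the bandwidth ceiling, using the B200 figures quoted above. The resulting per-pass time is an idealised lower bound that ignores caching, tensor parallelism, and compute overlap:

```python
# Memory-wall arithmetic: time to stream model weights once per forward pass.
STACKS_PER_B200 = 6
BW_PER_STACK_TBS = 1.2      # HBM3E bandwidth, TB/s per stack
WEIGHTS_GB = 140            # GPT-4-scale weights streamed per forward pass

aggregate_bw_tbs = STACKS_PER_B200 * BW_PER_STACK_TBS        # ~7.2 TB/s
min_pass_time_ms = WEIGHTS_GB / (aggregate_bw_tbs * 1000) * 1000  # GB / (GB/s)

print(f"Aggregate bandwidth:    {aggregate_bw_tbs:.1f} TB/s")
print(f"Weight-streaming floor: {min_pass_time_ms:.1f} ms per pass")
```

The ~19 ms floor per weight pass is why each GPU generation adds HBM stacks and bandwidth rather than relying on compute alone.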
The per-rack memory content of an AI server cluster illustrates the scale: a single NVIDIA DGX H100 system (8× H100 GPUs) contains 640GB of HBM3, plus 2TB of DDR5 system DRAM, plus 30TB of NVMe SSD storage. At blended memory content pricing, the memory stack in a single DGX H100 represents approximately $80,000–$120,000 of the system's total cost — comparable to one H100 GPU. Against the 40 GW US capacity shortfall Goldman Sachs projects by 2028, each gigawatt of AI data center capacity contains approximately 10,000 racks. The memory content of that capacity: 10,000 × $120,000 = $1.2 billion per gigawatt in memory spend alone. The 40 GW US shortfall implies $48 billion in cumulative incremental memory demand — before Europe, Asia, and Project Stargate are counted.
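The per-gigawatt arithmetic reproduces in a few lines (the rack count and per-rack memory content are the paragraph's own estimates):

```python
# Sanity check of the per-gigawatt memory spend arithmetic in the text.
RACKS_PER_GW = 10_000
MEMORY_PER_RACK_USD = 120_000      # blended HBM + DDR5 + NVMe content
US_SHORTFALL_GW = 40               # Goldman Sachs 2028 projection

memory_spend_per_gw = RACKS_PER_GW * MEMORY_PER_RACK_USD
total_incremental = memory_spend_per_gw * US_SHORTFALL_GW

print(f"Memory spend per GW: ${memory_spend_per_gw / 1e9:.1f}B")
print(f"40 GW US shortfall:  ${total_incremental / 1e9:.0f}B")
```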
Geopolitical Risk — China Memory and Export Controls
Micron's most significant risk is regulatory: the Cyberspace Administration of China (CAC) banned Micron products from "critical information infrastructure" operators in May 2023 — a retaliatory measure following US restrictions on advanced chip exports to China. China represented approximately 16% of Micron's FY2023 revenue. The ban's impact has been partially absorbed by Micron redirecting China supply to other markets (particularly India, Southeast Asia, and European data center expansion), but approximately $800M–$1.2B of annualised revenue remains displaced. A full resolution — or further escalation — of the US-China technology trade war is the key binary risk in the Micron investment case.
Chinese domestic memory is a structural headwind, but it remains further from competitive parity than commonly believed. CXMT (ChangXin Memory Technologies), China's domestic DRAM producer, is shipping DDR4 and early DDR5 at an estimated 25–35% yield versus the industry standard of 85–90%. Yangtze Memory Technologies (YMTC), China's NAND producer, reached 232-layer NAND in 2023 but faces ASML EUV import restrictions that will prevent progression to the advanced process nodes required for next-generation memory. The Chinese memory industry is not a 2026 threat to Micron's HBM business; it represents a 2029–2032 risk to commodity NAND market share.
Exhibit B1 A.L.C. Original Analysis · April 2026 · Memory Supply Chain — Bull / Base / Bear Scenario Analysis
| Metric | Bull Case | Base Case | Bear Case | Key Variable |
|---|---|---|---|---|
| HBM3E ASP (per GB, blended) | $28 | $22 | $16 | Samsung HBM3E yield recovery timeline |
| MU Total Revenue FY2026E | ~$45B | ~$38B | ~$28B | HBM ASP + DRAM cycle + NAND recovery pace |
| MU HBM Revenue FY2026E | ~$12B | ~$7.5B | ~$3.5B | Micron HBM3E yield ramp + NVDA allocation share |
| MU Gross Margin FY2026E | 42–46% | 35–40% | 24–28% | HBM mix shift; DRAM/NAND blended ASP |
| Enterprise NAND ASP ($/GB blended) | $0.12 | $0.09 | $0.07 | AI inference SSD demand vs supply discipline |
| WDC Revenue FY2026E (Flash segment) | ~$18B | ~$15B | ~$11B | NAND ASP recovery + enterprise SSD mix |
| DDR5 Server DRAM ASP ($/GB) | $6.50 | $5.20 | $3.80 | HBM capacity cannibalisation of standard DRAM supply |
| Memory content per AI rack | $160K | $120K | $85K | HBM stack count + DDR5 + NVMe per DGX-class system |
| BULL: Samsung HBM3E yield issues persist through 2026; Micron gains 18–22% HBM share; NAND supply discipline holds; DDR5 data center demand outpaces supply. BASE: Samsung partially recovers by mid-2026; Micron stabilises at 12–15% HBM share; NAND pricing recovery continues at measured pace; DDR5 balanced. BEAR: Samsung full HBM3E yield recovery by Q2 2026 compresses ASPs; AI efficiency gains reduce per-model memory footprint; NAND supply grows ahead of AI storage demand; China export risk widens for Micron. Not investment advice. A.L. Capital Advisory analytical framework, April 2026. | ||||
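The gross-margin leverage in Exhibit B1's base case can be sanity-checked with a simple mix calculation. The HBM margin below is the midpoint of the 50–55% range cited earlier; the ~30% non-HBM margin is an assumed midpoint of Micron's historical 25–35% band, not a disclosed figure:

```python
# Blended gross-margin mix check for the Exhibit B1 base case.
hbm_rev_bn, total_rev_bn = 7.5, 38.0    # base-case revenue, $B
hbm_gm = 0.525                          # midpoint of cited 50-55% HBM margin
base_gm = 0.30                          # assumed non-HBM blended margin

hbm_mix = hbm_rev_bn / total_rev_bn     # HBM share of revenue
blended_gm = hbm_mix * hbm_gm + (1 - hbm_mix) * base_gm

print(f"HBM revenue mix: {hbm_mix:.0%}")
print(f"Blended gross margin: {blended_gm:.1%}")
```

The mix effect alone lands at roughly 34%, just below the 35–40% base-case band, implying the balance of the band comes from the DRAM-cycle ASP uplift described above.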
Projections & Outlook
| Asset / Sector | Phase 1: Build (2024–26) | Phase 2: Deploy (2026–28) | Phase 3: Compound (2028–30) | A.L.C. View |
|---|---|---|---|---|
| AI Semiconductors — NVDA, AMD | ↑ Accelerating. Backlog extends 12–18 months. Pricing power at peak. | ► Elevated but normalising. Efficiency gains may compress unit economics. | ↑ Next-gen inference demand drives new cycle. Moat compounds. | High Conviction Long |
| Power & Cooling — VRT, CEG | ↑ Rapid growth as rack density escalates. Power PPAs being locked in now. | ↑ Continued deployment of liquid cooling. Nuclear PPAs extending. | ↑ Structural beneficiary of all three phases. Most durable earnings quality. | High Conviction Long |
| Data Center REITs — EQIX, DLR | ↑ Vacancy tightening. Premium pricing in core markets. Land value accruing. | ↑ Expansion of AI-optimised facilities. Interconnect moats widen. | ↑ Long-term lease revenue compounds. REIT dividend yield supported. | High Conviction Long |
| Hyperscalers — MSFT, GOOGL, AMZN | ↓ Capex absorbs free cash flow. Market questions ROI discipline. | ► Cloud revenue inflection as AI workloads monetise. Watch margins. | ↑ AI-driven cloud revenue compounds. CapEx declining as % of revenue. | Selective. Monitor capex |
| Construction / Builders | ↑ Labour and materials in high demand. Early-cycle beneficiary. | ► Growth but margins compress as capacity builds. | ↓ Cycle matures. Commodity dynamics. No moat. | Tactical only. Not core. |
| GPU Rental / Thin-Margin Ops | ► Works during scarcity. Business model intact for now. | ↓ Hyperscalers self-build eliminates demand for rented compute. | ↓ Model collapses. Structural shake-out. Avoid. | Avoid |
Exhibit 9 A.L.C. Original Analysis · ROI Bridge: What AI Revenue Must Materialise to Justify $725B in 2026 Capex?
Portfolio Construction Framework — Five principles for building the AI infrastructure position without getting burned:
Conclusion
The AI capex cycle is not a theme. The AI capex cycle is a decade-long structural reallocation of capital — from consumption to physical infrastructure — at a scale not seen since the electrification of the United States economy in the 1920s. Q1 2026 earnings confirmed the Big-5 hyperscalers will spend approximately $725 billion on AI infrastructure in 2026 alone, nearly tripling the $256 billion deployed in 2024. Goldman Sachs documents a US capacity shortfall already exceeding 11 gigawatts, growing to 40 GW by 2028. North American colocation vacancy has fallen to 2.3% — a level at which pricing power is structural and durable.
This paper has mapped the full capital flow across the cycle's three phases. In Phase 1 (Build, 2024–2026), semiconductor procurement and thermal management are the primary value-capture layer — hence NVIDIA and Vertiv as peak-conviction positions. In Phase 2 (Deploy, 2026–2028), contracted infrastructure operators with physical scarcity moats begin to compound: Equinix's interconnect estate and Constellation Energy's 20-year nuclear power purchase agreements. In Phase 3 (Compound, 2028–2030), all five positions benefit simultaneously as AI-driven cloud revenue scales and contracted revenue compounds across multi-year agreements. A static "AI basket" allocation misses the Phase 2–3 rotation entirely.
The structural bull case rests on three pillars that the bear case cannot dislodge without a fundamental change in physical reality: construction lead times of 18–30 months mean supply cannot respond to short-term sentiment shifts; AI accelerator refresh cycles of 3–4 years mean overcapacity converts to obsolescence faster than it becomes stranded; and the contracts-first structure of the build-out means virtually every dollar of hyperscaler guidance is committed before a shovel enters the ground. These are not financial projections — they are engineering constraints.
The honest risks are equally structural. If enterprise AI deployment fails to materialise at scale by 2027, the demand curve reverts to the constrained scenario ($3.7T vs $5.2T in AI-specific capex). If semiconductor efficiency gains (in the tradition of DeepSeek V3) suppress training demand, NVIDIA's backlog clears faster than consensus models. If hyperscaler ROI discipline breaks down under debt pressure, the capex/revenue ratio remains above 50% indefinitely — compressing free cash flow precisely when patient capital needs a return. These risks are real, and they are precisely why A.L. Capital Advisory distinguishes High Conviction infrastructure suppliers (NVDA, VRT, EQIX, CEG, MU) from Selective hyperscaler equity positions (MSFT, GOOGL): infrastructure suppliers are paid regardless of which ROI scenario resolves; hyperscaler equity positions are not.
The monitoring framework is straightforward: vacancy below 3% is healthy; above 6% is the early warning. Capex/revenue ratios declining from 2026 onwards signal the ROI inflection the market is waiting for. Enterprise AI deployment rate — currently in the early phase — is the key leading indicator for whether the McKinsey base case ($6.7T by 2030) is achieved or exceeded. A.L. Capital Advisory updates these readings quarterly.
High Conviction Long: NVIDIA (NASDAQ: NVDA) · Vertiv Holdings (NYSE: VRT) · Equinix (NASDAQ: EQIX) · Constellation Energy (NASDAQ: CEG) · Micron Technology (NASDAQ: MU) — five positions across the AI infrastructure stack, covering Technology Developers, Power & Thermal Management, Data Center REITs, and Nuclear Baseload. Model Bridge weighted scores: 24.0 (NVDA), 22.5 (VRT), 22.0 (EQIX), 21.5 (MU), and 21.0 (CEG) out of 25.0. All five positions benefit across all three phases of the AI capex cycle with varying peak intensities. Selective: MSFT · GOOGL · AMD — monitor capex/revenue ratio and ROI discipline quarterly before adding or increasing exposure.
Data Appendix
| Figure | Value | Primary Source | Date Verified | Methodology Note |
|---|---|---|---|---|
| Global data center capex by 2030 (base case) | $6.7 trillion | McKinsey & Company, "The Cost of Compute" | Apr 2025 | Base-case of three modelled scenarios; 125 GW incremental AI capacity |
| Global data center capex by 2030 (accelerated scenario) | $7.9 trillion | McKinsey & Company, "The Cost of Compute" | Apr 2025 | Accelerated scenario: 160 GW incremental AI capacity; AI deployment rate outpaces base |
| Global data center capex by 2030 (constrained scenario) | $5.2 trillion | McKinsey & Company, "The Cost of Compute" | Apr 2025 | Constrained scenario: 100 GW incremental AI capacity; enterprise deployment slower than modelled |
| AI share of total data center demand by 2030 | ~70% | McKinsey & Company, "The Cost of Compute" | Apr 2025 | AI workloads: $5.2T of $6.7T total; remaining $1.5T traditional IT |
| Global data center capacity, 2025 (baseline) | 82 GW | McKinsey & Company | Apr 2025 | Installed capacity; includes hyperscaler, colo, and enterprise |
| Global data center capacity, 2030 (base case) | 207 GW | McKinsey & Company | Apr 2025 | Base-case projection; approximately 2.5× 2025 baseline |
| North American colo vacancy rate, H1 2025 | 2.3% | JLL Research, North America Colocation Vacancy | Jun 2025 | Published H1 2025 survey; down from 9.8% in 2020 |
| US data center capacity shortfall (current) | >11 GW | Goldman Sachs, "Powering the AI Era" | 2025 | Gap between demand and committed supply in US markets |
| US data center capacity shortfall by 2028 | >40 GW | Goldman Sachs, "Powering the AI Era" | 2025 | Cumulative projected gap at current construction pace |
| Big-5 hyperscaler capex, 2024 actual | ~$256B | CreditSights, individual earnings filings | Nov 2025 | Amazon + Alphabet + Meta + Microsoft + Oracle; FY2024 |
| Big-5 hyperscaler capex, 2025 estimate | ~$443B (+73%) | CreditSights, Futurum Group | Feb 2026 | Estimated based on Q1–Q3 2025 actuals + Q4 guidance |
| Big-5 hyperscaler capex, 2026 guidance | $660–690B (+36%) | Futurum Group, individual Q4 2025 earnings calls | Feb 2026 | Amazon $200B, Alphabet $175–185B, Meta $115–135B, MSFT $120B+, Oracle $50B |
| AI Data Center GPU ASP (B200/H100-class, wholesale) | $25,000–$35,000 | A.L. Capital Advisory estimate; Bloomberg, earnings disclosures | Apr 2026 | Blended H100/B200 allocation; wholesale hyperscaler pricing; retail premium 20–40% above |
| NVIDIA (NVDA) B200 GPU lead time (hyperscaler allocation) | 12–18 months | A.L. Capital Advisory primary research; industry sources | Apr 2026 | CoWoS advanced packaging constraint; not GPU die fab capacity |
| TSMC (TSM) CoWoS capacity 2026E (wafers/month) | ~28,000 | TSMC investor day; A.L. Capital Advisory estimate | Apr 2026 | Up from ~9,000 wpm in 2024; demand tracking above build rate through mid-2026 |
| ASML EUV machine unit price (standard EUV) | ~€200M | ASML investor relations; public disclosures | Apr 2026 | High-NA EUV: ~€380M per unit; only supplier of EUV globally |
| ASML EUV annual shipment volume (2025 actual) | ~50 units | ASML annual report 2025 | Mar 2026 | 18–24 month lead times; order book extends to 2027 |
| HBM3E ASP (per GB, blended, 2026E) | ~$22/GB | TrendForce memory pricing; A.L. Capital Advisory estimate | Apr 2026 | vs $3–4/GB for standard DDR5; premium from stacking complexity and CoWoS packaging |
| HBM3E bandwidth per stack | 1.2 TB/s | SK Hynix, Micron (MU) product specifications | Apr 2026 | vs GDDR6 ~0.3 TB/s; 4–5× bandwidth advantage for AI training workloads |
| NVDA B200 GPU HBM3E content per chip | 192 GB (6 stacks) | NVIDIA Blackwell architecture whitepaper | Mar 2025 | Each stack 8-Hi, 32GB; 6 stacks × 32GB = 192GB total per B200 die |
| Memory content per DGX H100 rack system ($) | ~$120,000 | A.L. Capital Advisory analysis; NVIDIA DGX specifications | Apr 2026 | 640GB HBM3 + 2TB DDR5 DRAM + 30TB NVMe SSD at blended market ASPs |
| Micron HBM market share (2026E) | 12–15% | A.L. Capital Advisory estimate; TrendForce | Apr 2026 | Ramping from ~8% in 2025; SK Hynix ~50%, Samsung ~35% |
| Enterprise NAND ASP recovery from trough (2023–2026) | +60–80% | TrendForce; Western Digital earnings | Apr 2026 | Trough Q3 2023; AI training storage & inference cache driving enterprise SSD demand recovery |
| AMD MI300X estimated data center GPU market share | 5–8% | A.L. Capital Advisory estimate; IDC, Bloomberg | Apr 2026 | CUDA moat limits adoption; MI300X deployed by MSFT Azure and Meta for inference workloads |
| Amazon AWS 2026 capex guidance | $200B | Amazon Q4 2025 earnings call | Feb 2026 | Full-year 2026 guidance; predominantly AWS data center and AI infrastructure |
| Alphabet 2026 capex guidance | $180–190B | Alphabet Q1 2026 earnings call | Apr 2026 | Raised from $175–185B; includes Intersect acquisition closed March 2026. Google Cloud Q1: $20B (+63% YoY) |
| Meta 2026 capex guidance | $125–145B | Meta Q1 2026 earnings call | Apr 2026 | Raised from $115–135B; CFO cited higher component pricing (memory inflation) as primary driver |
| Microsoft 2026 capex guidance | ~$190B | Microsoft Q3 FY2026 earnings call | Apr 2026 | Raised from $120B+ (Feb 2026); ~$25B of the increase attributed to component pricing; Azure +40% YoY |
| Oracle 2026 capex guidance | $50B | Oracle Q3 FY2026 earnings call | Feb 2026 | OCI cloud and AI infrastructure; part of broader $100B multi-year commitment |
| AI-specific share of 2026 hyperscaler capex | ~75% (~$545B) | CreditSights; A.L. Capital Advisory update | Apr 2026 | Share applied to the post-Q1 revised ~$725B consensus; excludes traditional cloud, logistics, and non-AI infrastructure |
| Goldman Sachs 2025–2027 hyperscaler capex projection | $1.15T | Goldman Sachs | 2025 | More than double the $477B deployed across 2022–2024 |
| Hyperscaler capex as % of revenue | 45–57% | Introl / CreditSights analysis | Dec 2025 | Ratio previously seen only in industrial utilities and telcos |
| New debt issuance needed — tech sector, 2025–2027 | ~$1.5T | Morgan Stanley / J.P. Morgan | 2025 | Projected total; bridges gap between FCF and capex commitments |
| Project Stargate programme value | $500B | White House / OpenAI announcement | Jan 2026 | OpenAI, SoftBank, Oracle; initial $100B committed within 4 years |
| NVIDIA GPU market share in AI accelerators | ~90% | Introl / CreditSights | Dec 2025 | Share of AI accelerator spend; approximately 6M GPUs at ~$30K avg |
| AI chip power density vs CPU | 10–15× | A.L. Capital Advisory / Vertiv technical documentation | Apr 2026 | H100/B200 clusters vs. equivalent CPU rack power draw |
| Equinix data centers globally | 260+ | Equinix investor relations, Q4 2025 | Dec 2025 | Operational IBX data centers across 70+ metropolitan markets |
| Constellation Energy US nuclear capacity share | ~5% | Constellation Energy investor relations | Apr 2026 | Approximately 5% of total US electricity generation capacity from nuclear |
| Fiber overbuild vacancy rate (post-2001) | >20% | McKinsey & Company / JLL Research | Apr 2025 | Telecom infrastructure vacancy after the dot-com collapse; cited for structural comparison |
Model Bridge
The conviction ratings in this paper are produced by scoring each security across five criteria, each weighted by its relative importance to long-term AI infrastructure returns. The criteria and weights are stated below. Scoring is on a 1–5 scale (5 = strongest). A score above 19 qualifies for High Conviction; 14–18 for Selective; below 14 for Avoid.
| Company | AI Capex Exposure (30% weight) | Moat Durability (25% weight) | Revenue Visibility (20% weight) | Valuation (15% weight) | Geo / Reg Risk (10% weight) | Weighted Score | Rating |
|---|---|---|---|---|---|---|---|
| NVIDIA NVDA | 5 — ~90% AI accel. share; H100/B200 demand | 5 — CUDA ecosystem lock-in; software moat | 4 — Backlog 12–18mo; some export risk | 4 — Premium warranted; consensus may lag | 3 — Export controls on China material risk | 24.0 / 25 | High Conviction |
| Vertiv VRT | 5 — $1.3T Energizer pool; liquid cooling necessity | 4 — Thermal IP; hyperscaler relationships | 5 — Long-term hyperscaler contracts; backlog | 4 — Less crowded than semis; reasonable valuation | 4 — Low geopolitical exposure; US/EU footprint | 22.5 / 25 | High Conviction |
| Equinix EQIX | 5 — 2.3% vacancy; 70 metros; land scarcity | 5 — Interconnect moat unreplicable by hyperscalers | 5 — REIT long-term leases; contracted revenue | 3 — Premium EV/MW; REIT rate sensitivity | 4 — REIT structure; permitting risk in some markets | 22.0 / 25 | High Conviction |
| Constellation Energy CEG | 4 — Nuclear baseload; ~5% US electricity; AI power spec | 4 — Existing licensed nuclear fleet; new entrants 10+ years | 5 — 20-year PPAs; Microsoft TMI template | 4 — AI premium not fully priced; consensus lag | 3 — Nuclear regulation; political risk in some states | 21.0 / 25 | High Conviction |
| Micron Technology MU | 5 — HBM3E sole Western supplier; AI memory wall beneficiary | 4 — 3-supplier HBM oligopoly; DRAM/NAND cycle expertise | 4 — HBM backlog; DRAM cycle pricing power; NAND recovery | 4 — Consensus underestimates HBM mix shift; re-rating potential | 3 — China revenue ban risk; geopolitical semiconductor exposure | 21.5 / 25 | High Conviction |
| Microsoft MSFT | 4 — Azure cloud + Copilot; but also capex risk | 4 — Enterprise cloud moat; Office lock-in | 3 — Revenue building but $120B+ capex weighs | 3 — Fairly valued; ROI discipline key variable | 3 — Low geopolitical risk; some EU regulatory | 17.0 / 25 | Selective |
| Alphabet GOOGL | 4 — Google Cloud + Search AI; $180–190B capex (Q1 2026 raised) | 4 — Search moat; TPU custom silicon | 3 — Ad revenue stable; cloud inflecting | 3 — Reasonable; capex/FCF tension | 2 — DOJ antitrust; Search disruption risk | 16.0 / 25 | Selective |
| AMD AMD | 3 — MI300X challenger; 5–10pp NVDA share thesis | 3 — ROCm maturing; CUDA stickiness is real | 3 — Growing but no backlog visibility | 4 — Asymmetric if share shift materialises | 4 — Lower export control exposure than NVDA | 15.0 / 25 | Selective |
Exhibit 10 A.L.C. Framework · Conviction Scorecard: Weighted Model Scores Across 8 Securities
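The Model Bridge mechanics can be expressed directly. This sketch applies the stated criteria weights and the 25-point rescaling, and reproduces the Vertiv row above; other published rows may reflect analyst rounding or adjustment:

```python
# Model Bridge scoring sketch: 1-5 criterion scores, weighted, rescaled to /25.
WEIGHTS = {
    "ai_capex_exposure": 0.30,
    "moat_durability": 0.25,
    "revenue_visibility": 0.20,
    "valuation": 0.15,
    "geo_reg_risk": 0.10,
}

def weighted_score(scores: dict) -> float:
    """Weighted 1-5 average across the five criteria, rescaled to /25."""
    return 5 * sum(WEIGHTS[name] * score for name, score in scores.items())

# Vertiv (VRT) criterion scores from the scorecard above.
vrt = {"ai_capex_exposure": 5, "moat_durability": 4,
       "revenue_visibility": 5, "valuation": 4, "geo_reg_risk": 4}

print(round(weighted_score(vrt), 1))   # 22.5, matching the VRT row
```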
Sensitivity Analysis
The following scenario tables show how the investment thesis for each high-conviction position varies under different assumptions. The base case is used throughout the main paper. The bull and bear cases are not price targets — they define the range of outcomes that would cause a material re-rating of the conviction.
Sensitivity Table A · NVIDIA (NASDAQ: NVDA) — Bull / Base / Bear Scenario Analysis
| Scenario | Key Assumption | GPU Demand | Market Share | Revenue Growth (FY2026) | Conviction Impact |
|---|---|---|---|---|---|
| Bull | Blackwell B200 ramp exceeds expectations; export controls stable; inference workloads accelerate faster than efficiency gains | Sustained; backlog extends to 18+ months | 90%+ maintained | >80% YoY | Upgrade to maximum position weight |
| Base ★ | Healthy B200 ramp; moderate export restrictions; CUDA stickiness intact; AMD ROCm gains modest 3–5pp share | Strong; 12–18 month backlog | 85–90% | 40–60% YoY | Maintain High Conviction; current weight |
| Bear | Efficiency gains (DeepSeek-style) suppress training GPU demand; China export controls tighten materially; AMD gains 10pp+ share | Slowing; backlog clears faster than orders refill | <80% | 10–25% YoY | Reduce to Selective; monitor quarterly |
Sensitivity Table B · Vertiv (NYSE: VRT) & Constellation Energy (NASDAQ: CEG) — Bull / Base / Bear Scenario Analysis
| Ticker | Scenario | Key Variable | Assumption | Revenue Growth (FY2026E) | Conviction Impact |
|---|---|---|---|---|---|
| VRT | Bull | Liquid cooling adoption rate | 60%+ of new AI racks by 2027; immersion cooling accelerates | 40%+ YoY | Upgrade weighting |
| VRT | Base ★ | Liquid cooling adoption rate | 35–40% of new AI racks adopt liquid cooling; air cooling holds in legacy deployments | 25–35% YoY | Maintain High Conviction |
| VRT | Bear | Liquid cooling adoption rate | Air cooling innovation delays adoption; hyperscaler in-house thermal IP competes | 10–15% YoY | Reduce to Selective |
| CEG | Bull | Nuclear PPA pricing & volume | >$100/MWh on new PPAs; 3+ hyperscaler agreements signed in 2026 | 20%+ YoY earnings | Upgrade weighting |
| CEG | Base ★ | Nuclear PPA pricing & volume | $80–100/MWh; 1–2 new hyperscaler PPAs on Microsoft TMI template | 10–15% YoY earnings | Maintain High Conviction |
| CEG | Bear | Nuclear PPA pricing & volume | Regulatory delays on nuclear permits; PPA pricing <$75/MWh; no new agreements in 2026 | 0–5% YoY earnings | Reduce to Selective |
Sensitivity Table C · Equinix (NASDAQ: EQIX) — Bull / Base / Bear Scenario Analysis
| Scenario | Key Variable | N. America Vacancy | Lease Rate Δ (Renewals) | Revenue Growth | Conviction Impact |
|---|---|---|---|---|---|
| Bull | Vacancy tightens further; pricing power accelerates | <1.5% | +15%+ on renewals in VA, London, Singapore | >15% YoY | Upgrade to maximum weight; dividend growth 10%+ |
| Base ★ | Vacancy stable at historic lows; pricing power maintained | 1.5–3.0% | +8–12% on renewals | 10–12% YoY | Maintain High Conviction; steady dividend growth |
| Bear | Hyperscaler self-build reduces tier-1 colo demand; vacancy rises | >5.0% | Flat to –5% on renewals | 3–6% YoY | Reduce to Selective; monitor vacancy quarterly |
Exhibit S4 · Micron Technology MU — HBM3E & DRAM Cycle Sensitivity Analysis April 2026
| Scenario | HBM3E ASP ($/GB) | MU Revenue FY2026E | Gross Margin | HBM Revenue | Key Trigger |
|---|---|---|---|---|---|
| Bull | $28 | ~$45B | 42–46% | ~$12B | Samsung HBM3E yield issues persist through 2026; Micron gains 18–22% share; NAND supply discipline holds |
| Base | $22 | ~$38B | 35–40% | ~$7.5B | Samsung partially recovers by mid-2026; Micron stabilises at 12–15% HBM share; DDR5 balanced |
| Bear | $16 | ~$28B | 24–28% | ~$3.5B | Samsung full HBM3E yield recovery Q2 2026; AI efficiency compresses memory demand; China ban risk widens |
| Not investment advice. A.L. Capital Advisory framework, April 2026. See §09 Memory section for full HBM methodology. | |||||
Capex Efficiency & Quarterly Watch List
The AI capex cycle investment thesis is straightforward to track. Five metrics, updated each earnings quarter, determine whether the base case is intact, accelerating, or showing early bear-case signals. A.L. Capital Advisory monitors each figure below against the thresholds defined in the Model Bridge. The most critical single variable is the hyperscaler capex/revenue ratio — when this begins declining, it signals the AI ROI inflection that re-rates cloud infrastructure equities.
Exhibit 11 A.L.C. Original Analysis · Hyperscaler AI Capex Efficiency: Revenue vs. Spend, 2024–2026E
| Company | 2024 Revenue | 2024 Capex | Cap/Rev 2024 | 2026E Capex | Cap/Rev 2026E (vs 2024 revenue) | Signal |
|---|---|---|---|---|---|---|
| Amazon (AWS) NASDAQ: AMZN | ~$590B | ~$75B | ~13% | $200B | ~34% | ↑ Rising — watch Q3 2026 |
| Alphabet NASDAQ: GOOGL | ~$350B | ~$52B | ~15% | $180–190B | ~51–54% | ↑ Rising — ratio more than triples vs 2024 |
| Meta Platforms NASDAQ: META | ~$190B | ~$44B | ~23% | $125–145B | ~66–76% | ↑ Rising — pure internal spend, no external cloud revenue offset |
| Microsoft NASDAQ: MSFT | ~$245B | ~$56B | ~23% | ~$190B | ~78% | ↑ Rising — highest ratio of four; Azure ROI key watchpoint |
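The capex/revenue ratios recompute directly from the exhibit's inputs. Note two assumptions in this sketch: Microsoft uses the revised ~$190B CY2026 guidance from the Q1 update rather than the original $120B+ figure, and Meta uses the upper end of its raised $125–145B range:

```python
# Capex intensity check: 2026E capex over 2024 revenue, per Exhibit 11.
companies = {              # ticker: (2024 revenue $B, 2026E capex $B)
    "AMZN":  (590, 200),
    "GOOGL": (350, 180),
    "META":  (190, 145),   # upper end of the raised $125-145B range
    "MSFT":  (245, 190),   # revised CY2026 guidance per the Q1 update
}
for ticker, (rev, capex) in companies.items():
    print(f"{ticker}: {capex / rev:.0%}")
```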
Exhibit 12 A.L.C. Quarterly Framework · Quarterly Watch List: Five Metrics That Determine Whether the AI Capex Cycle Stays on Track
| Metric | April 2026 Reading | Base-Case Range | Bull Signal | Bear Trigger | Source · Cadence | Position Impact |
|---|---|---|---|---|---|---|
| N. America colo vacancy | 2.3% | <3% healthy · <6% neutral | <1.5% — pricing power maximum | >6% — oversupply entering market | JLL Research · Quarterly | EQIX · CEG land value |
| Hyperscaler capex/revenue | 45–57% | Declining from 2027 = base | Ratio declining = ROI inflection | Rising >60% into 2027 = discipline breakdown | Earnings calls · Quarterly | MSFT · GOOGL rating |
| Enterprise AI deployment | Early stage | Scale deployment by end-2027 | Fortune 500 AI ROI disclosures >20% | Enterprise pilots cancelled at scale | Earnings · Industry surveys · Q | Demand curve scenario |
| NVIDIA GPU lead times | 12–18 months | 8–18 months = healthy demand | >18 months — demand acceleration | <4 months — demand slowdown signal | NVDA earnings · Analyst checks · Q | NVDA conviction level |
| Nuclear PPA pricing | $80–100/MWh | $80–100/MWh = base case | >$100/MWh — power scarcity premium | <$60/MWh — regulatory or gas competition | CEG earnings · DOE data · Q | CEG earnings upgrade/downgrade |
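The vacancy thresholds in the watch list lend themselves to a mechanical check. A minimal evaluator, with bands taken from the colocation vacancy row of Exhibit 12:

```python
# Signal evaluator for the N. America colo vacancy metric (Exhibit 12 bands).
def vacancy_signal(vacancy_pct: float) -> str:
    """Map a vacancy reading to the watch-list band it falls in."""
    if vacancy_pct < 1.5:
        return "bull: pricing power maximum"
    if vacancy_pct < 3.0:
        return "base: healthy"
    if vacancy_pct <= 6.0:
        return "neutral: monitor"
    return "bear trigger: oversupply entering market"

print(vacancy_signal(2.3))   # the April 2026 reading
```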
The most underappreciated dynamic in the AI capex cycle is the asymmetry between Energizer positions and Technology Developer positions. NVIDIA's revenue depends on whether hyperscalers keep buying GPUs — a decision driven by enterprise AI monetisation, competition, and export controls. Vertiv's and Constellation Energy's revenue depends on whether data centers keep consuming power and cooling — a physical requirement that exists regardless of which AI model wins, which cloud platform dominates, or which semiconductor generation is current. Power consumption does not have a "DeepSeek moment." The Energizer archetype's structural durability explains why VRT and CEG carry the highest conviction durability score in the A.L. Capital Advisory Model Bridge, despite being less widely owned than NVDA in institutional AI baskets.
Update History
- Apr 30 2026 Version 2 — Q1 2026 earnings update. Big-5 hyperscaler capex revised to ~$725B (up from $660–690B pre-earnings consensus). Alphabet raised to $180–190B (Intersect acquisition), Meta raised to $125–145B (memory inflation), Microsoft guided ~$190B CY2026. Google Cloud Q1: $20B (+63% YoY). AWS Q1: $37.6B (+28% YoY). Breaking Intelligence section added. Micron elevated to fifth High Conviction position. All exhibits and data appendix updated.
- Feb 01 2026 Version 1 — Initial publication. AI capex cycle analysis based on pre-Q1 2026 guidance ($660–690B consensus). Four High Conviction positions: NVDA, VRT, EQIX, CEG. Full McKinsey demand model, Goldman Sachs capacity shortfall, JLL vacancy data, Project Stargate programme analysis.
References
- 1. McKinsey & Company. "The cost of compute: A $7 trillion race to scale data centers." Jesse Noffsinger, Mark Patel, Pankaj Sachdeva. TMT Practice, April 2025.
- 2. JLL Research. North America Colocation Vacancy, H1 2025. Published June 2025.
- 3. KKR Global Infrastructure. "Beyond the Bubble: Why We Think AI Infrastructure Will Compound Long after the Hype." November 2025.
- 4. Goldman Sachs. "Powering the AI Era." 2025. Cited via Empower Investment Insights, 2025.
- 5. CreditSights. "Technology: Hyperscaler Capex 2026 Estimates." November 25, 2025.
- 6. Futurum Group (Nick Patience). "AI Capex 2026: The $690B Infrastructure Sprint." February 12, 2026. Updated post-Q1 2026 earnings: consensus revised to ~$725B (FT, April 30, 2026).
- 7. Morgan Stanley / J.P. Morgan. AI Infrastructure Debt Issuance Projections, 2025. Cited via Introl Blog, December 2025.
- 8. Bain & Company. AI Infrastructure Capital Intensity Research, 2025. Cited via Empower Investment Insights.
- 9. White House / OpenAI. Project Stargate Announcement. January 2026.
- 10. Morningstar. "AI Arms Race: How Tech's Capital Surge Will Reshape the Investment Landscape in 2026." December 12, 2025.
- 11. State Street Global Advisors (SSGA). "Why the AI CapEx Cycle May Have More Staying Power Than You Think." November 17, 2025.
- 12. U.S. Bureau of Labor Statistics. GDP and capex share data. Bloomberg terminal data as of June 30, 2025 (cited via KKR GMAA).
- 13. DeepSeek V3 efficiency claims: TechCrunch, January 27, 2025; Artificial Analysis, January 27, 2025.
- 14. All stock-specific analysis, conviction ratings, and projections represent independent views of A.L. Capital Advisory. Not investment advice.
- 15. A.L. Capital Advisory Historical Infrastructure Cycles Analysis. Peak capex as % of US GDP: Railroads 1880s (BLS, Federal Reserve historical data); Electrification 1920s (BLS, NBER Macrohistory Database); Fiber & Telecom 2000 peak (BLS, KKR GMAA, Bloomberg); AI Infrastructure 2026E (CreditSights, Futurum Group). GDP denominator: US nominal GDP at each cycle peak, Federal Reserve Economic Data (FRED). Methodology and calculations original to A.L. Capital Advisory, April 2026.
- 16. A.L. Capital Advisory Capex Efficiency Analysis. Revenue figures sourced from company-reported annual results (Amazon FY2024 $590B, Alphabet FY2024 $350B, Meta FY2024 $165B, Microsoft FY2024 $245B). Capex figures: CreditSights November 2025, Futurum Group February 2026. Capex/Revenue ratio and AI-specific revenue estimates are A.L. Capital Advisory calculations, April 2026. Not investment advice.
- 17. A.L. Capital Advisory Quarterly Watch List Framework. Vacancy threshold methodology derived from JLL Research historical data. GPU lead-time ranges sourced from NVIDIA earnings calls and analyst channel checks. Nuclear PPA pricing ranges from Constellation Energy investor relations and DOE Energy Information Administration. Enterprise deployment assessment is A.L. Capital Advisory qualitative judgement based on public earnings disclosures. Framework original to A.L. Capital Advisory, April 2026.
Translate research into portfolio decisions
The Strategic Session is where we take research like this and build concrete allocation decisions — position sizing, archetype exposure, phase timing — tailored to your risk profile.
Book a Strategic Session →