Perspective: The current AI supercycle will last 15 years, but most are still buying stocks in the first FOMO stage

marsbit · Published on 2026-05-09 · Last updated on 2026-05-09

Abstract

This article outlines a 15-year AI supercycle, segmented into four investment stages. It argues that while most investors are still focused on the first stage, smart money is already moving to the third.

**Stage 1: The Foundation (2023-2025) - Priced In.** The semiconductor layer (e.g., NVIDIA, AMD) is complete. While growth continues, the historic entry opportunity is over as risk/reward has compressed.

**Stage 2: The Build-Out (2025-2027) - In Progress.** This phase involves building the necessary physical infrastructure: power/utilities (CEG), cooling (VRT), networking (ANET), and nuclear SMRs (OKLO, SMR). Significant upside remains, but the obvious names have already moved.

**Stage 3: The Asymmetric Bet (2026-2028) - Positioning Window.** AI moves into the physical world. Key areas include robotics/autonomy (Tesla Optimus), space/defense/drones (Rocket Lab, LUNR), and critical materials. This stage presents the best asymmetric risk/reward and is where positioning should occur now.

**Stage 4: The Endgame (2028+) - Software Dominance.** The mega-cap cloud platforms (Microsoft, Alphabet, Amazon, Meta), with their massive capital expenditures, will build the AI software layer and AGI infrastructure, aiming to win the entire cycle.

**Core Conclusion:** The cycle is confirmed in Stage 2. Stage 3 (robotics, space, defense, nuclear SMRs) is where capital is currently rotating for maximum opportunity, while the majority of investors are expected to be 12 months behind this shift.

Author: Rand Group (@cryptorand)

Compiled by: Deep Tide TechFlow

Deep Tide Intro: Crypto KOL Rand Group breaks the AI supercycle into four stages, from chips to infrastructure to robots to platform software, flagging the core tickers and risk-reward profile for each. His judgment: Stage 1 (semiconductors) is over, Stage 2 (power, cooling, networks) is being priced in, and the true asymmetric opportunity lies in Stage 3: robotics, space, defense, and nuclear energy.

The AI supercycle will last 15 years. This is year three.

Most investors are still buying Stage 1 stocks, but smart money is already rotating into Stage 3.

I've broken the entire cycle into four stages, with the most important tickers labeled for each.

The AI supercycle is the biggest investment theme of this generation. Bigger than mobile internet, bigger than cloud computing. A 15-year structural shift that will reshape every industry in the global economy. Hyperscale cloud providers just committed $725 billion in capex for 2026, nearly double last year's. Microsoft, Google, Amazon, Meta — each over $100 billion individually.

This is not speculation.

🔴 Stage 1: Over (2023-2025)

The foundation layer is complete. AMD was up 78% in 2025, NVDA up 39%, Intel just delivered a blowout Q1, pushing the Philadelphia Semiconductor Index above 10,000 for the first time. Chips still drive every stage, but the historic entry opportunity is gone; the risk-reward has compressed.

Tickers: NVDA, AMD, ARM, INTC, AVGO, MU, GLW

Sectors: Semiconductors, Memory, Photonics/Optics

Status: Foundation complete, still growing, but priced in.

🟡 Stage 2: Buildout Peak (2025-2027)

The stage most investors are only now waking up to. CEG is acquiring Calpine to become the largest private U.S. power producer at 55 GW. GEV is up over 200% in a year. VRT is co-designing cooling for NVIDIA's Rubin architecture. GLW is up 74% YTD on fiber demand. Nuclear SMRs are the biggest dark horse: OKLO, SMR, and BWXT are laying direct power lines to data centers.

Still upside, but the most obvious names have moved.

Tickers: CEG, GEV, VRT, VST, TLN, ANET, GLW, MOD, EQIX, OKLO, SMR, BWXT, NNE

Sectors: Power/Grid, Cooling, Networking, Nuclear SMRs

Note: Nuclear SMR is the hidden major opportunity.

🟡 Stage 3: Positioning Window (2026-2028)

The stage where AI leaves the data center and enters the physical world. Most will be late.

Tesla is converting its Fremont factory into an Optimus robot production line: $25 billion in capex, targeting mass production in H2 2026. Rocket Lab posted record revenue of $602M, with a backlog of $1.85B. LUNR is up 47% YTD with $943M in contracts. KTOS's Valkyrie drone was selected by the Marine Corps.

The positioning window is open now.

Tickers: TSLA, RKLB, LUNR, KTOS, AVAV, PATH, ISRG, MP, FCX, ALB, ASTS

Sectors: Robotics/Autonomy, Space/Defense/Drones, Rare Earths

Judgment: The asymmetric risk-reward is here.

🟢 Stage 4: Endgame (2028+)

The endgame. Microsoft capex $190B, Alphabet $190B, Amazon $200B, Meta $145B. Google Cloud backlog exceeds $460B. They are building AI software dominance and AGI infrastructure. Quantum computing is early, but IONQ and D-Wave are laying the groundwork.

The platforms controlling the software layer win the entire supercycle.

Tickers: MSFT, GOOGL, AMZN, META, ORCL, IONQ

Sectors: AI Software Dominance, AGI Infrastructure, Decade-long thesis

Strategy: Buy the dips.

Key Conclusions

  • Stage 2 is confirmed (hyperscale $725B capex)
  • Stage 3 is where smart money is positioning — robotics, space, defense, nuclear
  • SMR is the core trade from 2026 to 2028
  • Most will rotate into these names 12 months late

A 15-year supercycle. Not a single trade. Stage 1 is over, Stage 2 is being priced, Stage 3 is where you should be.


