By Sleepy
In February 2026, hedge fund Situational Awareness LP filed its quarterly holdings report, showing that as of the end of the fourth quarter of 2025, the total market value of the fund's U.S. stock holdings was $5.517 billion.
Wall Street manages tens of trillions of dollars in assets, and $5.5 billion is just a drop in the bucket. But this fund had less than $400 million in assets under management just 12 months ago, and its founder and chief investment officer is still in his mid-twenties.
His name is Leopold Aschenbrenner. 24 years old.
In 12 months, he grew this fund from $383 million to $5.517 billion, an increase of over 14 times. During the same period, the S&P 500 gained only single digits.
What's even more surprising is his holdings. Opening the quarterly holdings report, you won't find any of the AI star companies you always see in financial news headlines. Instead, there are companies making fuel cells, Bitcoin miners that have just crawled back from the brink of bankruptcy, and a chip giant being abandoned by the entire market.
He says his fund invests in AI, but this doesn't look like an AI fund's portfolio at all. It looks more like a madman's shopping list.
But this madman happens to be one of the earliest and deepest thinkers in the world on how AI will change the world. Before joining Wall Street, he was a researcher at OpenAI, responsible for thinking about how to ensure AI doesn't go rogue when it becomes smarter than humans; later, he was kicked out for saying the wrong things and wrote a 165-page manifesto predicting a future that most people find absurd.
Later, he went all-in with his entire net worth.
Deconstructing $5.5 Billion: What Did He Actually Buy?
The most direct way to understand how brilliant Leopold Aschenbrenner is at investing is to open his holdings report and read it line by line.
His largest holding is Bloom Energy. Holding value: $876 million, representing 15.87% of the portfolio.
This company makes fuel cells. More precisely, it makes something called "solid oxide fuel cells," which convert natural gas directly into electricity with extremely high efficiency. Founder KR Sridhar was formerly an engineer on NASA's Mars exploration program and was named by Fortune magazine as "one of the five top futurists creating the future today."
An AI fund placed its biggest bet on a power generation company.
According to Gartner's predictions, the global electricity consumption of AI-optimized servers will skyrocket from 93 terawatt-hours in 2025 to 432 terawatt-hours in 2030, nearly quintupling in five years. The U.S. data center grid power demand will nearly triple by 2030, reaching 134.4 gigawatts. And the average age of the U.S. power infrastructure is already over 25 years, with many components between 40 and 70 years old, far exceeding their design lifespan.
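The scale of that jump is easy to sanity-check with back-of-the-envelope arithmetic (a minimal sketch using only the Gartner figures quoted above):

```python
# Sanity check of the Gartner projection cited above:
# AI-optimized server electricity use, 2025 -> 2030.
start_twh, end_twh = 93, 432   # terawatt-hours
years = 5

multiple = end_twh / start_twh              # overall growth factor
cagr = multiple ** (1 / years) - 1          # implied annual growth rate

print(f"growth multiple: {multiple:.1f}x")  # ~4.6x, i.e. "nearly quintupling"
print(f"implied CAGR:    {cagr:.1%}")       # ~36% per year
```

A sustained 36% annual growth rate in power demand is what makes the aging-grid problem acute: transmission infrastructure is not built on anything close to that timescale.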
In other words, AI needs more electricity than the entire grid can provide. And the grid itself is so old it's falling apart.
The scarcest resource in the AI era is not chips; it's electricity.
Bloom Energy's fuel cells neatly bypass this bottleneck. They don't need to connect to the grid; they generate power directly next to data centers, 24/7. In 2025, Bloom Energy secured a contract from CoreWeave to provide fuel cells for its AI data center in Illinois.
Speaking of CoreWeave: it happens to be Leopold's second-largest holding.
He holds $774 million worth of CoreWeave call options, plus $437 million in common stock, totaling over $1.2 billion, representing 22% of the portfolio. CoreWeave is a GPU cloud service provider that transitioned from a cryptocurrency mining operation.
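The reported weights reconcile against the fund's total (a quick sketch using the dollar figures quoted in this article, in millions of USD; small differences from the reported percentages come from rounding the dollar amounts):

```python
# Reconcile the portfolio weights quoted above against the fund total.
# All figures in millions of USD, as reported in the 13F summary.
total = 5517  # total U.S. stock holdings

holdings = {
    "Bloom Energy": 876,
    "CoreWeave (calls + common stock)": 774 + 437,
}

for name, value in holdings.items():
    # ~15.9% for Bloom Energy, ~22.0% for CoreWeave
    print(f"{name}: {value / total:.2%} of portfolio")
```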
In 2017, Mike Intrator and Brian Venturo got together to mine cryptocurrency with GPUs. In 2018, the crypto market crashed, and mining became unsustainable. But they had a bunch of GPUs. In 2019, they had a realization: GPUs aren't just for mining; they can also run AI.
So the company pivoted, from a mining operation to an AI compute arms dealer. On March 27, 2025, CoreWeave IPO'd on the Nasdaq, raising $1.5 billion at $40 per share. A company that crawled out of a mining operation became a core supplier of AI infrastructure.
Leopold was attracted to CoreWeave's massive GPU holdings and its deep ties to Nvidia. In an era where compute is productivity, whoever has GPUs is king.
But what's truly baffling is his third-largest holding: Intel. Holding value: $747 million, all of it in call options, representing 13.54% of the portfolio.
In 2025, Intel was one of Wall Street's least favored companies. Its stock price had halved from its 2024 high, its market share was being eroded by AMD and Nvidia, and CEOs were being replaced one after another. Almost every analyst was saying Intel was finished.
But Leopold chose precisely this moment to go heavily long with call options. This is an extremely aggressive move—soar if right, zero if wrong.
What is he betting on? One word: foundry.
In November 2024, the U.S. Department of Commerce announced that Intel would receive up to $7.86 billion in direct funding support through the CHIPS and Science Act. The purpose of this money is singular: to make Intel a domestic U.S. chip foundry, competing with TSMC.
In the context of U.S.-China tech decoupling, the U.S. needs an "insider" to make chips. Intel, though lagging behind, is the only choice. Leopold isn't betting on Intel's technology; he's betting on U.S. national will.
The following holdings are even more interesting. Core Scientific, holding $419 million; IREN, $329 million; Cipher Mining, $155 million; Riot Platforms, $78 million; Hut 8, $39.5 million.
These companies share a common characteristic: they are all Bitcoin mining companies.
Why would an AI fund invest in a bunch of Bitcoin miners?
Simple: because Bitcoin mining companies possess the cheapest electricity and the largest data center sites in the U.S.
Core Scientific has over 1,300 megawatts of power capacity. IREN plans to expand its capacity by 1.6 gigawatts in Oklahoma. To survive the intense computing competition, these miners have long since secured the world's cheapest electricity resources, signing long-term power purchase agreements.
And now, what AI data centers lack most is precisely electricity and sites.
In 2022, Core Scientific filed for bankruptcy due to the crypto crash. It completed restructuring in January 2024, reducing its debt by about $1 billion, and relisted on the Nasdaq. Then, it signed a 12-year contract with CoreWeave worth over $10.2 billion to convert its mining facilities into AI data centers. To focus fully on this shift, Core Scientific even plans to sell all its Bitcoin holdings.
IREN (formerly Iris Energy) signed a $9.7 billion AI contract with Microsoft, receiving a $1.9 billion prepayment. Cipher Mining signed a 15-year lease agreement with Amazon. Riot Platforms signed a 10-year, $311 million contract with AMD.
Overnight, Bitcoin miners became the landlords of the AI era.
Now, let's complete this puzzle.
Bloom Energy provides power, CoreWeave provides GPU compute, Bitcoin miners provide sites and cheap power, Intel provides domestic U.S. chip manufacturing capability. Add to that the fourth-largest holding Lumentum ($479 million, makes optical components, core for interconnecting AI data centers), the ninth-largest holding SanDisk ($250 million, data storage), and the eleventh-largest holding EQT Corp ($133 million, natural gas producer, provides fuel for fuel cells).
This is a complete AI infrastructure supply chain.
From power generation, to power transmission, to chip manufacturing, to GPU compute, to data storage, to fiber optic interconnection. He bought every link.
And the other thing he did simultaneously makes this logic even clearer. In Q4 2025, he completely sold off his positions in Nvidia, Broadcom, and Vistra. These three companies were precisely the biggest star performers of the 2024 AI rally.
He also shorted Infosys, one of India's largest IT outsourcing companies.
Sell the hottest AI chip stocks, buy the unwanted power plants and mining sites. Short traditional IT outsourcing because AI programming tools are making programmers more efficient, compressing outsourcing demand.
Every trade points to the same judgment: AI's bottleneck is not in software, but in hardware; not in algorithms, but in electricity; not in cloud models, but in the physical world.
So the question is: How did a 24-year-old form this set of beliefs?
From Son of an East German Doctor to OpenAI Rebel
Leopold Aschenbrenner was born in Germany; both his parents were doctors. His mother grew up in former East Germany, his father was from former West Germany; they met after the fall of the Berlin Wall. The family itself carries the imprint of a historical fracture—the Cold War, division, reunion. His later obsession with geopolitical competition might find its earliest seed here.
But Germany couldn't keep him. He later said in an interview: "I really wanted to leave Germany. If you're the most curious kid in class, wanting to learn more, the teachers don't encourage you; they get jealous and try to suppress you."
He called this phenomenon "tall poppy syndrome"—whoever stands tall gets cut down.
At age 15, he convinced his parents and flew alone to the U.S., entering Columbia University.
Attending university at 15 is an outlier anywhere. But Leopold's performance at Columbia turned "outlier" into "legend." He completed a double major in Economics and Mathematics-Statistics and won a string of awards, including the Albert Asher Green Memorial Prize, the Romine Prize in Economics, and Junior Phi Beta Kappa.
At 17, he wrote a paper on economic growth and existential risk. Prominent economist Tyler Cowen read it and said: "When I read it, I couldn't believe it was written by a 17-year-old. I would have been impressed if it were an MIT PhD thesis."
At 19, he graduated from Columbia as Valedictorian. This is the highest honor for undergraduates at the university. In 2021, with the world still in the shadow of the pandemic, a 19-year-old German kid stood at Columbia's graduation ceremony, delivering the address on behalf of all graduates.
Tyler Cowen gave him a piece of advice: don't get a PhD in economics.
Cowen felt economics academia had become somewhat "decadent" and encouraged him to do bigger things. Cowen also introduced him to Silicon Valley's "Twitter weirdo" culture circle, a group obsessed with AI, effective altruism, and humanity's long-term fate.
After graduation, Leopold first went to the Forethought Foundation, researching long-term economic growth and existential risks. Then he joined the FTX Future Fund founded by SBF, working with core figures of the effective altruism movement, Nick Beckstead and William MacAskill. His title was "Economist affiliated with the University of Oxford's Global Priorities Institute."
This experience was important. It meant that before entering the AI industry, Aschenbrenner had spent years systematically thinking about one question: what kind of events can fundamentally alter the course of human civilization.
Then, he joined OpenAI.
The exact timing is unclear, but he joined a special team—the "Superalignment" team. This team was formed on July 5, 2023, co-led by OpenAI co-founder Ilya Sutskever and alignment team lead Jan Leike. The goal was to solve the superintelligence alignment problem within four years, i.e., ensuring an AI vastly smarter than humans would still listen to them.
OpenAI had promised to dedicate 20% of its compute to this team. But there was a chasm between promise and reality.
Leopold saw things inside OpenAI that made him uneasy. He submitted a security memo to the board, warning that the company's security measures were "grossly inadequate" to prevent foreign governments from stealing critical algorithmic secrets. The company's reaction surprised him. HR spoke to him, saying his concerns about espionage were "racist" and "unconstructive." Company lawyers grilled him on his views about AGI and his team's loyalty.
In April 2024, OpenAI fired him for "leaking confidential information."
The alleged "leak" was sharing a brainstorming document on AGI safety measures with three external researchers. Leopold said the document contained no sensitive information and that sharing such documents internally for feedback was standard practice.
A month later, Ilya Sutskever left OpenAI. Three days after that, Jan Leike left too. The Superalignment team was dissolved. The 20% of compute OpenAI had promised was never delivered.
The irony of a team researching "how to control superintelligence" being disbanded by the company creating superintelligence cannot be overstated. But for Leopold, being fired became a form of liberation. He was no longer employed by anyone, no longer needed to carefully phrase internal memos. He could say what he truly wanted to say to the whole world.
On June 4, 2024, he published a 165-page article on a website called situational-awareness.ai. The title was simply "Situational Awareness: The Decade Ahead."
The 165-Page Prophecy
To understand Leopold's investment logic, you must first read this manifesto. Because that $5.5 billion portfolio is the financial translation of these 165 pages.
The core thesis of the manifesto can be summarized in one sentence: AGI (Artificial General Intelligence) has a very high probability of being realized by 2027.
This sounded like madness in June 2024. But Leopold's method of argumentation is direct: count the orders of magnitude.
From GPT-2 to GPT-4, AI's capabilities made a qualitative leap, from a preschooler to a smart high school student. Behind this leap was roughly a 100,000-fold (5 orders of magnitude) increase in effective computation. This growth came from stacking physical compute, improving algorithmic efficiency, and capability release through model "unbinding."
His prediction is that by 2027, growth of the same scale will happen again. In physical compute, the computational resources for training cutting-edge models will be 100 times greater than for GPT-4. In algorithmic efficiency, improvement is about 0.5 orders of magnitude per year, accumulating to about 100 times over four years. Add the gain from "unbinding," turning AI from a chatbot into a tool-using, autonomous agent, another order of magnitude jump.
Stack those together (100 times in compute, 100 times in algorithms, 10 times from unbinding) and you get another 100,000-fold increase, another qualitative leap. From a smart high school student to surpassing humans.
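The manifesto's arithmetic stacks multiplicatively, which is easy to verify (a sketch of the rough factors described above, not Aschenbrenner's exact figures):

```python
import math

# Rough effective-compute growth factors from GPT-4 to ~2027,
# as described in the manifesto's argument above. Illustrative only.
physical_compute = 100   # ~2 OOM more training compute than GPT-4
algorithmic_eff  = 100   # ~0.5 OOM/year of efficiency gains over ~4 years
unhobbling       = 10    # chatbot -> autonomous agent, ~1 OOM

total_gain = physical_compute * algorithmic_eff * unhobbling
print(f"{total_gain:,}x effective compute")                 # 100,000x
print(f"{math.log10(total_gain):.0f} orders of magnitude")  # 5 OOM
```

That 5-orders-of-magnitude product is the same size of jump the text attributes to the GPT-2 to GPT-4 transition, which is the whole basis of the "another qualitative leap by 2027" claim.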
What truly made people sit up in this article were the series of consequences he derived from this prediction.
First consequence: Trillion-dollar compute clusters.
He wrote that in the past year, talk in Silicon Valley had shifted from $10 billion compute clusters to $100 billion clusters, and recently to trillion-dollar clusters. Every six months, the board's plans add a zero. By the end of this decade, hundreds of millions of GPUs will be in operation.
This prediction sounded exaggerated in June 2024. But in January 2025, the Trump administration announced the Stargate project, jointly invested in by SoftBank, OpenAI, Oracle, and MGX, planning to invest $500 billion over four years to build AI infrastructure in the U.S. The first deployment of funds was immediately $100 billion. Construction has already begun in Texas.
The "trillion-dollar clusters" he wrote about in the manifesto became an official White House plan half a year later.
Second consequence: Power crisis.
How much electricity do hundreds of millions of GPUs need? Leopold's answer: it requires increasing U.S. electricity production capacity by tens of percentage points.
Data confirms his judgment. In 2024, the combined capital expenditure of Amazon, Microsoft, Google, and Meta exceeded $200 billion, a 62% increase from 2023. Amazon alone spent $85.8 billion, a 78% year-on-year increase. In 2025, Amazon's capex is expected to exceed $100 billion.
Most of this money was spent on data centers and power infrastructure.
Microsoft even did something unimaginable a decade ago: it signed a 20-year power purchase agreement with Constellation Energy to restart the Three Mile Island nuclear power plant.
Yes, that Three Mile Island, the site of the worst nuclear accident in U.S. history in 1979.
This nuclear plant will reopen in 2028, renamed the Crane Clean Energy Center, dedicated to powering Microsoft's data centers. Constellation Energy's CEO Joe Dominguez said: "Powering critical industries, including data centers, requires ample, carbon-free, and reliable energy every hour of every day, and nuclear power plants are the only ones that can consistently deliver on this promise."
When a software company starts restarting nuclear power plants, you know electricity has shifted from an infrastructure issue to a strategic resource issue.
Third consequence: Geopolitical competition.
The most controversial part of the manifesto is where Leopold, in near-Cold War language, defines the AGI race as a struggle for the survival of the "free world." He sharply criticized the security measures of top U.S. AI labs as mere formalities. He urgently called for AI algorithms and model weights to be treated as state secrets of the highest order.
He even predicted that the U.S. government would ultimately have to launch a national AGI project similar to the "Manhattan Project."
These arguments sparked fierce debate. Critics argued he oversimplified geopolitical complexity, using alarmist narratives to justify unconstrained acceleration.
But others felt he spoke the truth. Anthropic's Dario Amodei and OpenAI's Sam Altman also believe AGI will arrive soon.
The true value of the manifesto lies not in whether its predictions are 100% accurate, but in providing a complete, actionable thinking framework.
If AGI truly arrives around 2027, what does the world need before that? Massive amounts of compute.
What does compute need? GPUs.
What do GPUs need? Electricity.
Where does electricity come from? Power plants, nuclear plants, Bitcoin mining sites with cheap electricity.
Where are chips made? At TSMC.
But what if the U.S. and China decouple? Then you need Intel.
How do data centers interconnect? Need optical components—Lumentum.
Where is data stored? Need storage—SanDisk.
See, this is the logic of that holdings report.
The manifesto is the map; the holdings are the route. Leopold translated this 165-page macro prediction into an investment portfolio that could be bet on with real money. Every buy corresponds to a point in the manifesto. Every sell corresponds to an assumption he believes the market has mispriced.
But having a map isn't enough. In the real market, you need one more thing: the ability to keep believing you are right when everyone says you are wrong.
This ability was put to the most severe test on January 27, 2025.
The DeepSeek Shock
On January 27, 2025, the release of DeepSeek's DeepSeek-R1 model sent Wall Street into a panic. This model's performance was close to OpenAI's o1, but its usage cost was 20 to 50 times cheaper. Even more shocking, its predecessor model DeepSeek-V3 reportedly cost less than $6 million to train, using Nvidia H800 chips that were sanctioned and performance-limited by the U.S.
The market's logic instantly collapsed.
If the Chinese can train a top-tier model with $6 million and crippled chips, what does the hundreds of billions of dollars U.S. tech giants pour in every year count for? Are those trillion-dollar compute cluster plans still meaningful? Will GPU demand plummet?
Panic spread like plague. Nvidia's stock plummeted nearly 17%, losing $593 billion in market value in a single day, the largest single-day market cap loss in Wall Street history. The Philadelphia Semiconductor Index crashed 9.2%, its biggest single-day drop since the pandemic panic of March 2020. Broadcom fell 17.4%, Marvell fell 19.1%, Oracle fell 13.8%.
The decline started in Asia, spread to Europe, and finally exploded in the U.S. Nasdaq 100 index components alone lost nearly a trillion dollars in market value in one day.
Silicon Valley venture capital godfather Marc Andreessen called DeepSeek AI's "Sputnik moment" on Twitter, saying: "This is one of the most amazing and impressive breakthroughs I've ever seen, and as an open-source project, a gift to the world."
For Leopold's fund, this day should have been a disaster. His holdings were all AI infrastructure stocks, and the market was questioning the entire logic of AI infrastructure.
But according to a Fortune magazine report, an investor in Situational Awareness LP revealed that during the panic selling that day, a large tech fund called to ask how the fund was holding up. The answer they got was four words:
"Leopold says it's fine."
Why was Leopold so calm? Because in his view, the emergence of DeepSeek did not overturn his logic; it confirmed it.
His manifesto had a core argument: AI progress will not slow down; it will only accelerate.
Improving algorithmic efficiency is one of the three engines driving AI development. DeepSeek training a stronger model with less money and weaker chips proves precisely that algorithmic efficiency is improving rapidly. And the higher the algorithmic efficiency, the stronger the AI that can be produced with the same compute, which stimulates more compute demand, not less.
Using the framework of his manifesto: DeepSeek did not prove "we don't need that many GPUs"; it proved "every GPU becomes more valuable." When you can train a better model with less money, you don't stop; you train more, larger, stronger models.
Panic stems from the fear that "demand will disappear." But those who truly understand AI know that cost reduction never destroys demand; it only creates greater demand.
Leopold bought against the trend during the panic. The market soon proved him right. Nvidia and the entire AI sector quickly rebounded in the following weeks, returning to levels higher than before the crash.
In the world of investing, conviction is the scarcest asset. Not because forming conviction is hard, but because holding to a conviction when everyone says you're wrong runs against human nature.
The End of the Physical World
The story of Leopold Aschenbrenner can, of course, be flattened into a feel-good tale about a boy genius getting rich. But if you only see the money, you miss the true value of this story.
What he truly did right was to shift his gaze from the code and model parameters on the screen to the smokestacks of power plants, the substations of mining sites, and the fiber optic cables spanning the continent, while everyone else was staring at the screen.
In 2024, the whole world was discussing how powerful GPT-5 would be, how realistic Sora's videos would be, when AI would replace programmers. These discussions are important, of course. But Leopold asked a more fundamental question: how much electricity do these things need? Where does the electricity come from?
This question sounds too simple, but it is precisely this simple question that points to the biggest investment opportunity of the AI era.
AI is growing at an exponential rate, while the physical infrastructure supporting it remains stuck in the last century. Leopold saw this crack. Then he traced along this crack all the way to the end of the physical world. Every step started from a physical bottleneck, found the company solving that bottleneck, and placed a bet.
The essence of this methodology isn't new. During the 19th-century California Gold Rush, the people who made the most money weren't the gold prospectors, but those selling shovels and jeans. Levi Strauss made his fortune then.
But knowing this principle is one thing; executing it in the AI era is another.
Because to execute it, you need two abilities simultaneously: a deep understanding of technological trends, knowing AI's development path and resource needs; and a concrete grasp of the physical world, knowing where electricity comes from, how data centers are built, how fiber is laid.
The former requires you to have been in an OpenAI lab; the latter requires you to be willing to get into the weeds and study the power contracts of a bankrupt mining company.
Technical people understand AI but not electricity markets. Finance people understand markets but not AI's physical constraints. Leopold happened to have both.
But more important than ability is perspective.
There's a line in his manifesto that is often quoted: "You can see the future first in San Francisco." The subtext of this sentence is: the future is not evenly distributed.
The essence of investing is finding price mismatches in a future that has arrived but is not yet evenly distributed.
Leopold saw the AI capability curve with his own eyes in OpenAI's labs. He knew GPT-4 was not the end but the beginning. He knew there would be larger models, more compute, crazier capital investment. Meanwhile, the market was still debating "is AI a bubble?"
This is the mismatch. What he did was turn this mismatch into $5.5 billion.