John Collison, Elad Gil, and Pichai
Author: Su Yang, Tencent Technology
Editor | Xu Qingyang
Recently, on the occasion of his tenth anniversary as CEO, Google's Sundar Pichai participated in a joint interview with John Collison, co-founder of payments giant Stripe, and tech angel investor Elad Gil.
In the interview, Pichai reviewed Google's journey in the AI wave from lagging behind to leading. He directly addressed the chapter of history that left Googlers feeling "unresolved": although the Transformer architecture was born at Google, it ultimately became the foundation for OpenAI's ChatGPT, which disrupted the search industry.
He admitted there is "some misunderstanding" about this externally. The Transformer was created from the start to improve translation quality; it was never purely theoretical research. The reason it was not released promptly was partly that Google held search quality to a "higher bar," and the early internal version was "too toxic" to release.
Facing the current AI competition, Pichai believes the market is far from a zero-sum game, with a "value growth curve that is extremely steep." He also revealed that he personally approves compute allocation for at least one hour every week, calling it "the most important thing right now."
In Pichai's view, Google's full-stack vertical integration is a core advantage, spanning from the seventh-generation TPU to models and applications. He disclosed that capital expenditure for 2026 will reach $175 to $185 billion.
Regarding resource bottlenecks, he identified wafer capacity as the "fundamental constraint," warning that 2026 will be a "year of supply crunch" and that the US must learn to "build physical infrastructure at 10x speed."
He also confirmed that Google is exploring space data centers, calling it "the Waymo of 2010"—seemingly distant, but already starting with small teams and small budgets.
Pichai firmly believes that search functionality will not die but will evolve into an "agent manager." You just need to give a command, and AI agents will help you complete tasks. He even boldly predicted: by 2027, internal business forecasting at Google will be entirely automated by AI, with no human intervention needed.
Below is an edited transcript of Pichai's interview:
01. "We Weren't Slow, Our Bar Was High"
Q: People always bring up that history: Transformer was invented at Google, but it became the foundation for ChatGPT. How do you look back on that now?
Pichai: This is actually a bit misunderstood. Transformer didn't appear out of thin air. At the time, we had a very practical need: to make translation better. The TPU was the same. Speech recognition technology existed, but the problem was, we had to serve two billion users, and the existing chips couldn't handle it. We had to solve the inference efficiency problem first.
Q: So Transformer was product-oriented from the beginning?
Pichai: Yes, our research team was focused on solving practical problems from the start. As soon as Transformer came out, we immediately used it in search. Later, we did BERT (Bidirectional Encoder Representations) and MUM (Multitask Unified Model), and search quality made huge leaps during that period. Actually, we also built products similar to LaMDA (Language Model for Dialogue Applications) internally, we just didn't rush to be first to market.
Q: In other words, you did the research and saw the returns, just didn't use it to conquer the world.
Pichai: It's more than that. That product form factor, like ChatGPT, we also explored internally—that was LaMDA. Remember the engineer who believed LaMDA had become sentient (and was later suspended and then fired)? That was essentially the prototype of an early ChatGPT. We had an internal product version long ago; we just released it about nine months after ChatGPT.
Actually, as early as the 2022 I/O conference, we launched AI Test Kitchen, which was running LaMDA behind the scenes. But we placed many restrictions on it, because that version hadn't undergone RLHF (Reinforcement Learning from Human Feedback) and was quite "toxic" in its speech; we didn't dare release it directly.
Additionally, Google's requirements for search quality have always been extremely high, and the bar for product release is also higher. Even when OpenAI released ChatGPT, their partnership with Microsoft had just been finalized not long before. So looking back, ChatGPT's success wasn't such a "foregone conclusion" or "inevitable."
I think OpenAI had one very lucky aspect: they saw the opportunity first in the programming context through GitHub. That signal, we might have missed at the time.
In programming, the progress in model capability is much more pronounced than in pure language scenarios. From GPT-2 to GPT-3, to GPT-4, each leap is more significant for code writing than for chat. These factors combined led to the later situation. So I think it has less to do with "research not translating into products" and more with other factors coming together.
Q: I remember someone saying that when ChatGPT was released, it was actually quite low-key, chosen for the Thanksgiving week, and no one thought it would become what it did. It was just an interesting experiment.
Pichai: This is the norm for the consumer internet; there are always surprises. At Google, we made Google Video Search, and then YouTube came out. It was the same with Facebook; Instagram just popped up. No one looks at these things with that dramatic feeling of "I'm about to be disrupted"; Facebook's approach was to just buy Instagram.
My point is, there are always three or five people huddled together building prototypes, throwing out countless ideas every day. I'm not belittling anyone, but this will always happen. You can't casually build the next iPhone in a garage, but in the consumer internet you can. The key is to recognize this and truly internalize it into the organization's DNA.
02. Search Won't Die
Q: Google has always been known for being "fast." The earliest search showed response time on the results page; Gmail and Chrome were noticeably faster than competitors. Now Gemini runs on TPUs and is still incredibly fast. Is this a deliberate product strategy, or is there a more complex reason?
Pichai: Speed actually comes in two types. One is response speed, the latency perceived by the user; the other is iteration speed, how fast we launch new features and improve the product. Both are important.
You just asked about latency. The difficulty is that we have to keep adding new features while keeping responses fast. The search team now works with a millisecond-level latency budget: if you save 3 milliseconds, 1.5 milliseconds go to the user experience, and the other 1.5 milliseconds become a quota you've earned to spend on new features.
Q: The latency humans can perceive is only a few hundred milliseconds, right?
Pichai: True. But over the past five years, while adding a bunch of features, we've also reduced search latency by 30%. Gemini is the same; the Flash model has 90% of the Pro model's capability but is much faster and cheaper. Vertical integration plays a crucial role here.
Q: Do you think search will still be here in 10 years? Some say chat is the new interface, others say everyone will have their own agent, and you can just command it to perform tasks without searching yourself.
Pichai: With every technological change, search can do more. User expectations change, and you have to change with them. In the future, many "lookups" will become agent-based—you give a task, and the agent completes it for you. Search will become an agent manager. The Antigravity I use now already has a bunch of agents working inside it.
Q: Will that form of typing a line of keywords and getting a list of links still exist?
Pichai: Even in today's AI Overviews in search, some people are already using them for deep research. It's a different form from what you described, but that's how people are using it. There will be more and more long-running tasks in the future, and they can run asynchronously.
Q: You just said search will become an agent manager. But in ten years, will that search box still be there, just that people don't pay much attention to it anymore?
Pichai: The form factor will change, and the ways of input and output will change. But honestly, thinking ten years ahead now can be paralyzing. We are lucky to be at a moment where just looking one year ahead is exciting enough. The curve is so steep; the models will be completely different a year from now. Just keeping up with the curve is thrilling enough.
And many people don't realize this is an expansive moment, not a zero-sum game. Look at YouTube; TikTok and Instagram grew, and we're still doing fine, right? The more you think others' rise means your death, the more it becomes a zero-sum game. But as long as you are innovating yourself, it won't.
We are doing both search and Gemini now; they overlap and will gradually differentiate. Having both, I think, is beneficial.
Q: Around spring/summer 2025, the market was extremely pessimistic about Google's future, saying search was finished, and your stock price fell to around $150. Looking back now, that was clearly a misunderstanding. Google's performance across the entire stack—applications, models, TPUs, as well as Waymo, YouTube, and all those cool bets—has been excellent. What do you think investors got wrong at that time?
Pichai: At that time, everyone's attention was entirely on the "reversal," the so-called "OpenAI comeback." But for me, that moment actually made me feel that Google was made for this moment. This vertical integration wasn't accidental or arbitrary. In 2016, we announced the TPU at I/O and committed to building AI data centers; now we're on the seventh generation. That year, the company also set the "AI First" direction; it wasn't just a slogan.
We were indeed one step behind on the frontier large models, but we had all the necessary capabilities internally; the rest was execution. What excites me is that, looking at the full stack, we have research teams, infrastructure teams, and various business platforms. And AI happens to accelerate all these businesses simultaneously, including Search, YouTube, Cloud, Waymo—they are all on the same curve. This is very efficient leverage.
I never thought it was a zero-sum game back then. Everything will expand tenfold, and there will be room for others too. After Google rose, didn't Amazon and Facebook also do well? We always underestimate the space created by growth. So my focus was simple: execute better.
Q: Was there a defining moment that made the outside world feel "Google is back"? Was it Gemini 3?
Pichai: People really started noticing this trend around Gemini 2.5. Especially the multimodal capabilities, which directly placed us at the forefront. Credit goes to the Google DeepMind team. We invested significant fixed costs in multimodality from the start; Gemini was designed for this direction from day one. By Gemini 2.5, the advantages began to show. For example, with Nano Banana, you could see the effect of everything integrated.
But this field changes too fast. The top two or three labs push each other forward. One month you think, 'Great, we're ahead here,' the next month it's 'Oops, we're behind there.' The landscape might be different again in a few months. The frontier is that competitive.
03. Spending $180 Billion a Year to Explore AGI
Q: Some external researchers feel that Google and other top labs have a difference: Google is less "AGI-obsessed." In other words, Google doesn't seem to believe AGI is imminent nor is it accelerating frantically around that idea. Do you think this observation is accurate? If so, does it affect your judgment on future direction?
Pichai: Look at our capital expenditure, it grew from $30 billion to $180 billion. Who would spend money like that without truly believing in this curve?
I think this is largely a semantic issue. We are a large company, and our products reach too many people at too many layers, so our way of speaking may be different. But to say Google doesn't get AGI makes no sense. Many of the leading labs' founders are themselves AGI enthusiasts—Demis Hassabis, Jeff Dean, Ilya Sutskever, Dario Amodei—all of them worked at Google at one point.
I think the reason we might seem different from the outside is partly geography: San Francisco concentrates more young companies and research labs. But these are surface appearances. At the core, there is no fundamental difference in our judgment of the technology curve or in how we understand and apply AI.
The real gap is whether you have witnessed the changes firsthand. In our company, there is a group of people running at the forefront every day, personally deploying and testing AI agents, watching them gradually acquire new skills and handle complex tasks. Then you look back at what they could do just three months ago, and you can tangibly feel the impact of exponential growth.
Q: I'm curious, when was the last time you felt an AGI moment was approaching?
Pichai: The first time I had that feeling was in 2012, when Jeff Dean demonstrated the earliest version of Google Brain—a neural network that recognized a cat. Later, Larry Page and I went to the DARPA Challenge to see self-driving cars, and Demis demonstrated early models that showed what we would call "imagination."
There have been many such moments since. Recently, the most vivid one is the rapid progress in programming. You give a programming agent a complex task, and from start to finish you don't even need to open an IDE (Integrated Development Environment); you just watch it complete the task from the manager view. That feeling, you could call it an AGI moment.
Q: I was working on a small project myself the other day, and after it ran, I realized I didn't even know what programming language it used; I had to specifically ask it. It feels like magic.
Pichai: Exactly. The slope of the curve (the speed of improvement) is what's truly astonishing. You look back three months, and you know how much progress has been made.
Q: Speaking of firsthand experience, I'm curious how you maintain a real feel for the products. Tech products are too abstract to grasp through reports and slide decks alone. Besides routine daily use of Gmail and the like, how do you ensure you don't lose touch with users?
Pichai: I use internal versions and specifically schedule time for intensive use. Two weeks ago I was working out at the gym with my phone on Gemini Live, and for the next 30 minutes I drilled down on one topic with it. Some experiences were good, some frustrating, but you learn things. I force myself to use these products in "super user" mode to stay in touch. X (Twitter) also helps, because sometimes you get the most direct feedback there.
Also, I now go into Antigravity (our internal version) and directly ask the AI: "We launched this feature, what does everyone think? Show me the top five worst and top five best comments." It pulls them out directly. Has my life become easier? Definitely.
In the past, I spent a lot of time trying to understand the situation; now AI agents do that part for me. Of course, I still need to spend the time experiencing it myself; it's a learning process. I'm also trying to adapt to this future.
Q: You said this isn't a zero-sum game, and productivity gains are real. But looking back at previous technology cycles—internet, mobile, SaaS—it took a long time to be reflected in GDP. With AI, we already see data center construction boosting GDP growth. Do you think the US economy will be significantly larger because of AI in the next three to five years? How much growth?
Pichai: For these returns to be meaningful, they must manifest somewhere. I remember someone from Sequoia wrote an article saying that with so much money invested, the returns must justify it.
Of course, that was two and a half years ago. Some said it was illogical because the return rate must reach a certain level to be reasonable. But now, the investment scale has probably grown 10x, and we need to re-examine these numbers. At some point, the math must work out. What is very clear is that we are supply-constrained now; we see strong compute demand across all application areas.
Q: I have no doubt it's a huge market. The problem is, many people might be calculating wrong. For example, they compare token budgets to engineer salaries. I think the software engineering market is larger than anyone thought, and increased supply will expand the market tenfold. I'm not questioning the relationship between capex and returns; I'm just curious, how big do you think the growth can actually be?
Pichai: Looking back at the development of the internet, the GDP growth numbers didn't fully reflect the change we felt. Maybe without the internet, GDP growth would have been negative. It's hard to make precise predictions; there are natural dampening mechanisms at all levels of society.
The most obvious example: the compute build-out curve and the model improvement curve are completely different, and the former is much slower. Then you also have to consider how the technology diffuses into society. Waymo is an example: it's safer than a human driver, but you still have to roll it out cautiously. There are constraints at all these levels. The US economy is much larger than it was ten years ago; even if the growth rate increases by just half a percentage point, that's a huge contribution. I think it will move in that direction.
04. Supply Chain Alerts: Memory, Electricians
Q: You mentioned supply constraints, which are indeed a defining feature of 2026. You said Google's capex is around $180 billion?
Pichai: Between $175 and $185 billion.
Q: The interesting thing is, even if Google wanted to spend $400 billion, it couldn't, because there isn't enough memory, not enough power, not enough various components. Can you talk about these bottlenecks?
Pichai: You can't even find the electricians you need.
Q: Tell us about the bottlenecks.
Pichai: Ultimately, it comes back to wafer capacity; that's the fundamental constraint. Power and energy are relatively easier to solve, but the permitting and regulatory environment is a big problem; it slows down how fast you can do things.
Q: States like Texas, Nevada, Montana have plenty of land, but it's still not enough?
Pichai: We are making huge progress, but the US really does need to learn how to build faster. Look at China's construction speed; it's astounding. We need to shift our mindset and think about how to increase the speed of building in the physical world by ten times. This will be the real constraining factor. And the friction will only increase; it's not something a few people saying "we need to build faster" can solve.
Q: There are also issues like data center moratoriums.
Pichai: Wafer capacity, permits and approvals, construction speed—these are all bottlenecks. The government has done a lot, and people realize improvements are needed. Then there are key components in the supply chain; memory is a classic example. In the short term, everyone is stuck here.
For us companies, no matter how "AGI-obsessed" you are, you have to face a practical problem: your judgment cannot be 100% accurate; there's always a margin of error. You have to figure out, how bullish are you really on future development? How much profit compression can you withstand? Because external factors can always go wrong. Everyone is making adjustments based on these uncertainties.
Q: So memory is the biggest component bottleneck you see?
Pichai: Absolutely one of the most critical right now.
Q: You said this is short-term. Will the market stimulate supply through price increases?
Pichai: Leading memory manufacturers are unlikely to expand production significantly. It will be constrained short-term but will ease slowly. And this constraint forces innovation—we will improve efficiency by 30x. These things happen simultaneously.
Q: Doesn't this reinforce an oligopoly? Models improve themselves, write their own code, label their own data; compute becomes a game of musical chairs. Whoever has more compute can go further. But if everyone's compute is allocated proportionally, it essentially sets a cap for people. Do you think this argument holds?
Pichai: There's some truth to it. But we just released Gemma 4, a very good open-source model. Chinese models are very good, but outside of China, I think this is also a very good open model. There is still a large gap between Gemma 4 and Gemini 3 at the frontier, but in terms of release timing, the two were not that far apart. It's not like a SpaceX rocket, a behemoth only one player can build.
Q: I've always found it astounding: you run a data center for months, and what comes out is essentially a flat file, something like a Word document. That's your model. It's amazing!
Pichai: The peculiarity of this makes me want to challenge that framing. From an inference perspective, at least, what you say makes sense. But everyone is trying to use capital to break through these constraints; the incentive is huge.
Q: But you just said there's only so much memory in the world. The supply issues in 2026, 2027 can't be solved by capital incentives alone. This might be when models start to differentiate more.
Pichai: Yes, but it must be considered together with factors like wafer capacity and permits. Overall, the constraints might not be as severe as imagined. You have to consider everything together, including capital.
Q: In theory, people are willing to invest more money, but they hit the real bottlenecks of 2026 and 2027. It's like the Strait of Hormuz: you can set the oil price as high as you want, but if supply falls by 20 million barrels per day, then 20 million barrels of demand must be destroyed. It's the same with memory; in the end, someone inevitably won't get any.
Pichai: Of course, there are other constraints, like safety. But the key point is that these models will soon break through the limits of almost all existing software. Maybe they already have, and we're sitting here completely unaware.
Q: So supply constraints force you to optimize and become more efficient.
Pichai: Yes, it forces you to have necessary conversations. Take safety, for example; we need more coordination, but today that coordination is far from enough. There will be a moment—and it might come suddenly. You can't wish these problems away.
05. Three "Hidden Gems"
Q: Speaking of which, Google's investment portfolio is indeed impressive. You invested in SpaceX, I remember it was about 10% a long time ago? And Anthropic, also around 10%. Waymo is majority-owned. Internally, there are TPUs, quantum computing... are there other "hidden gems" that people might not know about or underestimate?
Pichai: We are always working on various long-term projects; when first announced, even the more modest ones seem a bit absurd. Space data centers are one example; we are in the very early stages right now. You just said constraints spark creativity, and that's exactly the point.
From a 20-year perspective, where are you going to build these data centers? It's a difficult question, but it's what we are thinking about today, just as we were when we started Waymo in 2010. Quantum computing is another one; we are pushing forward steadily, and I'm excited about it.
Q: Where do you think quantum computing will have the biggest impact? People mainly talk about molecular modeling and cryptography. But some are developing post-quantum cryptography (referring to new cryptographic techniques resistant to quantum computing attacks), and in molecular modeling, deep learning is already very strong, AlphaFold is an example. Will quantum really be important? If so, where will its biggest impact be?
Pichai: On an abstract level, I think quantum computers are more suitable for simulating nature. Because nature itself follows the laws of quantum mechanics, simulating it with quantum systems would be more direct and efficient. Of course, classical computers with sufficient compression algorithms could theoretically also do it, but my intuition is that quantum will have the advantage.
An example: we still don't fully understand the "Haber process" in fertilizer production, and there are many other complex natural phenomena. My intuition is that in simulating weather, simulating reality, quantum computing will ultimately prevail.
Technology history teaches us one thing: once you make something usable, people will find all sorts of applications you never imagined initially. I always like to give this example: mobile phones plus GPS later enabled Uber. The people making phones back then could never have imagined that. So I believe that once quantum computers are truly built, their applications will go far beyond anyone's imagination.
Q: Sorry to interrupt; please continue with those forward-looking projects you just mentioned.
Pichai: The Google DeepMind team is deeply involved in robotics. Google actually ventured into robotics very early, but it was too early. Looking back now, AI was the missing piece of the puzzle back then. The Gemini Robotics model is already top-tier in spatial reasoning. Interestingly, we are now collaborating with Boston Dynamics, Agile, and other companies to push forward together.
There's also Wing, drone delivery. We are scaling up; soon, 40 million Americans will be able to use Wing's services. This isn't years away; it's happening very soon. These long-term projects are built up bit by bit.
Also, there's Isomorphic.
Q: Isomorphic is indeed very exciting.
Pichai: Yes, we are focused on using models to improve every step of drug discovery. Although Phase III clinical trials and other procedures still lie ahead, AI assistance gives us more confidence in eventual success.
06. Regret Not Investing in Waymo Earlier
Q: How is Google's capital actually allocated? Textbooks say capital allocation is about putting money where the returns are highest. Boeing is a classic example: defense contracts have an internal rate of return (IRR) of 16%, new airliners 19%, everyone would choose the latter. But Google's projects can't be calculated that way. Invest more in YouTube, optimize the algorithm, user dwell time increases, revenue goes up. Invest more in Waymo, accelerate expansion, but don't know when it will make money at scale. Invest in an AI research project, might not see results for five years. The return curves of these three projects are completely different. How do you compare them?
Pichai: This is a good question. Ironically, we encounter this question more often now than ever because of TPU allocation. To some extent, even Waymo needs TPUs; compute makes the capital allocation issue particularly prominent.
By the way, I'm especially looking forward to AI helping me with this. Once we unlock all the data, the models can actually handle it; right now we're stuck on data unlocking. I think this will help soon.
Looking back, Google has a big advantage: we often make decisions at a very early stage. This has a lot to do with the company's technical DNA.
For long-term projects, the early stage is actually easier because it doesn't require much capital initially. The real difficulty is sustaining long-term investment and continuously assessing the progress of the underlying technology. Take quantum computing as an example, how do we decide whether to keep investing? We look at logical qubit error rates, when we can reach the threshold for stable, large-scale logical qubits, whether the team can break through these technical hurdles.
One very important lesson I've learned is: bet deeply on technology early.
In the long run, you are essentially using intuition to judge a project's option value and potential market size 5 to 10 years out. You first assume a very aggressive growth curve, then work backwards: does this decision actually make sense?
TPU investment was done this way; we have been investing steadily. Waymo too: about two or three years ago, when the world was extremely pessimistic about self-driving, we actually increased our investment. Others retreated; we doubled down.
Q: Back to the capital allocation you mentioned. Google does kill projects; Loon (the balloon network project) was shut down, yet Waymo endured for so long and you never gave up. What did you see back then? Was it a qualitative or quantitative judgment? How do you decide which projects to kill and which to keep?
Pichai: We do have some quantitative metrics. For example, looking at Waymo's driving system, how its safety and reliability are improving. It's a long-term curve; you set goals and then monitor execution continuously. Our team has always been outstanding. Progress was indeed slow in some phases, but you have to believe the team can break through. The more you can assess at the deep technical level, the more accurate your decisions. At least that's how I do it.
Q: I've heard it said that Waymo early on relied on hand-built maps and heuristic rules, which could handle only very limited situations, and that the real breakthrough came a few years ago when it switched to end-to-end deep learning, just in time to catch the Transformer wave. If Waymo had started five years ago, would it be about where it is now? Or was that decade-plus of accumulation actually essential?
Pichai: You can think of Waymo as a robot. Theoretically, people who started doing robotics just three years ago should progress faster. But Waymo is different; it's a highly integrated system, not like TSMC or SpaceX, which compete on technical sophistication in a single dimension. For this kind of system integration, timing and the accumulation of craftsmanship are very critical. That said, the end-to-end approach will indeed be an accelerator.
Q: So continuously nurturing a team is itself a huge advantage. You kept investing, and when the technology took off, it paid off. That's smart. How does this extend to other areas? In robotics, for example, will you go back to building your own hardware, or rely mainly on partners?
Pichai: We keep an open mind. But from Waymo and TPU, I learned one thing: in areas involving safety and regulation, you need first-hand product feedback loops. Owning first-party hardware will ultimately become very important.
07. Personally Evaluating Compute Allocation Weekly
Q: In the past, R&D spending was mainly on personnel salaries, and technology costs were secondary. Now TPU compute has become a major part of the budget. How does it work specifically inside Google? Is there an overall TPU budget? When allocating to projects, was it previously based on headcount, and now it's "headcount + compute" budget? How do quarterly reviews work?
Pichai: We have always had compute budgets, but now compute is truly severely constrained. I spend at least one hour every week looking very carefully at how much compute each project and team is using, evaluating how to allocate it. This matter is now the top priority.
Q: So compute has become a scarce resource, and you need to ensure it's spent on the most worthwhile places.
Pichai: Exactly.
Q: What about Google Cloud? You need compute for yourselves on one hand, and you also sell it to customers on the other. How do you handle this conflict?
Pichai: Through advance planning. The cloud team does forward-looking planning, and our commitments to customers must be honored without fail. Everyone is operating in a constrained world; the cloud team also always says compute is insufficient, but advance planning solves most problems.
Q: Speaking of Google Cloud, the MCP (Model Context Protocol) integration with GCP is very easy to use; your AI can call Google Cloud programmatically and do almost anything, short of the core permission settings. Previously, the biggest pain point of Google Cloud was having too many features and too much clutter; after logging in, you had to create organizations, create projects, and hunt for services, which was very troublesome. None of that matters now; you just say "add this feature." The AI understands all the API documentation and becomes a navigation layer. This experience is excellent.
Pichai: AI as an orchestration layer can handle anything you can think of. It's the same inside enterprises; CEOs don't lack data, they lack the method to put data together. In the past, you had to do a big ERP project; now AI is that orchestration layer.
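For readers unfamiliar with the protocol mentioned above: MCP (the Model Context Protocol) exposes tools to an AI assistant via JSON-RPC 2.0 messages. Below is a minimal sketch of what a tool-call request looks like on the wire; the tool name `create_bucket` and its arguments are hypothetical illustrations, not the schema of any real Google Cloud MCP server.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP 'tools/call' request as a JSON-RPC 2.0 message string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # the MCP method for invoking a server-side tool
        "params": {"name": tool_name, "arguments": arguments},
    })

# Example: ask a (hypothetical) cloud MCP server to create a storage bucket.
msg = make_tool_call(1, "create_bucket",
                     {"name": "demo-bucket", "region": "us-central1"})
print(msg)
```

The point of the "navigation layer" idea is that the assistant, not the user, decides which tool to call and with what arguments, based on the tool descriptions the server advertises.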
Q: The more complex the product, the greater the benefit of AI navigation. Stripe has also experienced this, but the effect should be more pronounced for GCP.
Pichai: We can do even better, but you are right, the opportunity is huge.
Q: What interests me about products like OpenClaw is that they allow consumers to use stateful AI. For example, "summarize the news I'm interested in and send it to me every morning"—this kind of thing requiring persistent memory—mainstream AI apps can't do it yet. Is this functionality coming soon?
Pichai: Definitely, that's the direction. Users need to run persistent, long-term tasks reliably and securely. Issues like identity and permissions need to be worked out. But this is the future of AI agents; bringing this capability to consumers is an exciting frontier we are exploring.
Q: This is also what I wanted to mention. Dreamer, the company of the former Stripe CTO, was just acquired by Meta; they are particularly good at stateful AI. You can build small applications yourself, and the experience is very smooth, genuinely delightful. (Note: Stateful AI refers to AI systems that can retain and use historical context, memory, and state across multi-step interactions or complex workflows.)
Pichai: Underneath the consumer-grade interface, there will be a full coding model, plus the right tools and skills, plus the ability to run securely and persistently in the cloud. These foundational components are converging. Today, maybe only 0.1% of people are living in this future, building things for themselves. But pushing it to the mass market is an exciting frontier.
Q: The companies I'm involved with, even those founded recently, have completely changed product development, engineering practices, even the positioning of design teams. Is Google rethinking these too? Have workflows changed significantly?
Pichai: You can picture it as concentric circles. Some teams have already transformed deeply; my task is to spread that change outward. Early on, many things were half-baked; you couldn't push them even if you wanted to. But this year the curve is shifting dramatically. Google DeepMind and some software engineering teams already live in the agent-manager mode; their internal tool is called Jet Ski, which is essentially Antigravity. Last week we rolled it out to the search team. In a large company, change management is the biggest hurdle to technology diffusion; small companies switch much faster.
Q: I want to add a few issues encountered in putting AI into practice. First, engineers need time to learn how to prompt AI effectively, and each company also has its own domain-specific knowledge. Second, AI-generated codebases are hard to collaborate on: changes are sweeping and the code moves fast, which makes multi-person collaboration complex. Third, beyond engineering, data permissions are a big problem—you want the agent to answer "what's the status of this deal," and the company has that information, but the permission engine needs rewriting. Fourth, role definitions are also changing; roles like engineering, product, and design might need to merge. In short, model capability is there, but we are far from using it fully. What's your view?
Pichai: The Gemini Enterprise team and the Antigravity team are solving the issues you mentioned one by one. This is our roadmap: we use these tools internally, hit obstacles, overcome them, and then productize the fixes. Identity and access control are real challenges, and our security requirements are especially high, so we must be cautious. But precisely because of that, when we do solve a problem, what we release is more robust. We are paying that fixed cost now.
08. AI Taking Over Human Timelines
Q: Google does formal business forecasting several times a year. Theoretically, you could have AI completely automate this without any human involved. In which quarter do you think Google will first achieve forecasting done entirely by AI agents?
Pichai: I predict 2027 will be a significant turning point. Initially, there will still be people doing verification, but it will transition gradually. In 2027, these changes will become very noticeable.
Q: So beyond engineering processes, you think the non-engineering processes will genuinely start to be AI-driven in 2027?
Pichai: Yes. This is also an advantage for startups; they can hire AI-native teams and operate this way from day one. We have to retrain and transform ourselves. Young companies do have an edge here; we must drive this transformation on our own.
Q: Are there any small projects inside Google that excite you right now?
Pichai: It might surprise people to hear this: space data centers. We started with a small team of a few people and a very small budget to reach the first milestone. Big ideas start small too.