Pichai's 10-Year Tenure as Google CEO: Lows, Reversals, and Regrets

marsbit · Published 2026-04-10 · Last updated 2026-04-10

Introduction

In a wide-ranging interview marking his 10-year anniversary as Google CEO, Sundar Pichai reflects on the company's journey in AI, from being an early innovator with the Transformer architecture to its current leadership position. Pichai addresses the "missed opportunity" narrative, explaining that internal versions of models like LaMDA (a precursor to ChatGPT) existed but were not released due to higher safety thresholds and early "toxicity" issues. He emphasizes that Google's research was always product-driven, and attributes OpenAI's success to a fortunate combination of factors, including identifying the coding use case early. Looking forward, Pichai asserts that search will not die but will evolve into an "agent manager," where users command AI to complete tasks. He reveals that Google's massive capital expenditure, projected to reach $175-185 billion in 2026, is a testament to its belief in the AGI curve. However, he warns of a major supply crunch in 2026, citing critical bottlenecks in wafer capacity, memory, and even a shortage of electricians as fundamental constraints. Pichai also discusses Google's "hidden gems," including early-stage projects like space-based data centers, quantum computing (which he believes will excel at simulating nature), and robotics. He shares a regret: not investing more aggressively in Waymo earlier. Internally, Pichai reveals he personally spends at least an hour each week allocating scarce computing resources (TPU time), which has become the co...

John Collison, Elad Gil, and Pichai

Author: Su Yang, Tencent Technology

Editor | Xu Qingyang

Recently, on the occasion of his tenth anniversary as CEO, Google's Sundar Pichai participated in a joint interview with John Collison, co-founder of payments giant Stripe, and tech angel investor Elad Gil.

In the interview, Pichai reviewed Google's journey from behind to leading in the AI wave. He directly addressed the episode that left Googlers feeling 'unresolved': although the Transformer architecture was born at Google, it ultimately became the foundation for OpenAI's ChatGPT, which disrupted the search industry.

He admitted there is "some misunderstanding" about this externally. Transformer was created from the start to solve translation quality; it wasn't just theoretical research. The reason for not releasing it promptly was partly that Google had a "higher threshold" for search quality, and the early internal version was 'too toxic' to release.

Facing the current AI competition, Pichai believes the market is far from a zero-sum game, with a "value growth curve that is extremely steep." He also revealed that he personally approves compute allocation for at least one hour every week, calling it "the most important thing right now."

In Pichai's view, Google's full-stack vertical integration is a core advantage, spanning from the seventh-generation TPU to models and applications. He disclosed that capital expenditure for 2026 will reach $175 to $185 billion.

Regarding resource bottlenecks, he identified wafer capacity as the "fundamental constraint," warning that 2026 will be a "year of supply crunch," and said the US must learn to "build physical infrastructure at 10x speed."

He also confirmed that Google is exploring space data centers, calling it "the Waymo of 2010"—seemingly distant, but already starting with small teams and small budgets.

Pichai firmly believes that search functionality will not die but will evolve into an "agent manager." You just need to give a command, and AI agents will help you complete tasks. He even boldly predicted: by 2027, internal business forecasting at Google will be entirely automated by AI, with no human intervention needed.

Below is an edited transcript of Pichai's interview:

01. "We Weren't Slow, Our Bar Was High"

Q: People always bring up that history: Transformer was invented at Google, but it became the foundation for ChatGPT. How do you look back on that now?

Pichai: This is actually a bit misunderstood. Transformer didn't appear out of thin air. At the time, we had a very practical need: to make translation better. The TPU was the same. Speech recognition technology existed, but the problem was, we had to serve two billion users, and the existing chips couldn't handle it. We had to solve the inference efficiency problem first.

Q: So Transformer was product-oriented from the beginning?

Pichai: Yes, our research team was focused on solving practical problems from the start. As soon as Transformer came out, we immediately used it in search. Later, we did BERT (Bidirectional Encoder Representations) and MUM (Multitask Unified Model), and search quality made huge leaps during that period. Actually, we also built products similar to LaMDA (Language Model for Dialogue Applications) internally, we just didn't rush to be first to market.

Q: In other words, you did the research and saw the returns, just didn't use it to conquer the world.

Pichai: It's more than that. That product form factor, like ChatGPT, we also researched internally—that was LaMDA. Remember? There was an engineer who thought LaMDA had become sentient (and was later suspended and ultimately fired); that was essentially the prototype of an early ChatGPT. We had an internal product version long ago; we just released it about nine months after ChatGPT.

Actually, as early as the 2022 I/O conference, we launched AI Test Kitchen, which was running LaMDA behind the scenes. But we placed many restrictions on it because that version hadn't undergone RLHF (Reinforcement Learning from Human Feedback) and was quite "toxic" in its speech; we didn't dare release it directly.

Additionally, Google's requirements for search quality have always been extremely high, and the bar for product release is also higher. Even when OpenAI released ChatGPT, their partnership with Microsoft had just been finalized not long before. So looking back, ChatGPT's success wasn't such a "foregone conclusion" or "inevitable."

I think OpenAI had one very lucky aspect: through GitHub, they saw the opportunity in the programming context first. We may have missed that signal at the time.

In programming, the progress in model capability is much more pronounced than in pure language scenarios. From GPT-2 to GPT-3, to GPT-4, each leap is more significant for code writing than for chat. These factors combined led to the later situation. So I think it has less to do with "research not translating into products" and more with other factors coming together.

Q: I remember someone saying that when ChatGPT was released, it was actually quite low-key, chosen for the Thanksgiving week, and no one thought it would become what it did. It was just an interesting experiment.

Pichai: This is the norm for the consumer internet; there are always surprises. At Google, we made Google Video Search, and then YouTube came out. It was the same with Facebook; Instagram just popped up. No one looks at these things with that dramatic feeling of "I'm about to be disrupted"; Facebook's approach was to just buy Instagram.

My point is, there are always a few people huddled together building prototypes, and millions of ideas are thrown out every day. I'm not belittling anyone, but this will keep happening. You can't casually build the next iPhone in a garage, but that's the consumer internet for you. The key is to recognize this and truly internalize it into the organization's DNA.

02. Search Won't Die

Q: Google has always been known for being "fast." The earliest search showed response time on the results page; Gmail and Chrome were noticeably faster than competitors. Now Gemini runs on TPUs and is still incredibly fast. Is this a deliberate product strategy, or is there a more complex reason?

Pichai: Speed actually comes in two types. One is response speed, the latency perceived by the user; the other is iteration speed, how fast we launch new features and improve the product. Both are important.

You just asked about latency. The difficulty is, we have to keep adding new features while maintaining fast response. The search team now has a millisecond-level latency budget. For example, if you save 3 milliseconds, 1.5 milliseconds go to user experience, and the other 1.5 milliseconds count as the quota you've earned for yourself.
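The millisecond bookkeeping Pichai describes can be sketched as a simple ledger. This is a hypothetical illustration only—the function name, the 50/50 split rule, and the numbers are assumptions for clarity, not Google internals:

```python
def credit_savings(budget_ms: float, saved_ms: float, split: float = 0.5):
    """Split a latency saving between the user-facing latency budget and a
    quota a team can 'spend' on new features (the 3 ms -> 1.5/1.5 example)."""
    to_user = saved_ms * split      # ships directly as lower page latency
    to_quota = saved_ms - to_user   # banked for future feature launches
    return budget_ms - to_user, to_quota

# Saving 3 ms against a 100 ms budget: 1.5 ms to users, 1.5 ms of quota.
new_budget, quota = credit_savings(budget_ms=100.0, saved_ms=3.0)
```

The design point is that every optimization is only half "consumed" by the user, so a team that keeps optimizing accumulates headroom to add features without the page getting slower overall.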

Q: The latency humans can perceive is only a few hundred milliseconds, right?

Pichai: True. But over the past five years, while adding a bunch of features, we've also reduced search latency by 30%. Gemini is the same; the Flash model has 90% of the Pro model's capability but is much faster and cheaper. Vertical integration plays a crucial role here.

Q: Do you think search will still be here in 10 years? Some say chat is the new interface, others say everyone will have their own agent, and you can just command it to perform tasks without searching yourself.

Pichai: With every technological change, search can do more. User expectations change, and you have to change with them. In the future, many "lookups" will become agent-based—you give a task, and the agent completes it for you. Search will become an agent manager. The Antigravity I use now already has a bunch of agents working inside it.

Q: Will that form of typing a line of keywords and getting a list of links still exist?

Pichai: In today's AI Overviews in Search, some people are already running deep-research-style queries; it's not quite the form you described, but that's how people are using it. Long-running tasks will become more and more common, and they can run asynchronously.

Q: You just said search will become an agent manager. But in ten years, will that search box still be there, just that people don't pay much attention to it anymore?

Pichai: The form factor will change, and the ways of input and output will change. But honestly, thinking ten years ahead now can be paralyzing. We are lucky to be at a moment where just looking one year ahead is exciting enough. The curve is so steep; the models will be completely different a year from now. Just keeping up with the curve is thrilling enough.

And many people don't realize this is an expansive moment, not a zero-sum game. Look at YouTube; TikTok and Instagram grew, and we're still doing fine, right? The more you think others' rise means your death, the more it becomes a zero-sum game. But as long as you are innovating yourself, it won't.

We are doing both search and Gemini now; they overlap and will gradually differentiate. Having both, I think, is beneficial.

Q: Around spring/summer 2025, the market was extremely pessimistic about Google's future, saying search was finished, and your stock price fell to around $150. Looking back now, that was clearly a misunderstanding. Google's performance across the entire stack—applications, models, TPUs, as well as Waymo, YouTube, and all those cool bets—has been excellent. What do you think investors got wrong at that time?

Pichai: At that time, everyone's attention was entirely on the "reversal," the so-called "OpenAI comeback." But for me, that moment actually made me feel that Google was made for this moment. This vertical integration wasn't accidental or arbitrary. In 2016, we announced the TPU at I/O and committed to building AI data centers; now we're on the seventh generation. That year, the company also set the "AI First" direction; it wasn't just a slogan.

We were indeed one step behind on the frontier large models, but we had all the necessary capabilities internally; the rest was execution. What excites me is that, looking at the full stack, we have research teams, infrastructure teams, and various business platforms. And AI happens to accelerate all these businesses simultaneously, including Search, YouTube, Cloud, Waymo—they are all on the same curve. This is very efficient leverage.

I never thought it was a zero-sum game back then. Everything will expand tenfold, and there will be room for others too. After Google rose, didn't Amazon and Facebook also do well? We always underestimate the space created by growth. So my focus was simple: execute better.

Q: Was there a defining moment that made the outside world feel "Google is back"? Was it Gemini 3?

Pichai: People really started noticing this trend around Gemini 2.5. Especially the multimodal capabilities, which directly placed us at the forefront. Credit goes to the Google DeepMind team. We invested significant fixed costs in multimodality from the start; Gemini was designed for this direction from day one. By Gemini 2.5, the advantages began to show. For example, with Nano Banana, you could see the effect of everything integrated.

But this field changes too fast. The top two or three labs push each other forward. One month you think, 'Great, we're ahead here,' the next month it's 'Oops, we're behind there.' The landscape might be different again in a few months. The frontier is that competitive.

03. Spending $180 Billion a Year to Explore AGI

Q: Some external researchers feel that Google and other top labs have a difference: Google is less "AGI-obsessed." In other words, Google doesn't seem to believe AGI is imminent nor is it accelerating frantically around that idea. Do you think this observation is accurate? If so, does it affect your judgment on future direction?

Pichai: Look at our capital expenditure: it grew from $30 billion to $180 billion. Who would spend money like that without truly believing in this curve?

I think this is largely a semantic issue. We are a large company, and our products cover too many people and too many layers, so our way of speaking might be different. But to say Google doesn't get AGI makes no sense. Many of the field's founders are AGI enthusiasts themselves—Demis Hassabis, Jeff Dean, Ilya Sutskever, Dario Amodei—all of them worked at Google at one point.

I think the reason we might seem different from the outside is partly geographic—San Francisco concentrates more young companies and research labs. But these are surface appearances. At the core, there is no fundamental difference in our judgment of the technology curve or in how we understand and apply AI.

The real gap is whether you have witnessed the changes firsthand. In our company, there is a group of people running at the forefront every day, personally deploying and testing AI agents, watching them gradually acquire new skills and handle complex tasks. Then you look back at what they could do just three months ago, and you can tangibly feel the impact of exponential growth.

Q: I'm curious, when was the last time you felt an AGI moment was approaching?

Pichai: The first time I had that feeling was in 2012. Jeff Dean demonstrated the earliest version of Google Brain—a neural network that learned to recognize a cat. Later, Larry Page and I went to the DARPA Challenge to see self-driving cars. And Demis demonstrated early models that showed what we might call "imagination."

There have been many such moments since. Recently, the most striking has been the rapid progress in programming. You give a programming agent a complex task, and from start to finish you don't even need to open an IDE (Integrated Development Environment); you just watch it complete the task in the agent manager. That feeling—you could call it an AGI moment.

Q: I was working on a small project myself the other day, and after it ran, I realized I didn't even know what programming language it used; I had to specifically ask it. It feels like magic.

Pichai: Exactly. The slope of the curve (the speed of improvement) is what's truly astonishing. You look back three months, and you know how much progress has been made.

Q: Speaking of firsthand experience, I'm curious how you maintain a real feel for the products. Tech products are too abstract; you can't rely only on reports and slide decks. Besides routine daily use of Gmail and the like, how do you ensure you don't lose touch with users?

Pichai: I use internal versions and specifically schedule time for intensive use. Two weeks ago, I was working out at the gym with my phone on Gemini Live, and for the next 30 minutes I drilled down on one topic with it. Some experiences were good, some frustrating, but you learn things. I force myself to use these products in "super user" mode to stay in touch. X (Twitter) also helps, because sometimes you get the most direct feedback there.

Also, I now go into Antigravity (our internal version) and directly ask the AI: "We launched this feature, what does everyone think? Show me the top five worst and top five best comments." It pulls them out directly. Has my life become easier? Definitely.

In the past, I spent a lot of time trying to understand the situation; now AI agents do that part for me. Of course, I still need to spend the time experiencing it myself; it's a learning process. I'm also trying to adapt to this future.

Q: You said this isn't a zero-sum game, and productivity gains are real. But looking back at previous technology cycles—internet, mobile, SaaS—it took a long time to be reflected in GDP. With AI, we already see data center construction boosting GDP growth. Do you think the US economy will be significantly larger because of AI in the next three to five years? How much growth?

Pichai: For these returns to be meaningful, they must manifest somewhere. I remember someone from Sequoia wrote an article saying that with so much money invested, the returns must justify it.

Of course, that was two and a half years ago. Some said it was illogical because the return rate must reach a certain level to be reasonable. But now, the investment scale has probably grown 10x, and we need to re-examine these numbers. At some point, the math must work out. What is very clear is that we are supply-constrained now; we see strong compute demand across all application areas.

Q: I have no doubt it's a huge market. The problem is, many people might be calculating wrong. For example, they compare token budgets to engineer salaries. I think the software engineering market is larger than anyone thought, and increased supply will expand the market tenfold. I'm not questioning the relationship between capex and returns; I'm just curious, how big do you think the growth can actually be?

Pichai: Looking back at the development of the internet, the GDP growth numbers didn't fully reflect the change we felt. Maybe without the internet, GDP growth would have been negative. It's hard to make precise predictions; there are natural dampening mechanisms at all levels of society.

The most obvious example: the compute construction curve and the model improvement curve are截然不同 (completely different), the former is slower. Then you also have to consider, how do you diffuse the technology into society? Waymo is an example. It's safer than a human driver, but you still have to roll it out cautiously; there are constraints at all these levels. The US economy is much larger than ten years ago; even if the growth rate increases by just half a percentage point, it's a huge contribution. I think it will move in that direction.

04. Supply Chain Alerts: Memory, Electricians

Q: You mentioned supply constraints, which are indeed a defining feature of 2026. You said Google's capex is around $180 billion?

Pichai: Between $175 and $185 billion.

Q: The interesting thing is, even if Google wanted to spend $400 billion, it couldn't, because there isn't enough memory, not enough power, not enough various components. Can you talk about these bottlenecks?

Pichai: You can't even find the electricians you need.

Q: Tell us about the bottlenecks.

Pichai: Ultimately, it comes back to wafer capacity; that's the fundamental constraint. Power and energy are relatively easier to solve, but the permitting and regulatory environment is a big problem; it slows down how fast you can do things.

Q: States like Texas, Nevada, Montana have plenty of land, but it's still not enough?

Pichai: We are making huge progress, but the US really does need to learn how to build faster. Look at China's construction speed; it's astounding. We need to shift our mindset and think about how to increase the speed of building in the physical world by ten times. This will be the real constraining factor. And the friction will only increase; it's not something a few people saying "we need to build faster" can solve.

Q: There are also issues like data center moratoriums.

Pichai: Wafer capacity, permits and approvals, construction speed—these are all bottlenecks. The government has done a lot, and people realize improvements are needed. Then there are key components in the supply chain; memory is a classic example. In the short term, everyone is stuck here.

For us companies, no matter how "AGI-obsessed" you are, you have to face a practical problem: your judgment cannot be 100% accurate; there's always a margin of error. You have to figure out, how bullish are you really on future development? How much profit compression can you withstand? Because external factors can always go wrong. Everyone is making adjustments based on these uncertainties.

Q: So memory is the biggest component bottleneck you see?

Pichai: Absolutely one of the most critical right now.

Q: You said this is short-term. Will the market stimulate supply through price increases?

Pichai: Leading memory manufacturers are unlikely to expand production significantly. It will be constrained short-term but will ease slowly. And this constraint forces innovation—we will improve efficiency by 30x. These things happen simultaneously.

Q: Doesn't this reinforce an oligopoly? Models improve themselves, write their own code, label their own data; compute becomes a game of musical chairs. Whoever has more compute can go further. But if everyone's compute is allocated proportionally, it essentially sets a cap for people. Do you think this argument holds?

Pichai: There's some truth to it. But we just released Gemma 4, a very good open-source model. Chinese models are very good, but outside of China I think this is also a very good open-source model. There is still a large gap between Gemma 4 and Gemini 3's frontier capability, but in release timing the two are not far apart. It's not like a SpaceX rocket, a single behemoth.

Q: I've always found it astounding: you run a data center for months, and what comes out is essentially a flat file—something like a Word document—and that's your model. It's amazing!

Pichai: The peculiarity of that makes me want to challenge the framing. At least from an inference perspective, what you say makes sense. But everyone is trying to use capital to break through these constraints; the incentive is huge.

Q: But you just said there's only so much memory in the world. The supply issues in 2026, 2027 can't be solved by capital incentives alone. This might be when models start to differentiate more.

Pichai: Yes, but it must be considered together with factors like wafer capacity and permits. Overall, the constraints might not be as severe as imagined. You have to consider everything together, including capital.

Q: In theory, people are willing to invest more money, but they hit the real bottlenecks of 2026 and 2027. It's like the Strait of Hormuz: you can set the oil price as high as you want, but if supply drops by 20 million barrels per day, then 20 million barrels of demand must be destroyed. It's the same with memory; in the end, someone simply won't get it.

Pichai: Of course, there are other constraints like safety. But the key point is that these models will soon break through the limits of almost all existing software—maybe they already have, and we're sitting here completely unaware.

Q: So supply constraints force you to optimize and become more efficient.

Pichai: Yes, it forces you to have necessary conversations. Take safety, for example; we need more coordination, but today that coordination is far from enough. There will be a moment—and it might come suddenly. You can't wish these problems away.

05. Three "Hidden Gems"

Q: Speaking of which, Google's investment portfolio is indeed impressive. You invested in SpaceX, I remember it was about 10% a long time ago? And Anthropic, also around 10%. Waymo is majority-owned. Internally, there are TPUs, quantum computing... are there other "hidden gems" that people might not know about or underestimate?

Pichai: We are always working on various long-term projects; when first announced, even the slightly marginal ones seem a bit absurd. Take space data centers: we are at the very early stages right now. You just said constraints spark creativity, and that's exactly the point.

From a 20-year perspective, where are you going to build these data centers? The question is difficult, but it's what we are thinking about today, just as when we started Waymo in 2010. Quantum computing is another one; we are pushing forward steadily, and I'm excited about it.

Q: Where do you think quantum computing will have the biggest impact? People mainly talk about molecular modeling and cryptography. But some are developing post-quantum cryptography (referring to new cryptographic techniques resistant to quantum computing attacks), and in molecular modeling, deep learning is already very strong, AlphaFold is an example. Will quantum really be important? If so, where will its biggest impact be?

Pichai: On an abstract level, I think quantum computers are more suitable for simulating nature. Because nature itself follows the laws of quantum mechanics, simulating it with quantum systems would be more direct and efficient. Of course, classical computers with sufficient compression algorithms could theoretically also do it, but my intuition is that quantum will have the advantage.

An example: we still don't fully understand the "Haber process" in fertilizer production, and there are many other complex natural phenomena. My intuition is that in simulating weather, simulating reality, quantum computing will ultimately prevail.

Technology history teaches us one thing: once you make something usable, people will find all sorts of applications you never imagined. I always like to give this example: mobile phones plus GPS later enabled Uber. The people making phones back then could never have imagined that. So I believe that once quantum computers are truly built, their applications will be far beyond anyone's imagination.

Q: Sorry to interrupt—please continue with those forward-looking projects you mentioned.

Pichai: The Google DeepMind team is deeply involved in robotics. Google actually ventured into robotics very early, but it was too early. Looking back now, AI was the missing piece of the puzzle back then. The Gemini Robotics model is already top-tier in spatial reasoning. Interestingly, we are now collaborating with Boston Dynamics, Agile, and other companies to push forward together.

There's also Wing, drone delivery. We are scaling up; soon, 40 million Americans will be able to use Wing's services. This isn't years away; it's happening very soon. These long-term projects are built up bit by bit.

Also, there's Isomorphic.

Q: Isomorphic is indeed very exciting.

Pichai: Yes, we are focused on using models to improve every step of drug discovery. Phase III clinical trials and other procedures still lie ahead, but AI assistance gives us more confidence of success.

06. Regret Not Investing in Waymo Earlier

Q: How is Google's capital actually allocated? Textbooks say capital allocation is about putting money where the returns are highest. Boeing is a classic example: defense contracts have an internal rate of return (IRR) of 16%, new airliners 19%, everyone would choose the latter. But Google's projects can't be calculated that way. Invest more in YouTube, optimize the algorithm, user dwell time increases, revenue goes up. Invest more in Waymo, accelerate expansion, but don't know when it will make money at scale. Invest in an AI research project, might not see results for five years. The return curves of these three projects are completely different. How do you compare them?

Pichai: This is a good question. Ironically, we encounter this question more often now than ever because of TPU allocation. To some extent, even Waymo needs TPUs; compute makes the capital allocation issue particularly prominent.

By the way, I especially look forward to AI helping me with this. Once we unlock all the data, the models can actually handle it; right now we're stuck on unlocking the data. I think this will help soon.

Looking back, Google has a big advantage: we often make decisions at a very early stage. This has a lot to do with the company's technical DNA.

For long-term projects, the early stage is actually easier because it doesn't require much capital initially. The real difficulty is sustaining long-term investment and continuously assessing the progress of the underlying technology. Take quantum computing as an example, how do we decide whether to keep investing? We look at logical qubit error rates, when we can reach the threshold for stable, large-scale logical qubits, whether the team can break through these technical hurdles.

One very important lesson I've learned is: bet deeply on technology early.

In the long run, you are essentially using intuition to judge a project's option value and potential market size 5 to 10 years out. You first assume a very aggressive growth curve, then work backwards: does this decision actually make sense?

TPU investment was done this way; we have been investing steadily. Waymo too: about two or three years ago, when the world was extremely pessimistic about self-driving, we actually increased investment. Others retreated; we doubled down.

Q: Back to the capital allocation you mentioned. Google does kill projects—Loon (the balloon network project) was shut down—but Waymo endured for so long and you never gave up. What did you see back then? Was it a qualitative or quantitative judgment? How do you decide which projects to kill and which to keep?

Pichai: We do have some quantitative metrics. For example, looking at Waymo's driving system, how its safety and reliability are improving. It's a long-term curve; you set goals and then monitor execution continuously. Our team has always been outstanding. Progress was indeed slow in some phases, but you have to believe the team can break through. The more you can assess at the deep technical level, the more accurate your decisions. At least that's how I do it.

Q: I've heard a saying: Waymo early on relied on hand-drawn maps and heuristic rules, which could handle very limited situations. The real breakthrough was switching to end-to-end deep learning a few years ago, just in time to catch the Transformer wave. If Waymo had started only five years ago, would it be in about the same place now? Or was that decade-plus of accumulation actually essential?

Pichai: You can think of Waymo as a robot. Theoretically, people who started doing robotics just three years ago should progress faster. But Waymo is different; it's a highly integrated system, not like TSMC or SpaceX, which compete on technical sophistication in a single dimension. For this kind of system integration, timing and the accumulation of craftsmanship are very critical. That said, the end-to-end approach will indeed be an accelerator.

Q: So continuously nurturing a team is itself a huge advantage. You kept investing, and when the technology took off, it paid off. That's smart. How does this extend to other areas? In robotics, for example, will you go back to building your own hardware, or rely mainly on partners?

Pichai: We keep an open mind. But from Waymo and TPU, I learned one thing: in areas involving safety and regulation, you need first-hand product feedback loops. Owning first-party hardware will ultimately become very important.

07. Personally Evaluating Compute Allocation Weekly

Q: In the past, R&D spending was mainly on personnel salaries, and technology costs were secondary. Now TPU compute has become a major part of the budget. How does it work specifically inside Google? Is there an overall TPU budget? When allocating to projects, was it previously based on headcount, and now it's "headcount + compute" budget? How do quarterly reviews work?

Pichai: We have always had compute budgets, but now compute is truly severely constrained. I spend at least one hour every week looking very carefully at how much compute each project and team is using, evaluating how to allocate it. This matter is now the top priority.
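As a toy illustration of the kind of weekly decision described above — dividing a fixed pool of scarce TPU hours among competing projects — here is a minimal greedy sketch. The project names, priority scores, and the greedy rule itself are all hypothetical; nothing here reflects Google's actual process.

```python
# Toy sketch: allocate a fixed weekly TPU-hour budget by priority.
# Project names and scores are invented for illustration only.

def allocate_compute(budget_tpu_hours, requests):
    """Greedily fund requests in descending priority until the budget runs out.

    requests: list of (name, hours_asked, priority) tuples.
    Returns a dict mapping project name -> hours granted.
    """
    allocation = {}
    remaining = budget_tpu_hours
    for name, asked, priority in sorted(requests, key=lambda r: -r[2]):
        granted = min(asked, remaining)  # partial grants when budget is tight
        allocation[name] = granted
        remaining -= granted
    return allocation

requests = [
    ("gemini-pretraining", 500, 0.90),
    ("search-ai-mode", 300, 0.80),
    ("cloud-customers", 400, 0.95),   # customer commitments ranked highest
    ("research-exploration", 200, 0.50),
]
print(allocate_compute(1000, requests))
```

With a 1,000-hour budget, the two highest-priority requests are fully funded, the third is partially funded, and the lowest-priority one gets nothing — the "severely constrained" regime the interview describes.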

Q: So compute has become a scarce resource, and you need to ensure it's spent on the most worthwhile places.

Pichai: Exactly.

Q: What about Google Cloud? You need compute for yourselves on one hand, and you also sell it to customers on the other. How do you handle this conflict?

Pichai: Through advance planning. The cloud team does forward-looking planning, and our commitments to customers must be honored without fail. Everyone is operating in a constrained world; the cloud team also always says compute is insufficient, but advance planning solves most problems.

Q: Speaking of Google Cloud, the MCP integration (Model Context Protocol, through which AI assistants interact with Google Cloud) is very easy to use: your AI can call Google Cloud programmatically and do almost anything short of core permission settings. Previously, Google Cloud's biggest pain point was having too many features and too much clutter; after logging in, you had to create organizations, create projects, and hunt for services — very troublesome. None of that matters now; you just say "add this feature." The AI understands all the API documentation and becomes a navigation layer. The experience is excellent.

Pichai: AI as an orchestration layer can handle anything you can think of. It's the same inside enterprises; CEOs don't lack data, they lack the method to put data together. In the past, you had to do a big ERP project; now AI is that orchestration layer.
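The "orchestration layer" idea can be sketched minimally as a router that maps a natural-language request onto backend operations. In practice an LLM would do the routing against real API documentation; here a trivial keyword lookup stands in, and the tool names and outputs are entirely hypothetical.

```python
# Minimal illustration of an orchestration layer: a router maps a
# natural-language request to one of several backend operations.
# Tool names and behavior are invented; no real cloud API is implied.

TOOLS = {
    "create_bucket": lambda name: f"bucket '{name}' created",
    "enable_logging": lambda name: f"logging enabled on '{name}'",
}

def orchestrate(request):
    # Hypothetical routing rule: pick the tool whose keyword appears.
    # A real system would let a model choose the tool and its arguments.
    if "bucket" in request:
        return TOOLS["create_bucket"]("demo-bucket")
    if "logging" in request:
        return TOOLS["enable_logging"]("demo-bucket")
    return "no matching tool"

print(orchestrate("please add a storage bucket"))
```

The point of the sketch is the shape, not the routing rule: the user states an intent, and the orchestration layer — not the user — knows which underlying services to stitch together.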

Q: The more complex the product, the greater the benefit of AI navigation. Stripe has also experienced this, but the effect should be more pronounced for GCP.

Pichai: We can do even better, but you are right, the opportunity is huge.

Q: What interests me about products like OpenClaw is that they allow consumers to use stateful AI. For example, "summarize the news I'm interested in and send it to me every morning"—this kind of thing requiring persistent memory—mainstream AI apps can't do it yet. Is this functionality coming soon?

Pichai: That's definitely the direction. Users will want to run persistent, long-term tasks reliably and securely. Issues like identity and permissions still need to be worked out. But this is the future of AI agents; bringing this capability to consumers is an exciting frontier we are exploring.
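The "persistent memory" requirement can be made concrete with a tiny sketch of what a stateful task record might look like: an instruction, a schedule, and memory that survives across runs. Every name and field here is invented for illustration; no real agent product's API is implied.

```python
# Hypothetical sketch of a stateful, recurring agent task.
# Field names, the store, and the summarizer hook are all invented.

def create_task(store, task_id, instruction, schedule):
    store[task_id] = {
        "instruction": instruction,  # e.g. "summarize the news I follow"
        "schedule": schedule,        # e.g. "daily@07:00"
        "memory": [],                # context persisted across runs
    }

def run_task(store, task_id, summarizer):
    task = store[task_id]
    result = summarizer(task["instruction"], task["memory"])
    task["memory"].append(result)    # state carries over to the next run
    return result

store = {}
create_task(store, "morning-news", "summarize the news I follow", "daily@07:00")
out = run_task(store, "morning-news",
               lambda inst, mem: f"run {len(mem) + 1}: {inst}")
print(out)  # prints: run 1: summarize the news I follow
```

The distinction from today's mainstream chat apps is the `memory` field: each run sees what previous runs produced, which is what makes "send it to me every morning" possible at all.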

Q: That's also what I wanted to mention. Dreamer, the company founded by the former Stripe CTO, was just acquired by Meta; they are particularly good at stateful AI. You can build small applications yourself, and the experience is very smooth — genuinely delightful. (Note: stateful AI refers to AI systems that can retain and use historical context, memory, and state across multi-step interactions or complex workflows.)

Pichai: Underneath the consumer-grade interface there will be a full coding model, plus the right tools and skills, plus the ability to run securely and persistently in the cloud. These foundational components are converging. Today maybe only 0.1% of people are living in this future, building things for themselves. But pushing it to the mass market is an exciting frontier.

Q: The companies I'm involved with, even those founded recently, have completely changed product development, engineering practices, even the positioning of design teams. Is Google rethinking these too? Have workflows changed significantly?

Pichai: Think of it as concentric circles. Some teams have already transformed deeply; my task is to spread that change outward. Early on, many of these tools were half-baked; you couldn't push them even if you wanted to. But this year the curve is shifting dramatically. Google DeepMind and some software engineering teams are already living in the agent-manager mode; their internal tool is called Jet Ski, which is essentially Antigravity. Last week we rolled it out to the search team. In a large company, change management is the biggest hurdle to technology diffusion; small companies switch much faster.

Q: I want to add a few issues we've encountered putting AI into practice. First, engineers need time to learn how to prompt AI effectively, and each company also has its own domain-specific knowledge. Second, AI-generated codebases are hard to collaborate on: changes are large in scope and the code moves fast, which makes multi-person work complex. Third, beyond engineering, data permissions are a big problem. You want the agent to answer "what's the status of this deal"; the company has that information, but the permission engine needs to be rewritten. Fourth, role definitions are changing too; engineering, product, and design may need to merge. In short, model capability is already there, but we are far from using it enough. What's your view?

Pichai: The Gemini Enterprise and Antigravity teams are solving the issues you mention one by one; that is our roadmap. We use these tools internally, hit obstacles, overcome them, and then turn the fixes into products. Identity and access control are real challenges, and our security requirements are especially high, so we must be cautious. But precisely because of that, what we release once we've solved a problem is more robust. We are paying that fixed cost right now.

08. A Timeline for AI Taking Over Human Work

Q: Google does formal business forecasting several times a year. Theoretically, you could have AI completely automate this without any human involved. In which quarter do you think Google will first achieve forecasting done entirely by AI agents?

Pichai: I predict 2027 will be a significant turning point. Initially there will still be people responsible for verification, but it will transition gradually. By 2027 these changes will be very noticeable.

Q: So beyond engineering processes, you think those non-engineering processes will genuinely start being AI-driven in 2027?

Pichai: Yes. This is also an advantage for startups; they can hire AI-native teams and operate that way from the start. We have to retrain and transform. Young companies do have an advantage here; we must drive this transformation ourselves.

Q: Are there any small projects inside Google that excite you right now?

Pichai: It might surprise people to hear this: space-based data centers. We started with a small team of a few people and a very small budget to reach the first milestone. Big ideas also start small.

Related Questions

Q: According to the interview, why did Google not release a ChatGPT-like product earlier despite having the technology (LaMDA)?

A: Pichai stated that Google had an internal version of LaMDA, an early ChatGPT-like product, but did not release it earlier due to concerns about its 'toxicity,' as it hadn't undergone RLHF (Reinforcement Learning from Human Feedback). He also cited Google's much higher bar for product quality, especially for its search engine, as a reason for the delayed release.

Q: What is Pichai's view on the future of the traditional search box and how it will evolve?

A: Pichai believes the traditional search box will not die but will evolve into an 'agent manager.' Users will be able to give commands, and AI agents will complete tasks for them. He sees search functionality expanding and adapting to new technological changes rather than being replaced.

Q: What does Pichai identify as the single biggest bottleneck for AI infrastructure scaling in 2026?

A: Pichai identifies wafer capacity (semiconductor chip production) as the 'fundamental constraint' and the biggest bottleneck. He also mentions other critical bottlenecks such as memory supply, the availability of skilled electricians, and the slow pace of physical infrastructure construction and regulatory approvals.

Q: Which long-term 'moonshot' project does Pichai compare to 'Waymo in 2010' in terms of its early, ambitious stage?

A: Pichai compares the project of building 'space data centers' to 'Waymo in 2010.' He says it is in its very early stages, starting with a small team and a small budget, but represents a long-term, ambitious bet for the company.

Q: How does Pichai personally stay involved in the critical resource allocation decisions for Google's AI development?

A: Pichai revealed that he spends at least an hour every week personally reviewing and approving the allocation of computing power (TPU capacity) across projects within Google. He considers this his most important task currently, given that compute has become a severely constrained resource.
