Zhidongxi, April 8th report - DeepMind co-founder Demis Hassabis' latest half-hour interview is now available.
In the interview, Hassabis stated that the possibility of achieving AGI within the next five years is very high. He also revealed that over the past decade or even fifteen years, about 90% of the key breakthroughs underpinning the modern AI industry came from Google Brain, Google Research, or DeepMind teams. He expressed full confidence: "If there are any missing key breakthroughs in the future, we have the capability to achieve them."
Regarding the commoditization of model capabilities, Demis Hassabis believes that the gap between the leading labs is actually beginning to widen, and that it will become increasingly difficult to extract gains from the same ideas. Therefore, labs able to invent entirely new algorithmic ideas will gain a greater advantage in the coming years, as the previous wave of ideas has been "squeezed dry."
In the video, Demis Hassabis had an in-depth conversation with host Harry Stebbings, discussing core topics such as the timeline and technical bottlenecks for AGI, model commoditization, the future of open source, the post-large language model era, and whether AI can truly solve drug R&D problems. He shared the reasons for DeepMind's progress and future plans, and also talked about his first impression of meeting Elon Musk.
The core viewpoints revealed in the interview are as follows:
1. The possibility of achieving AGI within five years is very high, with computing power being the biggest bottleneck.
2. Under the scaling law, the return on computing power investment is decreasing but still considerable.
3. Continuous learning is a major current shortcoming of AI. Additionally, AI performs exceptionally well when asked a specific question in a specific way, but if you change the phrasing, it might fail even at very basic things. Demis Hassabis calls this phenomenon "Jagged Intelligence".
4. AGI will ultimately become the most powerful tool in science and medicine. In about five years, we will usher in a golden age of scientific discovery.
5. Future AI regulation should at least establish a set of minimum standards and several benchmarks to test systems for undesirable properties.
6. Once the technical and economic issues of AI are handled, what remains are philosophical problems.
Below is a summary of the core content of the interview:
01. AGI Within Five Years, Computing Power is the Biggest Bottleneck
Host: What is your understanding of AGI today? This can be our starting point.
Demis Hassabis: Our definition has been very consistent: AGI is a system that possesses all the cognitive abilities of the human mind. The reason for using this standard is that the human brain is the only proven instance of general intelligence we know of in the universe so far. So for me, this is the benchmark that AGI must reach.
Host: How far are we from AGI? Opinions vary widely in the industry, with some prominent figures even predicting it could happen as early as 2026 or 2027. What do you think?
Demis Hassabis: The possibility of achieving AGI within the next five years is very high.
Host: Is this closer than you originally thought? Has your judgment changed over time?
Demis Hassabis: Not really. My co-founder and DeepMind Chief Scientist Shane Legg, back in 2010 when we had just started the company, often predicted on his blog when AGI would arrive. You have to understand, almost no one took AI seriously back then; everyone thought it was a dead end. Those blog posts weren't widely read, but they are still on the internet, and anyone can look them up. We made extrapolations based on the progress of computing power and algorithms, basically predicting it would take about 20 years from the start. Looking back now, we are completely on track.
Host: So from today's perspective, what is the biggest technical bottleneck?
Demis Hassabis: I think computing power is the biggest bottleneck. This is not only because of the "scaling law": you need to keep building larger architectures, accommodating more parameters, to get smarter systems. Another area that also requires massive computing power is experimentation. Computers and the cloud are our workbenches. If you have a new idea and want to test it, you must validate it at a reasonable scale. So, if you have many researchers and many new ideas, you need extremely abundant computing power.
Host: You just mentioned the "scaling law." Many people believe we are hitting the limits of the scaling law, and performance improvements are beginning to plateau. Do you agree?
Demis Hassabis: No, I don't think so. I think the reality is more nuanced than that. Of course, when major companies started building large language models, each new generation brought huge performance leaps. This exponential growth will inevitably slow down at some point. But that doesn't mean there isn't a good return on further expanding existing systems. We and other frontier labs are still getting very considerable returns from scaling compute. It's obviously less than in the early days of scaling, but still considerable.
Host: In which areas are we actually behind your initial expectations?
Demis Hassabis: To be honest, in most areas, we are ahead of what I expected. You can look at things like video generation models, even our latest systems, like Genie, which is an interactive world model. If someone had shown me these things five or ten years ago, I would have been shocked. So, in most areas, we are ahead of the field's initial expectations. But there are still some big missing pieces, like "continuous learning," meaning current systems stop learning new things once they are trained and deployed into the real world.
02. Continuous Learning Capability is One of DeepMind's Next Plans
Host: Nowadays, when researching and preparing new shows, DeepMind has become my first choice. But two or three years ago, that wasn't the case. What do you think has driven such acceleration and progress at DeepMind?
Demis Hassabis: We did make some organizational adjustments. In fact, Google and DeepMind have always had the deepest and broadest research reserves in the industry. If you look back over the past decade or even fifteen years, about 90% of the breakthrough achievements that support the modern AI industry came from Google Brain, Google Research, or DeepMind, such as AlphaGo, reinforcement learning, and of course the Transformer architecture. These are all key milestones.
Therefore, I believe if there are any missing key breakthroughs in the future, we have the capability to achieve them. We basically brought all the top talent within the company together, working towards the same direction. Also, we consolidated all computing resources to build the largest models, instead of running two or three different versions in parallel within the company. So I think, to a large extent, we assembled all the elements we already had, advancing with a near-startup focus and speed, thus returning to the technological forefront and maintaining leadership in many areas.
Host: You said if anyone is to make a breakthrough, it should be DeepMind. So, in your view, is continuous learning the next breakthrough you most look forward to?
Demis Hassabis: I think there are quite a few things missing. Continuous learning is one of them. Additionally, researching different memory systems has great potential. Currently, we mainly rely on long context windows, stuffing all information into them, which is a bit "brute force." I think there are many interesting architectures that could be invented in this regard. There is also long-term planning, or hierarchical planning: existing systems are not good at handling long time-span planning, like things many years into the future, which the human mind can do. So there are many problems to overcome. Perhaps the biggest among them is that these systems perform exceptionally well when asked a specific question in a specific way, but if you change the phrasing, they might fail even at very basic things. General intelligence shouldn't be like that. I call this Jagged Intelligence.
03. "Very Bullish on Open Source Models"
Host: Many in the industry are also discussing the "commoditization" of model capabilities. Do you think we will see that scenario? Or will one or two labs continue to accelerate, leaving other competitors far behind?
Demis Hassabis: I think, among the current three or four leading labs—we are one of them—the gap between them is actually beginning to widen. The reason is that many existing tools (like coding tools, math tools) will help build the next generation of systems. And I think it will become increasingly difficult to extract gains from the same ideas. Therefore, labs with the ability to invent entirely new algorithmic ideas will gain a greater advantage in the coming years, because the previous wave of ideas has been "squeezed dry."
Host: Another question I have is, over the years you have been quite open about much of DeepMind's research, and we have seen many high-quality open-source models. How do you see the future of open source?
Demis Hassabis: I think it will likely be similar to what we see now. We have always been strong supporters of open science and open-source models. From the initial Transformer to AlphaFold, we have done a lot of work sharing these achievements with the world to help the research community. We plan to continue doing this, especially in application areas, like applying AI to science, which is obviously a personal passion of mine. But I also think you will increasingly see that open-source models might be one step behind the most cutting-edge models. Usually, the open-source community needs about six months to re-implement and understand those new ideas. However, we are also strongly promoting a set of open-source models called Gemma, determined to make them the best in their class for their scale. For small developers, academics, or startups just getting started, they are ideal choices, also suitable for edge computing. So for certain types of applications, we are indeed very bullish on open-source models.
04. Future AGI Requires Global Regulation
Host: Next, I'd like to ask you, how do you see the world after large language models? Different scholars have very different views, for example, Yann LeCun holds very different opinions.
Demis Hassabis: Frankly, I disagree with Yann LeCun on some issues. I think there is probably a 50% chance that there are some missing key elements, and we still need breakthroughs in directions like world models. But one thing I am very sure of is that foundation models have proven to be hugely successful. They can perform extremely impressive tasks, and I don't think this capability will disappear. We are still getting continuous returns from the scaling law. So the real question is: when we look at future AGI systems, will the LLM model (large language model) be the only key component, or part of the overall system? My judgment is that it will not be replaced, but will become the foundation for building on top, like what we are doing with world models.
Host: As you said, AGI is likely to emerge by then. So, when we look five years into the future, what will that world look like? Many people have expressed concerns from different angles. Let's start with the positive side first. In your view, what will that world be like?
Demis Hassabis: I think the positive side, and the original intention behind my entire career dedicated to building AGI, is that it will ultimately become the most powerful tool in science and medicine. We desperately need such technology to push scientific discovery and find cures for diseases. So I hope that in a little over five years, we will usher in a golden age of scientific discovery.
I think we can get close to that goal soon. First, after completing the AlphaFold protein folding project, we spun off a company—Isomorphic Labs, which is currently doing very well. Its core idea is: focus on solving the rest of the drug discovery process, including a lot of chemistry work, compound design, toxicity testing, and various property assessments required for drug safety. We expect that within the next five to ten years, the entire drug design engine will be ready.
The next bottleneck is clinical trials, which still take many years. But I believe AI can help, such as simulating certain parts of human metabolism, and precisely stratifying patients to ensure specific patients get the drugs most suitable for their genomic makeup. So AI can add value here as well. But I think the real revolution will likely come after a dozen or so AI-designed drugs successfully go through the entire process. At that time, governments and regulators will see these results and have enough data to retrospectively test the model predictions. Maybe another ten years after that, we can truly trust the predictions of these models, thereby skipping certain steps, like no longer needing animal testing, or escalating doses faster because the model's reliability has been verified. So, I think it must be a two-step process: first conquer the drug design problem, then solve the time issue of the regulatory process.
Host: Speaking of regulation, AI safety is undoubtedly a major topic and has caused widespread concern. I remember Stephen Hawking once said: We must get this right, because we might not get a second chance. Do you agree with him?
Demis Hassabis: I completely agree. I think this is exactly the risk we are facing. I am mainly worried about two things. First, malicious actors misusing these systems. Second, technical problems: in a year or two, when these systems become more embodied and more autonomous, and as we gradually move towards AGI, can we keep them on the intended safe track? I think appropriate regulation can help ensure all leading providers at least meet minimum safety standards, but ideally this requires unified standards at the international level.
Host: So, what kind of regulation is "appropriate"? To quote your words in the documentary, you mentioned "We need more global coordination," which worries me, because global coordination seems to be getting worse, not better.
Demis Hassabis: That's true. We are in an extremely special period. This technology might be the most influential technology humanity has ever had, while at the same time, the international system is highly fragmented. This is obviously not an ideal state. But we must still do our best to at least establish a set of minimum standards and several benchmarks to test systems for undesirable properties, like "deception." No one should build systems with deception capabilities, because that would allow them to bypass other safety measures. If all goes well, we can establish some kind of certification mechanism, similar to a "quality mark," indicating that the model has specific safety protections and performance guarantees, so that consumers and companies can safely build on it. I think this is the ideal direction of development. And, all of this must be international, because these systems are inherently cross-border, cross-regional.
Host: So, who will be the arbiter?
Demis Hassabis: I think the ultimate responsible entity must be governments. But the institutions capable of doing the specific technical work could be organizations like the AI Safety Institute. The UK has a very good AI Safety Institute, established during former Prime Minister Sunak's tenure, and I think it's doing a great job. The US has a similar institution. Perhaps all major countries with top research capabilities should have equivalent institutions, staffed with high-quality researchers, able to evaluate and audit these systems against specific benchmarks, independently verifying whether they meet appropriate standards.
Host: If I could give you a magic wand that only works for AI safety, what idea or plan would you use it to implement?
Demis Hassabis: I think we need some kind of international agency, perhaps similar to the International Atomic Energy Agency. AI safety institutes from various countries could provide input, and the research community must also be involved, jointly determining which benchmarks are appropriate and which properties and capabilities need to be checked.
Additionally, there might be other safety measures, for example, AI systems should not be allowed to output non-human-readable tokens, like some machine language we cannot understand. I think that would introduce new security vulnerabilities. Then, these international agencies would test for the above matters. I believe this would give the public confidence, and academia and civil society could also participate, ensuring that these systems, which will become extremely powerful, are independently checked.
05. AI Field Has Both Excessive Hype and Serious Underestimation
Host: When you see the real capabilities of these systems, how do you view the labor replacement issue? What does this mean for the labor market?
Demis Hassabis: There is no doubt that every revolutionary new technology in history has caused massive disruption to many jobs. This is certain, and I think this time will be no exception. Many old jobs will disappear, or will no longer be economically viable. But history also tells us that a whole set of new professions will be born, professions that were previously unimaginable and are often high-quality and high-income. This is how such transitions typically unfold. Of course, we must be very careful in judging whether "this time is really different." People like Marc Andreessen believe this time is no different in essence from the internet, mobile communications, and the other major breakthroughs of the past. But I do think the impact this time will be greater than any previous technological breakthrough: its scale will be ten times that of the Industrial Revolution, and its speed ten times as well. That is, it will unfold within a decade, not a century. I've read quite a few books about the Industrial Revolution; that revolution brought huge turmoil and huge progress. But ideally, this time we will mitigate those negative effects better than we did during the Industrial Revolution.
Host: Someone told me that we always overestimate what we can do in a year and underestimate what we can do in ten years. Does this judgment still hold here? Or is technological development actually faster than we think?
Demis Hassabis: No, I think this judgment still holds. I mean, perhaps both the short-term and long-term time scales are closer than with other technologies. But I do think, looking at today and the next year, the AI field is somewhat overhyped; in some respects there is no room left for more hype. Interestingly, on the other hand, I think on a time scale of about ten years, its revolutionary nature is still seriously underestimated. We can call that the long term. Even in today's AI field, this dichotomy still exists.
Host: Besides concerns about the labor market, there are concerns about income inequality and wealth concentration in the hands of a few players. Combined with your comments on the Industrial Revolution, how do you think this will evolve?
Demis Hassabis: I think there are different possible paths. For example, pension funds could buy shares of all major AI companies, ensuring everyone can share in the gains. Maybe every country should set up a sovereign wealth fund to do this. That is an investment-level solution.
Also, we need to ask: if this huge productivity gain happens only in a narrow area, how do we redistribute it, and how do we let everyone benefit from it? I can think of various ways, like using these extra productivity gains to provide infrastructure and other public services. On a five to ten year time scale, there might be incredible breakthroughs, like solving the nuclear fusion problem; we are working on this with partners like Commonwealth Fusion. I think AI will lead us to entirely new possibilities: excellent superconductors, more efficient batteries, leaps in materials science. All of this will completely change the nature of the economy.
Host: So, how do we solve the energy crisis brought by the AI revolution? Its scale in energy demand is unprecedented.
Demis Hassabis: I think, in the medium to long term, AI will pay for itself in energy costs, and more. We are working on a series of projects to optimize existing infrastructure, like optimizing the power grid. I believe we can probably improve the national grid's efficiency by another 30% to 40%. There is also climate and weather modeling; we have the world's best weather modeling system, which helps pinpoint where impacts will occur and thus allows mitigation measures. Finally, the most exciting prospects might be breakthrough technologies like nuclear fusion, new batteries, and superconductors, and AI is crucial to helping us achieve these goals. By then, humanity will enter a completely new energy landscape never experienced before, which will certainly help solve climate and environmental problems, and ultimately help us get into space at lower cost. Because if you have an incredible energy source like nuclear fusion, you have almost unlimited rocket fuel, extractable from seawater.
Host: I'll take out that magic wand again. What would you do to cultivate a growth mindset, an ability to build trillion-dollar companies that don't exist today?
Demis Hassabis: We are very good at generating startup ideas and bringing them to a certain level, like we did with DeepMind. But if you really want to cross that chasm and become a trillion-dollar global player, where do the multi-billion dollar funding rounds come from that would allow you to truly challenge the established companies? I think this was definitely missing 10 years ago when I was raising funds for DeepMind, and I think it's still somewhat missing today: that height of ambition, and the amount of capital the markets can support.
06. Hit It Off with Musk the First Time We Met
Host: Let's do a quick Q&A. What was it like meeting Elon Musk for the first time?
Demis Hassabis: It was great. It was at a Founders Fund event. At that time, both SpaceX and DeepMind were part of the same investment portfolio. I think we were both invited guests; it was probably my first portfolio meeting, around 2011 or 2012, when we were still insignificant upstarts, given only a very small speaking slot. Musk was the central figure in that portfolio; he gave the keynote speech. We met after the meeting. He joked that we said hello passing each other in the restroom. We hit it off immediately, as two people with overly ambitious ideas and a shared love of sci-fi. I really wanted to visit his rocket factory, and tried to get an invitation to SpaceX. He actually issued the invitation at the end of that meeting. Our second meeting was at the SpaceX factory in Los Angeles.
Host: So, what medical revolution are you most looking forward to?
Demis Hassabis: To be honest, I want to truly cure cancer. I know it sounds cliché, but what we are building at Isomorphic is general. We are trying to build a drug design platform applicable to any therapeutic area. So ideally, it will cover everything from neurodegenerative diseases, cardiovascular diseases, immunology to cancer. These are our current priorities, but ultimately, it should be applicable to every disease.
Host: Is there anything you are thinking about that others haven't read or talked about yet?
Demis Hassabis: Many people worry about the economic problems brought by AGI that we discussed earlier. But I am very worried about the philosophical problems behind it. For example, suppose we get the technology right and handle the economic aspects. Then what remains are philosophical problems: What is meaning? What is purpose? We will explore what consciousness is, and what it means to be human. I think these questions are about to be placed before us. We need some great new philosophers to help us find the direction.
Host: Finally, a somewhat difficult question. There are many ways to describe what you are doing. For what do you most want to be remembered? What legacy do you hope to leave?
Demis Hassabis: I hope my legacy is advancing scientific progress and creating technologies that bring huge well-being to the world, like curing those terrible diseases.
This article comes from the WeChat public account "Zhidongxi" (ID: zhidxcom), author: Jiayang, editor: Yunpeng