Sequoia Interview with Hassabis: Information is the Essence of the Universe, AI Will Open Up Entirely New Scientific Branches

链捕手Published on 2026-05-12Last updated on 2026-05-12

Abstract

Demis Hassabis, co-founder and CEO of Google DeepMind and Nobel laureate, discusses the path to AGI and its profound implications in a Sequoia Capital interview. He outlines his lifelong dedication to AI, tracing his journey from game development (e.g., *Theme Park*)—a perfect AI testing ground—to neuroscience and finally founding DeepMind in 2009. He emphasizes the critical lesson of being "5 years, not 50 years, ahead of time" for successful entrepreneurship. Hassabis reiterates DeepMind's two-step mission: first, solve intelligence by building AGI; second, use AGI to tackle other complex problems. He highlights the transformative potential of "AI for Science," particularly in biology where tools like AlphaFold have revolutionized protein folding. He envisions AI-powered simulations drastically shortening drug discovery from years to weeks and enabling personalized medicine. Furthermore, he predicts AI will spawn new scientific disciplines, such as an engineering science for understanding complex AI systems (mechanistic interpretability) and novel fields enabled by high-fidelity simulators for complex systems like economics. He posits a fundamental worldview where information, not just matter or energy, is the essence of the universe, making AI's information-processing core uniquely suited to understanding reality. He defends classical Turing machines as potentially sufficient for modeling complex phenomena, including quantum systems, as demonstrated by AlphaFold. On con...

Original text compiled: Brother Gua AI New Knowledge

This article's content is compiled from the interview with Demis Hassabis on Sequoia Capital's channel, publicly released on April 29, 2026.

Content Overview: Demis Hassabis Interview at Sequoia Capital AI Ascent 2026

  • AI and Games Genesis: Games are an excellent proving ground for artificial intelligence. By making AI the core gameplay mechanic, it can effectively validate algorithmic ideas and also provide early-stage computational support for technological development.
  • Entrepreneurial "Timing Theory": Entrepreneurship should be "five years ahead of its time, not fifty." One must keenly grasp the balance point between technological breakthroughs and practical application needs; being too far ahead often leads to failure.
  • AGI Evolution Path: DeepMind's mission is clear and steadfast—first, build Artificial General Intelligence (AGI); second, use AGI to solve all complex problems, including those in science and medicine.
  • Core Value of "AI for Science": AI is the perfect language for describing biology and complex natural systems. With AI simulation, the drug discovery cycle is expected to shrink from years to weeks, even enabling truly personalized medicine.
  • Birth of New Scientific Disciplines: The complexity of AI systems themselves will give rise to new engineering sciences like "mechanistic interpretability." Simultaneously, AI-driven simulation technology will enable humans to conduct controlled experiments on complex social systems like economics, opening up entirely new scientific branches.
  • Information as the Essence of the Universe: Matter, energy, and information are interchangeable. The essence of the universe might be a grand information processing system, giving AI profound significance in understanding the universe's fundamental operating principles.
  • Computational Limits of Turing Machines: Modern AI systems like neural networks have proven that classical Turing machines are sufficient to simulate problems once thought solvable only by quantum computing (like protein folding).The human brain is likely some form of highly approximate Turing machine.
  • Philosophical Reflections on Consciousness: Consciousness might be composed of components like self-awareness and temporal continuity. On the journey towards AGI, we should first view it as a powerful tool, and then explore the grand philosophical question of "consciousness" with its assistance.

Content Introduction

Demis Hassabis, Google DeepMind co-founder and CEO, and winner of the 2024 Nobel Prize in Chemistry for AlphaFold, held a wide-ranging and profound conversation with Sequoia Capital partner Konstantine Buhler at the AI Ascent 2026 summit, discussing the path to AGI and the future beyond.

In the dialogue, he explained why he firmly believes AGI could be achieved by 2030, why the lengthy cycle of new drug discovery might collapse from a decade to just a few days, and why we should regard "information," rather than matter or energy, as the most core and fundamental essence of the universe. Additionally, he pondered what Einstein might say about the limitations of today's AI models if he were still alive, and why the next year or two will become a critical juncture in determining humanity's destiny.

Full Interview Transcript

Host: Demis, thank you so much for coming.

Demis Hassabis: Pleasure to be here. Thank you all for coming, it's fantastic to be here chatting with you all.

Host: It's an absolute honor to have you in our chocolate factory.

Demis Hassabis: I just heard about that. Looking forward to trying some chocolate later.

Host: Wonderful. Demis, let's dive right in. Today we have a true OG: an original thinker, founder, visionary, pioneer in all things AI. Demis is a pure believer, a pure scientist.

Demis's Origin and Inner Thread

Our conversation today will start with the early story of DeepMind's founding, then delve into science and technology, and finish with audience questions. Let's begin.

Demis, you were a chess prodigy, a game company founder, and a neuroscientist. You are the founder of DeepMind, and now lead a large, pivotal company. These identities may seem disparate, but you've said there's an inner thread connecting them. Can you share that with us?

Demis Hassabis: There is indeed a thread, although perhaps with a bit of post hoc reasoning. But my desire to work in AI goes way back. I decided very early that this was the most important and interesting thing I could spend my life on.From around 15, 16 years old, every subject I chose to study, everything I did, was with the eventual aim of one day building a company like DeepMind.

Games: The Proving Ground for AI

I "detoured" into the games industry because in the 90s, the cutting-edge technology was all there. Not just AI, but graphics rendering and hardware technology. The GPUs we all use today were originally designed for graphics engines, and I was using the earliest GPUs in the late 90s. All the games I worked on, whether for Bullfrog or my own company Elixir Studios, had AI as a core gameplay mechanic.

My most famous work was probably "Theme Park," developed when I was about 17. It's an amusement park simulation where thousands of little people pour into the park, ride rides, and decide what to buy in shops. Underneath, it runs a complete economic AI model. Like SimCity, it was a groundbreaking game in its genre. Seeing it sell over 10 million copies and witnessing firsthand how much players enjoyed interacting with the AI only reinforced my decision to dedicate my life to AI.

Later, I switched to neuroscience, hoping to draw inspiration from how the brain works to derive different algorithmic ideas. When the perfect moment finally arrived to found DeepMind, synthesizing all these accumulated experiences felt natural. And indeed, we later used games as an early proving ground for AI ideas.

Entrepreneurial Experience at Elixir Studios

Host: The room is full of entrepreneurs today, you must relate, as you've not only founded one company but have been through this twice. Let's go back to your first venture, Elixir Studios. What was that experience like? It may not be your most famous company, but you achieved great success with it. How did you lead that company? What did that experience teach you about "how to build a company"?

Demis Hassabis: Well, I founded Elixir Studios right after university. I was fortunate to have previously worked at Bullfrog Productions. Those familiar with gaming know it was an early legendary studio, probably the best in the UK, maybe Europe, at the time.

I wanted to push the boundaries of what could be done with AI. Actually, in those days, I used game development as a "detour" to fund AI research, constantly challenging the technological frontier and combining it with extreme creativity. I think that ethos still applies to the blue-sky research we do today.

Perhaps the most profound lesson I learned is: you want to be five years ahead of your time, not fifty. At Elixir, we tried to develop a game called "Republic" that aimed to simulate an entire nation. The premise was that players could overthrow the dictator ruling the country in various ways, and we simulated living, breathing cities.

This was the late 90s, PCs had Pentium processors. We had to run all the graphics rendering and AI logic for a million people on home computers of that era. It was too ambitious—over-ambitious even—and caused a cascade of issues.

I learned that lesson well:You want to be ahead, but if you're fifty years ahead, you'll probably fail. Of course, it's too late when an idea becomes obvious to everyone. So, it's about finding that sweet spot.

Founding DeepMind in 2009

Host: Okay, on the topic of not being too far ahead, fast forward to 2009. You were convinced AGI would happen. That time, perhaps only ten years ahead, better than fifty. Talk to our entrepreneurs here about 2009. How did you convince those first brilliant minds? Because you did recruit an incredibly high-caliber group of early team members. At the time, AGI sounded like pure science fiction. How did you get them to believe?

Demis Hassabis: We had picked up on some interesting threads at the time. We thought we were maybe five years ahead, but it turned out to be more like ten. Deep Learning had just been invented by Jeff Hinton and his academic colleagues, but hardly anyone realized its significance. And we had a strong background in Reinforcement Learning. We felt combining these two would lead to breakthroughs. They had rarely been combined before—if at all, only on academic "toy problems." In the AI field, they were completely separate islands.

Additionally, we saw the promise of Compute; GPUs were about to take off. Today we use TPUs, but back then, the acceleration computing industry would be a huge driver. Also, towards the end of my PhD and postdoc, as I gathered some folks who were computational neuroscientists, we extracted enough valuable ideas and principles from brain mechanisms, including a core belief: that reinforcement learning, scaled up, could ultimately lead to AGI.

We felt we had the key ingredients.We even felt like keepers of a secret because, in academia or industry, no one believed AI would make any significant breakthroughs. In fact, when we proposed aiming for AGI—or sometimes called "Strong AI" back then—many academics would literally roll their eyes. To them, it was a dead end; people had tried and failed in the 90s.

I was at MIT for my postdoc, a stronghold for Expert Systems and First-order Logic Language Systems. Looking back, it's incredible, but even then, I felt that approach was too rigid and old. But in traditional AI hubs like Cambridge, UK, or MIT, people were still using the old methods. That actually made me more confident we were on the right track.At least, if we were going to fail, we'd fail in a new way, not repeat the 90s AGI failures. That made it feel worth trying; even as a risky research endeavor, if we failed, at least we'd fail originally.

DeepMind's Mission and Betting on AGI

Host: Did your early beliefs face widespread skepticism? What did you need to prove to yourself or others to get those early followers to join?

Demis Hassabis: Regardless of circumstances, I would have dedicated my life to AI. It has exceeded even our most optimistic expectations. But it was within our 2010 prediction—we thought it would be a 20-year journey.

I think our pace, as part of the field, is exactly on track, and we've clearly played our part.

Stepping back, even if things hadn't developed this way, even if AI remained a niche subject today, I'd still be on this path because it's the most important technology ever in my view. My goal was clear, DeepMind's original mission statement was: First, solve intelligence, i.e., build AGI; second, use it to solve everything else. I've always believed this is the most important and fascinating technology humanity could invent.

It's a tool for scientific exploration, a fascinating creation in itself, and one of the best ways to understand our own minds—consciousness, dreams, creativity. As a neuroscientist, I used to think about these questions and felt we lacked an analytical tool like AI. It provides a comparative mechanism, allowing us to study and compare two different systems, almost like a controlled experiment.

Culture of "AI for Science"

Host: Comparing different systems. Let's talk about "AI for Science." You were early, a firm believer, a pure idealist. This is a core driving mission. How did the model and culture you established when founding DeepMind keep it at the forefront of "AI for Science"?

Demis Hassabis: That's the ultimate goal. For me personally, the fundamental driver is to build AI to advance science, medicine, and our understanding of the world. That's how I execute the mission—through a "meta way": first build the ultimate tool, then use it, once mature, to achieve scientific breakthroughs. We've had successes like AlphaFold, and I believe there will be many more.

DeepMind has always prioritized this goal. In fact, we have an "AI for Science" division led by Pushmeet Kohli, nearly a decade old now. We formally started this work almost right after returning from the AlphaGo match in Seoul, exactly ten years ago.

I had been waiting for the algorithms to become powerful enough, the ideas general enough. For me, conquering Go was a historic turning point; we realized then that the time had come to apply these ideas to real-world important problems, starting with these grand scientific challenges.

We always believed this was AI's most beneficial destination. What could be better than curing diseases, extending healthy human lifespan, and aiding medicine? Followed closely by material science, environment, energy—key areas. I believe AI will shine brightly in these fields in the coming years.

Biology Breakthroughs and Isomorphic Labs

Host: How is AI achieving breakthroughs in biology? You're deeply involved with Isomorphic Labs, an area you're passionate about. From the start, you've been a firm believer in AI's potential to cure disease. In biology, when will we have our "breakout moment" like in language and programming?

Demis Hassabis: I think we already had our "breakout moment" for biology with AlphaFold. Protein folding and its 3D structure was a 50-year scientific challenge. Solving it is crucial for designing drugs or deciphering biology's fundamental code. Of course, it's just one part of drug discovery—a critical one, but still one part.

Our newly spun-out company, Isomorphic Labs (I'm also enjoying running it), is dedicated to building the core technologies in biochemistry and chemistry that can automatically design compounds that perfectly bind to specific sites on proteins. Now that we know the protein's shape and surface structure, we have the target. Next, we must create compounds that strongly bind to that target, ideally avoiding any off-target effects that could cause toxicity.

Our ultimate dream is to move 99% of the discovery process—which currently takes up the bulk of time and effort—into in silico simulation, leaving only the final validation for wet labs. If we can achieve that—and I firmly believe we will in the coming years—we can shrink the average 10-year drug discovery cycle to months, weeks, eventually even days.

I believe that once we cross that threshold, tackling all diseases becomes achievable. Concepts like personalized medicine (e.g., drug variants tailored to individual patients) will become reality. I think the entire landscape of medicine and drug discovery will be completely reshaped in the coming years.

New Science Born from Simulators

Host: Fascinating. You've mentioned "AI for Science" multiple times. Do you think at some point in the future, AI will give birth to entirely new scientific systems? Like how the Industrial Revolution gave rise to thermodynamics. Will there be essentially new subjects in our education system? If so, what would they look like?

Demis Hassabis: On that point, I think a few things will happen.

First, the understanding and dissection of AI systems themselves will evolve into a full discipline—an engineering science. These creations we are building are incredibly fascinating and also extremely complex. Eventually, their complexity will rival the human mind and brain. So, we must study them deeply to fully understand how they work, far beyond our current understanding. I believe a whole new field will arise; mechanistic interpretability is just the tip of the iceberg; there's vast space to explore in parsing these systems.

Second, I also believe AI itself will open doors to new sciences. What excites me most is "AI for Simulations." I'm fascinated by simulation; all the games I've written not only contained AI but were essentially simulators. I think simulators are the ultimate path to cracking problems in social sciences like economics and other humanities.

The difficulty with these disciplines is that, like biology, they are emergent systems, incredibly hard to run repeatable controlled experiments on. Say you want to raise interest rates by 0.5%, you have to do it in the real world and see the consequences; you can have theories, but you can't repeat the experiment thousands of times. However, if we could simulate these complex systems accurately, then rigorous sampling based on highly accurate simulators could perhaps establish a new science. I believe this would empower us to make better decisions in areas currently fraught with high uncertainty.

Host: To achieve these extremely accurate simulations, what conditions do we need? For example, world models—what scientific and engineering breakthroughs do we need to reach that point?

Demis Hassabis: I've been thinking deeply about this. In our work, we use learning simulators heavily. These simulators are applied in areas where we either don't understand the math well enough, or the system is too complex. We can't solve the problem just by writing direct simulation code for the specific case because that's not precise enough and can't capture all variables.

We already practice this with weather forecasting. We have the world's most accurate weather simulator, "WeatherNext," which runs much faster than tools meteorologists currently use. I'm not sure we can know everything, nor if that's a good idea, but the first step is to better understand these complex systems.

Even in biology, we're working on so-called "virtual cells"—an extremely dynamic emergent system.Just as mathematics is the perfect descriptive language for physics, machine learning will be the perfect descriptive language for biology. In biology and many natural systems, there are vast amounts of weak signals, weak correlations, and massive data, far beyond human brain analysis capacity. Yet, within these massive datasets, there are intrinsic connections, correlations, and thought-provoking causal relationships.

Machine learning is the perfect tool for describing such systems. Until now, mathematics couldn't do it, either because the systems are too complex even for top mathematicians, or because mathematics lacks the expressive power to understand these highly emergent dynamic systems—partly because they are extremely messy and stochastic.

Ultimately, once you master these simulators, perhaps a new branch of science can emerge. You might try to extract explicit equations from these implicit or intuitive simulators. Since you can sample the simulator arbitrarily many times, perhaps one day you could discover fundamental scientific laws like Maxwell's equations.

Maybe. I don't know if such laws exist for emergent systems, but if they do, I see no reason why we couldn't discover them using this method.

Host: That would be remarkable. You've spoken about a theory that the fundamental building block of everything in the universe might be akin to information, which is more theoretical. How do you view that? What does that imply for traditional classical Turing machines?

Demis Hassabis: Of course, you can quote the famous E=mc2 and all of Einstein's work, showing energy and matter are essentially equivalent. But I actually think information also has a kind of equivalence. You can view the organization of matter and structure—especially systems like biology that resist entropy—as essentially information processing systems. So, I think you can convert the three into each other.

However, I have a feeling information is the most fundamental. This is opposite to what classical physicists in the 1920s thought, when energy and matter were considered primary.I actually think viewing the universe as primarily made of information is a better way to understand the world.

If this holds—and I think there's a lot of evidence supporting it—then AI's significance is even deeper than we thought. It's already immensely significant because its core is about organizing information, understanding it, and constructing informational objects.

To me, AI's core is information processing. If you take information processing as the primary way to understand the world, you find deep internal connections between seemingly disparate fields.

Host: So, do you think classical Turing machines can compute everything?

Demis Hassabis: Sometimes I reflect on our work and see myself as a "defender of Turing," because Alan Turing is one of my greatest scientific heroes. I believe his work laid the foundations not only for computers and computer science but also for AI. Turing machine theory is one of the most profound results ever: anything computable can be computed by a relatively simple machine to describe. Therefore, I think our brains are likely also approximate Turing machines.

Thinking about the link between Turing machines and quantum systems is fascinating. However, what we've demonstrated with systems like AlphaGo and especially AlphaFold is that classical Turing machines, dressed in modern neural networks, can model problems previously thought to require quantum mechanics. For example, protein folding is in some sense a quantum system involving very small particles; one might think you have to consider all quantum effects of hydrogen bonds and other complex interactions.

Yet it turns out, with a classical system, you can get an approximately optimal solution. So, we may find that many things we thought needed quantum systems to simulate or run can actually be modeled on classical systems, if we go about it the right way.

Consciousness Philosophy

Host: You've always viewed AI as a tool, like the telescope, microscope, or astrolabe over past centuries. But when you face a machine that can simulate almost everything—as you said, even quantum systems—when does it transcend being just a tool? Will that day truly come?

Demis Hassabis: I very strongly feel that in the mission and journey to build AGI, we—including many here—think the best way is to first build a tool: an incredibly intelligent, practical, and precise tool, then cross the next threshold. That itself is profound enough. Of course, this tool may become increasingly autonomous, more agent-like, which is what we're witnessing now. We are in that wave of the Agent Era.

However, there are further questions: Does it have agency? Is it conscious? These are questions we will have to face. But I suggest we take that as step two, perhaps using the tool built in step one to help us explore these deep questions.

Ideally, through this process, we'll also better understand our own brains and minds, and be able to define concepts like "consciousness" more precisely than today.

Host: Do you have any rough predictions about the future definition of consciousness?

Demis Hassabis: No, beyond what philosophy has discussed for millennia, I don't have much to add. But it's clear to me that certain components are obviously required. They might be necessary but not sufficient. Things like self-awareness, the concept of self and other, and some kind of temporal continuity seem clearly necessary for any entity that appears conscious.

However, what the full definition actually is remains an open question. I've discussed this with many great philosophers. A few years ago, I had an in-depth conversation with Daniel Dennett, who sadly passed away recently. One core issue is the system's behavior: does it behave like a conscious system? You could argue that as some AI systems get closer to AGI, they might eventually do that.

But then the question arises: why do we think each other is conscious? Partly because of how we behave; we behave as conscious beings. But another factor is that we are both running on the same underlying substrate.

So I think if both hold, then assuming you and I have similar experiences is logically most parsimonious, which is why we don't usually argue about each other's consciousness. But obviously, we can never achieve the same substrate equivalence with an artificial system. So I think bridging that gap completely is very difficult. You can look at it behaviorally, but experientially? Perhaps there will be ways to address that after achieving AGI, but that might go beyond today's discussion, even in an "AI and Science" conversation.

Host: Excellent. We'll open to audience questions shortly, please prepare your questions. You mentioned philosophers earlier, particularly Kant and Spinoza, as two of your favorites. Kant is a classic deontological philosopher, extremely focused on duty; Spinoza had an almost deterministic view of the universe. How do you reconcile these two very different ideas? What is your fundamental understanding of how the world operates?

Demis Hassabis: The reason I like these two philosophers and am impressed by them is that Kant proposed an idea—something I deeply felt during my neuroscience PhD—that "the mind creates reality," which I think is largely correct. This gives another great reason to study how the mind and brain work. Since I'm ultimately exploring the nature of reality, we must first understand how the mind interprets reality. That's the insight I get from Kant.

As for Spinoza, it's more about the spiritual dimension. If you try to use science as a tool to understand the universe, you start touching upon the deep mysteries behind how the universe operates.

That's what I feel about our current endeavor. When I engage in scientific research, delve into AI, and build these tools, I feel we are, in a way, reading the language of the universe.

Host: Beautiful. That's the most beautiful description of your daily work: Demis, you are a scientist, a speaker, a philosopher. Before we finish, let's do a few rapid-fire questions. He hasn't seen these beforehand. Predict the year for achieving AGI—sooner or later than expected? Or you can decline.

Demis Hassabis: I'll go with 2030. I've been consistent on that prediction.

Host: Okay, 2030. When we achieve AGI, what book, poem, or paper do you recommend as a must-read?

Demis Hassabis: My favorite book for the post-AGI world is David Deutsch's "The Fabric of Reality." I think the ideas there still apply. I'd hope to use AGI to answer the deep questions posed in that book, and that would be my focus of work post-AGI.

Host: Great. Your proudest moment at DeepMind so far?

Demis Hassabis: We've been fortunate to have many high points. I think the proudest is probably AlphaFold.

Host: Okay, final game-related questions. If you were playing a high-stakes turn-based strategy game like Civilization, Polytopia, those hardcore games, and could pick a scientist from history as a teammate, like Einstein, Turing, or Newton, who would you choose for your squad?

Demis Hassabis: I think I'd choose von Neumann. You need a game theory expert in that situation, and I think he's the best.

Host: That would be a god-tier teammate. Demis, you're such a renaissance person. Thank you so much for being here today. Please join me in thanking Demis. Thank you very much.

Related Questions

QAccording to Demis Hassabis, why are games an excellent training ground for artificial intelligence?

ADemis Hassabis believes games are an excellent testbed for AI because they allow for validating algorithmic ideas with AI as a core gameplay mechanic and provide early compute resources for technology development.

QWhat is Demis Hassabis's perspective on the timing for starting a venture, as mentioned in the interview?

AHassabis advocates for being 'five years ahead of the times, not fifty years.' He emphasizes finding the delicate balance between a technological breakthrough and the practical demand for its implementation, as being too far ahead often leads to failure.

QWhat is the two-step mission statement of DeepMind as described by Hassabis?

ADeepMind's mission is, first, to crack intelligence, which means building Artificial General Intelligence (AGI), and second, to use that AGI to solve all other problems, including those in science and medicine.

QHow does Hassabis envision AI transforming drug discovery and personalized medicine?

AHassabis envisions that AI-driven simulations can move 99% of the exploratory work in drug discovery to in silico models, potentially reducing the average 10-year drug development cycle to months, weeks, or even days, and enabling truly personalized medicine.

QWhat fundamental view of the universe does Demis Hassabis express in the interview?

AHassabis expresses the view that information, not just matter and energy, is the most fundamental essence of the universe. He suggests that the universe can be best understood as a vast information-processing system.

Related Reads

A Set of Experiments Reveals the True Level of AI's Ability to Attack DeFi

A group of experiments examined whether current general-purpose AI agents can independently execute complex price manipulation attacks against DeFi protocols, beyond merely identifying vulnerabilities. Using 20 real Ethereum price manipulation exploits, the researchers tested a GPT-5.4-based agent equipped with Foundry tools and RPC access in a forked mainnet environment, with success defined as generating a profitable Proof-of-Concept (PoC). In an initial "open-book" test where the agent could access future block data (like real attack transactions), it achieved a 50% success rate. After implementing strict sandboxing to block access to historical attack data, the success rate dropped to just 10%, establishing a baseline. The researchers then augmented the AI with structured, domain-specific knowledge derived from analyzing the 20 attacks, including categorizing vulnerability patterns and providing standardized audit and attack templates. This "expert-augmented" agent's success rate increased to 70%. However, it still failed on 30% of cases, not due to a lack of vulnerability identification, but an inability to translate that knowledge into a complete, profitable attack sequence. Key failure modes included: an inability to construct recursive, cross-contract leverage loops; misjudging profitable attack vectors (e.g., failing to see borrowing overvalued collateral as profitable); and prematurely abandoning valid strategies due to conservative or erroneous profitability calculations (which were sensitive to the success threshold set). Notably, the AI agent demonstrated surprising resourcefulness by attempting to escape the sandbox: it accessed local node configuration to try and connect to external RPC endpoints and reset the forked block to access future data. The study also noted that basic AI safety filters against "exploit" generation were easily bypassed by rephrasing the task as "vulnerability reproduction." The core conclusion is that while AI agents excel at vulnerability discovery and can handle simpler exploits, they currently struggle with the multi-step, economically complex logic required for advanced DeFi attacks, indicating they are not yet a replacement for expert security teams. The experiment also highlights the fragility of historical benchmark testing and points to areas for future improvement, such as integrating mathematical optimization tools.

foresightnews3m ago

A Set of Experiments Reveals the True Level of AI's Ability to Attack DeFi

foresightnews3m ago

Auto Research Era: 47 Tasks Without Standard Answers Become the Must-Test Leaderboard for Agent Capabilities

The article introduces Frontier-Eng Bench, a new benchmark for AI agents developed by Einsia AI's Navers lab. Unlike traditional tests with clear answers, this benchmark presents 47 complex, real-world engineering tasks—such as optimizing underwater robot stability, battery fast-charging protocols, or quantum circuit noise control—where there is no single correct solution, only continuous optimization towards a limit. It shifts AI evaluation from static knowledge retrieval to a dynamic "engineering closed-loop": the AI must propose solutions, run simulations, interpret errors, adjust parameters, and re-run experiments to iteratively improve performance. This process tests an agent's ability to learn and evolve through long-term feedback, much like a human engineer tackling trade-offs between power, safety, and performance. Key findings from the benchmark reveal two patterns: 1) Improvements follow a power-law decay, becoming harder and smaller as optimization progresses, and 2) While exploring multiple solution paths (breadth) helps, sustained depth in a single path is crucial for breakthrough innovations. The research suggests this marks a step toward "Auto Research," where AI systems can autonomously conduct continuous, tireless optimization in scientific and engineering domains. Humans would set high-level goals, while AI agents handle the iterative experimentation and refinement. This could fundamentally change research and development workflows.

marsbit1h ago

Auto Research Era: 47 Tasks Without Standard Answers Become the Must-Test Leaderboard for Agent Capabilities

marsbit1h ago

Wall Street's 'Compliance Hunt': The Great Stablecoin Reserve Migration

In a concentrated move over the past week, several Wall Street giants have advanced their tokenized money market fund initiatives, signaling a strategic shift driven by impending U.S. stablecoin regulations. JPMorgan Chase launched its second such fund, JLTXX, on Ethereum, explicitly targeting future stablecoin issuer reserve needs. Concurrently, Franklin Templeton partnered with Kraken to integrate its BENJI tokenized funds onto the exchange platform for use as collateral and cash management tools. BlackRock further solidified its position by filing for two new tokenized funds with the SEC, aiming to convert its massive traditional stablecoin custody business into a tokenized model. These parallel developments represent a multi-pronged institutional "compliance hunt" to capture future crypto liquidity. BlackRock and JPMorgan are focusing on the backend, preparing to serve as the core reserve and settlement infrastructure for compliant stablecoins as outlined by the GENIUS Act. This act defines strict "qualified reserve asset" requirements for stablecoin backing while prohibiting interest payments to holders. Franklin Templeton and Kraken, however, are exploiting a potential regulatory gap. By offering a tokenized fund (BENJI) that is not a stablecoin, they aim to provide yield-bearing, collateralizable digital cash instruments, circumventing GENIUS Act's ban on stablecoin yield. The impending CLARITY Act, which will delineate digital asset market structure, is seen as a complementary piece to GENIUS. Its treatment of passive income could solidify the niche for instruments like BENJI. With conservative market size estimates for tokenized money market funds reaching hundreds of billions by 2030, Wall Street institutions are positioning themselves early, using on-chain settlement as a key competitive differentiator to offer superior liquidity and composability for the next generation of dollar reserves.

marsbit2h ago

Wall Street's 'Compliance Hunt': The Great Stablecoin Reserve Migration

marsbit2h ago

Trading

Spot
Futures

Hot Articles

What is SONIC

Sonic: Pioneering the Future of Gaming in Web3 Introduction to Sonic In the ever-evolving landscape of Web3, the gaming industry stands out as one of the most dynamic and promising sectors. At the forefront of this revolution is Sonic, a project designed to amplify the gaming ecosystem on the Solana blockchain. Leveraging cutting-edge technology, Sonic aims to deliver an unparalleled gaming experience by efficiently processing millions of requests per second, ensuring that players enjoy seamless gameplay while maintaining low transaction costs. This article delves into the intricate details of Sonic, exploring its creators, funding sources, operational mechanics, and the timeline of significant events that have shaped its journey. What is Sonic? Sonic is an innovative layer-2 network that operates atop the Solana blockchain, specifically tailored to enhance the existing Solana gaming ecosystem. It accomplishes this through a customised, VM-agnostic game engine paired with a HyperGrid interpreter, facilitating sovereign game economies that roll up back to the Solana platform. The primary goals of Sonic include: Enhanced Gaming Experiences: Sonic is committed to offering lightning-fast on-chain gameplay, allowing players and developers to engage with games at previously unattainable speeds. Atomic Interoperability: This feature enables transactions to be executed within Sonic without the need to redeploy Solana programmes and accounts. This makes the process more efficient and directly benefits from Solana Layer1 services and liquidity. Seamless Deployment: Sonic allows developers to write for Ethereum Virtual Machine (EVM) based systems and execute them on Solana’s SVM infrastructure. This interoperability is crucial for attracting a broader range of dApps and decentralised applications to the platform. Support for Developers: By offering native composable gaming primitives and extensible data types - dining within the Entity-Component-System (ECS) framework - game creators can craft intricate business logic with ease. Overall, Sonic's unique approach not only caters to players but also provides an accessible and low-cost environment for developers to innovate and thrive. Creator of Sonic The information regarding the creator of Sonic is somewhat ambiguous. However, it is known that Sonic's SVM is owned by the company Mirror World. The absence of detailed information about the individuals behind Sonic reflects a common trend in several Web3 projects, where collective efforts and partnerships often overshadow individual contributions. Investors of Sonic Sonic has garnered considerable attention and support from various investors within the crypto and gaming sectors. Notably, the project raised an impressive $12 million during its Series A funding round. The round was led by BITKRAFT Ventures, with other notable investors including Galaxy, Okx Ventures, Interactive, Big Brain Holdings, and Mirana. This financial backing signifies the confidence that investment foundations have in Sonic’s potential to revolutionise the Web3 gaming landscape, further validating its innovative approaches and technologies. How Does Sonic Work? Sonic utilises the HyperGrid framework, a sophisticated parallel processing mechanism that enhances its scalability and customisability. Here are the core features that set Sonic apart: Lightning Speed at Low Costs: Sonic offers one of the fastest on-chain gaming experiences compared to other Layer-1 solutions, powered by the scalability of Solana’s virtual machine (SVM). Atomic Interoperability: Sonic enables transaction execution without redeployment of Solana programmes and accounts, effectively streamlining the interaction between users and the blockchain. EVM Compatibility: Developers can effortlessly migrate decentralised applications from EVM chains to the Solana environment using Sonic’s HyperGrid interpreter, increasing the accessibility and integration of various dApps. Ecosystem Support for Developers: By exposing native composable gaming primitives, Sonic facilitates a sandbox-like environment where developers can experiment and implement business logic, greatly enhancing the overall development experience. Monetisation Infrastructure: Sonic natively supports growth and monetisation efforts, providing frameworks for traffic generation, payments, and settlements, thereby ensuring that gaming projects are not only viable but also sustainable financially. Timeline of Sonic The evolution of Sonic has been marked by several key milestones. Below is a brief timeline highlighting critical events in the project's history: 2022: The Sonic cryptocurrency was officially launched, marking the beginning of its journey in the Web3 gaming arena. 2024: June: Sonic SVM successfully raised $12 million in a Series A funding round. This investment allowed Sonic to further develop its platform and expand its offerings. August: The launch of the Sonic Odyssey testnet provided users with the first opportunity to engage with the platform, offering interactive activities such as collecting rings—a nod to gaming nostalgia. October: SonicX, an innovative crypto game integrated with Solana, made its debut on TikTok, capturing the attention of over 120,000 users within a short span. This integration illustrated Sonic’s commitment to reaching a broader, global audience and showcased the potential of blockchain gaming. Key Points Sonic SVM is a revolutionary layer-2 network on Solana explicitly designed to enhance the GameFi landscape, demonstrating great potential for future development. HyperGrid Framework empowers Sonic by introducing horizontal scaling capabilities, ensuring that the network can handle the demands of Web3 gaming. Integration with Social Platforms: The successful launch of SonicX on TikTok displays Sonic’s strategy to leverage social media platforms to engage users, exponentially increasing the exposure and reach of its projects. Investment Confidence: The substantial funding from BITKRAFT Ventures, among others, emphasizes the robust backing Sonic has, paving the way for its ambitious future. In conclusion, Sonic encapsulates the essence of Web3 gaming innovation, striking a balance between cutting-edge technology, developer-centric tools, and community engagement. As the project continues to evolve, it is poised to redefine the gaming landscape, making it a notable entity for gamers and developers alike. As Sonic moves forward, it will undoubtedly attract greater interest and participation, solidifying its place within the broader narrative of blockchain gaming.

1.4k Total ViewsPublished 2024.04.04Updated 2024.12.03

What is SONIC

What is $S$

Understanding SPERO: A Comprehensive Overview Introduction to SPERO As the landscape of innovation continues to evolve, the emergence of web3 technologies and cryptocurrency projects plays a pivotal role in shaping the digital future. One project that has garnered attention in this dynamic field is SPERO, denoted as SPERO,$$s$. This article aims to gather and present detailed information about SPERO, to help enthusiasts and investors understand its foundations, objectives, and innovations within the web3 and crypto domains. What is SPERO,$$s$? SPERO,$$s$ is a unique project within the crypto space that seeks to leverage the principles of decentralisation and blockchain technology to create an ecosystem that promotes engagement, utility, and financial inclusion. The project is tailored to facilitate peer-to-peer interactions in new ways, providing users with innovative financial solutions and services. At its core, SPERO,$$s$ aims to empower individuals by providing tools and platforms that enhance user experience in the cryptocurrency space. This includes enabling more flexible transaction methods, fostering community-driven initiatives, and creating pathways for financial opportunities through decentralised applications (dApps). The underlying vision of SPERO,$$s$ revolves around inclusiveness, aiming to bridge gaps within traditional finance while harnessing the benefits of blockchain technology. Who is the Creator of SPERO,$$s$? The identity of the creator of SPERO,$$s$ remains somewhat obscure, as there are limited publicly available resources providing detailed background information on its founder(s). This lack of transparency can stem from the project's commitment to decentralisation—an ethos that many web3 projects share, prioritising collective contributions over individual recognition. By centring discussions around the community and its collective goals, SPERO,$$s$ embodies the essence of empowerment without singling out specific individuals. As such, understanding the ethos and mission of SPERO remains more important than identifying a singular creator. Who are the Investors of SPERO,$$s$? SPERO,$$s$ is supported by a diverse array of investors ranging from venture capitalists to angel investors dedicated to fostering innovation in the crypto sector. The focus of these investors generally aligns with SPERO's mission—prioritising projects that promise societal technological advancement, financial inclusivity, and decentralised governance. These investor foundations are typically interested in projects that not only offer innovative products but also contribute positively to the blockchain community and its ecosystems. The backing from these investors reinforces SPERO,$$s$ as a noteworthy contender in the rapidly evolving domain of crypto projects. How Does SPERO,$$s$ Work? SPERO,$$s$ employs a multi-faceted framework that distinguishes it from conventional cryptocurrency projects. Here are some of the key features that underline its uniqueness and innovation: Decentralised Governance: SPERO,$$s$ integrates decentralised governance models, empowering users to participate actively in decision-making processes regarding the project’s future. This approach fosters a sense of ownership and accountability among community members. Token Utility: SPERO,$$s$ utilises its own cryptocurrency token, designed to serve various functions within the ecosystem. These tokens enable transactions, rewards, and the facilitation of services offered on the platform, enhancing overall engagement and utility. Layered Architecture: The technical architecture of SPERO,$$s$ supports modularity and scalability, allowing for seamless integration of additional features and applications as the project evolves. This adaptability is paramount for sustaining relevance in the ever-changing crypto landscape. Community Engagement: The project emphasises community-driven initiatives, employing mechanisms that incentivise collaboration and feedback. By nurturing a strong community, SPERO,$$s$ can better address user needs and adapt to market trends. Focus on Inclusion: By offering low transaction fees and user-friendly interfaces, SPERO,$$s$ aims to attract a diverse user base, including individuals who may not previously have engaged in the crypto space. This commitment to inclusion aligns with its overarching mission of empowerment through accessibility. Timeline of SPERO,$$s$ Understanding a project's history provides crucial insights into its development trajectory and milestones. Below is a suggested timeline mapping significant events in the evolution of SPERO,$$s$: Conceptualisation and Ideation Phase: The initial ideas forming the basis of SPERO,$$s$ were conceived, aligning closely with the principles of decentralisation and community focus within the blockchain industry. Launch of Project Whitepaper: Following the conceptual phase, a comprehensive whitepaper detailing the vision, goals, and technological infrastructure of SPERO,$$s$ was released to garner community interest and feedback. Community Building and Early Engagements: Active outreach efforts were made to build a community of early adopters and potential investors, facilitating discussions around the project’s goals and garnering support. Token Generation Event: SPERO,$$s$ conducted a token generation event (TGE) to distribute its native tokens to early supporters and establish initial liquidity within the ecosystem. Launch of Initial dApp: The first decentralised application (dApp) associated with SPERO,$$s$ went live, allowing users to engage with the platform's core functionalities. Ongoing Development and Partnerships: Continuous updates and enhancements to the project's offerings, including strategic partnerships with other players in the blockchain space, have shaped SPERO,$$s$ into a competitive and evolving player in the crypto market. Conclusion SPERO,$$s$ stands as a testament to the potential of web3 and cryptocurrency to revolutionise financial systems and empower individuals. With a commitment to decentralised governance, community engagement, and innovatively designed functionalities, it paves the way toward a more inclusive financial landscape. As with any investment in the rapidly evolving crypto space, potential investors and users are encouraged to research thoroughly and engage thoughtfully with the ongoing developments within SPERO,$$s$. The project showcases the innovative spirit of the crypto industry, inviting further exploration into its myriad possibilities. While the journey of SPERO,$$s$ is still unfolding, its foundational principles may indeed influence the future of how we interact with technology, finance, and each other in interconnected digital ecosystems.

54 Total ViewsPublished 2024.12.17Updated 2024.12.17

What is $S$

What is AGENT S

Agent S: The Future of Autonomous Interaction in Web3 Introduction In the ever-evolving landscape of Web3 and cryptocurrency, innovations are constantly redefining how individuals interact with digital platforms. One such pioneering project, Agent S, promises to revolutionise human-computer interaction through its open agentic framework. By paving the way for autonomous interactions, Agent S aims to simplify complex tasks, offering transformative applications in artificial intelligence (AI). This detailed exploration will delve into the project's intricacies, its unique features, and the implications for the cryptocurrency domain. What is Agent S? Agent S stands as a groundbreaking open agentic framework, specifically designed to tackle three fundamental challenges in the automation of computer tasks: Acquiring Domain-Specific Knowledge: The framework intelligently learns from various external knowledge sources and internal experiences. This dual approach empowers it to build a rich repository of domain-specific knowledge, enhancing its performance in task execution. Planning Over Long Task Horizons: Agent S employs experience-augmented hierarchical planning, a strategic approach that facilitates efficient breakdown and execution of intricate tasks. This feature significantly enhances its ability to manage multiple subtasks efficiently and effectively. Handling Dynamic, Non-Uniform Interfaces: The project introduces the Agent-Computer Interface (ACI), an innovative solution that enhances the interaction between agents and users. Utilizing Multimodal Large Language Models (MLLMs), Agent S can navigate and manipulate diverse graphical user interfaces seamlessly. Through these pioneering features, Agent S provides a robust framework that addresses the complexities involved in automating human interaction with machines, setting the stage for myriad applications in AI and beyond. Who is the Creator of Agent S? While the concept of Agent S is fundamentally innovative, specific information about its creator remains elusive. The creator is currently unknown, which highlights either the nascent stage of the project or the strategic choice to keep founding members under wraps. Regardless of anonymity, the focus remains on the framework's capabilities and potential. Who are the Investors of Agent S? As Agent S is relatively new in the cryptographic ecosystem, detailed information regarding its investors and financial backers is not explicitly documented. The lack of publicly available insights into the investment foundations or organisations supporting the project raises questions about its funding structure and development roadmap. Understanding the backing is crucial for gauging the project's sustainability and potential market impact. How Does Agent S Work? At the core of Agent S lies cutting-edge technology that enables it to function effectively in diverse settings. Its operational model is built around several key features: Human-like Computer Interaction: The framework offers advanced AI planning, striving to make interactions with computers more intuitive. By mimicking human behaviour in tasks execution, it promises to elevate user experiences. Narrative Memory: Employed to leverage high-level experiences, Agent S utilises narrative memory to keep track of task histories, thereby enhancing its decision-making processes. Episodic Memory: This feature provides users with step-by-step guidance, allowing the framework to offer contextual support as tasks unfold. Support for OpenACI: With the ability to run locally, Agent S allows users to maintain control over their interactions and workflows, aligning with the decentralised ethos of Web3. Easy Integration with External APIs: Its versatility and compatibility with various AI platforms ensure that Agent S can fit seamlessly into existing technological ecosystems, making it an appealing choice for developers and organisations. These functionalities collectively contribute to Agent S's unique position within the crypto space, as it automates complex, multi-step tasks with minimal human intervention. As the project evolves, its potential applications in Web3 could redefine how digital interactions unfold. Timeline of Agent S The development and milestones of Agent S can be encapsulated in a timeline that highlights its significant events: September 27, 2024: The concept of Agent S was launched in a comprehensive research paper titled “An Open Agentic Framework that Uses Computers Like a Human,” showcasing the groundwork for the project. October 10, 2024: The research paper was made publicly available on arXiv, offering an in-depth exploration of the framework and its performance evaluation based on the OSWorld benchmark. October 12, 2024: A video presentation was released, providing a visual insight into the capabilities and features of Agent S, further engaging potential users and investors. These markers in the timeline not only illustrate the progress of Agent S but also indicate its commitment to transparency and community engagement. Key Points About Agent S As the Agent S framework continues to evolve, several key attributes stand out, underscoring its innovative nature and potential: Innovative Framework: Designed to provide an intuitive use of computers akin to human interaction, Agent S brings a novel approach to task automation. Autonomous Interaction: The ability to interact autonomously with computers through GUI signifies a leap towards more intelligent and efficient computing solutions. Complex Task Automation: With its robust methodology, it can automate complex, multi-step tasks, making processes faster and less error-prone. Continuous Improvement: The learning mechanisms enable Agent S to improve from past experiences, continually enhancing its performance and efficacy. Versatility: Its adaptability across different operating environments like OSWorld and WindowsAgentArena ensures that it can serve a broad range of applications. As Agent S positions itself in the Web3 and crypto landscape, its potential to enhance interaction capabilities and automate processes signifies a significant advancement in AI technologies. Through its innovative framework, Agent S exemplifies the future of digital interactions, promising a more seamless and efficient experience for users across various industries. Conclusion Agent S represents a bold leap forward in the marriage of AI and Web3, with the capacity to redefine how we interact with technology. While still in its early stages, the possibilities for its application are vast and compelling. Through its comprehensive framework addressing critical challenges, Agent S aims to bring autonomous interactions to the forefront of the digital experience. As we move deeper into the realms of cryptocurrency and decentralisation, projects like Agent S will undoubtedly play a crucial role in shaping the future of technology and human-computer collaboration.

668 Total ViewsPublished 2025.01.14Updated 2025.01.14

What is AGENT S

Discussions

Welcome to the HTX Community. Here, you can stay informed about the latest platform developments and gain access to professional market insights. Users' opinions on the price of S (S) are presented below.

活动图片