Sequoia Interview with Hassabis: Information is the Essence of the Universe, AI Will Open Up a New Branch of Science

marsbit · Published 2026-05-12 · Last updated 2026-05-12

Summary

Summary: In an interview at Sequoia Capital's AI Ascent 2026, Demis Hassabis, co-founder and CEO of Google DeepMind, discusses the path to AGI and its future implications. He traces his journey from using games as AI testbeds to founding DeepMind with a clear mission: first, to solve intelligence and build AGI; second, to use it to tackle complex problems like science and medicine. Hassabis highlights AI's transformative potential in biology and drug discovery, where tools like AlphaFold could shorten development cycles from years to days, enabling personalized medicine. He envisions AI enabling high-fidelity simulations of complex systems (e.g., economics, biology), potentially birthing new scientific disciplines. Hassabis posits that information, not matter or energy, may be the universe's fundamental essence, making AI a profound tool for understanding reality. He defends classical Turing machines as sufficient for modeling even quantum-like problems and sees AGI primarily as a powerful instrument, with questions of consciousness to be explored later. Predicting AGI by 2030, he cites David Deutsch's "The Fabric of Reality" as essential post-AGI reading and names AlphaFold as his proudest achievement.

Original adaptation: 瓜哥 AI 新知

This content is adapted from an interview with Demis Hassabis on the Sequoia Capital channel, publicly released on April 29, 2026.

Highlights: Interview with Demis Hassabis at Sequoia Capital AI Ascent 2026

  • The Connection Between AI and Games: Games are an excellent testing ground for artificial intelligence. By making AI a core gameplay element, it not only validates algorithmic ideas effectively but also provides early-stage computational support for technological development.
  • The 'Timing Theory' of Entrepreneurship: Entrepreneurs should aim to be "five years ahead of their time, not fifty." It is crucial to keenly capture the balance between technological breakthroughs and practical implementation needs; being too far ahead often leads to failure.
  • The Path to AGI: DeepMind's mission is clear and unwavering—first, build Artificial General Intelligence (AGI); second, use AGI to solve all complex problems, including those in science and medicine.
  • The Core Value of 'AI for Science': AI is the perfect language for describing biology and complex natural systems. With the aid of AI simulations, the drug discovery cycle could be shortened from years to weeks, even enabling true personalized medicine.
  • The Birth of New Scientific Disciplines: The complexity of AI systems themselves will give rise to new engineering sciences like "mechanistic interpretability." Meanwhile, AI-driven simulation technology will allow humans to conduct controlled experiments on complex social systems like economics, thereby opening up entirely new branches of science.
  • Information as the Essence of the Universe: Matter, energy, and information are interchangeable. The essence of the universe might be a grand information processing system, which imbues AI with profound significance in understanding the universe's fundamental operating principles.
  • The Computational Power of Turing Machines: Modern AI systems like neural networks have demonstrated that classical Turing machines are sufficient to simulate problems once thought to require quantum computing (e.g., protein folding). The human brain is likely an approximate Turing machine.
  • Philosophical Reflections on Consciousness: Consciousness may be composed of components like self-awareness and temporal continuity. On the journey towards AGI, we should first view it as a powerful tool, and then use that tool to explore the grand philosophical question of "consciousness."

Introduction

Demis Hassabis, co-founder and CEO of Google DeepMind and recipient of the 2024 Nobel Prize in Chemistry for AlphaFold, engaged in a wide-ranging and profound conversation with Konstantine Buhler, a partner at Sequoia Capital, at the AI Ascent 2026 summit. They explored the path to AGI and the future landscape beyond AGI.

During the discussion, he explained why he firmly believes AGI could be achieved by 2030, why the lengthy cycle of new drug development might collapse from a decade to just a few days, and why we should consider "information" rather than matter or energy as the most core and fundamental essence of the universe. He also speculated on how Einstein, if he were alive today, would assess the limitations of current AI models, and why the next one or two years will be crucial in determining humanity's fate.

Full Interview

Host: Demis, thank you very much for being here.

Demis Hassabis: It's great to be here. Thanks for having me. It's fantastic to be with everyone here and have this conversation.

Host: It's an honor to welcome you to our chocolate factory.

Demis Hassabis: I just heard about that. I'm looking forward to tasting the chocolate later.

Host: Excellent. Demis, let's dive right in. Today we have a true OG of the industry: an original thinker, founder, visionary, a pioneer across all things AI. Demis is a true believer and a pure scientist.

Demis's Original Drive and Inner Thread

Our conversation today will start with the early stories of DeepMind's founding, then delve into the science and technology, and finish with a Q&A session. So, let's begin.

Demis, you were a chess prodigy, a game company founder, a neuroscientist. You're the founder of DeepMind, and now you lead a large and critical enterprise. These identities seem disparate, but you've said there's an inner thread connecting them all. Could you share that with us?

Demis Hassabis: There is indeed a thread, though perhaps with a bit of post hoc reasoning. But my desire to work in AI has been a long-standing drive. I decided early on that this was the most important and interesting thing I could possibly dedicate my life to. From the age of 15 or 16, every subject I chose to study, everything I did, was with the eventual goal of one day building a company like DeepMind.

Games: The Training Ground for AI

I entered the games industry in a roundabout way because in the 90s, that's where the cutting-edge technology was being incubated. Not just AI, but also graphics rendering and hardware. The GPUs we all use today were originally designed for graphics engines, and I was using the earliest GPUs in the late 90s. Every game I worked on, whether at Bullfrog Productions or my own company Elixir Studios, had AI at the core of its gameplay mechanics.

My most well-known work was probably *Theme Park*, which I developed around age 17. It was an amusement park simulation game where thousands of little people would come in, ride the attractions, and decide what to buy in shops. Underneath the surface, it ran a complete economic AI model. Like *SimCity*, it was a pioneer in its genre. Seeing it sell over 10 million copies and witnessing firsthand how much joy players derived from interacting with the AI further solidified my commitment to dedicating my life to AI.

Later, I moved into neuroscience, hoping to glean inspiration from how the brain works to derive different algorithmic ideas. When the moment was finally right to found DeepMind, it felt natural to bring all these strands together. And naturally, we later used games as early training grounds to validate our AI concepts.

Entrepreneurial Lessons from Elixir Studios

Host: The room is full of entrepreneurs today, and they'll surely relate because you've started not one, but two companies. Let's go back to your first venture, Elixir Studios. What was that experience like? While it's not your most famous company, it was a great success. How did you lead that company? What did it teach you about "how to build a company"?

Demis Hassabis: That's right, I founded Elixir Studios right after university. I was fortunate to have worked at Bullfrog Productions before. Those in the know recognize it as a legendary studio from the early days of the industry, arguably the best in the UK and Europe at the time.

I wanted to do something that would push the boundaries of AI. Actually, in that era, I was using game development as a way to fund AI research, constantly challenging the technological frontier and combining it with extreme creativity. I think this philosophy is still applicable to the exploratory, blue-sky research we do today.

Perhaps the most profound lesson I learned is: you want to be 5 years ahead of your time, not 50 years. At Elixir Studios, we attempted to develop a game called *Republic*, which aimed to simulate an entire nation. The premise was that players could overthrow the country's dictator in various ways, and we were simulating a living, breathing city in great detail.

Remember, this was the late 90s, running on Pentium processors. We had to run all the graphics rendering and AI logic for a million people on home PCs of that time. It was too ambitious—perhaps overly so—and it led to a cascade of problems.

I took that lesson to heart: you want to be ahead, but if you're 50 years ahead, you'll likely fail. Of course, if an idea is obvious to everyone, it's too late. So, the key is finding that delicate balance point.

Founding DeepMind in 2009

Host: Right, speaking of not being too far ahead, let's move to 2009. You were convinced AGI was achievable. Maybe at that time you were only 10 years ahead, which is better than 50. Talk to the entrepreneurs here about 2009. How did you convince those first brilliant talents? Because you did assemble a team of incredibly high-caliber early employees. At the time, AGI sounded like pure science fiction. How did you get them to believe?

Demis Hassabis: We were picking up on interesting signals. We thought we were maybe 5 years ahead, but it turned out we were perhaps 10. Deep Learning had just been invented by Geoff Hinton and his academic colleagues, but almost no one grasped its significance. We had deep expertise in Reinforcement Learning, and we felt combining these two would be a breakthrough. Before that, they had rarely been combined—if at all, only on academic toy problems. In AI, they were completely separate silos.

Furthermore, we saw the promise of compute; GPUs were going to shine. Of course, we use TPUs now, but at the time, accelerated computing was going to be a huge driver. Also, towards the end of my PhD and postdoc, as I was gathering colleagues who were computational neuroscientists, we had extracted enough valuable ideas and principles from brain mechanisms, including a core belief that reinforcement learning, scaled up, could ultimately lead to AGI.

We felt we had these key ingredients. We even felt like guardians of a big secret because, whether in academia or industry, no one believed AI could make any significant breakthroughs. In fact, when we proposed working on AGI—or sometimes called Strong AI back then—many academics would literally roll their eyes. They thought it was a dead end; after all, everyone had tried and failed in the 90s.

I did my postdoc at MIT, a stronghold of expert systems and first-order logic systems. It seems incredible in retrospect, but even then, I found that approach too rigid and old-fashioned. However, in traditional AI hubs like Cambridge, UK, or MIT, they were still using that old paradigm. That made us even more confident we were on the right track. At the very least, if we were going to fail, we would fail in a new way, not repeating the 90s AGI failures. That made it feel worth trying; even if it was a speculative research endeavor, if it failed, at least we would fail in an original way.

DeepMind's Mission and Bet on AGI

Host: Was there general resistance to that early belief? What did you need to prove to yourself or to others to get those early followers on board?

Demis Hassabis: Regardless of the circumstances, I would have dedicated my life to artificial intelligence. It has progressed far beyond our most optimistic expectations. However, it's still within the prediction we made around 2010—we saw it as a 20-year journey.

I think as a field, we are right on schedule, and we've certainly played our part.

Stepping back, even if things hadn't developed this way, if AI remained a niche discipline, I would still be on this path because it's the most important technology in history, in my view. My goal was very clear. DeepMind's original mission statement was: Step 1, solve intelligence, i.e., build AGI; Step 2, use it to solve everything else. I always believed this is the most important and fascinating technology humanity could invent.

It's a tool for scientific exploration, a fascinating creation in itself, and arguably the best way to understand our own minds—things like consciousness, dreams, the nature of creativity. As a neuroscientist, I often felt we lacked an analytical tool like AI when thinking about these questions. It provides a contrastive mechanism, allowing us to study and compare two different systems in depth, almost like a control experiment.

The Culture of "AI for Science"

Host: Comparing different systems. Let's talk about "AI for Science." You were into it early, a true believer, a pure idealist. It's a core mission driver. How did the model and culture you established when founding DeepMind keep it at the forefront of "AI for Science"?

Demis Hassabis: That's the ultimate goal. For me, the fundamental drive has always been to build AI to accelerate science, medicine, and our understanding of the world. That's how I enact the mission, in a meta way: first build the ultimate tool, then, when it's ready, use it to achieve scientific breakthroughs. We've had successes like AlphaFold, and I believe there will be many more.

DeepMind has always prioritized this goal. In fact, we have an "AI for Science" division led by Pushmeet Kohli, which has been running for nearly a decade. We formally started this work almost immediately after returning from the AlphaGo match in Seoul, so it's been exactly ten years.

I had been waiting in the wings for the algorithms to become powerful enough and the ideas general enough. For me, conquering Go was a historic inflection point; that's when we realized the time had come to apply these ideas to real-world important problems, starting with these grand scientific challenges.

We always believed this is the most beneficial destiny for AI. What could be better than using it to cure diseases, extend healthy human lifespan, and assist healthcare? Following closely are critical areas like materials science, environment, and energy. I believe AI will shine brightly in these fields in the coming years.

Breakthroughs in Biology and Isomorphic Labs

Host: How is AI achieving breakthroughs in biology? You are deeply involved with Isomorphic Labs, an area you are passionate about. From the beginning, you were a steadfast believer in AI's potential to cure diseases. In biology, when will we have a "Gutenberg moment" akin to language and programming?

Demis Hassabis: I think AlphaFold was already that "Gutenberg moment" for biology. Protein folding and its 3D structure was a 50-year scientific problem. If you want to design drugs or decipher the foundational code of biology, solving this is crucial. Of course, it's just one piece of the drug discovery process—a critical piece, but only one.

Our newest spin-out, Isomorphic Labs (which I also immensely enjoy managing), is working on building the relevant core technologies in biochemistry and chemistry. These technologies can automatically design compounds that perfectly fit specific sites on proteins. Now that we have the protein's shape and its surface structure, we have the target. The next step is to create a compound that binds strongly to that target, ideally without any off-target effects that could cause toxicity.

The ultimate dream is to move the entire exploration process, which constitutes 99% of the workload and time in current R&D, entirely in silico (computer simulation), leaving only the final validation to physical wet-lab experiments. If we can achieve that—and I firmly believe we will in the next few years—we could shorten the average 10-year drug discovery cycle to months, weeks, and eventually even days.

I believe once we cross that threshold, curing all diseases becomes within reach. Concepts like personalized medicine (e.g., customized drug variants for individual patients) will become reality. I think the entire landscape of healthcare and drug development will be completely reshaped in the coming years.

New Science Born from Simulators

Host: Fascinating. You've mentioned "AI for Science" multiple times. Do you think at some future point, AI will give birth to entirely new scientific systems? Like how the Industrial Revolution gave rise to thermodynamics. Will there be fundamentally new subjects in our education system? If so, what might they look like?

Demis Hassabis: Regarding this, I think several things will happen.

First, the understanding and dissection of AI systems themselves will evolve into a complete discipline—an engineering science. The creations we are building are incredibly fascinating and also immensely complex. Ultimately, their complexity will rival that of the human mind and brain. Therefore, we must study them deeply to fully understand how these systems work, which is far beyond our current level of comprehension. I believe a new field will inevitably emerge; mechanistic interpretability is just the tip of the iceberg, and there is vast space for exploration in parsing these systems.

Second, I also believe AI itself will open the door to new sciences. What excites me most is "AI for Simulations." I'm obsessed with simulation; every game I've written not only contains AI but is essentially a simulator. I think simulators are the ultimate path to cracking problems in social sciences like economics and other humanities.

The difficulty with these disciplines is that, like biology, they are emergent systems, extremely hard to conduct repeatable, controlled experiments on. If you want to raise interest rates by 0.5%, you have to do it in the real world and observe the consequences; you can have theories, but you can't run the experiment thousands of times. However, if we could simulate these complex systems accurately, then rigorous sampling based on highly accurate simulators could perhaps establish a new science. I believe this would empower us to make better decisions in areas currently fraught with high uncertainty.
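The "run the experiment thousands of times" idea can be sketched in a few lines of Python. Everything below is an invented toy model (the dynamics, the 0.5% hike, all the numbers are mine, not from the interview); it only shows the mechanics of a controlled experiment inside a simulator: rerun the same stochastic world with and without the intervention, sharing random seeds so the difference isolates the policy's effect.

```python
import random

def simulate_economy(rate_hike, seed, steps=120):
    """Toy monthly economy: output drifts under random demand shocks,
    plus a small deterministic drag from any rate hike. Invented dynamics."""
    rng = random.Random(seed)
    output = 100.0
    for _ in range(steps):
        shock = rng.gauss(0.0, 0.5)                # stochastic demand shock
        output += shock - 2.0 * rate_hike / steps  # rate drag, spread over the horizon
    return output

# Controlled experiment: identical random seeds, with and without a 0.5% hike,
# so each pair of runs differs only in the intervention.
baseline = [simulate_economy(0.000, s) for s in range(1000)]
treated  = [simulate_economy(0.005, s) for s in range(1000)]
effect = sum(t - b for t, b in zip(treated, baseline)) / len(baseline)
print(f"estimated effect of a 0.5% hike on output: {effect:+.3f}")
# → estimated effect of a 0.5% hike on output: -0.010
```

Because the shocks cancel within each seed-matched pair, even a tiny effect is recoverable; in the real economy the counterfactual run simply does not exist, which is exactly the point being made.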

Host: To achieve these incredibly accurate simulations, what do we need? For example, world models. What scientific and engineering breakthroughs are required to reach that point?

Demis Hassabis: I've been thinking deeply about this. In our work, we heavily use learning-based simulators. These are applied in areas where we either don't understand the mathematics well enough or the system is too complex. We can't solve the problem by just writing a direct simulation program for a specific case because that approach isn't precise enough and can't cover all variables.

We already practice this with weather forecasting. We have "WeatherNext," the world's most accurate weather simulator, which runs much faster than the tools meteorologists currently use. I'm not sure if we can understand everything, nor if that's even a good idea, but the first step is to better understand these complex systems.

Even in biology, we are working on what we call the "Virtual Cell"—an immensely dynamic emergent system. Just as mathematics is the perfect language for physics, machine learning will be the perfect language for biology. In biology and many natural systems, there are vast amounts of weak signals, weak correlations, and massive datasets, far beyond the analytical capacity of the human brain. Yet, within these massive datasets, there are indeed intrinsic connections, correlations, and thought-provoking causal relationships.

Machine learning is the perfect tool for describing such systems. To date, mathematics hasn't been able to do this, either because the system is too complex even for top mathematicians, or because mathematical expressiveness is insufficient to understand these highly emergent, dynamic systems—partly because they are extremely messy and have a stochastic nature.

Ultimately, once you master these simulators, perhaps a new branch of science can be derived. You could try to extract explicit equations from these implicit or intuitive simulators. Since you can sample the simulator arbitrarily many times, perhaps one day you could discover fundamental scientific laws like Maxwell's equations.

Maybe. I don't know if such laws exist for these emergent systems, but if they do, I see no reason why we couldn't discover them through this method.
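The step from an implicit simulator to explicit equations that Hassabis speculates about has a small-scale analogue in sparse regression over candidate terms (the spirit of SINDy-style methods). A minimal sketch under invented assumptions: the "black box" below secretly implements dx/dt = -0.5x, and we recover that law by sampling it and fitting a library of candidate terms by least squares.

```python
import numpy as np

# Hypothetical black-box simulator: we can query it, but pretend its law is unknown.
# (It secretly takes one small Euler step of dx/dt = -0.5 * x.)
def black_box_step(x, dt=1e-3):
    return x + dt * (-0.5 * x)

# Sample states and estimate derivatives by finite differences on the simulator
xs = np.linspace(0.1, 2.0, 200)
dxdt = (black_box_step(xs) - xs) / 1e-3   # vectorized over the whole array

# Candidate-term library Θ(x) = [1, x, x², x³]; fit dx/dt ≈ Θ(x) · ξ
theta = np.column_stack([np.ones_like(xs), xs, xs**2, xs**3])
xi, *_ = np.linalg.lstsq(theta, dxdt, rcond=None)
print(np.round(xi, 3))  # the coefficient on x comes out close to -0.5, the rest near 0
```

Here the recovered coefficient vector makes the hidden law explicit again. For a genuinely emergent simulator, whether such a compact law exists at all is, as Hassabis says, an open question.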

Host: That would be extraordinary. You've talked about a theory that the fundamental building block of everything in the universe might be something like information, on a more theoretical level. How do you view that? What does that imply for traditional classical Turing computers?

Demis Hassabis: Of course, you can cite the famous E=mc² and all of Einstein's work, showing that energy and matter are essentially equivalent. But I actually think information also has a kind of equivalence. You can view the organization of matter and structure—especially systems like biology that resist entropy—as fundamentally information processing systems. Therefore, I think you can convert between these three.

However, I have a sense that information is the most fundamental. This is the opposite of the view held by classical physicists in the 1920s, who thought energy and matter were primary. I actually think viewing the universe as being made of information first is a better way to understand the world.

If that holds true—and I think there's a lot of evidence pointing that way—then AI is even more profound than we think. It's already profoundly significant because its core is organizing information, understanding information, and building informational objects.

To me, AI's core is information processing. If you take information processing as the primary way to understand the world, you find extremely deep connections between these seemingly disparate domains.

Host: So, do you think classical Turing machines can compute everything?

Demis Hassabis: Sometimes I reflect on our work and see myself as a "defender of Turing," because Alan Turing is one of the scientific heroes I most admire in my life. I believe his work laid the foundation not only for computers and computer science but also for artificial intelligence. Turing machine theory is one of the most profound results ever: anything computable can be computed by a machine that is relatively simple to describe. Therefore, I think our brains are likely a kind of approximate Turing machine.
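Turing's result that Hassabis cites ("anything computable can be computed by a machine that is relatively simple to describe") can be made tangible in a few lines: a machine is nothing more than a state table. This is a standard textbook construction written as an illustrative sketch; the rule encoding is my own.

```python
def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Minimal Turing machine: rules maps (state, symbol) -> (write, move, next_state)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")       # "_" is the blank symbol
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Binary increment: scan to the rightmost bit, then propagate the carry leftwards.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 + carry -> 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # 0 + carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: write a new leading 1
}
print(run_turing_machine("1011", rules))  # 1011 + 1 = 1100
```

The whole machine is the six-row table; everything else is bookkeeping. That descriptive simplicity, paired with universality, is what makes the classical model such a strong baseline for the claims about neural networks that follow.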

It's fascinating to think about the connection between Turing machines and quantum systems. However, what we've demonstrated with systems like AlphaGo and especially AlphaFold is that classical Turing machines, dressed in the form of modern neural networks, can model problems previously thought to require quantum mechanics. For example, protein folding is, in a sense, a quantum system involving very small particles, and one might think you must consider all quantum effects of hydrogen bonds and other complex interactions.

Yet it turns out that an approximately optimal solution can be found using a classical system. So, we might find that many things we thought required quantum systems to simulate or run can actually be modeled on classical systems, if approached correctly.

Consciousness Philosophy

Host: You've always viewed AI as a tool, like the telescope, microscope, or astrolabe of past centuries. But when facing a machine that can simulate almost everything—even quantum systems, as you said—when does it cease to be just a tool? Will that day truly come?

Demis Hassabis: I feel very strongly that in the mission and journey of building AGI, we, the travelers—including many here—believe the best approach is to first build a tool: an incredibly intelligent, useful, and accurate tool, and then cross the next threshold. That in itself is profound enough. Of course, this tool may become increasingly autonomous, increasingly agent-like, which is what we're witnessing now. We're in the midst of this agent era wave.

However, there are further questions: Does it have agency? Is it conscious? These are questions we will have to confront. But I suggest we treat that as the second step, perhaps using the tool built in the first step to help us explore these profound questions.

Ideally, through this process, we can also better understand our own brains and minds and be able to define concepts like "consciousness" more precisely than we can today.

Host: Do you have any rough predictions for how consciousness might be defined in the future?

Demis Hassabis: Not really, beyond what philosophy has discussed for millennia. But what's clear to me is that certain components are obviously necessary. They might be necessary but not sufficient conditions. Things like self-awareness, the concept of self and other, and some form of temporal continuity are clearly necessary for any entity that appears conscious.

However, what the complete definition is remains an open question. I've discussed this with many great philosophers. Years ago, I had in-depth conversations with Daniel Dennett, who unfortunately passed away recently. One core issue is the system's behavior: Does it behave like a conscious system? You could argue that as some AI systems get closer to AGI, they might eventually do so.

But then the question arises: Why do we think each other is conscious? Partly because of how we behave; we act like conscious beings. But another factor is that we all run on the same underlying substrate.

So I think when both hold, assuming that you and I have the same kind of experience is the most logically parsimonious conclusion, which is why we typically don't argue about whether the other is conscious. But obviously, we can never achieve the same substrate equivalence with artificial systems. So I think bridging that gap completely is very difficult. You can examine it behaviorally, but experientially? Perhaps after achieving AGI, there will be some ways to approach this, but that might be beyond the scope of today's discussion, even within an "AI and Science" conversation.

Host: Brilliant. We'll open up for audience questions shortly, so get your questions ready. You mentioned philosophers earlier, specifically Kant and Spinoza, as two of your favorites. Kant is a classic deontological philosopher, heavily emphasizing duty, while Spinoza had a nearly deterministic view of the universe. How do you reconcile these two very different ideas? What is your fundamental understanding of how the world operates?

Demis Hassabis: The reason I like and am impressed by these two philosophers is that Kant proposed an idea—which I deeply felt during my neuroscience PhD—that "the mind creates reality," which I think is largely correct. That gives us another excellent reason to study how the mind and brain operate. Since I'm ultimately inquiring about the nature of reality, we must first understand how the mind interprets reality. That's the insight I take from Kant.

As for Spinoza, it's more about the spiritual dimension. If you try to use science as a tool to understand the universe, you're already touching upon the deep mysteries behind how the universe operates.

That's my feeling about our current endeavors. When I engage in scientific research, delve into AI, and build these tools, I feel we are, in a way, reading the language of the universe.

Host: Beautiful. That's a beautiful description of your daily work: Demis, you are a scientist, orator, and philosopher combined. Before we finish, let's do a few rapid-fire questions. He hasn't seen these at all. Predict the year for achieving AGI. Earlier or later than expected? Or you can decline.

Demis Hassabis: I'll go with 2030. I've been quite consistent with that prediction.

Host: OK, 2030. When we achieve AGI, what book, poem, or paper would you recommend as essential reading?

Demis Hassabis: My favorite book for the post-AGI world is David Deutsch's *The Fabric of Reality*. I think its ideas still apply. I hope to use AGI then to answer the profound questions posed in that book, which would also be the focus of my subsequent work in the AGI era.

Host: Excellent. What's your proudest moment at DeepMind so far?

Demis Hassabis: We've been fortunate to have many peak moments. I think probably the birth of AlphaFold.

Host: Good. Finally, a few game-related questions. If you were in a high-stakes turn-based strategy game, like Civ, Polytopia, something hardcore, and could pick one scientist from history to be on your team—like Einstein, Turing, or Newton—who would you choose to join your squad?

Demis Hassabis: I think I'd go with von Neumann. In such a situation, you'd want a game theory expert, and I think he's the best.

Host: That's definitely a god-tier teammate. Demis, you're a true Renaissance person. Thank you so much for being with us today. Please join me in thanking Demis for an incredible session. Thank you very much.

Related Questions

Q: What does Demis Hassabis believe is the fundamental essence of the universe, and how does this view relate to AI?

A: Demis Hassabis believes that information is the fundamental essence of the universe, potentially even more fundamental than matter and energy. He views the universe as a grand information-processing system. This perspective makes AI deeply significant because its core function is to organize, understand, and process information. AI becomes a key tool for comprehending the universe's underlying mechanisms.

Q: What new scientific disciplines or fields does Demis Hassabis believe AI will give rise to?

A: He foresees the emergence of two main types of new disciplines. First, an engineering science focused on understanding the complex AI systems themselves, such as through mechanistic interpretability. Second, new sciences enabled by AI-driven simulations, particularly for complex, emergent systems like economics and biology, where AI will allow for repeatable, controlled experiments that are currently impossible, potentially leading to the discovery of new fundamental laws.

Q: Why does Hassabis see games as such an important proving ground for AI development?

A: Hassabis sees games as an excellent testbed for AI because they provide a rich, challenging, and measurable environment to validate algorithmic concepts. Even his early games in the 90s had AI as a core gameplay mechanic. Games also helped fund and push the boundaries of technology (like GPU usage) needed for AI research, serving as a 'training ground' before tackling real-world scientific problems.

Q: According to the interview, how can AI transform the pharmaceutical drug discovery process?

A: AI can drastically transform drug discovery by shifting the majority of the exploratory work into computer simulations. The goal is to move 99% of the R&D effort from physical 'wet lab' experiments to 'in silico' simulations, leaving only final validation to the lab. This could potentially collapse the average drug discovery timeline from 10 years down to months, weeks, or even days, unlocking personalized medicine and accelerating the cure for all diseases.

Q: What is Demis Hassabis's position on whether the human brain and consciousness can be modeled by classical computers like Turing machines?

A: Hassabis is a 'defender of Turing,' believing classical Turing machines are likely sufficient. He argues that the human brain is probably an approximate Turing machine. As evidence, he cites how modern AI (e.g., AlphaFold) modeled complex problems like protein folding—previously thought to require quantum computation—with classical systems, achieving near-optimal solutions. He views consciousness as a separate, profound philosophical question to be explored later, potentially with the help of AGI.
