Source:
This is the latest interview with a16z founder Marc Andreessen on the Latent Space podcast.
A renowned American internet entrepreneur and a key figure in the early development of the internet, Andreessen has, since founding a16z, become one of Silicon Valley's leading investors.
The entire conversation revolves around the history and latest trends of AI development, and it is well worth reading.
1. This Wave of AI Did Not Emerge Out of Nowhere; It Is the Point at Which an 80-Year Technological Marathon Has Comprehensively Started to Work
· This wave of AI did not emerge out of nowhere; it is the result of an 80-year technological marathon.
· Marc Andreessen calls the present an "80-year overnight success": the sudden explosion into public view is actually the concentrated release of decades of technological accumulation.
· He traces this technological lineage back to early neural network research and emphasizes that the industry has now accepted the judgment that "neural networks are the correct architecture."
· In his narrative, the key milestones are not single moments but a series of developments: AlexNet, Transformer, ChatGPT, reasoning models, and then agents and self-improvement.
· He particularly emphasizes that this time, it is not just text generation that has improved; four types of capabilities have emerged simultaneously: LLMs, reasoning, coding, and agents/recursive self-improvement.
· The reason he believes "this time is different" is not because the narrative is more compelling, but because these capabilities have already started to work on real-world tasks.
2. The Agent Architecture Represented by Pi and OpenClaw Is a Deeper Change in Software Architecture Than Chatbots
· He describes agents very concretely: essentially, they are "LLM + shell + file system + markdown + cron/loop." In this structure, the LLM is the core for reasoning and generation, the shell provides the execution environment, the file system saves the state, markdown makes the state readable, and cron/loop provides periodic awakening and task progression.
· He believes the importance of this combination lies in the fact that, apart from the model itself being new, all other components are mature, understandable, and reusable parts of the software world.
· The state of the agent is saved in files, allowing it to migrate across models and runtimes; the underlying model can be replaced, but the memory and state remain preserved.
· He repeatedly emphasizes introspection: the agent knows its files, can read its state, and can even rewrite its files and functions, moving toward "extend yourself."
· In his view, the real breakthrough is not just that "the model can answer," but that the agent can leverage existing Unix toolchains to integrate the potential capabilities of the entire computer.
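The "LLM + shell + file system + markdown + cron/loop" structure described above can be sketched as a toy loop. Everything here is illustrative, not from the interview: `fake_llm` stands in for a real model call, `tick` is one wake-up of the loop (what cron would trigger periodically), and the markdown state file is what would survive a model swap.

```python
import subprocess
from pathlib import Path

STATE = Path("agent_state.md")  # human-readable state; persists across model/runtime swaps

def fake_llm(state: str) -> dict:
    """Stand-in for the reasoning core: reads the state, decides the next shell command.
    A real agent would send `state` to an actual LLM API here."""
    if "hello.txt" not in state:
        return {"cmd": "echo hello > hello.txt", "note": "created hello.txt"}
    return {"cmd": None, "note": "nothing to do"}

def tick() -> str:
    """One wake-up of the agent (the thing cron or a loop would invoke)."""
    state = STATE.read_text() if STATE.exists() else "# Agent state\n"
    action = fake_llm(state)                                    # LLM = reasoning
    if action["cmd"]:
        subprocess.run(action["cmd"], shell=True, check=True)   # shell = execution
    STATE.write_text(state + f"- {action['note']}\n")           # file system = memory
    return action["note"]

print(tick())  # first run: created hello.txt
print(tick())  # second run: nothing to do
```

Because the state lives in a plain markdown file rather than inside the model, the agent can read it back (introspection), and the underlying `fake_llm` could be replaced by any other model without losing memory.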
3. Browsers, Traditional GUIs, and the Era of "Human-Clicking Software" Will Gradually Be Replaced by Agent-First Interaction Methods
· Marc Andreessen has explicitly stated that in the future, "you may no longer need a user interface."
· He further pointed out that the primary users of future software may not be humans but "other bots."
· This means that many of today's interfaces designed for human clicking, browsing, and form-filling will degenerate into execution layers called by agents in the background.
· In this world, humans are more like goal-setters: telling the system what they want, and then having agents call services, operate software, and complete processes.
· He connects this change to the broader future of software: high-quality software will become increasingly "abundant," no longer a scarce product handcrafted by a few engineers.
· He also predicts that the importance of programming languages will decline; models will write code across languages, translate between them, and in the future, humans will care more about explaining why the AI organized the code in a certain way rather than rigidly adhering to a specific language.
· He even mentions a more radical direction: conceptually, AI may not only output code but also directly output lower-level binary code or model weights.
4. This AI Investment Cycle Has Similarities to the 2000 Internet Bubble, but the Underlying Supply-Demand Structure Is Different
· Reflecting on 2000, he emphasizes that the crash was largely not because "the internet didn't work," but because of overbuilding in telecommunications and bandwidth infrastructure, with fiber optics and data centers being laid out too early and then taking a long time to be absorbed.
· He believes that today, there are indeed concerns about "overbuilding," but the main investors are large companies with strong cash flows like Microsoft, Amazon, and Google, rather than highly leveraged, fragile players.
· He particularly points out that today, investment in operational GPUs can usually be converted into revenue quickly, unlike the large amounts of idle capacity in 2000.
· He also emphasizes that what we are using now is a "sandbagged" version of the technology: due to shortages in GPU, memory, data center, and other supplies, the model's potential has not been fully unleashed.
· In his judgment, the real constraints in the coming years will not only be GPUs but also bottlenecks in the interplay of CPUs, memory, networking, and the entire chip ecosystem.
· He juxtaposes AI scaling laws with Moore's Law, arguing that they do not merely describe empirical regularities but also continuously mobilize capital, engineering, and industry collaboration to keep advancing.
· He mentions a counterintuitive but important phenomenon: as software optimization speeds up, some older-generation chips may even become more economically valuable than when they were first purchased.
5. Open Source, Edge Inference, and Local Operation Are Not Peripheral but Part of the AI Competitive Landscape
· Marc Andreessen clearly believes that open source is very important, not just because it is free, but because it "teaches the world how it is made."
· He describes open-source releases like DeepSeek as a "gift to the world," because code + papers quickly disseminate knowledge and raise the entire industry's baseline.
· In his narrative, open source is not just a technical choice but also a geopolitical and market strategy: different countries and companies will adopt different openness strategies based on their commercial constraints and influence goals.
· He also emphasizes the importance of edge inference: in the coming years, centralized inference costs may not be low enough, and many consumer-level applications cannot bear the long-term high costs of cloud inference.
· He mentions a recurring pattern: what seems "impossible to run on a PC today" often becomes possible on local machines just a few months later.
· Besides cost, factors driving local operation include trust, privacy, latency, and usage scenarios: wearable devices, door locks, portable devices, etc., are more suitable for low-latency, on-site inference.
· His judgment is very direct: almost everything with a chip may come with an AI model in the future.
6. The Real Challenges of AI Lie Not Only in Model Capabilities but Also in Security, Identity, Money Flow, Organizational and Institutional Resistance
· On security, his judgment is very sharp: almost all latent security bugs will become easier to discover, and there may be a short-term "computer security catastrophe."
· But he also believes that programming intelligences will scale the ability to patch vulnerabilities; in the future, the way to "protect software" may be to have bots scan and fix it.
· On the identity issue, he believes that "proof of bot" is not feasible because bots will become increasingly capable; the truly viable direction is "proof of human," a combination of biometrics, cryptographic verification, and selective disclosure.
· He also discusses a frequently overlooked problem: if agents are to truly operate in the real world, they will eventually need money and payment capabilities, and even some form of bank accounts, cards, or stablecoin-like infrastructure.
· At the organizational level, he uses the framework of managerial capitalism, suggesting that AI may re-strengthen founder-led companies, because bots are very good at reports, coordination, paperwork, and large amounts of "managerial work."
· However, he does not believe that society will quickly and smoothly accept AI: he cites examples like professional licenses, unions, dockworker strikes, government departments, K-12 education, and healthcare to illustrate that there are many institutional speed bumps in the real world.
· His judgment is that both AI utopians and doomsayers tend to overlook one thing: just because something is technologically possible does not mean that 8 billion people will immediately change accordingly.