a16z: AI's 'Amnesia' – Can Continual Learning Cure It?

marsbit · Published 2026-04-25 · Updated 2026-04-25

Summary

The article "a16z: AI's 'Amnesia' – Can Continual Learning Cure It?" explores the limitations of current large language models (LLMs), which, like the protagonist in the film *Memento*, are trapped in a perpetual present—unable to form new memories after training. While methods like in-context learning (ICL), retrieval-augmented generation (RAG), and external scaffolding (e.g., chat history, prompts) provide temporary solutions, they fail to enable true internalization of new knowledge. The authors argue that compression—the core of learning during training—is halted at deployment, preventing models from generalizing, discovering novel solutions (e.g., mathematical proofs), or handling adversarial scenarios. The piece introduces *continual learning* as a critical research direction to address this, categorizing approaches into three paths: 1. **Context**: Scaling external memory via longer context windows, multi-agent systems, and smarter retrieval. 2. **Modules**: Using pluggable adapters or external memory layers for specialization without full retraining. 3. **Weights**: Enabling parameter updates through sparse training, test-time training, meta-learning, distillation, and reinforcement learning from feedback. Challenges include catastrophic forgetting, safety risks, and auditability, but overcoming these could unlock models that learn iteratively from experience. The conclusion emphasizes that while context-based methods are effective, true breakthroughs requ...

Original Author: Malika Aubakirova, Matt Bornstein, a16z crypto

Original Compilation: Deep Tide TechFlow

In Christopher Nolan's "Memento," the main character Leonard Shelby lives in a fragmented present. Brain damage has left him with anterograde amnesia, unable to form new memories. Every few minutes, his world resets, trapping him in an eternal "now," unable to remember what just happened or what will happen next. To survive, he tattoos words on his body and takes Polaroids, relying on these external props to replace the memory functions his brain can no longer perform.

Large language models live in a similar eternal present. After training ends, vast amounts of knowledge are frozen in their parameters; the model cannot form new memories or update its parameters based on new experiences. To compensate for this defect, we build a bunch of scaffolding for it: chat history acts as short-term sticky notes, retrieval systems serve as external notebooks, and system prompts are like tattoos on the body. But the model itself never truly internalizes this new information.

More and more researchers believe this is not enough. In-context learning (ICL) can solve problems, provided the answer (or fragments of the answer) already exists somewhere in the world. But for problems that require true discovery (like novel mathematical proofs), adversarial scenarios (like security attacks and defenses), or knowledge that is too implicit to be expressed in language, there is a strong argument that models need a way to directly write new knowledge and experience into their parameters after deployment.

In-context learning is temporary. True learning requires compression. Until we allow models to continuously compress, we might be stuck in the eternal present of "Memento." Conversely, if we can train models to learn their own memory architecture, rather than relying on external custom tools, we might unlock a whole new dimension of scaling.

This field of research is called continual learning. This concept is not new (see McCloskey and Cohen's 1989 paper), but we believe it is one of the most important research directions in AI today. The explosive growth of model capabilities over the past two to three years has made the gap between what models "know" and what they "can know" increasingly apparent. The purpose of this article is to share what we have learned from top researchers in this field, help clarify the different paths of continual learning, and promote the development of this topic within the startup ecosystem.

Note: This article was shaped by in-depth discussions with a group of excellent researchers, PhD students, and entrepreneurs who generously shared their work and insights in the field of continual learning. From theoretical foundations to the engineering realities of post-deployment learning, their insights have made this article much more solid than anything we could have written alone. Thank you for your time and ideas!

First, Let's Talk About Context

Before defending parameter-level learning (i.e., learning that updates model weights), it's necessary to acknowledge a fact: in-context learning does work. And there is a strong argument that it will continue to win.

The essence of a Transformer is a sequence-based next-token predictor conditioned on the input. Give it the right sequence, and you can get surprisingly rich behavior without ever touching the weights. This is why methods like context management, prompt engineering, instruction fine-tuning, and few-shot examples are so powerful. Intelligence is encapsulated in static parameters, and the manifested capabilities change dramatically based on what you feed into the context.
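A minimal illustration of this point (our sketch; `llm` is a hypothetical stand-in for any completion API, not a specific product's interface):

```python
def llm(prompt: str) -> str:
    """Placeholder for any completion API (hypothetical stub; swap in
    a real provider call). Each call sees only the text it is given."""
    raise NotImplementedError

# Same frozen weights, two behaviors: everything the model "learns"
# here lives in the prompt, never in the parameters.
zero_shot = "Translate English to French.\ndog ->"
few_shot = ("Translate English to French.\n"
            "sea -> mer\ncheese -> fromage\n"
            "dog ->")
# llm(zero_shot) may ramble or mis-format its answer; llm(few_shot)
# reliably completes "chien", because the examples pin down the task
# and output format without any weight update.
```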

A recent in-depth article by Cursor on the scaling of autonomous programming agents is a good example: the model weights are fixed; what really makes the system run is the careful orchestration of context—what to put in, when to summarize, how to maintain a coherent state over hours of autonomous operation.

OpenClaw is another good example. It went viral not because of special model access (the underlying model is available to everyone), but because it extremely efficiently converted context and tools into a working state: tracking what you're doing, structuring intermediate outputs, deciding when to re-inject prompts, maintaining persistent memory of previous work. OpenClaw elevated the "shell design" of agents to the level of an independent discipline.

When prompt engineering first emerged, many researchers were skeptical that "just prompts" could become a serious interface. It seemed like a hack. But it is a native product of the Transformer architecture, requires no retraining, and automatically upgrades as models improve. As models get stronger, prompts get stronger. "Crude but native" interfaces often win because they are coupled directly to the underlying system, not fighting against it. So far, the trajectory of LLM development has followed this pattern.

State Space Models: Context on Steroids

As mainstream workflows shift from raw LLM calls to agent loops, the in-context learning paradigm is coming under increasing pressure. In the past, it was relatively rare for the context window to fill completely; when it did, usually because an LLM was working through a long series of discrete tasks, the application layer could trim and compress chat history in a straightforward way.

But for agents, a single task can consume a large portion of the total available context. Each step of an agent loop relies on the context passed from previous iterations. And they often fail after 20 to 100 steps because they "lose the thread": the context gets filled, coherence degrades, and they fail to converge.

Therefore, major AI labs are now investing significant resources (i.e., large-scale training runs) to develop models with ultra-long context windows. This is a natural path because it builds on what already works (in-context learning) and aligns with the industry's broader shift toward inference-time computation. The most common architecture interleaves fixed-size memory layers between standard attention layers: state space models (SSMs) and linear-attention variants (collectively referred to as SSMs below). SSMs offer fundamentally better scaling curves in long-context scenarios.

Figure Caption: Scaling comparison of SSM vs. traditional attention mechanism
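To make the scaling argument concrete, here is a minimal sketch of the recurrence at the heart of SSM-style layers (illustrative dynamics and dimensions, not any lab's production architecture): the entire history is folded into a fixed-size state, so per-step compute and memory stay constant, whereas attention's KV cache grows linearly with sequence length.

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Minimal linear state-space recurrence: the whole history is
    compressed into a fixed-size state h, so per-step cost and memory
    are O(1) in sequence length."""
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:                  # one token (scalar feature) at a time
        h = A @ h + B * x         # update the fixed-size memory
        ys.append(C @ h)          # readout from the compressed state
    return np.array(ys)

# Toy usage: a 1,000-step sequence with a 16-dim state -- the memory
# footprint never grows, no matter how long the sequence runs.
rng = np.random.default_rng(0)
A = np.diag(rng.uniform(0.9, 0.99, 16))   # stable diagonal dynamics
B = rng.normal(size=16)
C = rng.normal(size=16)
print(ssm_scan(A, B, C, rng.normal(size=1000)).shape)  # (1000,)
```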

The goal is to help agents increase the number of coherent run steps by several orders of magnitude, from about 20 steps to about 20,000 steps, without losing the broad skills and knowledge provided by traditional Transformers. If successful, this would be a major breakthrough for long-running agents.

You could even view this approach as a form of continual learning: although the model weights aren't updated, an external memory layer that rarely needs resetting is introduced.

So, these non-parametric methods are real and powerful. Any evaluation of continual learning must start here. The question isn't whether today's context systems work—they do. The question is: have we already seen the ceiling, and can new methods take us further?

What Context Omits: The "Filing Cabinet Fallacy"

"What happened with AGI and pre-training is that, in a sense, they overshot... Humans are not AGI. Yes, humans do have a skill base, but humans lack a vast amount of knowledge. We rely on continual learning.

If I create a super-smart 15-year-old, he knows nothing. A good student, very eager to learn. You could say, go be a programmer, go be a doctor. Deployment itself would involve a process of learning, trial and error. It's a process, not throwing the finished product out there. — Ilya Sutskever"

Imagine a system with infinite storage space. The world's largest filing cabinet, every fact perfectly indexed, instantly retrievable. It can look up anything. Has it learned?

No. It was never forced to compress.

This is the core of our argument, referencing a point previously made by Ilya Sutskever: LLMs are essentially compression algorithms. During training, they compress the internet into parameters. Compression is lossy, and it is this lossiness that makes it powerful. Compression forces the model to find structure, generalize, and build representations that transfer across contexts. A model that memorizes all training samples is inferior to one that extracts underlying patterns. Lossy compression is learning itself.
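One standard way to make "learning is compression" precise (our framing of the Sutskever argument, via the source-coding view): under arithmetic coding, a model with parameters $\theta$ assigns a corpus $x_{1:T}$ a code length equal to its summed negative log-likelihood, so minimizing training loss literally minimizes the number of bits needed to describe the data:

$$L(x_{1:T};\theta) \;=\; \sum_{t=1}^{T} -\log_2 p_\theta(x_t \mid x_{<t}) \ \text{bits}$$

The minimum-description-length view adds the cost of the model itself, $L(\theta) + L(x_{1:T};\theta)$: a model that memorizes every sample pays a huge $L(\theta)$, while one that extracts structure pays less in total—which is exactly why lossy compression and generalization coincide.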

Ironically, the mechanism that makes LLMs so powerful during training (compressing raw data into compact, transferable representations) is precisely what we stop them from doing after deployment. We halt compression at the moment of release, substituting it with external memory.

Of course, most agent shells compress context in some custom way. But doesn't the bitter lesson tell us that the model itself should learn this compression, directly and at scale?

Yu Sun shared an example to illustrate this debate: mathematics. Consider Fermat's Last Theorem. For over 350 years, no mathematician could prove it, not because they lacked the right literature, but because the solution was highly novel. The conceptual distance between existing mathematical knowledge and the final answer was too great.

When Andrew Wiles finally cracked it in the 1990s, he spent seven years working in near isolation, having to invent entirely new techniques to reach the answer. His proof relied on successfully bridging two different branches: elliptic curves and modular forms. Although Ken Ribet had previously shown that establishing this connection would automatically solve Fermat's Last Theorem, no one before Wiles possessed the theoretical tools to actually build that bridge. A similar argument can be made for Grigori Perelman's proof of the Poincaré conjecture.

The core question is: Do these examples prove that LLMs are missing something, some ability to update priors and engage in truly creative thinking? Or does the story prove precisely the opposite—that all human knowledge is just data available for training and recombination, and that Wiles and Perelman merely demonstrate what LLMs could also do at a larger scale?

This question is empirical, and the answer is still uncertain. But we do know that there are many categories of problems where in-context learning fails today, and parameter-level learning could be useful. For example:

Figure Caption: Problem categories where in-context learning fails and parameter learning might succeed

More importantly, in-context learning can only handle things that can be expressed in language, while weights can encode concepts that prompts cannot convey in words. Some patterns are too high-dimensional, too implicit, too deeply structured to fit into context. For instance, the visual texture that distinguishes a benign artifact from a tumor in a medical scan, or the subtle audio fluctuations that define a speaker's unique rhythm—these patterns are not easily broken down into precise vocabulary.

Language can only approximate them. No prompt, no matter how long, can transmit these things; this kind of knowledge can only live in the weights. They reside in the latent space of learned representations, not in words. No matter how large the context window grows, there will always be knowledge that text cannot describe, knowledge that can only be carried by parameters.

This might explain why explicit "the robot remembers you" features (like ChatGPT's memory) often make users feel discomfort rather than delight. What users really want is not "recall," but "capability." A model that has internalized your behavioral patterns can generalize to new scenarios; a model that merely recalls your history cannot. The gap between "Here's what you wrote last time you replied to this email" (verbatim repetition) and "I understand your way of thinking well enough to anticipate what you need" is the gap between retrieval and learning.

Continual Learning Primer

There are multiple paths to continual learning. The dividing line is not whether the system has a memory feature, but where compression happens. These paths form a spectrum, from no compression (pure retrieval, frozen weights) to full internal compression (weight-level learning, where the model itself gets smarter), with an important middle ground (modules).

Figure Caption: Three paths of continual learning—Context, Modules, Weights

Context

On the context end, teams build smarter retrieval pipelines, agent shells, and prompt orchestration. This is the most mature category: the infrastructure is proven and deployment paths are clear. The limitation is depth: everything must fit through the context window.

A notable new direction: multi-agent architectures as a scaling strategy for context itself. If a single model is limited to a 128K token window, a coordinated group of agents—each holding its own context, focusing on a slice of the problem, communicating results—can approximate infinite working memory as a whole. Each agent does in-context learning within its own window; the system does aggregation. Karpathy's recent autoresearch project and Cursor's example of building a web browser are early cases. This is a purely non-parametric approach (no weight changes), but it significantly raises the ceiling of what context systems can do.
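A bare-bones sketch of the pattern (ours, not Karpathy's or Cursor's implementation; `llm` is again a hypothetical completion stub): each worker compresses its slice into a short note, and the coordinator reasons over notes rather than raw material, so no single context ever has to hold everything.

```python
from concurrent.futures import ThreadPoolExecutor

def llm(prompt: str) -> str:
    """Placeholder for any completion API (hypothetical stub).
    Each call sees only its own prompt -- its own context window."""
    raise NotImplementedError

def solve_with_agent_pool(task: str, chunks: list[str]) -> str:
    """Approximate a context window larger than any single model's by
    giving each worker agent one slice of the material and aggregating
    their summaries. No weights change anywhere; all compression
    happens in the shell."""
    def work(chunk: str) -> str:
        return llm(f"Task: {task}\nRelevant material:\n{chunk}\n"
                   "Summarize only what matters for the task in <200 words.")

    with ThreadPoolExecutor() as pool:        # workers run in parallel
        notes = list(pool.map(work, chunks))

    # The coordinator sees compressed notes, not the raw slices.
    return llm(f"Task: {task}\nWorker notes:\n" + "\n---\n".join(notes) +
               "\nProduce the final answer.")
```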

Modules

In the module space, teams build pluggable knowledge modules (compressed KV caches, adapter layers, external memory stores) that allow general models to specialize without retraining. An 8B model with the right module can match the performance of a 109B model on a target task, with a fraction of the memory footprint. The appeal is its compatibility with existing Transformer infrastructure.
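As one concrete instance of a pluggable module, here is a minimal LoRA-style adapter (a sketch of the general adapter idea, not any specific company's product): the base weights stay frozen, and only a small low-rank correction trains, so the module can be swapped out without touching the general model.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update:
    W_eff = W + (alpha / r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # general knowledge stays frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Usage: wrap one layer; only ~2*r*d parameters train, and the adapter
# can be updated or removed independently of the base weights.
layer = LoRALinear(nn.Linear(512, 512))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 8192
```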

Weights

On the weight update end, researchers are pursuing true parameter-level learning: sparse memory layers that update only relevant parameter segments, reinforcement learning loops that optimize the model from feedback, test-time training that compresses context into weights during inference. These are the deepest methods, and the hardest to deploy, but they truly allow the model to fully internalize new information or skills.

There are several concrete mechanisms for parameter updates. A few research directions:

Figure Caption: Overview of research directions in weight-level learning

Weight-level research covers multiple parallel tracks. Regularization and weight space methods have the longest history: EWC (Kirkpatrick et al., 2017) penalizes parameter changes based on their importance to previous tasks; weight interpolation (Kozal et al., 2024) mixes old and new weight configurations in parameter space, but both are relatively fragile at scale.
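For concreteness, EWC's penalty (from Kirkpatrick et al., 2017) anchors each parameter to its old value in proportion to an estimate of its importance to previous tasks, typically the diagonal of the Fisher information:

$$\mathcal{L}(\theta) = \mathcal{L}_{\text{new}}(\theta) + \frac{\lambda}{2} \sum_i F_i \,(\theta_i - \theta_i^{*})^2$$

where $\theta_i^{*}$ are the weights after the previous task, $F_i$ estimates how much old-task performance suffers if parameter $i$ moves, and $\lambda$ trades plasticity against stability.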

Test-time training, pioneered by Sun et al. (2020) and later developed into architectural primitives (TTT layers, TTT-E2E, TTT-Discover), takes a radically different approach: perform gradient descent on test data, compressing new information into parameters at the moment it is needed.
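A minimal sketch of the underlying move (our illustration of the idea, not the TTT-layer architecture itself; it assumes `model` maps a batch of token ids to next-token logits):

```python
import copy
import torch
import torch.nn.functional as F

def test_time_adapt(model, context_tokens: torch.Tensor,
                    steps: int = 4, lr: float = 1e-4):
    """Before answering, take a few gradient steps on the next-token
    loss over the incoming context, compressing it into a temporary
    copy of the weights rather than leaving it in the prompt."""
    adapted = copy.deepcopy(model)       # keep the base model pristine
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(steps):
        logits = adapted(context_tokens[:, :-1])     # (B, T-1, vocab)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            context_tokens[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted   # answer queries with the adapted copy, then discard
```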

Meta-learning asks: can we train models that know how to learn? Approaches range from MAML's few-shot-friendly parameter initialization (Finn et al., 2017) to Behrouz et al.'s Nested Learning (2025), which structures the model as a hierarchy of optimization problems whose modules operate on different time scales (fast adaptation, slow consolidation), inspired by biological memory consolidation.
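MAML's bi-level objective makes the idea concrete: optimize the initialization $\theta$ so that a single gradient step (with inner learning rate $\alpha$) on any new task $\mathcal{T}_i$ already performs well:

$$\min_{\theta} \sum_{\mathcal{T}_i} \mathcal{L}_{\mathcal{T}_i}\big(\theta - \alpha\, \nabla_{\theta}\, \mathcal{L}_{\mathcal{T}_i}(\theta)\big)$$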

Distillation retains knowledge of previous tasks by having a student model match frozen teacher checkpoints. LoRD (Liu et al., 2025) makes distillation efficient enough for continuous operation by simultaneously pruning the model and the replay buffer. Self-distillation (SDFT, Shenfeld et al., 2026) flips the source, using the model's own outputs under expert conditions as the training signal, bypassing the catastrophic forgetting of sequential fine-tuning.
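Schematically, these distillation methods share a loss of the following form (our summary; the papers differ in what is replayed and how the teacher is chosen): learn the new task while staying close to a frozen teacher's predictive distribution on replayed data.

$$\mathcal{L}(\theta) = \mathcal{L}_{\text{new}}(\theta) + \beta\, \mathbb{E}_{x \sim \text{replay}} \Big[ \mathrm{KL}\big(p_{\text{teacher}}(\cdot \mid x) \,\big\|\, p_{\theta}(\cdot \mid x)\big) \Big]$$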

Recursive self-improvement operates along similar lines: STaR (Zelikman et al., 2022) bootstraps reasoning ability from self-generated reasoning chains; AlphaEvolve (DeepMind, 2025) discovered algorithmic optimizations that had gone unimproved for decades; Silver and Sutton's "Age of Experience" (2025) frames agent learning as a never-ending stream of continuous experience.

These research directions are converging. TTT-Discover already combines test-time training with RL-driven exploration. HOPE nests fast and slow learning loops within a single architecture. SDFT turns distillation into a fundamental operation for self-improvement. The boundaries between columns are blurring. The next generation of continual learning systems will likely combine multiple strategies: regularization for stability, meta-learning for speed, self-improvement for compound growth. A growing number of startups are betting on different layers of this stack.

Continual Learning Startup Landscape

The non-parametric end of the spectrum is the best known. Agent-shell companies (Letta, mem0, Subconscious) build orchestration layers and scaffolding, managing what goes into the context window. External storage and RAG infrastructure (e.g., Pinecone, xmemory) provide the retrieval backbone. The data exists; the challenge is getting the right slice in front of the model at the right time. As context windows expand, the design space for these companies grows, especially on the shell side, where a new wave of startups is emerging to manage increasingly complex context strategies.

The parametric end is earlier and more diverse. Companies here are experimenting with some version of "post-deployment compression," allowing models to internalize new information in their weights. The paths roughly correspond to different bets on *how* models should learn after release.

Partial Compression: Learning Without Retraining. Some teams are building pluggable knowledge modules (compressed KV caches, adapter layers, external memory stores) that let general models specialize without touching the core weights. The common argument: you get meaningful compression (not just retrieval) while keeping the stability-plasticity trade-off manageable, because learning is isolated rather than spread throughout the parameter space. An 8B model with the right module can match the performance of much larger models on a target task. The advantage is composability: modules plug into existing Transformer architectures, can be swapped or updated independently, and carry a far lower experimentation cost than retraining.

RL and Feedback Loops: Learning from Signals. Other teams bet that the richest signal for post-deployment learning already exists in the deployment loop itself—user corrections, task success/failure, reward signals from real-world outcomes. The core idea is that the model should treat every interaction as a potential training signal, not just an inference request. This is highly analogous to how humans improve at their jobs: do work, get feedback, internalize what works. The engineering challenge is converting sparse, noisy, sometimes adversarial feedback into stable weight updates without catastrophic forgetting. But a model that can truly learn from deployment compounds value in ways context systems cannot.
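A crude sketch of the simplest version of this loop (reward-weighted likelihood, a REINFORCE-flavored stand-in for the RL systems described above; the `(tokens, reward)` logging format is our hypothetical, not any product's API):

```python
import torch
import torch.nn.functional as F

def feedback_update(model, opt, episodes, baseline: float = 0.5):
    """Turn deployment feedback into weight updates: upweight responses
    that beat the baseline reward, downweight the rest. `episodes` is a
    list of (token_tensor, reward) pairs."""
    for tokens, reward in episodes:
        logits = model(tokens[:, :-1])            # (B, T-1, vocab)
        nll = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            tokens[:, 1:].reshape(-1))
        # Naive as written: without replay or regularization, this is
        # exactly the catastrophic-forgetting trap discussed below.
        loss = (reward - baseline) * nll
        opt.zero_grad()
        loss.backward()
        opt.step()
```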

Data-Centric: Learning from the Right Signals. A related but distinct bet is that the bottleneck is not the learning algorithm but the training data and the systems around it. These teams focus on curating, generating, or synthesizing the *right* data to drive continuous updates, on the premise that a model fed high-quality, well-structured learning signals needs far fewer gradient steps to improve meaningfully. This dovetails naturally with the feedback-loop companies but emphasizes the upstream question: whether the model *can* learn is one thing; what it *should* learn from, and to what extent, is another.

New Architectures: Designing Learning Capability from the Ground Up. The most radical bet argues that the Transformer architecture itself is the bottleneck, and continual learning requires fundamentally different computational primitives: architectures with continuous-time dynamics and built-in memory mechanisms. The argument here is structural: if you want a continually learning system, you should embed the learning mechanism into the underlying foundation.

Figure Caption: Continual Learning Startup Landscape

All major labs are also actively working within these categories. Some are exploring better context management and chain-of-thought reasoning, others are experimenting with external memory modules or sleep-time compute pipelines, and several stealth companies are pursuing new architectures. The field is early enough that no single approach has won yet, and given the breadth of use cases, there shouldn't be just one winner.

Why Naive Weight Updates Fail

Updating model parameters in a production environment triggers a cascade of failure modes that are not yet resolved at scale.

Figure Caption: Failure modes of naive weight updates

The engineering problems are well-documented. Catastrophic forgetting means a model sensitive enough to learn from new data will destroy existing representations—the stability-plasticity dilemma. Temporal decoupling refers to the fact that invariant rules and mutable state are compressed into the same set of weights; updating one corrupts the other. Logical integration fails because fact updates don't propagate to their corollaries: changes are confined to the token sequence level, not the semantic concept level. Unlearning is still impossible: there is no differentiable subtraction operation, so there is no precise surgical removal method for false or toxic knowledge.

There is a second class of problems that receives less attention. The current separation between training and deployment is not just an engineering convenience; it is a boundary for safety, auditability, and governance. Opening this boundary causes multiple things to go wrong simultaneously. Safety alignment can degrade unpredictably: even narrow fine-tuning on benign data can produce widespread misaligned behavior.

Continuous updates create an attack surface for data poisoning—a slow, persistent version of prompt injection, but it lives in the weights. Auditability collapses because a continuously updated model is a moving target, making version control, regression testing, or one-time certification impossible. Privacy risks intensify when user interactions are compressed into parameters, baking sensitive information into representations that are harder to filter than information in a retrieved context.

These are open problems, not fundamental impossibilities. Solving them is part of the continual learning research agenda, just like solving the core architectural challenges.

From "Memento" to True Memory

Leonard's tragedy in "Memento" is not that he can't function—in any given scene, he is resourceful, even brilliant. His tragedy is that he can never compound. Every experience remains external—a Polaroid, a tattoo, a note in someone else's handwriting. He can retrieve, but he cannot compress new knowledge.

As Leonard navigates this self-constructed maze, the line between truth and belief begins to blur. His condition doesn't just deprive him of memory; it forces him to constantly reconstruct meaning, making him both the detective and the unreliable narrator of his own story.

Today's AI operates under the same constraints. We have built very powerful retrieval systems: longer context windows, smarter shells, coordinated multi-agent swarms, and they work. But retrieval is not learning. A system that can look up any fact is not forced to find structure. It is not forced to generalize. The lossy compression that made training so powerful—the mechanism that turns raw data into transferable representations—is precisely what we turn off the moment we deploy.

The path forward is likely not a single breakthrough, but a layered system. In-context learning will remain the first line of adaptive defense: it is native, proven, and improving. Module mechanisms can handle the middle ground of personalization and domain specialization.

But for those truly difficult problems—discovery, adversarial adaptation, implicit knowledge that cannot be put into words—we may need to let models continue to compress experience into parameters after training. This means advances in sparse architectures, meta-learning objectives, and self-improvement loops. It might also require us to redefine what a "model" is: not a fixed set of weights, but an evolving system comprising its memory, its update algorithm, and its ability to abstract from its own experience.

The filing cabinet is getting bigger. But a bigger filing cabinet is still a filing cabinet. The breakthrough is to let the model do after deployment what made it powerful during training: compress, abstract, learn. We stand at the turning point from amnesiac models to models with a glimmer of experience. Otherwise, we'll be stuck in our own "Memento."

Related Questions

Q: What is the core problem with current large language models (LLMs) regarding memory and learning after deployment, as discussed in the a16z article?

A: The core problem is that LLMs suffer from a form of 'amnesia': an inability to form new memories after their initial training is complete. Their parameters are frozen, and they cannot internally update their knowledge based on new experiences. They rely on external 'scaffolds' like chat history (short-term sticky notes), retrieval systems (external notebooks), and system prompts (tattoos) to function, but the model itself never truly internalizes new information.

Q: According to the article, what is 'continual learning' and why is it considered a critical research direction in AI?

A: Continual learning is the research field focused on enabling AI models to keep learning and updating their parameters (weights) after deployment, thereby internalizing new knowledge and experience. It is considered critical because the gap between what a model knows at release and what it could know is becoming increasingly apparent. This ability is seen as essential for tackling problems requiring true discovery, adversarial scenarios, and knowledge too implicit to be expressed in language.

Q: What is the 'filing cabinet fallacy' argument presented in the article against relying solely on in-context learning (ICL)?

A: The 'filing cabinet fallacy' holds that a system with infinite storage and perfect retrieval (a giant filing cabinet) has not learned, because it was never forced to compress. Lossy compression is what forces a model to find structure, generalize, and build transferable representations. Relying solely on in-context learning and external memory skips this compression step, so the model never truly learns or generalizes from new information after deployment.

Q: What are the three main paths on the continual learning spectrum discussed in the article?

A: The three main paths on the spectrum are:

1. **Context:** building smarter retrieval pipelines, agent shells, and prompt orchestration without updating model weights.
2. **Modules:** using pluggable knowledge modules (compressed KV caches, adapter layers, external memory stores) to specialize a general model without full retraining.
3. **Weights:** pursuing true parameter-level learning through methods like sparse memory layers, reinforcement learning from feedback, and test-time training that compresses context into weights.

Q: What are some of the key challenges and failure modes associated with naively updating a model's weights in a production environment?

A: Key challenges and failure modes include:

- **Catastrophic forgetting:** updating on new data can destroy existing representations (the stability-plasticity dilemma).
- **Temporal decoupling:** invariant rules and mutable state are compressed into the same weights; updating one can corrupt the other.
- **Failure of logical integration:** fact updates don't propagate to their logical corollaries.
- **Safety and security risks:** alignment can degrade unpredictably, and continuous updates create an attack surface for data poisoning.
- **Auditability and governance collapse:** a continuously updated model is a moving target, making version control, regression testing, and certification difficult.
- **Privacy risks:** user interactions compressed into parameters can bake sensitive information into representations that are hard to filter.
