Original Author: Cuy Sheffield, Vice President and Head of Crypto at Visa
Original Compilation: Saoirse, Foresight News
As cryptocurrency and AI gradually mature, the most important transformations in these two fields are no longer merely "theoretically feasible" but "reliably implementable in practice." Both technologies have crossed critical thresholds and achieved significant performance improvements, but their practical adoption rates remain uneven. The core developments in 2026 will stem from this gap between performance and adoption.
Below are several key themes I have been following, along with preliminary thoughts on the direction of these technological developments, areas of value accumulation, and "why the eventual winners may differ entirely from the industry pioneers."
Theme 1: Cryptocurrency is transitioning from a speculative asset class to a high-quality technology
The first decade of cryptocurrency development was characterized by "speculative advantages"—its market is global, continuous, and highly open, with extreme volatility making cryptocurrency trading more dynamic and attractive than traditional financial markets.
However, the underlying technology was not yet ready for mainstream adoption: early blockchains were slow, expensive, and unstable. Beyond speculative scenarios, cryptocurrency almost never outperformed existing traditional systems in terms of cost, speed, or convenience.
Today, this imbalance is beginning to reverse. Blockchain technology has become faster, more economical, and more reliable. The most attractive application scenarios for cryptocurrency are no longer speculative but lie in infrastructure—particularly in settlement and payment processes. As cryptocurrency evolves into a more mature technology, speculation will gradually lose its central role: it will not disappear entirely but will no longer be the primary source of value.
Theme 2: Stablecoins are a clear achievement of cryptocurrency's "pure utility"
Stablecoins differ from previous cryptocurrency narratives in that their success is based on specific, objective criteria: in certain scenarios, stablecoins are faster, cheaper, and more widely accessible than traditional payment channels, while seamlessly integrating into modern software systems.
Stablecoins do not require users to view cryptocurrency as an "ideology" to believe in. Their applications often occur "implicitly" within existing products and workflows, which has finally enabled institutions and enterprises that once dismissed the cryptocurrency ecosystem as "too volatile and insufficiently transparent" to clearly understand its value.
It can be said that stablecoins help re-anchor cryptocurrency to "utility" rather than "speculation," setting a clear benchmark for "how cryptocurrency can succeed in practice."
Theme 3: When cryptocurrency becomes infrastructure, "distribution capability" is more important than "technological novelty"
In the past, when cryptocurrency primarily served as a "speculative tool," its "distribution" was endogenous—new tokens only needed to "exist" to naturally accumulate liquidity and attention.
As cryptocurrency becomes infrastructure, its application scenarios are shifting from the "market level" to the "product level": it is embedded in payment processes, platforms, and enterprise systems, often without end-users being aware of its presence.
This shift greatly benefits two types of entities: first, enterprises with existing distribution channels and reliable customer relationships; second, institutions with regulatory licenses, compliance systems, and risk management infrastructure. Relying solely on "protocol novelty" is no longer sufficient to drive large-scale adoption of cryptocurrency.
Theme 4: AI agents possess practical value, and their impact is extending beyond the coding field
The practicality of AI agents is increasingly evident, but their role is often misunderstood: the most successful agents are not "autonomous decision-makers" but "tools that reduce coordination costs in workflows."
So far, this has been most evident in software development, where agent tools accelerate coding, debugging, refactoring, and environment setup. More recently, however, this "tool value" has expanded well beyond that field.
Consider a tool like Claude Code. Although positioned as a "developer tool," its rapid adoption reflects a deeper trend: agent systems are becoming "interfaces for knowledge work," not limited to programming alone. Users are beginning to apply "agent-driven workflows" to research, analysis, writing, planning, data processing, and operational tasks, all of which lean more toward "general professional work" than traditional programming.
The key is not the coding workflow itself but the core pattern behind it:
- Users delegate "intentions and goals," not "specific steps";
- Agents manage "contextual information" across files, tools, and tasks;
- The work mode shifts from "linear progression" to "iterative, conversational."
In various knowledge work scenarios, agents excel at gathering context, executing bounded tasks, reducing handoffs, and accelerating iteration efficiency. However, they still have shortcomings in "open-ended judgment," "accountability," and "error correction."
Therefore, most agents used in production scenarios still need to be "scoped, supervised, and embedded in systems," rather than operating fully independently. The practical value of agents stems from the "restructuring of knowledge workflows," not "replacing labor" or "achieving full autonomy."
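To make this pattern concrete, below is a minimal sketch of a scoped, supervised agent loop in Python. It is illustrative only: the plan_next_step, tools, and approve callables are hypothetical stand-ins for a model call, a tool registry, and a human or policy check, not any particular product's API.

```python
# A minimal sketch of a scoped, supervised agent loop (illustrative only;
# the callables and their interfaces are hypothetical assumptions).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentRun:
    goal: str                                    # the user delegates an intention, not specific steps
    context: dict = field(default_factory=dict)  # accumulated files, tool outputs, and notes
    max_steps: int = 10                          # bounded scope, not open-ended autonomy

def run_agent(run: AgentRun,
              plan_next_step: Callable[[str, dict], dict],
              tools: dict[str, Callable[..., str]],
              approve: Callable[[dict], bool]) -> dict:
    """Iterative, conversational loop: plan -> (optional approval) -> act -> update context."""
    for _ in range(run.max_steps):
        step = plan_next_step(run.goal, run.context)   # e.g. a model call returning {"tool", "args", "done"}
        if step.get("done"):
            return run.context
        if step["tool"] not in tools or not approve(step):
            continue                                   # out-of-scope or rejected steps are skipped, not executed
        run.context[step["tool"]] = tools[step["tool"]](**step["args"])
    return run.context                                 # hand back to a human once the step budget is exhausted
```

The point of the sketch is the shape of the loop: the agent manages context across steps, every action is gated by scope and supervision, and the run ends by returning work to a person rather than acting indefinitely.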
Theme 5: AI's bottleneck has shifted from "intelligence level" to "trustworthiness"
AI models have rapidly improved in intelligence. The current limiting factor is no longer "language fluency or reasoning ability in isolation" but "reliability in practical systems."
Production environments have zero tolerance for three types of issues: first, AI "hallucinations" (generating false information); second, inconsistent outputs; third, opaque failure modes. Once AI involves customer service, financial transactions, or compliance, "roughly correct" results are no longer acceptable.
Establishing "trust" requires four foundations: first, traceability of results; second, memory capability; third, verifiability; fourth, the ability to proactively expose "uncertainty." Before these capabilities mature sufficiently, AI's autonomy must be constrained.
Theme 6: Systems engineering determines whether AI can be deployed in production scenarios
Successful AI products treat "models" as "components" rather than "finished products"—their reliability stems from "architectural design," not "prompt optimization."
Here, "architectural design" includes state management, control flow, evaluation and monitoring systems, and fault handling and recovery mechanisms. This is why AI development is increasingly resembling "traditional software engineering" rather than "cutting-edge theoretical research."
Long-term value will accrue to two types of entities: first, system builders; second, platform owners who control workflows and distribution channels.
As agent tools expand from coding to research, writing, analysis, and operational processes, the importance of "systems engineering" will become even more pronounced: knowledge work is often complex, state-dependent, and context-intensive, making agents that "reliably manage memory, tools, and iterative processes" (not just generate outputs) more valuable.
Theme 7: The contradiction between open models and centralized control raises unresolved governance issues
As AI systems become more powerful and integrate deeper into the economic sphere, the question of "who owns and controls the most powerful AI models" is becoming a core point of tension.
On one hand, R&D at the AI frontier remains "capital-intensive" and is increasingly concentrated due to "compute access, regulatory policies, and geopolitics"; on the other hand, open-source models and tools continue to iterate and improve, driven by "broad experimentation and ease of deployment."
This "coexistence of centralization and openness" has sparked a series of unresolved questions: dependency risk, auditability, transparency, long-term bargaining power, and control over critical infrastructure. The most likely outcome is a "hybrid model"—frontier models push the boundaries of technical capability, while open or semi-open systems integrate these capabilities into "widely distributed software."
Theme 8: Programmable money gives rise to new agent payment flows
When AI systems play a role in workflows, their need for "economic interaction" increases—such as paying for services, calling APIs, compensating other agents, or settling "usage-based interaction fees."
This demand has brought "stablecoins" back into focus: they are seen as "machine-native currency," programmable, auditable, and transferable without human intervention.
Take developer-facing protocols such as x402 as an example. Although still in early experimental stages, the direction is clear: payment flows will operate as "APIs" rather than traditional "checkout pages," enabling "continuous, granular transactions" between software agents.
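The general shape of such a flow can be sketched as follows. This is not the actual x402 specification: the 402 handling, header name, response fields, and the pay_with_stablecoin helper are all hypothetical, meant only to show payment as an in-band API step rather than a checkout page.

```python
# An illustrative sketch of the "payment as an API call" pattern, loosely in the spirit
# of HTTP 402-based protocols. Header names, response fields, and the payment helper
# are hypothetical assumptions, not a real protocol definition.
import requests

def fetch_paid_resource(url: str, pay_with_stablecoin) -> bytes:
    """An agent calls an API; if the server asks for payment, it pays and retries programmatically."""
    resp = requests.get(url)
    if resp.status_code == 402:                  # "Payment Required": the price is part of the protocol
        quote = resp.json()                      # e.g. {"amount": "0.001", "asset": "USDC", "pay_to": "..."}
        receipt = pay_with_stablecoin(quote)     # hypothetical: signs/settles a small stablecoin transfer
        resp = requests.get(url, headers={"X-Payment-Receipt": receipt})  # retry with proof of payment
    resp.raise_for_status()
    return resp.content                          # continuous, granular purchases with no checkout page
```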
Currently, this field is still nascent: transaction sizes are small, user experience is rough, and security and permission systems are still being refined. But infrastructure innovation often starts from such "early exploration."
Notably, the significance is not "autonomy for autonomy's sake" but rather that "new economic behaviors become possible when software can programmatically complete transactions."
Conclusion
Whether for cryptocurrency or artificial intelligence, the early development stages favored "eye-catching concepts" and "technological novelty"; in the next stage, "reliability," "governance capability," and "distribution capability" will become more critical competitive dimensions.
Today, the technology itself is no longer the primary limiting factor; "embedding the technology into actual systems" is the key.
In my view, the hallmark of 2026 will not be "a single breakthrough technology" but rather the "steady accumulation of infrastructure"—facilities that, while operating silently, are quietly reshaping "how value flows" and "how work is done."