DeAI: In the Era of AI's 'Wild Growth', Why Web3 is Needed to Govern It

marsbit · Published 2026-01-06 · Last updated 2026-01-06

Summary

The article "DeAI: Why Web3 Governance is Needed in the Era of AI's 'Wild Growth'" discusses the rise of decentralized AI (DeAI) as a counterpoint to the current centralized AI paradigm. It highlights two major concerns with centralized AI: the lack of verifiability in model outputs and the scalability limitations of centralized infrastructure. DeAI addresses these issues through verifiable compute, using cryptography and consensus mechanisms to ensure transparent and provable model execution. This approach not only builds trust in AI outputs but also enables cross-border collaboration. Projects like Prime Intellect and Inference Labs are already exploring distributed, verifiable inference on decentralized GPU networks. Economically, DeAI aligns with the industry's shift in Return-on-GPU (revenue per GPU-hour) from raw compute stacking toward efficiency- and value-oriented inference. A permissionless, global network of heterogeneous GPUs could compete with centralized providers like AWS on cost while offering greater transparency. Moreover, DeAI promises to democratize AI development, allowing contributors, whether they provide compute, data, or applications, to participate in governance and share rewards. Although still in its early stages, DeAI represents a path toward ethical, open, and decentralized AI ecosystems, complementing rather than replacing centralized models.

Original Author: K, Web3Caff Research Analyst

In the trajectory of artificial intelligence development, the past two years have seen a significant structural shift. Model capabilities keep breaking through, inference efficiency is continually optimized, and global capital and governments are flocking to the field. Behind this fervor and the concentration of capital in centralized players, however, DeAI (decentralized AI training and inference architecture) is emerging as an alternative path, one that directly addresses two hidden dangers in current AI development: blind-trust mechanisms and scalability fragility.

The prosperity of centralized AI is built on massive physical infrastructure: supercomputing clusters, closed black-box model inference, packaged SaaS products, and internal enterprise API calls. But just as the internet evolved from closed to open, from Web2 platforms to Web3 protocols, AI development will inevitably face two fundamental questions. First, how can users verify that model inference results are authentic and have not been tampered with? Second, when training and inference cross geographical, device, cultural, and legal boundaries, can centralized architectures still maintain cost and performance advantages?

DeAI networks propose a fundamentally different solution from the centralized paradigm. They center on the concept of "Verifiable Compute": using cryptography and consensus mechanisms to ensure that every model run has a traceable, provable execution path. This not only resolves the user's "blind trust" in the model but also provides a universal trust foundation for cross-border collaboration. Pioneers like Prime Intellect and Inference Labs have already implemented partially verifiable inference on remote GPU clusters, opening new possibilities for distributed training and autonomous AI services. [70]
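As a minimal sketch of the verifiable-compute idea (not any specific project's protocol), a node could publish a hash commitment binding the model, the input, and the claimed output, and a verifier could check it by deterministic re-execution. All names and the toy "model" below are hypothetical:

```python
import hashlib
import json

def commit(model_id: str, prompt: str, output: str) -> str:
    """Hash-commit to one inference run (toy scheme, not a real protocol)."""
    record = json.dumps({"model": model_id, "in": prompt, "out": output},
                        sort_keys=True)  # canonical serialization
    return hashlib.sha256(record.encode()).hexdigest()

def verify(model_id, prompt, claimed_output, claimed_commit, rerun) -> bool:
    """Check the commitment, then reproduce the output by re-execution."""
    if commit(model_id, prompt, claimed_output) != claimed_commit:
        return False                       # commitment does not match claim
    return rerun(prompt) == claimed_output # deterministic re-run must agree

# Toy deterministic "model" standing in for a real inference call
model = lambda p: p.upper()
c = commit("toy-v1", "hello", model("hello"))
print(verify("toy-v1", "hello", "HELLO", c, model))  # True
```

Real systems replace naive re-execution with cheaper proofs (fraud proofs, zkML, or trusted hardware), since re-running every inference would double the compute cost.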

From an economic perspective, the rise of DeAI is also closely tied to the AI industry's shift in RoG (Return-on-GPU, i.e., the revenue generated per GPU-hour of compute). The design of GPT-4.1 no longer simply pursues larger models and stacked compute; it emphasizes fine-tuning and optimized allocation of inference resources, for example reusing existing context during generation and reducing unnecessary recomputation to minimize wasted output and token consumption, thereby directing more compute toward genuinely valuable inference. [68] This marks a shift in industry focus from "how many GPUs can be burned" to "how much value can be earned per hour." This efficiency orientation provides an excellent breakthrough point for decentralized AI networks.
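To make the RoG metric concrete, here is a back-of-the-envelope sketch; the figures are illustrative assumptions, not numbers from the article:

```python
def return_on_gpu(revenue_usd: float, gpu_hours: float) -> float:
    """RoG as defined above: revenue generated per GPU-hour of compute."""
    return revenue_usd / gpu_hours

# Hypothetical workload: context reuse avoids 20% of recomputation,
# so the same revenue is earned with fewer GPU-hours.
baseline  = return_on_gpu(1200.0, 1000.0)  # 1.20 USD per GPU-hour
optimized = return_on_gpu(1200.0, 800.0)   # 1.50 USD per GPU-hour
print(baseline, optimized)
```

The point of the metric is that efficiency gains raise RoG without any new hardware, which is why the industry's attention is moving from GPU count to GPU yield.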

Centralized GPU clusters, burdened with high fixed costs and efficiency bottlenecks at large scale, will struggle to compete with a permissionless, heterogeneous GPU network contributed by users worldwide. If such a network also offers verifiability, it can not only match the cost structure of centralized infrastructures like AWS and Azure but also be inherently transparent and trustworthy.
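One way to reason about that competition is a toy cost model: a decentralized network pays less per raw GPU-hour but carries verification overhead and lower utilization. All numbers below are illustrative assumptions, not benchmarks:

```python
def effective_cost(price_per_hour: float, verification_overhead: float,
                   utilization: float) -> float:
    """USD per *useful* GPU-hour: raw price, inflated by redundant
    verification work, divided by achieved utilization (toy model)."""
    return price_per_hour * (1 + verification_overhead) / utilization

centralized   = effective_cost(3.00, 0.00, 0.90)  # e.g. on-demand cloud GPU
decentralized = effective_cost(0.80, 0.25, 0.70)  # cheap idle GPUs, but with
                                                  # verification and churn costs
print(round(centralized, 2), round(decentralized, 2))
```

Under these assumed figures the decentralized network still comes out cheaper per useful hour, but the margin shrinks as verification overhead grows, which is why verification efficiency is a core bottleneck for DeAI.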

Furthermore, the impact of DeAI extends far beyond the technical level; it will reshape the ownership and participation structure of AI development. In today's closed training ecosystem dominated by giants like OpenAI and Anthropic, the vast majority of developers exist only as "model users," unable to share in training profits or inference decisions. In a DeAI network, every contributor, whether a node providing compute, a user providing data, or an engineer building Agent applications, can participate in governance and share profits through the protocol. This is not only an innovation in economic mechanisms but also a step forward in the ethics of AI development.

Of course, DeAI is still in its early exploratory stage. It has not yet reached performance levels sufficient to replace centralized models, nor has it broken through bottlenecks such as network stability and verification efficiency. But the future of AI will not follow a single path; it will run on multiple parallel tracks. Centralized platforms will continue to dominate the enterprise market, pursuing extreme productization and RoG optimization; meanwhile, DeAI networks will grow in edge scenarios and emerging markets, gradually evolving an open model ecosystem with a vitality of its own. Just as the internet brought freedom of information, DeAI brings autonomy over intelligence. Its importance lies not only in its technical advantages but in the possibility it offers of another world: a future where we need not trust specific intermediaries, yet can still trust intelligence itself.

This content is excerpted from the research report "Web3 2025 Annual 40,000-Word Report (Part 2): Facing the Historic Convergence of Finance × Computing × Internet Order, Is a Major Industry Shift About to Begin? A Panoramic Analysis of Its Structural Changes, Value Potential, Risk Boundaries, and Future Prospects" published by Web3Caff Research.

This report (now available for free reading) was written by Web3Caff Research analyst K. It systematically maps out the core logic behind Web3's developmental changes in 2025, focusing on why application exploration and system collaboration are gradually becoming new focal points against the backdrop of evolving underlying infrastructure and regulatory capabilities. Key points include:

  1. Background of Stage Evolution: The underlying reasons for the shift in industry focus after the completion of a phase of infrastructure construction;
  2. Key Mechanism Changes: The impact of gradually clarifying rule frameworks and on-chain mechanisms on system operation methods;
  3. Main Application Directions: Exploration paths centered on payment settlement, real-world scenario mapping, and programmable collaboration;
  4. Future Development Directions: Discussing the evolution of Web3 in 2026 and beyond.

Related Questions

Q: What are the two main concerns about centralized AI that DeAI aims to address?

A: DeAI aims to address the "blind trust mechanism" and "scalability fragility" of centralized AI, ensuring verifiable computation and providing a trust foundation for cross-border collaboration.

Q: How does DeAI ensure the authenticity and integrity of model inference results?

A: DeAI centers on "Verifiable Compute," employing cryptography and consensus mechanisms to ensure that each model run has a traceable and provable execution path.

Q: What economic shift in the AI industry does the article mention as favorable for DeAI's development?

A: The industry is shifting focus from "how many GPUs can be burned" to "how much value can be earned per GPU-hour" (Return-on-GPU, or RoG), emphasizing fine-tuning and efficient resource allocation, which benefits decentralized GPU networks.

Q: How does DeAI change the ownership and participation structure of AI development compared to centralized models?

A: In DeAI networks, contributors such as compute providers, data providers, and application developers can participate in governance and share profits through the protocol, unlike in centralized ecosystems where most developers are merely "model users."

Q: What challenges does DeAI currently face, according to the article?

A: DeAI is still in its early stages and has not yet matched the performance of centralized models, while also facing bottlenecks in network stability and verification efficiency.

Related Reading

Gensyn AI: Don't Let AI Repeat the Mistakes of the Internet

In recent months, the rapid growth of the AI industry has attracted significant talent from the crypto sector. A persistent question among researchers at the intersection of both fields is whether blockchain can become a foundational part of AI infrastructure. While many previous AI and Crypto projects focused on application layers (like AI Agents, on-chain inference, data markets, and compute rentals), few achieved viable commercial models. Gensyn differentiates itself by targeting the most critical and expensive layer of AI: model training. Gensyn aims to organize globally distributed GPU resources into an open AI training network: developers submit training tasks, nodes provide computational power, and the network verifies results while distributing incentives. The core issue addressed is not decentralization for its own sake, but the increasing concentration of compute power among tech giants. In the era of large models, access to GPUs (like the H100) has become a decisive bottleneck, dictating the pace of AI development, and major AI companies depend heavily on large cloud providers for compute. Gensyn's approach is significant for several reasons:

  1. It operates at the core infrastructure layer (model training), the most resource-intensive and technically demanding part of the AI value chain.
  2. It proposes a more open, collaborative model for compute, potentially increasing resource utilization by dynamically pooling idle GPUs, similar to early cloud-computing logic.
  3. Its technical moat lies in solving complex challenges like verifying training results, ensuring node honesty, and maintaining reliability in a distributed environment, making it more of a deep-tech infrastructure company.
  4. It targets a validated, high-growth market with genuine demand, rather than pursuing blockchain integration without purpose.

Ultimately, the boundaries between Crypto and AI are blurring. AI requires global resource coordination, incentive mechanisms, and collaborative systems, areas where crypto-native solutions excel. Gensyn represents a step toward making advanced training capabilities more accessible and collaborative, moving beyond a niche controlled by a few giants. If successful, it could evolve into a fundamental piece of AI infrastructure, where the most enduring value in the AI era is often created.

marsbit · 6h ago

Why is China's AI Developing So Fast? The Answer Lies Inside the Labs

A US researcher's visit to China's top AI labs reveals distinct cultural and organizational factors driving China's rapid AI development. While talent, data, and compute are similar to the West, Chinese labs excel through a pragmatic, execution-focused culture: less emphasis on individual stardom and conceptual debate, and more on teamwork, engineering optimization, and mastering the full tech stack. A key advantage is the integration of young students and researchers who approach model-building with fresh perspectives and low ego, prioritizing collective progress over personal credit. This contrasts with the US culture of self-promotion and "star scientist" narratives. Chinese labs also exhibit a strong "build, don't buy" mentality, preferring to develop core capabilities—like data pipelines and environments—in-house rather than relying on external services. The ecosystem feels more collaborative than tribal, with mutual respect among labs. While government support exists, its scale is unclear, and technical decisions appear driven by labs, not state mandates. Chinese companies across sectors, from platforms to consumer tech, are building their own foundational models to control their tech destiny, reflecting a broader cultural drive for technological sovereignty. Demand for AI is emerging, with spending patterns potentially mirroring cloud infrastructure more than traditional SaaS. Despite challenges like a less mature data industry and GPU shortages, Chinese labs are propelled by vast talent, rapid iteration, and deep integration with the open-source community. The competition is evolving beyond a pure model race into a contest of organizational execution, developer ecosystems, and industrial pragmatism.

marsbit · 8h ago

3 Years, 5 Times: The Rebirth of a Century-Old Glass Factory

Corning, a 175-year-old glass company, is experiencing a dramatic revival as a key player in AI infrastructure, driven by surging demand for high-performance optical fiber in data centers. AI data centers require vastly more fiber than traditional ones—5 to 10 times as much per rack—to handle high-speed data transmission between GPUs. This structural demand shift, coupled with supply constraints from the lengthy expansion cycle for fiber preforms, has created a significant supply-demand gap. Nvidia has invested in Corning, along with Lumentum and Coherent, in a $4.5 billion total commitment to secure the optical supply chain for AI. Corning's competitive edge lies in its expertise in producing ultra-low-loss, high-density, and bend-resistant specialty fiber, which is critical for 800G+ and future 1.6T data rates. Its deep involvement in co-packaged optics (CPO) with partners like Nvidia further solidifies its position. While not the largest fiber manufacturer globally, Corning's revenue from enterprise/data center clients now exceeds 40% of its optical communications sales, and it has secured multi-year supply agreements with major hyperscalers including Meta and Nvidia. Financially, Corning's optical communications revenue has surged, doubling from $1.3 billion in 2023 to over $3 billion in 2025. Its stock price has risen nearly 6-fold since late 2023. Key future catalysts include the rollout of Nvidia's CPO products and the scale of undisclosed customer agreements. However, risks include high current valuations and potential disruption from next-generation technologies like hollow-core fiber. The company's long-term bet on light over electricity, maintained even through the telecom bubble crash, is now being validated by the AI boom.

marsbit · 8h ago
