Axe Compute (NASDAQ: AGPU, Formerly POAI) Completes Corporate Restructuring as Aethir's Enterprise-Grade Decentralized GPU Computing Officially Enters the Mainstream Market

深潮 TechFlow · Published on 2025-12-12 · Last updated on 2025-12-12

Abstract

Predictive Oncology has officially rebranded as Axe Compute and will trade on NASDAQ under the ticker AGPU. This rebranding signifies the company's shift to operating as an enterprise-level provider, commercializing Aethir's decentralized GPU network to deliver guaranteed computational power for global AI enterprises. Axe Compute's infrastructure is supported by the Aethir Strategic Compute Reserve (SCR), which offers predictable GPU reservations, dedicated computing clusters, and enterprise-grade SLAs to address computational bottlenecks in AI training, inference, and data-intensive workloads. This move marks the first time decentralized GPU infrastructure has entered mainstream capital markets via a U.S. publicly listed company. Axe Compute will serve as the enterprise-facing delivery and contracting entity, while Aethir continues to operate as the underlying decentralized GPU-as-a-Service infrastructure. This structure bridges Web3 decentralized networks with Web2 enterprise demand, allowing businesses to use distributed GPU resources within familiar compliance and procurement frameworks. Aethir's network currently spans 93 countries, over 200 regions, and deploys more than 435,000 GPU containers, supporting high-end hardware like NVIDIA H100, H200, B200, and B300. Axe Compute's model aims to provide guaranteed GPU reservations, dedicated clusters, bare-metal performance, multi-region deployment, and enterprise SLAs—addressing common industry challenges such as long ...

Predictive Oncology today announced its official renaming to Axe Compute, trading under the stock ticker AGPU on the NASDAQ. With this rebranding, Axe Compute will operate as an enterprise-grade operator commercializing Aethir's decentralized GPU network, providing global AI companies with guaranteed enterprise-level computing power services.

Axe Compute's core computing infrastructure will be supported by the Aethir Strategic Compute Reserve (SCR). Through predictable GPU reservations, dedicated computing clusters, and enterprise-grade SLAs, this model aims to address the compute supply bottlenecks that AI enterprises currently face in training, inference, and data-intensive workloads.

Decentralized Computing Power Enters Mainstream U.S. Stock Market for the First Time

With Axe Compute listing on the NASDAQ as AGPU, decentralized GPU infrastructure has entered the mainstream corporate and capital markets for the first time in the form of a U.S. publicly traded company. Axe Compute will serve as the enterprise front-end delivery and contracting entity, providing services to corporate clients requiring compliant, stable, and scalable computing resources, while Aethir continues to operate as the underlying decentralized GPU-as-a-Service infrastructure.

This structure is seen as a crucial bridge connecting Web3 decentralized computing networks with Web2 enterprise-level computing demands, enabling corporate clients who previously found it difficult to directly adopt decentralized infrastructure to utilize distributed GPU resources within familiar compliance and procurement frameworks.

Aethir Strategic Compute Reserve Supports Enterprise-Grade Delivery

The Aethir Strategic Compute Reserve is a vital component of the Aethir decentralized GPU network. Its design goal is not to passively hold digital assets but to deploy computing resources into actual enterprise workloads, achieve commercial returns through computing utilization rates, and continuously expand computing supply capacity.

To date, Aethir's decentralized GPU network has covered 93 countries and over 200 regions, deploying more than 435,000 GPU containers. It supports mainstream high-end computing hardware, including NVIDIA H100, H200, B200, and B300, providing underlying support for global AI, gaming, and high-performance computing scenarios.

A New Computing Delivery Model for AI Enterprises

In today's AI industry, GPU procurement cycles are lengthening, centralized cloud services face long queues, and computing power prices are highly volatile. Against this backdrop, Axe Compute stated that its enterprise-grade computing model, built on the Aethir network, aims to provide customers with:

  • Guaranteed GPU reservation mechanisms
  • Dedicated training and inference clusters
  • Bare-metal performance, avoiding virtualization overhead
  • Multi-region deployment capabilities
  • Enterprise-grade SLAs and compliant contract structures

This model attempts to balance the distributed advantages of decentralized computing with enterprise-grade delivery standards.

A Key Milestone for Web3 Infrastructure Expansion into the Enterprise Market

The industry widely views Axe Compute's listing as giving enterprises and capital markets a publicly assessable example of decentralized AI infrastructure. As enterprise demand enters the Aethir network through the Axe Compute channel, the commercialization of decentralized GPU computing is gradually moving from the experimental stage to large-scale deployment.

The company stated that Axe Compute's future enterprise computing deployments will continue to run on Aethir's decentralized GPU network, promoting the practical adoption of decentralized infrastructure within the AI industry.

Axe Compute Official Website: https://axecompute.com/
Axe Compute Official X: https://x.com/axecompute

Related Questions

Q: What is the new name and stock ticker symbol for Predictive Oncology after its rebranding?

A: The new name is Axe Compute, and it trades under the stock ticker symbol AGPU on NASDAQ.

Q: What is the name of the decentralized GPU network that Axe Compute is commercializing for enterprise AI clients?

A: Axe Compute is commercializing the Aethir decentralized GPU network.

Q: What does the Aethir Strategic Compute Reserve (SCR) provide to support Axe Compute's infrastructure?

A: The Aethir Strategic Compute Reserve provides a model of predictable GPU reservations, dedicated computing clusters, and enterprise-grade SLAs to address the compute supply bottlenecks faced by AI companies.

Q: How does the structure of Axe Compute and Aethir serve as a bridge between different types of technology ecosystems?

A: This structure serves as an important bridge connecting Web3 decentralized computing networks with the enterprise-grade computing demands of Web2, allowing enterprise clients to use distributed GPU resources within familiar compliance and procurement frameworks.

Q: What are some key features of the enterprise-grade computing model offered by Axe Compute?

A: Key features include a guaranteed GPU reservation mechanism, dedicated training and inference clusters, bare-metal performance that avoids virtualization overhead, multi-region deployment capabilities, and enterprise-grade SLAs with compliant contract structures.

Related Reads

Google and Amazon Simultaneously Invest Heavily in a Competitor: The Most Absurd Business Logic of the AI Era Is Becoming Reality

In a span of four days, Amazon announced an additional $25 billion investment, and Google pledged up to $40 billion—both direct competitors pouring over $65 billion into the same AI startup, Anthropic. Rather than a typical venture capital move, this signals the latest escalation in the cloud wars. The core of the deal is not equity but compute pre-orders: Anthropic must spend the majority of these funds on AWS and Google Cloud services and chips, effectively locking in massive future compute consumption. This reflects a shift in cloud market dynamics—enterprises now choose cloud providers based on which hosts the best AI models, not just price or stability. With OpenAI deeply tied to Microsoft, Anthropic’s Claude has become the only viable strategic asset for Google and Amazon to remain competitive. Anthropic’s annualized revenue has surged to $30 billion, and it is expanding into verticals like biotech, positioning itself as a cross-industry AI infrastructure layer. However, this funding comes with constraints: Anthropic’s independence is challenged as it balances two rival investors, its safety-first narrative faces pressure from regulatory scrutiny, and its path to IPO introduces new financial pressures. Globally, this accelerates a "tri-polar" closed-loop structure in AI infrastructure, with Microsoft-OpenAI, Google-Anthropic, and Amazon-Anthropic forming exclusive model-cloud alliances. In contrast, China’s landscape differs—investments like Alibaba and Tencent backing open-source model firm DeepSeek reflect a more decoupled approach, though closed-source models from major cloud providers still dominate. The $65 billion bet is ultimately about securing a seat at the table in an AI-defined future—where missing the model layer means losing the cloud war.

marsbit · 3h ago

Computing Power Constrained, Why Did DeepSeek-V4 Open Source?

DeepSeek-V4 has been released as a preview open-source model, featuring 1 million tokens of context length as a baseline capability—previously a premium feature locked behind enterprise paywalls by major overseas AI firms. The official announcement, however, openly acknowledges computational constraints, particularly limited service throughput for the high-end DeepSeek-V4-Pro version due to restricted high-end computing power. Rather than competing on pure scale, DeepSeek adopts a pragmatic approach that balances algorithmic innovation with hardware realities in China’s AI ecosystem. The V4-Pro model uses a highly sparse architecture with 1.6T total parameters but only activates 49B during inference. It performs strongly in agentic coding, knowledge-intensive tasks, and STEM reasoning, competing closely with top-tier closed models like Gemini Pro 3.1 and Claude Opus 4.6 in certain scenarios. A key strategic product is the Flash edition, with 284B total parameters but only 13B activated—making it cost-effective and accessible for mid- and low-tier hardware, including domestic AI chips from Huawei (Ascend), Cambricon, and Hygon. This design supports broader adoption across developers and SMEs while stimulating China's domestic semiconductor ecosystem. Despite facing talent outflow and intense competition in user traffic—with rivals like Doubao and Qianwen leading in monthly active users—DeepSeek has maintained technical momentum. The release also comes amid reports of a new funding round targeting a valuation exceeding $10 billion, potentially setting a new record in China’s LLM sector. Ultimately, DeepSeek-V4 represents a shift toward open yet realistic infrastructure development in the constrained compute landscape of Chinese AI, emphasizing engineering efficiency and domestic hardware compatibility over pure model scale.

marsbit · 4h ago
