# Related Articles on GPU

The HTX news center offers the latest articles and in-depth analysis on "GPU", covering market trends, project news, technology developments, and regulatory policy in the crypto industry.

Who Controls Computing Power, Implicitly Controls the Future of AI: Anastasia, Co-founder of Gonka Protocol

The centralization of compute power, not just AI models, is the critical power node in AI's future, argues Anastasia Matveeva, co-founder of Gonka Protocol. While public debate focuses on models, true power lies in the underlying infrastructure: access to GPUs, power, and data center capacity. This centralization creates structural barriers to innovation, enforces a rent-extraction model, and introduces systemic fragility. Gonka is a permissionless global network designed to decentralize AI compute. It enables anyone to contribute or access GPU resources via a programmatic, open API. Key to its efficiency is an architecture that minimizes overhead, ensuring most compute is used for actual AI workloads (primarily inference) rather than network maintenance. Rewards and governance are tied to verified compute contribution, not capital stake. The protocol addresses scalability and accessibility by allowing participants of all sizes to join without permission, with influence proportional to their compute power. It supports the emerging AI agent economy with transparent, dynamic pricing and reliable, verifiable computation. While not currently optimized for strict data sovereignty, its decentralized design avoids data accumulation, and its governance allows for future evolution to meet regulatory demands. The urgency for such decentralized solutions is high: without them, a calcified AI future dominated by a few infrastructure gatekeepers becomes more likely.
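
The incentive rule described above, where rewards and influence scale with verified compute contributed rather than capital staked, can be sketched as a simple proportional split. All names and FLOP figures below are illustrative assumptions; Gonka's actual mechanism is not specified here.

```python
# Hypothetical sketch: split an epoch's reward pool in proportion to
# verified compute share, ignoring capital stake entirely.

def distribute_rewards(verified_flops: dict[str, float],
                       epoch_reward: float) -> dict[str, float]:
    """Return each node's reward, proportional to its verified compute."""
    total = sum(verified_flops.values())
    if total == 0:
        return {node: 0.0 for node in verified_flops}
    return {node: epoch_reward * flops / total
            for node, flops in verified_flops.items()}

# Three hypothetical nodes reporting verified FLOP counts for one epoch.
contributions = {"node-a": 6e15, "node-b": 3e15, "node-c": 1e15}
rewards = distribute_rewards(contributions, epoch_reward=100.0)
print(rewards)  # {'node-a': 60.0, 'node-b': 30.0, 'node-c': 10.0}
```

Governance weight could follow the same proportionality, which is what makes influence track compute power rather than token holdings.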

marsbit03/03 07:58

The Next Earthquake in AI: Why the Real Danger Isn't the SaaS Killer, But the Computing Power Revolution?

The next seismic shift in AI isn't about SaaS disruption but a fundamental revolution in computing power. While many focus on AI applications like Claude Cowork replacing traditional software, the real transformation is happening beneath the surface: a dual revolution in algorithms and hardware that threatens NVIDIA’s dominance. First, algorithmic efficiency is advancing through architectures like MoE (Mixture of Experts), which activates only a fraction of a model’s parameters during computation. DeepSeek-V2, for example, uses just 9% of its 236 billion parameters to match GPT-4’s performance, decoupling AI capability from compute consumption and slashing training costs by up to 90%. Second, specialized inference hardware from companies like Cerebras and Groq is replacing GPUs for AI deployment. These chips integrate memory directly onto the processor, eliminating latency and drastically reducing inference costs. OpenAI’s $10 billion deal with Cerebras and NVIDIA’s acquisition of Groq signal this shift. Together, these trends could collapse the total cost of developing and running state-of-the-art AI to 10-15% of current GPU-based approaches. This paradigm shift undermines NVIDIA’s monopoly narrative and its valuation, which relies on the assumption that AI growth depends solely on its hardware. The real black swan event may not be an AI application breakthrough but a quiet technical report confirming the decline of GPU-centric compute.

marsbit02/12 04:38

The Next Earthquake in AI: Why the Real Danger Isn't the SaaS Killer, but the Computing Power Revolution?

The next seismic shift in AI is not the threat of "SaaS killers" but a fundamental revolution in computing power. While many focus on how AI applications like Claude Cowork are disrupting traditional software, the real transformation is happening beneath the surface, in the infrastructure that powers AI. Two converging technological paths are challenging NVIDIA's GPU dominance:

1. **Algorithmic Efficiency**: DeepSeek's Mixture-of-Experts (MoE) architecture allows massive models (e.g., DeepSeek-V2 with 236B parameters) to activate only a small fraction (9%) of "experts" during computation, achieving GPT-4-level performance at 10% of the computational cost. This decouples AI capability from sheer compute power.
2. **Specialized Hardware**: Inference-optimized chips from companies like Cerebras and Groq integrate memory directly onto the chip, eliminating data-transfer delays. This "zero-latency" design drastically improves speed and efficiency, prompting even OpenAI to sign a $10B deal with Cerebras.

Together, these advances could cause a cost collapse: training costs may drop by 90%, and inference costs could fall by an order of magnitude. The total cost of running world-class AI may plummet to 10-15% of current GPU-based solutions. This paradigm shift threatens NVIDIA's valuation, built on the assumption of perpetual GPU dominance. If the market realizes that GPUs are no longer the only, or best, option, the foundation of NVIDIA's trillions in market cap could crumble. The real black swan event may not be a new AI application, but a quiet technical breakthrough that reshapes the compute landscape.
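
The sparse-activation idea behind MoE, where only a handful of experts run per token, can be sketched in a few lines. The sizes and the softmax top-k router below are illustrative toy values, not DeepSeek-V2's actual design; the point is that only `top_k / n_experts` of the expert parameters are touched per token.

```python
import numpy as np

# Toy Mixture-of-Experts forward pass with top-k routing (illustrative only).
rng = np.random.default_rng(0)
n_experts, top_k, d = 64, 6, 32                 # hypothetical sizes

experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
router = rng.standard_normal((d, n_experts))                       # gating network

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router                          # one score per expert
    top = np.argsort(logits)[-top_k:]            # indices of the top-k experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                         # softmax over the chosen experts
    # Only the selected experts' weights participate in the computation.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

y = moe_forward(rng.standard_normal(d))
print(f"active parameter fraction: {top_k / n_experts:.0%}")  # prints "9%"
```

With 6 of 64 experts active, roughly 9% of expert parameters run per token, mirroring the 9% figure the article cites for DeepSeek-V2.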

marsbit02/11 01:58

On the Eve of the Quantum Computing Wave: Why Nvidia Might Emerge as the Biggest Winner?

Amidst the prevailing market perception that quantum computing remains a distant, sci-fi concept, Barclays' latest research challenges this view, arguing that the technology is on the verge of transitioning from a "lab toy" to a commercial tool. The report highlights several key misconceptions:

- Quantum computing is not "too early": the industry is approaching a watershed moment around 2026-2027, when "quantum advantage" is expected to be demonstrated, requiring stable operation of 100 logical qubits.
- Quantum computers will not replace classical systems like GPUs but will complement them. Each logical qubit may require a GPU for error correction and control, potentially driving significant demand for chips from companies like NVIDIA and AMD, with projected incremental value exceeding $100 billion by 2040.
- Hardware approaches are not equal: trapped ions currently lead in precision, silicon spin offers scalability potential, and neutral atoms excel in qubit count.
- Quantum computers are not yet powerful enough to break modern encryption, which would require thousands of logical qubits, far beyond current capabilities.
- The investment landscape is broader than often assumed, with opportunities across quantum processors, supply chains, semiconductor manufacturing, and enabling infrastructure, spanning both public and private companies.
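
The scale of classical co-processing implied by the "one GPU per logical qubit" point can be framed with a back-of-envelope calculation. The 1,000:1 physical-to-logical overhead and the 4,000-logical-qubit encryption threshold below are rough illustrative assumptions, not figures from the Barclays report (which says only "thousands").

```python
# Back-of-envelope sketch of error-correction overhead. All ratios here
# are illustrative assumptions for rough scale, not report figures.

physical_per_logical = 1_000      # assumed surface-code-style overhead
milestones = {
    "quantum advantage (per the article)": 100,   # logical qubits
    "breaking modern encryption": 4_000,          # "thousands"; exact count assumed
}

for label, n_logical in milestones.items():
    n_physical = n_logical * physical_per_logical
    # One GPU per logical qubit for decoding/control, per the report's framing.
    print(f"{label}: {n_logical} logical qubits -> "
          f"~{n_physical:,} physical qubits, ~{n_logical} GPUs")
```

Even the near-term milestone implies on the order of a hundred GPUs running continuously per machine, which is the mechanism behind the projected incremental chip demand.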

比推02/09 15:01

NVIDIA's $2 Billion Investment in CoreWeave: The Industrial Revolution of Crypto Computing Power Transitioning to AI

NVIDIA has announced a strategic investment of $2 billion in CoreWeave’s Class A common stock, marking a pivotal shift of crypto mining infrastructure toward AI compute. CoreWeave, originally a major Ethereum PoW mining operator, transitioned to AI cloud services after Ethereum’s move to Proof-of-Stake. The investment supports CoreWeave’s goal to build over 5 gigawatts of AI infrastructure by 2030, representing nearly one-third of global AI compute capacity. This move accelerates the transformation of crypto mining firms with idle GPU resources into AI service providers, improving global compute efficiency and creating a “dual-track” model where GPU clusters can serve both crypto and AI workloads. The deal also strengthens the link between crypto and AI ecosystems, enabling new applications such as AI-generated NFTs, on-chain AI inference, and AI-powered DeFi. Capital markets have responded positively, with rising valuations for mining firms like Hut 8 and Iris Energy. Tokens reliant on GPU compute, such as RNDR and Akash, also stand to benefit. However, risks include potential GPU shortages for smaller mining coins and increased regulatory scrutiny as companies like CoreWeave operate under stricter compliance frameworks. Overall, NVIDIA’s investment signifies a major convergence of crypto and AI compute, reshaping value models and laying the foundation for a new era of integrated AI and Web3 applications.

marsbit01/27 13:38

Jensen Huang Announces 8 New Products in 1.5 Hours, NVIDIA Fully Bets on AI Inference and Physical AI

NVIDIA CEO Jensen Huang unveiled eight major announcements during his CES 2026 keynote, focusing on advancing AI inference and physical AI technologies. The centerpiece was the NVIDIA Vera Rubin POD AI supercomputer, which integrates six custom chips (Vera CPU, Rubin GPU, NVLink 6 Switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-X CPO) designed to work in concert. The Rubin GPU offers 5x higher inference and 3.5x higher training performance than Blackwell, with support for HBM4 memory. The Vera Rubin NVL72 system delivers 3.6 EFLOPS of NVFP4 inference performance in a single rack, with enhanced memory bandwidth. NVIDIA also introduced the Spectrum-X Ethernet CPO for improved power efficiency, an inference context memory storage platform that optimizes KV cache storage and reduces recomputation, and the DGX SuperPOD based on the Rubin architecture, cutting token costs for large MoE models to 1/10. On the software side, NVIDIA expanded its open-source offerings, including new models and datasets, and emphasized the rise of physical AI. The company open-sourced the Alpha-Mayo model for autonomous driving, enabling reasoning-based decision-making, and announced production-ready NVIDIA DRIVE platforms for Mercedes-Benz. Partnerships with Siemens and robotics firms like Boston Dynamics were highlighted, underscoring NVIDIA's full-stack approach to AI infrastructure and real-world AI applications.

marsbit01/06 04:36
