# Scaling Related Articles

The HTX News Center provides the latest articles and most in-depth analysis on "Scaling," covering market trends, project updates, technological developments, and regulatory policy in the crypto sector.

Why Does the Term 'Year of AI Computing Power Realization' Have Pitfalls? —Understanding the Four Hurdles from Policy Signals to Actual Orders in One Article

This article critiques the phrase "The First Year of AI Computing Power Cashing In," arguing that it oversimplifies a complex, multi-stage process. It proposes a "Four Gates" framework for assessing the true commercialization of domestic AI computing power (such as Huawei's Ascend chips):

1. **Policy Procurement: wide open in 2026.** Significant government funding and large bulk orders from tech giants such as Alibaba and Tencent already exist. However, purchasing hardware is not the same as deploying it for real use.
2. **Real Deployment: a crack has opened.** The key evidence is DeepSeek V4, a top-tier AI model fully migrated from NVIDIA's CUDA to domestic computing platforms. This proves the capability for real, high-level tasks, but adoption beyond leading tech firms is still nascent.
3. **Mature Software Ecosystem: a narrow crack has opened.** While frameworks like Huawei's CANN are progressing, they lag far behind NVIDIA's vast, established CUDA ecosystem in supported models and developer ease of use. Building this middle-to-downstream developer environment is estimated to need one to two more years.
4. **Scalable Replication: essentially closed.** This final gate, at which thousands of mid-sized enterprises across industries can adopt the technology without major migration costs, is not expected to open before 2027-2028.

The core risk is conflating these stages. While 2026 marks a real turning point in policy-driven procurement and proven technical viability (Gates 1 and 2), "cashing in" is premature for the industry as a whole: true, large-scale value realization depends on the later, slower-to-open gates of software maturity and scalable replication to the broader market. DeepSeek V4's migration is identified as the most critical 2026 signal, shifting the narrative from "can it work?" to "when will supply meet demand?"

marsbit · 05/08 11:34

a16z: AI's 'Amnesia', Can Continuous Learning Cure It?

The article explores the limitations of current large language models (LLMs), which, like the protagonist of the film *Memento*, are trapped in a perpetual present, unable to form new memories after training. While methods like in-context learning (ICL), retrieval-augmented generation (RAG), and external scaffolding (e.g., chat history, prompts) provide temporary workarounds, they fail to enable true internalization of new knowledge. The authors argue that compression, the core of learning during training, halts at deployment, preventing models from generalizing, discovering novel solutions (e.g., mathematical proofs), or handling adversarial scenarios. The piece introduces *continual learning* as a critical research direction to address this, categorizing approaches into three paths:

1. **Context**: Scaling external memory via longer context windows, multi-agent systems, and smarter retrieval.
2. **Modules**: Using pluggable adapters or external memory layers for specialization without full retraining.
3. **Weights**: Enabling parameter updates through sparse training, test-time training, meta-learning, distillation, and reinforcement learning from feedback.

Challenges include catastrophic forgetting, safety risks, and auditability, but overcoming them could unlock models that learn iteratively from experience. The conclusion emphasizes that while context-based methods are effective, true breakthroughs require models to compress new information into weights post-deployment, moving from mere retrieval to genuine learning.
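The "Modules" path described above can be illustrated with a low-rank adapter bolted onto a frozen layer, in the spirit of LoRA-style methods. This is a minimal NumPy sketch under assumed toy dimensions; the names and shapes are illustrative, not from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 8, 8, 2  # illustrative toy sizes

# Frozen base weight, standing in for what pretraining produced.
W = rng.normal(size=(d_out, d_in))

# Low-rank adapter: the only parameters that would be updated
# post-deployment. B starts at zero, so the adapted layer is
# initially identical to the frozen base layer.
A = rng.normal(size=(rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def adapted_forward(x):
    """Base projection plus the low-rank correction B @ (A @ x)."""
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# Before any adapter training, the output matches the frozen layer.
assert np.allclose(adapted_forward(x), W @ x)
```

The appeal of this path is the parameter count: training only `A` and `B` touches `rank * (d_in + d_out)` values instead of `d_in * d_out`, which is why adapters allow specialization without full retraining.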

marsbit · 04/25 04:23

Vitalik's Full Speech at the 2026 Hong Kong Web3 Carnival

In his keynote speech at the 2026 Hong Kong Web3 Carnival, Ethereum co-founder Vitalik Buterin outlined the platform's vision as a "world computer" and detailed its technical roadmap for the next five years. Buterin emphasized Ethereum's two core functions: serving as a public bulletin board where applications can publish verifiable data, and enabling shared computational objects like tokens, NFTs, and DAOs. He stressed that Ethereum's importance lies in its ability to provide self-sovereignty, verifiability, and permissionless participation without relying on trusted third parties.

He discussed the evolution of Layer 2 solutions, arguing that meaningful L2s should complement Ethereum by integrating necessary off-chain components, such as oracles or privacy protocols, rather than simply scaling through centralization. Key short-term goals include scaling data availability and computational capacity through initiatives like increasing the gas limit and deploying zkEVM for more complex, verifiable computations. Buterin also highlighted ongoing efforts to improve quantum resistance, privacy, and efficiency through proposals like EIP-8141 for account abstraction and quantum-safe signatures.

Long term, Ethereum aims to maximize security and decentralization through formal verification, AI-assisted proof generation, and a hybrid consensus model combining Bitcoin's longest-chain rule with BFT-style finality. The goal is a robust, easily verifiable platform that supports a wide range of applications, from finance and identity to decentralized social networks, while ensuring long-term resilience and trustlessness.

marsbit · 04/20 05:40

Tsinghua's Prediction 2 Years Ago Is Becoming Global Consensus: Meta and Two Other Major AI Institutions Have Reached the Same Conclusion

Summary: In a remarkable validation of Chinese AI research, Meta and METR have independently reached conclusions that align with the "Density Law" proposed by a Tsinghua University and FaceWall Intelligence team two years ago. Published in Nature Machine Intelligence in late 2025, the law states that the computational power required to achieve a given level of AI performance halves every 3.5 months.

This convergence became starkly evident in April 2026. METR reported that AI capabilities are doubling every 88.6 days, while Meta's new model, Muse Spark, matched the performance of a model from the previous year using less than one-tenth of the training compute. When plotted, the growth curves from all three sources, despite using different metrics (parameters, compute, task length), show an almost identical exponential slope.

The findings have profound implications: AI inference costs are collapsing faster than anticipated, powerful edge-computing AI is rapidly becoming feasible, and the industry's strategy of simply scaling model size is becoming economically inefficient. The Chinese team, which has been building its "MiniCPM" model series on this law since 2024, is seen as holding a significant two-year lead in practical engineering experience, marking a rare instance where Chinese researchers pioneered a fundamental predictive trend in AI.
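The two headline figures above can be cross-checked with back-of-the-envelope arithmetic. Assuming the 3.5-month halving rate the article quotes, the compute needed for a fixed capability level after twelve months is:

```python
# Back-of-the-envelope check of the Density Law figure quoted above:
# if the compute needed for a fixed capability halves every 3.5 months,
# what fraction is needed after a given number of months?
HALVING_MONTHS = 3.5  # figure quoted in the article

def relative_compute(months: float) -> float:
    """Fraction of today's compute needed `months` from now."""
    return 0.5 ** (months / HALVING_MONTHS)

after_one_year = relative_compute(12)
# Roughly 0.09, i.e. under one-tenth, which is consistent with the
# claim that Muse Spark matched a year-old model on less than one-tenth
# of the training compute.
assert after_one_year < 0.1
```

Note that METR's 88.6-day figure describes capability doubling at fixed compute rather than compute halving at fixed capability; the article's point is that both framings trace essentially the same exponential slope.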

marsbit · 04/13 12:14
