# Related Articles on AI

The HTX News Center offers the latest articles and in-depth analysis on AI, covering market trends, project news, technology developments, and regulatory policy in the crypto industry.

## Giants Collectively Raise Prices: Is the AI Price Hike Wave Coming? Can We Still Afford Lobster Employees?

Major AI companies, including Alibaba Cloud, Baidu Intelligent Cloud, Tencent Cloud, and Zhipu, have recently announced significant price increases for AI computing and storage services, with hikes ranging from 5% to over 460% for some models. This trend follows similar moves by global giants such as Amazon AWS and Google Cloud earlier this year. The price surge is driven by explosive demand for computing power, fueled by the rapid adoption of AI agents like OpenClaw (referred to as "Lobster" in the article), which consume tokens at rates dozens or even hundreds of times higher than traditional AI applications. This has created a severe supply-demand imbalance. Additionally, shortages of high-end hardware, such as AI chips and high-bandwidth memory (HBM), have constrained computing capacity and raised operational costs. The industry is shifting away from loss-leading pricing strategies toward value-based models, prioritizing sustainable development over market-share competition. A new "token economy" is emerging, in which pricing is increasingly based on token usage, complexity, and speed rather than flat fees. This reflects AI computing's evolution from a generic service into a specialized, high-value resource. Some companies are even considering token allowances as part of employee benefits, underscoring compute's growing role as both a production tool and a cost factor. The article concludes by asking whether AI services will remain affordable as compute costs continue to rise.
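The "token economy" pricing described above can be sketched as a simple cost function in which token volume is scaled by complexity and speed tiers. All rates, tier names, and multipliers below are illustrative assumptions, not any vendor's real price list:

```python
# Hypothetical sketch of "token economy" pricing: cost scales with token
# volume, task complexity, and speed tier rather than a flat fee.
# Every number and tier name here is an assumption for illustration only.

RATE_PER_1K_TOKENS = 0.002  # assumed base price in USD per 1,000 tokens
COMPLEXITY_MULTIPLIER = {"simple": 1.0, "reasoning": 2.5, "agentic": 6.0}
SPEED_MULTIPLIER = {"batch": 0.5, "standard": 1.0, "priority": 2.0}

def token_cost(tokens: int, complexity: str = "simple", speed: str = "standard") -> float:
    """Estimate cost in USD for a given token volume and service tier."""
    base = tokens / 1000 * RATE_PER_1K_TOKENS
    return round(base * COMPLEXITY_MULTIPLIER[complexity] * SPEED_MULTIPLIER[speed], 6)

# Agent workloads can burn tokens at 100x the rate of a chat session,
# so the same rate card produces very different bills:
chat = token_cost(10_000)                 # a typical chat session
agent = token_cost(1_000_000, "agentic")  # a single agent run
print(f"chat: ${chat}, agent run: ${agent}")
```

Under these assumed rates, one agent run costs several hundred times a chat session, which is exactly the demand asymmetry the article says is driving the price hikes.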

marsbit · 04/13 04:20


## When AI's Bottleneck Is No Longer the Model: Perseus Yang's Open Source Ecosystem Building Practices and Reflections

In 2026, the AI industry's primary bottleneck is no longer model capability but rather the encoding of domain knowledge, agent-world interfaces, and toolchain maturity. The open-source community is rapidly bridging this gap, evidenced by projects like OpenClaw and Claude Code experiencing explosive growth in their Skill ecosystems. Perseus Yang, a contributor to over a dozen AI open-source projects, argues that Skill systems are the most underestimated infrastructure of the AI agent era. They enable non-coders to program AI by writing natural language SKILL.md files, transferring power from engineers to all professionals. His project, GTM Engineer Skills, demonstrates this by automating go-to-market workflows, proving Skills can extend far beyond engineering into areas like product strategy and business analysis. He also identifies a critical blind spot: while browser automation thrives, agent operations are nearly absent from mobile apps, the world's dominant computing interface. His project, OpenPocket, is an open-source framework that allows agents to operate Android devices via ADB. It features human-in-the-loop security, agent isolation, and the ability for agents to autonomously create and save new reusable Skills. Yang believes the value of open source lies not in the code itself, but in defining the infrastructure standards during this formative period. His work validates the SKILL.md format as a portable unit for agent capability and pioneers new architectures for agent operation in API-less environments. His design philosophy prioritizes usability for non-technical users, ensuring the agent ecosystem can be expanded by practitioners from all fields, not just engineers.
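To make the "natural language SKILL.md" idea concrete, here is a minimal sketch of what such a file could look like. The file name, frontmatter fields, and section layout are assumptions for illustration; the exact schema varies by project (the article does not reproduce one):

```markdown
---
name: weekly-competitor-digest
description: Compile a weekly digest of competitor product announcements.
---

# Weekly Competitor Digest

## When to use this skill
Run every Monday morning, or whenever the user asks for a competitor update.

## Steps
1. Search the saved source list for announcements from the past 7 days.
2. Summarize each item in two sentences: what shipped, and why it matters.
3. Group items by competitor and rank by likely impact on our roadmap.
4. Output a markdown digest and flag anything that needs a pricing response.
```

The point of the format is visible here: the entire "program" is plain prose and a numbered checklist, so a go-to-market analyst can author it without writing a line of code.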

marsbit · 04/13 01:29


## From Wall Street to Silicon Valley, Anthropic Steals All the Spotlight from OpenAI

From Wall Street to Silicon Valley, Anthropic is seizing the spotlight from OpenAI. In just one year, the power dynamics in the AI industry have shifted significantly. Anthropic is now challenging OpenAI on multiple fronts: market share, secondary market valuation, venture capital sentiment, and public perception. At the recent HumanX AI conference, the consensus was clear: Anthropic is the new darling of Silicon Valley. Its annualized recurring revenue (ARR) has reportedly reached $30 billion, surpassing OpenAI's $25 billion. In the secondary market, Anthropic's valuation has overtaken OpenAI's, with strong investor preference for its shares. Anthropic dominates the enterprise sector, holding 42-54% of the code generation market and 40% of the enterprise agent market, compared with OpenAI's 21% and 27%, respectively. It also leads in new enterprise adoption and cost efficiency. While OpenAI retains a strong consumer user base with ChatGPT, it faces challenges in monetization and high operational expenses. A leaked internal memo from OpenAI identified Anthropic as its biggest threat, emphasizing its compute infrastructure advantage, but the very need for such a memo highlights OpenAI's defensive position. Despite OpenAI's strong backing from Amazon and NVIDIA, the market now rewards efficiency, cost-effectiveness, and precise market fit, areas where Anthropic currently leads. However, experts caution that the AI race is far from over and the landscape remains highly fluid.

marsbit · 04/13 01:07


## Stop Staring at GPUs: CPUs Are Becoming the 'New Bottleneck' in the AI Era

In the AI era, while GPUs have long been the focus for computational power, the narrative is shifting as CPUs increasingly become the new bottleneck. By 2026, system performance depends more on execution and scheduling capabilities, with CPUs playing a critical role in enabling AI operations. A supply crisis is emerging, with server CPU prices rising about 30% in Q4 2025 due to high demand and production constraints, as GPU orders compete for limited semiconductor capacity. Companies like Google and Intel have deepened collaborations, and Elon Musk is investing in custom CPU solutions for his ventures, highlighting the strategic importance of CPU infrastructure. The shift is driven by the rise of agentic AI, where CPUs handle tasks such as multi-step reasoning, API calls, and data I/O, accounting for 50–90.6% of total latency in intelligent workloads. Expanding context windows in AI models further strain GPU memory, necessitating CPU offloading for key-value cache management. Major players are adopting varied strategies: Intel is strengthening its Xeon processor line and partnerships; AMD is benefiting from increased demand, with its server CPU revenue share surpassing 40%; and NVIDIA is designing CPUs like Grace to optimize GPU-CPU synergy through high-speed interconnects. The industry is witnessing a rebalancing of compute infrastructure, with CPUs gaining prominence as essential enablers of scalable AI agent systems. By 2030, the CPU market is projected to double to $60 billion, driven largely by AI demands. The focus is now on overcoming system-level bottlenecks to maximize the efficiency and economic viability of AI deployments.
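The latency-share figures above imply a simple Amdahl's-law limit: if CPU-side work dominates an agent's end-to-end latency, faster GPUs barely help. A minimal sketch (the 70% CPU share below is an assumed mid-range value from the 50–90% band the article cites, not a measured number):

```python
# Amdahl-style estimate: overall speedup when only the GPU-bound fraction
# of an agent workload is accelerated. The CPU share is an assumption
# (mid-range of the 50-90% latency band cited in the article).

def overall_speedup(cpu_fraction: float, gpu_speedup: float) -> float:
    """Speedup of total latency when GPU-bound work runs gpu_speedup times faster."""
    gpu_fraction = 1.0 - cpu_fraction
    return 1.0 / (cpu_fraction + gpu_fraction / gpu_speedup)

# With 70% of latency in CPU-side work (reasoning steps, API calls, I/O),
# doubling GPU throughput cuts total latency by well under 20%:
print(overall_speedup(0.70, 2.0))    # ~1.18x
print(overall_speedup(0.70, 100.0))  # ~1.42x, capped by the CPU share
```

Even an infinitely fast GPU cannot push the speedup past 1/cpu_fraction, which is why the article frames CPUs, not GPUs, as the system-level bottleneck for agentic workloads.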

marsbit · 04/13 00:57


## Edge AI Daily Morning Report (April 12)

**Silicon Valley Front:** CoreWeave expanded partnerships with Meta and Anthropic, reflecting surging AI compute demand. Major cloud providers in China raised prices by 5%-30% due to soaring GPU costs and a 1000x increase in daily token usage since 2024. Anthropic, with annualized revenue exceeding $30B, is exploring in-house chip development to address shortages and signed a 3.5GW TPU deal with Google and Broadcom. The U.S. MATCH Act tightened semiconductor export controls, lowering technology thresholds and threatening global supply chains; ASML and Tokyo Electron saw stock declines. OpenAI addressed a security issue in the third-party Axios library, requiring macOS app updates. Microsoft restructured Windows Insider channels to simplify testing. Meta, Amazon, and Google invested in small modular nuclear reactors (SMRs) to power energy-intensive AI data centers. Mozilla criticized Microsoft for forcing Copilot integration in Windows 11, highlighting broader concerns about user choice and DMA compliance. Microsoft paused new carbon credit purchases due to quality concerns.

**Domestic Progress:** MUJI's Q2 revenue grew 14.8%, while Amazon launched a global smart hub in Shenzhen to streamline cross-border logistics for Chinese sellers, cutting delivery times by up to 7 days.

**Open Source Trends:** Meta AI and KAIST proposed "Neural Computers" (NCs), merging computation and memory into learning runtime states. Agent AI is shifting from prediction to world-state modeling, driving edge infrastructure redesign. Quantum computing demonstrated exponential advantages in classical data processing, using under 60 logical qubits to outperform classical machines. France began migrating government systems to Linux to enhance digital sovereignty and reduce reliance on U.S. technology.

(Source: Edge AI Daily, Guangjiao Guancha)

marsbit · 04/12 00:52


## 5 Minutes to Make AI Your Second Brain

This article introduces a powerful personal knowledge management system combining Claude Code and Obsidian, designed to function as an "AI second brain." Unlike traditional RAG systems that perform temporary, one-off retrievals, this system enables AI to continuously build and maintain an evolving knowledge wiki. The architecture consists of three layers: a raw data layer (notes, articles, transcripts), an AI-maintained structured knowledge base that builds cross-references, and a schema layer that governs organization and system logic. Core operations are Ingest (bringing in external information), Query (instant knowledge access), and Lint (checking consistency and fixing issues). The system's power lies in creating a "compound interest" effect for knowledge: it reduces cognitive load by offloading the tasks of connecting, organizing, and understanding information to AI, while simultaneously improving the accuracy and contextual consistency of the AI's outputs. The setup process is quick, requiring users to download Obsidian, create a vault (knowledge repository), configure Claude Code to access that vault, and apply a specific system prompt. Advanced tips include using a browser extension to easily add web content, maintaining separate vaults for work and personal life, and utilizing the "Orphans" feature to identify unlinked ideas. The main drawbacks are the need for visual thinking, a commitment to ongoing maintenance, and local storage usage. Ultimately, the system transforms scattered information into a reusable, interconnected network of knowledge.
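A Lint pass like the one described, including the "Orphans" check, can be approximated with a short script that scans a vault for notes never referenced by any `[[wikilink]]`. This is a simplified sketch under assumed conventions (flat wikilinks, `.md` files); Obsidian's built-in orphans view and the article's AI-driven Lint do more than this:

```python
# Minimal sketch of a "Lint" pass over an Obsidian vault: find orphan notes,
# i.e. .md files that no other note references via a [[wikilink]].
# Link syntax handling is simplified (no embeds, no heading-only links).
import re
from pathlib import Path

# Capture the link target, stopping before a '|' alias or '#' heading anchor.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def find_orphans(vault: Path) -> set[str]:
    """Return the stems of notes in `vault` that nothing links to."""
    notes = {p.stem for p in vault.rglob("*.md")}
    linked: set[str] = set()
    for p in vault.rglob("*.md"):
        for target in WIKILINK.findall(p.read_text(encoding="utf-8")):
            linked.add(target.strip())
    return notes - linked
```

In the system the article describes, the AI would run a check like this during Lint and then propose links for the orphans it finds, rather than leaving the connecting work to the user.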

marsbit · 04/11 12:46


## Claiming the "Happy Horse": Alibaba's AI Lays Out the "Eight Trigrams Formation"

Alibaba has officially unveiled the "HappyHorse" (HappyHorse-1.0) AI video generation model, which recently topped the global benchmark on Artificial Analysis with an Elo score of 1357. Developed by Alibaba's ATH (Alibaba Token Hub) innovation unit, the model is notable for its ability to generate high-definition video with synchronized audio and sound effects from text input, significantly improving motion coherence and reducing production time and cost. This launch is part of a broader acceleration of Alibaba's AI strategy. In late March and early April, the company released three flagship models in quick succession: Qwen3.5-Omni, Wan2.7-Image, and Qwen3.6-Plus; the latter broke global daily call-volume records, processing 1.4 trillion tokens shortly after release. Alibaba has also undergone significant organizational restructuring to support its AI ambitions. In March, it established the ATH business group, led by CEO Wu Yongming, to integrate AI development, cloud services, and application deployment. Further changes in April included forming a group-level technology committee and consolidating the Tongyi Lab into a dedicated AI model division. The company is investing heavily in AI, with plans to spend over 380 billion RMB on cloud and AI infrastructure over three years, and its self-developed GPUs have already entered mass production. While the market has responded positively to these moves, challenges remain in balancing centralized control with operational flexibility and maintaining team stability amid rapid change.

marsbit · 04/11 04:07

