2026-04-17 Friday

News Center - Page 16

Get crypto news and market trends in real time with the HTX News Center.

Claude Deliberately Dumbs Down? Are Models Starting to 'Discriminate Based on the User'?

Recent analysis by AMD AI Group Senior Director Stella Laurenzo reveals significant behavioral degradation in Anthropic's Claude since mid-February. Data from 6,852 session files shows Claude's median "thinking" output plummeted 67-73%, from 2,200 to 600 characters, with one-third of code edits now performed without reading files first. Users began reporting slower, lazier responses in March, with some describing Claude as "lobotomized." Anthropic's introduction of "adaptive thinking" in early February, officially described as adjusting reasoning depth based on task complexity, effectively became a global throttling mechanism. By March, the default effort level was quietly reduced to "medium" and thinking summaries were hidden. Anthropic's Claude Code lead Boris Cherny confirmed this was intentional optimization, not a bug, and suggested users manually switch to "high effort" mode. The company never announced these significant changes, leaving paying subscribers with reduced capabilities at unchanged prices. This reflects a broader industry trend of AI companies silently reducing capabilities to control GPU costs. Analysis shows extreme users generate $42,121 in actual monthly inference costs while paying only $400, an unsustainable subsidy model. Anthropic is now testing "high effort" mode by default for Teams and Enterprise users, signaling that superior reasoning is becoming a tiered resource. Enterprise API users report significantly better performance at $4k-12k monthly costs, while consumer subscribers receive a "good enough" downgraded version. The incident marks the end of AI's subsidy era, with the industry shifting from universal access to elite stratification, quietly compromising the consumer experience to manage real costs while offering premium capabilities to deep-pocketed enterprise clients.
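As a quick illustration, the percentages reported above are consistent with the raw figures. A minimal sanity check (all numbers taken from the analysis as reported, not independently verified):

```python
# Sanity check of the figures cited in the Laurenzo analysis.
median_before = 2200  # median "thinking" output in characters, pre-February
median_after = 600    # median after the "adaptive thinking" rollout

drop_pct = (median_before - median_after) / median_before * 100
print(f"Median thinking output dropped {drop_pct:.1f}%")  # ~72.7%, inside the reported 67-73% range

# Subsidy gap for the heaviest users
inference_cost = 42_121  # reported actual monthly inference cost, USD
subscription = 400       # monthly subscription price, USD
print(f"Cost-to-revenue ratio: {inference_cost / subscription:.0f}x")  # ~105x
```

The roughly 105x gap between inference cost and subscription revenue for extreme users is what the article means by an "unsustainable subsidy model."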

marsbit 2 days ago 10:32

DeAgentAI Announces Establishment of AIA Ecosystem Fund, Focusing on 'AI Agent + Physical AI' Track

DeAgentAI, a leading decentralized AI infrastructure project on SUI and BNB Chain, has announced the establishment of the AIA Ecosystem Fund. The fund will focus on the integrated track of "AI Agent + Physical AI," aiming to incubate and accelerate the next generation of AI applications with autonomous decision-making capabilities and to extend AI technology from on-chain intelligence to the real world. The fund will provide comprehensive support in technology, user traffic, and ecosystem resources. Its core investment directions include AI Agent applications with autonomous on-chain execution and multi-agent collaboration capabilities, and Physical AI projects that extend AI inference into the physical world through hardware and computing efficiency. The fund has already made seed-round investments in two projects:

- AliceAI: An AI-driven prediction market decision system that compresses fragmented information into verifiable, tamper-proof decision signals, offering a full-cycle solution from signal generation to automated execution via Telegram Bot.
- An ASIC AI chip project: A custom hardware solution designed specifically for Transformer-based inference, aiming to reduce token processing costs to less than one-tenth of current GPU solutions while significantly improving energy efficiency and lowering latency.

According to DeAgentAI's founder, the goal is to bridge the gap between on-chain intelligence and the physical world, supporting key protocols that connect users to the future of Physical AI.

marsbit 2 days ago 10:21

An Internal Memo Exposes OpenAI's Most Real Anxieties and Ambitions

An internal memo from OpenAI's Chief Revenue Officer, Denise Dresser, reveals the company's strategic priorities and competitive anxieties as the enterprise AI market matures. The document outlines a shift from competing solely on model capability to winning on integration, platform strategy, and becoming "hardest to replace." Key priorities for Q2 include: the model layer, the agent platform, expanding market reach via Amazon, selling the full tech stack, and controlling deployment. The goal is to evolve from a point solution to an enterprise AI "operating system" by deeply embedding into customer workflows, creating switching costs, and securing multi-year, nine-figure deals. The memo contains a direct and unusually sharp critique of rival Anthropic, accusing it of building a narrative on "fear" and "restriction," suffering from compute shortages leading to user experience issues, and overstating its annualized revenue by $8 billion due to accounting methods. This public criticism is seen as a calculated move for investor narratives, internal mobilization, and external signaling. For the Chinese AI market, the memo highlights a gap in competition stages. While domestic players still focus on benchmarks and price wars, the next phase will be won on deployment, platform integration, and ecosystem. It also underscores the critical importance of data sovereignty and trust, suggesting that compliant, auditable, on-premise solutions could be a major differentiator in regulated industries. A notable warning for Chinese companies is OpenAI's claim that its biggest constraint is "capacity," not demand. This contrasts sharply with the domestic market's challenge of finding enterprise customers willing to make large, long-term paid commitments, pointing to a fundamental gap in commercial adoption readiness.

marsbit 2 days ago 10:21

A Four-Page Internal Letter: What Card Is OpenAI Playing?

OpenAI's internal memo, revealed by The Information, outlines a strategic narrative against Anthropic across three key areas: revenue accounting, enterprise competition, and compute capacity. First, OpenAI CRO Denise Dresser challenged Anthropic’s reported $30B annualized revenue, claiming the actual net figure—using OpenAI’s accounting method—is $22B. The discrepancy stems from differing GAAP interpretations: Anthropic books gross revenue (including cloud partner shares), while OpenAI records net revenue after partner deductions. Second, enterprise adoption data from Ramp shows Anthropic rapidly closing the gap with OpenAI, narrowing from an 11% to a 4.6% difference within months. Anthropic already leads in high-value sectors like tech, finance, and professional services. Dresser acknowledged Anthropic’s edge in coding capabilities but warned against being a "single-product company" in a platform war. Third, while current compute capacity is comparable (OpenAI ~1.9 GW vs. Anthropic ~1.4 GW), OpenAI’s long-term plans aim for 30 GW by 2030—four times Anthropic’s projected 7-8 GW by 2027. Anthropic’s growth depends on sustaining enterprise revenue to cover rising cloud costs, estimated to reach $6.4B by 2027. The memo also highlighted OpenAI’s strategic shift: reducing reliance on Microsoft (which “limited customer reach”) and partnering with Amazon, which invests in both OpenAI and Anthropic. This places Amazon’s Bedrock platform as a battleground where both models compete for the same enterprise clients.
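For illustration, the $8B gap between the two revenue figures can be reconciled as a gross-vs-net accounting difference. The implied cloud-partner share below is a back-of-envelope derivation from the reported numbers, not a figure stated in the memo:

```python
# Reconciling the two Anthropic revenue figures cited in the memo.
gross_annualized = 30e9  # Anthropic's reported gross annualized revenue, USD
net_annualized = 22e9    # OpenAI's restated "net" figure, USD

gap = gross_annualized - net_annualized
implied_partner_share = gap / gross_annualized
print(f"Gap: ${gap / 1e9:.0f}B")  # the $8B overstatement claimed in the memo
print(f"Implied cloud-partner share of gross revenue: {implied_partner_share:.1%}")
```

Under this reading, roughly a quarter of Anthropic's booked gross revenue would flow to cloud partners, which is the substance of the GAAP-interpretation dispute the memo describes.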

marsbit 04/14 08:44

StarkWare Makes Drastic Cuts to Survive, L2 'Technical Faith' Liquidated by the Market

StarkWare, the infrastructure company behind Starknet, has announced a major restructuring, including layoffs and a split into two separate business units. The move comes as the Layer 2 network faces a severe decline, with monthly revenue plummeting over 95% from its late-2023 peak to just tens of thousands of dollars. CEO Eli Ben-Sasson stated the company had become "too big and inefficient" and must return to a startup mentality. The new structure creates a Starknet development unit, focused on the core protocol, and an applications unit, tasked with direct revenue generation by building products that leverage StarkWare's unique tech stack, potentially in quantum security and Bitcoin-related areas. This reflects a wider crisis in the L2 sector triggered by Ethereum's EIP-4844 upgrade, which drastically reduced data availability fees and shattered the core business model of profiting from gas fee spreads. The market has since polarized: Base and Arbitrum now dominate, capturing the majority of value and fees, while Starknet's TVL sits at a fraction of Base's and its native token STRK trades at a valuation below the project's total historical fundraising. The article concludes that technical superiority is no longer enough to win; distribution power and strategic alliances are now the key drivers. StarkWare's shift from an infrastructure provider to a product-focused company is a strategic retreat in a consolidating market, forcing it to prove it can build and sell products, not just invent advanced technology.

marsbit 04/14 08:05
