# Related Articles on AI

The HTX News Center offers the latest articles and in-depth analysis on "AI", covering market trends, project news, technology developments, and regulatory policy in the crypto industry.

Claude Deliberately Dumbs Down? Are Models Starting to 'Discriminate Based on the User'?

Recent analysis by AMD AI Group Senior Director Stella Laurenzo reveals significant behavioral degradation in Anthropic's Claude since mid-February. Data from 6,852 session files shows Claude's median "thinking" output plummeted 67-73%, from 2,200 to 600 characters, with one-third of code edits now performed without reading files first. Users began reporting slower, lazier responses in March, with some describing Claude as "lobotomized."

Anthropic's introduction of "adaptive thinking" in early February, officially described as adjusting reasoning depth based on task complexity, effectively became a global throttling mechanism. By March, the default effort level was quietly reduced to "medium" while thinking summaries were hidden. Anthropic's Claude Code lead Boris Cherny confirmed this was intentional optimization, not a bug, suggesting users manually switch to "high effort" mode. The company never announced these significant changes, leaving paying subscribers with reduced capabilities at unchanged prices.

This reflects a broader industry trend in which AI companies silently reduce capabilities to control GPU costs. Analysis shows extreme users generate $42,121 in actual inference costs while paying only $400 monthly, creating an unsustainable subsidy model. Anthropic is now testing "high effort" mode by default for Teams and Enterprise users, signaling that superior reasoning is becoming a tiered resource. Enterprise API users report significantly better performance at $4k-12k monthly costs, while consumer subscribers receive a "good enough" downgraded version. The incident marks the end of AI's subsidy era, with the industry shifting from universal access to elite stratification: quietly compromising the consumer experience to manage real costs while offering premium capabilities to deep-pocketed enterprise clients.
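A back-of-envelope sketch of the subsidy gap described above; the dollar figures are the article's cited estimates, not official Anthropic data.

```python
# Rough check of the subsidy figures quoted in the analysis above.
# Both numbers come from the article's cited estimates, not Anthropic.
monthly_subscription = 400        # USD, consumer subscription price
actual_inference_cost = 42_121    # USD, estimated monthly cost of an extreme user

subsidy = actual_inference_cost - monthly_subscription
ratio = actual_inference_cost / monthly_subscription

print(f"Provider subsidy per extreme user: ${subsidy:,}/month")
print(f"Cost-to-revenue ratio: {ratio:.0f}x")
```

On these figures, the heaviest users cost the provider roughly 105 times what they pay, which is the economics driving the quiet throttling the article describes.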

marsbit · 2 days ago, 10:32


DeAgentAI Announces Establishment of AIA Ecosystem Fund, Focusing on 'AI Agent + Physical AI' Track

DeAgentAI, a leading decentralized AI infrastructure project on SUI and BNB Chain, has announced the establishment of the AIA Ecosystem Fund. The fund will focus on the integrated track of "AI Agent + Physical AI," aiming to incubate and accelerate the next generation of AI applications with autonomous decision-making capabilities and to extend AI technology from on-chain intelligence to the real world.

The fund will provide comprehensive support in technology, user traffic, and ecosystem resources. Its core investment directions include AI Agent applications with autonomous on-chain execution and multi-agent collaboration capabilities, and Physical AI projects that extend AI inference into the physical world through hardware and computing efficiency. The fund has already made seed-round investments in two projects:

- AliceAI: An AI-driven prediction market decision system that compresses fragmented information into verifiable, tamper-proof decision signals, offering a full-cycle solution from signal generation to automated execution via Telegram Bot.
- An ASIC AI chip project: A custom hardware solution designed specifically for Transformer-based inference, aiming to reduce token processing costs to less than one-tenth of current GPU solutions while significantly improving energy efficiency and lowering latency.

According to DeAgentAI's founder, the goal is to bridge the gap between on-chain intelligence and the physical world, supporting key protocols that connect users to the future of Physical AI.

marsbit · 2 days ago, 10:21


TAO is Elon Musk who invested in OpenAI, Subnet is Sam Altman

The article, titled "TAO is Elon Musk who invested in OpenAI, Subnet is Sam Altman," presents a critical analysis of the Bittensor (TAO) project. It argues that Bittensor functions as a decentralized AI marketplace where TAO tokens fund AI research via subnets. However, the author highlights a fundamental flaw: subnet operators have no obligation to return any value, such as AI models or profits, back to the TAO ecosystem or its token holders. This structure is likened to Elon Musk's early investment in the non-profit OpenAI, which later commercialized its technology without returning value to its initial benefactor.

The bear case posits that Bittensor is essentially a wealth transfer from crypto speculators to AI researchers ("miners"). Subnets can use TAO incentives for development and then take their successful products elsewhere, leaving TAO holders with tokens diluted by inflation and no captured value. The lack of enforced equity or binding mechanisms means the project relies on a "hope" that subnet tokens maintain value.

The optimistic perspective counters that two factors could create a successful, self-sustaining economy: 1) AI's perpetual and massive resource needs could incentivize subnets to stay for continued funding, and 2) crypto has a proven ability to aggregate resources through token incentives, as seen with Bitcoin and Ethereum.

The conclusion states that investing in TAO is a bet on a game-theoretic miracle: that soft incentives alone will be enough to keep the best subnets within the ecosystem and create a flywheel effect. This outcome is possible but represents a highly skewed, low-probability success scenario amid significant risks of failure.

marsbit · 04/13 14:01


Hermes Agent Guide: Surpassing OpenClaw, Boosting Productivity by 100x

A guide to Hermes Agent, an open-source AI agent framework by Nous Research, positioned as a powerful alternative to OpenClaw. It is described as a self-evolving agent with a built-in learning loop that autonomously creates skills from experience, continuously improves them, and solidifies knowledge into reusable assets.

Its core features include a memory system (storing environment info and user preferences in MEMORY.md and USER.md) and a skill system that generates structured documentation for complex tasks. The agent boasts over 40 built-in tools for web search, browser automation, vision, image generation, and text-to-speech. It supports scheduling automated tasks and can run on various infrastructures, from a $5 VPS to GPU clusters. Popular tools within its ecosystem include the Hindsight memory plugin, the Anthropic Cybersecurity Skills pack, and the mission-control dashboard for agent orchestration.

Key differentiators from OpenClaw are its architecture philosophy, centered on the agent's own execution loop rather than a central controller, and its autonomous skill generation versus OpenClaw's manually written skills. Installation is a one-line command, and setup is guided. It integrates with messaging platforms like Telegram, Discord, and Slack. It is suited for scenarios requiring a persistent, context-aware assistant that improves over time, automates workflows, and operates across various deployment environments.

marsbit · 04/13 13:11


Tsinghua's Prediction 2 Years Ago Is Becoming Global Consensus: Meta and Two Other Major AI Institutions Have Reached the Same Conclusion

In a remarkable validation of Chinese AI research, Meta and METR have independently reached conclusions that align with the "Density Law" proposed by a Tsinghua University and FaceWall Intelligent team two years ago. Published in Nature Machine Intelligence in late 2025, the law states that the computational power required to achieve a specific level of AI performance halves every 3.5 months.

This convergence was starkly evident in April 2026. METR reported that AI capabilities are doubling every 88.6 days, while Meta's new model, Muse Spark, demonstrated it could match the performance of a model from the previous year using less than one-tenth of the training compute. When plotted, the growth curves from all three sources, using different metrics (parameters, compute, task length), show an almost identical exponential slope.

The findings have profound implications: AI inference costs are collapsing faster than anticipated, powerful edge-computing AI is rapidly becoming feasible, and the industry's strategy of simply scaling model size is becoming economically inefficient. The Chinese team, which has been building its "MiniCPM" model series based on this law since 2024, is seen as having a significant two-year lead in practical engineering experience, marking a rare instance where Chinese researchers pioneered a fundamental predictive trend in AI.
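The article's figures can be cross-checked against one another. A minimal sketch, assuming the Density Law's stated 3.5-month halving period and a one-year gap between Muse Spark and its predecessor (the one-year gap is the article's framing, not a precise release date):

```python
# Cross-check: does a 3.5-month compute-halving period imply Meta's
# "less than one-tenth the training compute after one year" claim?
halving_period_months = 3.5
months_elapsed = 12  # one year between Muse Spark and the prior model

halvings = months_elapsed / halving_period_months  # ~3.43 halvings per year
compute_fraction = 0.5 ** halvings                 # fraction of original compute

print(f"Halvings over {months_elapsed} months: {halvings:.2f}")
print(f"Compute needed vs. last year's model: {compute_fraction:.3f}")
```

The result is roughly 0.093, i.e. just under one-tenth, so the Tsinghua halving rate and Meta's compute claim do describe essentially the same exponential slope.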

marsbit · 04/13 12:14


Bank of Korea Interprets the AI Semiconductor Cycle: The Most Dangerous Signal Lies in Financing

The Bank of Korea (BoK) released a report examining the sustainability of the current AI-driven semiconductor supercycle, concluding that the expansion is likely to continue until at least the first half of 2026. The report highlights three key differences from past cycles: unprecedented demand growth (driven by HBM and AI accelerators), severely constrained supply (due to complex HBM production and conservative industry expansion), and a significantly larger and longer supply-demand gap.

Five critical factors will determine the cycle's longevity:

1. The profitability of AI investments, as market focus shifts from market-share capture to earnings.
2. The ability of major tech firms to secure financing, with internal cash flows already insufficient to cover massive CAPEX, leading to increased corporate debt issuance and risky vendor-financing structures reminiscent of the telecom bubble.
3. The uncertain impact of AI model efficiency improvements, which could either reduce per-unit demand or increase total consumption.
4. The expansion speed of major memory manufacturers, with significant new capacity from SK Hynix, Micron, and Samsung only expected from late 2027.
5. Ramping production from Chinese manufacturers, whose DRAM market share is projected to grow rapidly, pressuring prices.

The report warns that financing fragility, evidenced by rising CDS spreads, off-balance-sheet SPV financing, and redemption halts in private credit funds, is the most critical risk. While the cycle remains robust through 2026, pressures are expected to build in 2027, with a heightened risk of overcapacity by 2028.

marsbit · 04/13 08:51

