2026-04-17 Friday

News Center - Page 20

Get real-time crypto news and market trends with the HTX News Center.

TAO is Elon Musk who invested in OpenAI, Subnet is Sam Altman

The article, titled "TAO is Elon Musk who invested in OpenAI, Subnet is Sam Altman," presents a critical analysis of the Bittensor (TAO) project. It argues that Bittensor functions as a decentralized AI marketplace where TAO tokens fund AI research via subnets. However, the author highlights a fundamental flaw: subnet operators have no obligation to return any value, such as AI models or profits, back to the TAO ecosystem or its token holders. This structure is likened to Elon Musk's early investment in the non-profit OpenAI, which later commercialized its technology without returning value to its initial benefactor. The bear case posits that Bittensor is essentially a wealth transfer from crypto speculators to AI researchers ("miners"). Subnets can use TAO incentives for development and then take their successful products elsewhere, leaving TAO holders with tokens diluted by inflation and no captured value. The lack of enforced equity or binding mechanisms means the project relies on a "hope" that subnet tokens maintain value. The optimistic perspective counters that two factors could create a successful, self-sustaining economy: 1) AI's perpetual and massive resource needs could incentivize subnets to stay for continued funding, and 2) crypto has a proven ability to aggregate resources through token incentives, as seen with Bitcoin and Ethereum. The conclusion states that investing in TAO is a bet on a game-theoretic miracle: that soft incentives alone will be enough to keep the best subnets within the ecosystem and create a flywheel effect. This outcome is possible but represents a highly skewed, low-probability success scenario amid significant risks of failure.
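The dilution argument above reduces to simple arithmetic: if new tokens are minted to pay subnets and none of that value flows back, a passive holder's share of the supply shrinks at the inflation rate. A minimal sketch with hypothetical numbers (the rates below are illustrative, not actual TAO emission parameters):

```python
def holder_share_after(initial_share: float, annual_inflation: float, years: int) -> float:
    """Fraction of total supply a passive holder retains after dilution.

    If supply grows by `annual_inflation` each year and the holder
    receives none of the new issuance, their share is divided by the
    cumulative supply growth factor.
    """
    return initial_share / (1 + annual_inflation) ** years

# Illustrative only: a 1% stake under 10% yearly inflation for 3 years
share = holder_share_after(0.01, 0.10, 3)
print(f"{share:.4%}")  # ≈ 0.7513% of supply: about a quarter of the stake diluted away
```

This is the "no captured value" scenario in its purest form; any mechanism that routed subnet revenue back to holders would offset the denominator.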

marsbit 04/13 14:01

Hermes Agent Guide: Surpassing OpenClaw, Boosting Productivity by 100x

A guide to Hermes Agent, an open-source AI agent framework by Nous Research, positioned as a powerful alternative to OpenClaw. It is described as a self-evolving agent with a built-in learning loop that autonomously creates skills from experience, continuously improves them, and solidifies knowledge into reusable assets. Its core features include a memory system (storing environment info and user preferences in MEMORY.md and USER.md) and a skill system that generates structured documentation for complex tasks. The agent boasts over 40 built-in tools for web search, browser automation, vision, image generation, and text-to-speech. It supports scheduling automated tasks and can run on various infrastructures, from a $5 VPS to GPU clusters. Popular tools within its ecosystem include the Hindsight memory plugin, the Anthropic Cybersecurity Skills pack, and the mission-control dashboard for agent orchestration. Key differentiators from OpenClaw are its architecture philosophy—centered on the agent's own execution loop rather than a central controller—and its autonomous skill generation versus OpenClaw's manually written skills. Installation is a one-line command, and setup is guided. It integrates with messaging platforms like Telegram, Discord, and Slack. It's suited for scenarios requiring a persistent, context-aware assistant that improves over time, automates workflows, and operates across various deployment environments.
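The persistent-memory pattern described above (environment facts and user preferences stored in MEMORY.md and USER.md between sessions) can be sketched in a few lines. The file names come from the article; every function below is an illustrative assumption, not the actual Hermes Agent API:

```python
from pathlib import Path

MEMORY_FILE = Path("MEMORY.md")  # environment facts (file name per the article)
USER_FILE = Path("USER.md")      # user preferences (file name per the article)

def load_context() -> str:
    """Concatenate persisted notes so each new session starts with prior knowledge."""
    parts = [f.read_text() for f in (MEMORY_FILE, USER_FILE) if f.exists()]
    return "\n\n".join(parts)

def remember(fact: str, file: Path = USER_FILE) -> None:
    """Append a learned fact; the agent re-reads it on every future run."""
    with file.open("a") as fh:
        fh.write(f"- {fact}\n")

remember("User prefers concise answers")
assert "User prefers concise answers" in load_context()
```

The point of the pattern is that learning survives process restarts: anything appended in one session is part of the prompt context in the next.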

marsbit 04/13 13:11

Brother Sun "Rights Protection" Stands Up Against the Trump Family, WLFI Is the Real Scythe in the Crypto Circle

The article details the controversy surrounding World Liberty Financial (WLFI), a cryptocurrency project linked to the Trump family. It reports that WLFI allegedly used the DeFi lending protocol Dolomite, whose co-founder is also a WLFI advisor, as a disguised channel to sell tokens by collateralizing around 5 billion WLFI tokens to borrow approximately $75 million in stablecoins. Despite WLFI's claims that the loans were for ecosystem development and posed no liquidation risk, critics argue it was a way for insiders to cash out, shifting risk to retail investors. The piece highlights WLFI's significant price decline—over 66% since its September 2025 launch—and suggests the Trump family and insiders are the main source of selling pressure, as they control nearly 74% of the token supply. It also revisits WLFI’s prior move to blacklist 272 addresses, including those of investor Justin Sun, under the pretext of preventing large-scale sell-offs, which now appears to be an effort to reduce competition for their own sales. Sun publicly accused WLFI of exploiting users, freezing assets, and treating the crypto community as a "personal ATM." WLFI countered by threatening legal action. The author notes that while Sun’s criticism may gain sympathy, a legal battle in the U.S. against the well-connected Trump family would be risky for him. Finally, the article concludes that WLFI exemplifies how powerful elites can exploit crypto’s regulatory gray areas for profit, and urges the community to reject such projects driven more by political privilege than genuine decentralized finance ideals.

Odaily星球日报 04/13 12:17

Tsinghua's Prediction 2 Years Ago Is Becoming Global Consensus: Meta and Two Other Major AI Institutions Have Reached the Same Conclusion

In a remarkable validation of Chinese AI research, Meta and METR have independently reached conclusions that align with the "Density Law" proposed by a Tsinghua University and FaceWall Intelligence team two years ago. Published in Nature Machine Intelligence in late 2025, the law states that the computational power required to achieve a specific level of AI performance halves every 3.5 months. This convergence was starkly evident in April 2026. METR reported that AI capabilities are doubling every 88.6 days, while Meta's new model, Muse Spark, demonstrated it could match the performance of a model from the previous year using less than one-tenth of the training compute. When plotted, the growth curves from all three sources, despite using different metrics (parameters, compute, task length), show a nearly identical exponential slope. The findings have profound implications: AI inference costs are collapsing faster than anticipated, powerful edge-computing AI is becoming rapidly feasible, and the industry's strategy of simply scaling model size is becoming economically inefficient. The Chinese team, which has been building its "MiniCPM" model series based on this law since 2024, is seen as having a significant two-year lead in practical engineering experience, marking a rare instance where Chinese researchers pioneered a fundamental predictive trend in AI.
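The comparison of doubling periods above can be checked with back-of-envelope arithmetic: converting each doubling period to an annualized growth factor puts the two curves on the same scale. A hedged sketch (the 3.5-month and 88.6-day figures come from the article; the average-month-length conversion is an assumption):

```python
# Density Law: compute needed for a fixed capability level halves every
# 3.5 months, i.e. capability-per-FLOP doubles on the same period.
AVG_MONTH_DAYS = 30.44                      # 365.25 / 12, an assumed conversion
density_doubling_days = 3.5 * AVG_MONTH_DAYS  # ≈ 106.5 days
metr_doubling_days = 88.6                     # METR's reported doubling period

def annual_factor(doubling_days: float) -> float:
    """Growth multiple accumulated over one year for a given doubling period."""
    return 2 ** (365.25 / doubling_days)

print(f"Density Law: x{annual_factor(density_doubling_days):.1f} per year")
print(f"METR:        x{annual_factor(metr_doubling_days):.1f} per year")
```

Both periods imply order-of-magnitude-per-year exponential growth, which is what makes the plotted slopes look so similar even though the underlying metrics differ.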

marsbit 04/13 12:14
