# Related Articles on AI

The HTX news center offers the latest articles and in-depth analysis on "AI", covering market trends, project news, technology developments, and regulatory policy in the crypto industry.

## When AI Starts Paying for Itself

The article "When AI Starts Paying for Itself" discusses the emergence of the x402 protocol, which enables AI agents to autonomously make micro-payments for services such as data and computation. In 2025, Coinbase and Cloudflare revived the long-dormant HTTP 402 status code ("Payment Required") to create a seamless payment layer for the internet. The protocol allows an AI agent to receive a payment request, authorize it with a cryptographic signature, and complete the transaction in under a second, with no human involvement, accounts, or traditional banking infrastructure.

Supported by low-cost Layer 2 blockchains and stablecoins, x402 processed over 100 million transactions within months. Its V2 update added multi-chain support and session-based authentication, and Google later integrated a similar model into its Agentic Payments Protocol (AP2).

Trust between autonomous agents remains a challenge, however. ERC-8004, an Ethereum standard, addresses this by providing on-chain identity (via NFT-based IDs), reputation tracking, and task verification systems.

The ecosystem faces risks: speculative "x402-themed" meme tokens have surged without real utility, technical vulnerabilities exist, and competing standards from Google and a16z threaten fragmentation. Regulatory frameworks for AI-driven transactions are also undeveloped. In summary, x402 and ERC-8004 aim to create a trustless, open economic network for AI agents, but they must overcome technical, economic, and competitive hurdles to achieve widespread adoption.

marsbit · 03/04 02:54
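The request → quote → sign → settle loop described above can be sketched in a few lines. This is a hypothetical illustration only: the field names (`amount`, `asset`, `chain`) and the HMAC "signature" stand in for the real x402 payment terms and on-chain signatures, which the article does not specify.

```python
# Hypothetical sketch of an x402-style flow: a server answers with
# HTTP 402 payment terms, the agent signs them, the server verifies
# and releases the resource. HMAC stands in for a real crypto signature.
import hashlib
import hmac
import json

AGENT_SECRET = b"agent-demo-key"  # stand-in for the agent's signing key

def server_quote(resource: str) -> dict:
    """Simulate a server replying 402 with (assumed) payment terms."""
    return {"status": 402, "resource": resource,
            "amount": "0.001", "asset": "USDC", "chain": "layer2"}

def sign_payment(quote: dict) -> dict:
    """Agent authorizes the quoted amount by signing it."""
    payload = json.dumps(quote, sort_keys=True).encode()
    sig = hmac.new(AGENT_SECRET, payload, hashlib.sha256).hexdigest()
    return {"quote": quote, "signature": sig}

def server_settle(payment: dict) -> dict:
    """Server checks the signature and serves the resource if valid."""
    payload = json.dumps(payment["quote"], sort_keys=True).encode()
    expected = hmac.new(AGENT_SECRET, payload, hashlib.sha256).hexdigest()
    ok = hmac.compare_digest(expected, payment["signature"])
    return {"status": 200 if ok else 402, "body": "data" if ok else None}

receipt = server_settle(sign_payment(server_quote("/weather/today")))
assert receipt["status"] == 200
```

The key property the article emphasizes is that every step here is machine-to-machine: no account creation, no human approval, just a quoted price and a verifiable signature.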


## OpenClaw Endorses Venice.ai, VVV Token Surges Over 500% in One Month

OpenClaw, an open-source self-hosted AI agent platform, has listed Venice.ai, a privacy-focused, uncensored generative AI platform, as a recommended model provider. This endorsement comes shortly after OpenClaw's founder publicly discouraged young people from engaging with cryptocurrency, creating a notable contrast.

Venice.ai, founded by crypto OG Erik Voorhees, positions itself as a decentralized alternative to ChatGPT. It emphasizes user privacy by not storing any data on its servers; all content remains encrypted on the user's local device. The platform offers two privacy modes: Private (using open-source models on decentralized GPUs) and Anonymized (removing user metadata from prompts).

The project features a dual-token economy:

- VVV: a capital asset used for staking (currently ~19% APY) and minting DIEM.
- DIEM: perpetual AI compute credit; 1 DIEM = $1 of daily API credit, usable across Venice's models.

This structure allows high-frequency users to access AI services at a lower marginal cost over time.

VVV's price surged over 500% in a month, rising from ~$1.5 to ~$8.4. This growth is attributed both to supply constraints, including a permanent burn of unclaimed airdropped tokens and reduced annual emissions, and to rising demand, especially after OpenClaw's integration. With over 25,000 API users and a VVV staking rate of 38.8%, Venice is positioning itself as a privacy backend for the expanding AI agent ecosystem, blending crypto-economic incentives with scalable AI infrastructure.

Odaily星球日报 · 03/04 02:31
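The "lower marginal cost over time" claim follows from the 1 DIEM = $1/day credit rule and can be made concrete with a break-even calculation. The DIEM purchase price and usage figures below are purely hypothetical numbers for illustration, not Venice's actual rates.

```python
# Illustrative arithmetic for the DIEM credit model: 1 DIEM grants
# $1 of API credit per day. All prices here are assumed for the example.
def daily_credit_usd(diem_held: float) -> float:
    """Daily API credit in USD granted by a DIEM balance."""
    return diem_held * 1.0  # 1 DIEM = $1/day

def breakeven_days(diem_price_usd: float, daily_usage_usd: float,
                   diem_held: float) -> float:
    """Days until accumulated daily credit covers the upfront DIEM cost,
    assuming usage consumes at most the daily credit each day."""
    usable_per_day = min(daily_credit_usd(diem_held), daily_usage_usd)
    return (diem_held * diem_price_usd) / usable_per_day

# Hypothetical: 10 DIEM bought at $50 each ($500 upfront), with $10/day
# of API usage fully covered by the $10/day credit.
print(breakeven_days(50.0, 10.0, 10.0))  # 500 / 10 = 50.0 days
```

Past the break-even point, the holder's marginal cost per API call is effectively zero while the credit keeps renewing daily, which is the economic argument the summary gestures at for high-frequency users.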


## AI Within the Range of Artillery

"AI Within the Range of Artillery" discusses the vulnerability of AI infrastructure in modern warfare, prompted by a real-world incident. On March 1, an Iranian missile struck an Amazon data center in the UAE, causing a fire, a power outage, and the disruption of about 60 cloud services. This led to a global outage of Claude, a major AI service running on Amazon's cloud. Although officially attributed to surging user demand, the incident is linked to a U.S.-Israel airstrike on Iran that used Claude for intelligence analysis, despite a recent U.S. ban on Anthropic (Claude's developer) for refusing unrestricted military use.

The article highlights that this marks the first physical destruction of a commercial data center in war, emphasizing that AI, though virtual, relies on physical infrastructure located in geopolitically unstable regions like the Middle East. Silicon Valley has invested heavily in AI infrastructure in the Gulf because of cheap electricity, wealthy sovereign funds, and data localization laws, with projects from Amazon, Microsoft, and OpenAI. However, security frameworks like the Pax Silica agreement focus on chip controls and political alignment while ignoring physical security risks.

The piece raises critical questions: when data centers serve both civilian and military purposes, are they legitimate targets? International law lacks clarity. The incident shifts attention from AI replacing jobs to AI's fragility: over 1,300 large data centers worldwide are protected only by basic measures such as fire-suppression systems and backup generators. As AI becomes national infrastructure, its protection becomes a collective responsibility beyond individual companies or governments. The title's metaphor underscores that in an era of conflict, even advanced technology lies within the range of destruction.

marsbit · 03/03 10:29

