# Related Articles on AI

The HTX news center offers the latest articles and in-depth analysis on "AI", covering market trends, project news, technology developments, and regulatory policy in the crypto industry.

What Is the Web3 Workplace Really Like? A Sample Observation from a Leading Exchange

Based on interviews and data from leading crypto exchange Gate, this article explores the realities of working in Web3, countering common stereotypes of instability and high pressure. A key feature is remote work, embraced by over 66% of Web3 companies. While offering flexibility, it can create isolation and make vetting companies difficult, driving talent toward established firms like Gate, which has a 13-year history and global regulatory licenses. This provides a sense of security absent in newer projects. The workforce is highly educated (89% hold bachelor's degrees or higher) and global. Talent is attracted by growth potential, learning opportunities, and the ability to have a global impact. Compensation, while not always exceeding top tech firms, offers geographic arbitrage—earning a competitive salary while living in a lower-cost region. Performance-based incentives are central. At Gate, year-end bonuses range from 2-6 months' salary, with top performers receiving up to 20 months' pay. The culture emphasizes "high effort, high reward," not just long hours. Work intensity is high due to the 24/7 nature of crypto, but the flexibility of remote work and a results-oriented model prevent a pure "996" culture. The article concludes that while Web3 has its challenges, it offers unique opportunities for growth and flexibility. It advises against relying on polarized external narratives and encourages firsthand experience to understand the real Web3 workplace.

Odaily星球日报 · 03/02 11:08
After Integrating OpenClaw into Every Aspect of My Life, I Personally Switched It Off

After extensively using OpenClaw (formerly Clawdbot and Moltbot) for over a month as a 24/7 AI assistant integrated with Telegram, email, and calendar, the author decided to shut it down. The primary reasons were its unreliability in long-term memory retention despite claims, high and unpredictable API costs (over $150 monthly), and significant security vulnerabilities, including exposed API keys and unauthorized data transmission. The author realized that a constantly running AI was unnecessary for most valuable tasks, which were better handled through active, intentional work. The core functions of OpenClaw—remembering user context and automating tasks—were effectively replicated using Claude’s ecosystem. By creating a consolidated CLAUDE.md file (replacing OpenClaw’s multiple configuration files), leveraging Claude’s built-in memory features, and integrating with Obsidian via CLI for efficient knowledge management, the author achieved similar functionality with greater reliability. For mobile access, Claude’s Remote Control feature or a Telegram bot solution provided seamless interaction. Scheduled tasks were handled through Claude’s Cowork feature, avoiding the cost of continuous API checks. Ultimately, Claude Pro or Max subscriptions offered a more predictable cost structure ($20–$200/month) and a stable, secure environment. The author concluded that Claude’s ecosystem delivers nearly all of OpenClaw’s promised benefits without the operational headaches, making it a superior choice for practical AI assistance.

marsbit · 03/02 10:13
Big Short Prototype: Trillion-Dollar AI Investment Started on the Wrong Path from the Beginning

Michael Burry draws a parallel between a 19th-century case study and modern AI development to argue that the current path of large language models (LLMs) is fundamentally flawed. He references an 1880 Smithsonian article about Melville Ballard, a deaf man who, without formal language, engaged in complex abstract reasoning about the origins of the universe, life, and God. This story suggests that true reasoning and understanding exist prior to and independently of language. Burry contends that by prioritizing language processing over the development of genuine reasoning capabilities, LLMs merely create sophisticated mirrors of their training data, not true understanding. They operate in an intermediate zone, simulating reasoning while lacking the innate rational capacity that precedes language. This "language-first" approach, driven by immense computational brute force, leads to inherent flaws such as hallucinations and an inability to achieve real comprehension. The proposed solution is a shift toward a "reasoning-first" architecture focused on compressing information and employing System 2 reasoning to drastically reduce computational needs. Burry suggests that true AI must pass a "Ballard Test": demonstrating rational thought without language. He concludes by linking this technological critique to a cyclical pattern of speculative investment booms, comparing the current AI hype to 19th-century mining speculation in San Francisco and warning of an inevitable bust if the foundational approach is not corrected.

marsbit · 03/02 06:57