# Related Articles on AGI

The HTX News Center offers the latest articles and in-depth analysis on "AGI", covering market trends, project news, technological developments, and regulatory policy in the crypto industry.

## Why Is OpenAI Playing Catch-Up with Claude Code?

In the rapidly evolving field of AI coding assistants, OpenAI, which once led the generative AI wave with ChatGPT, has found itself in the unexpected position of playing catch-up against Anthropic’s Claude Code. Through interviews with OpenAI executives, engineers, and developers, the article reveals that OpenAI’s early lead in AI programming—via its Codex project—was deprioritized as the company shifted resources toward ChatGPT and multimodal models. This strategic shift allowed Anthropic, founded by former OpenAI members, to focus intensely on coding capabilities, leading to the successful launch of Claude Code. OpenAI later reorganized internal teams and accelerated development of its AI programming products, such as the reasoning-based model o1 and later o3. Despite these efforts, Claude Code gained significant traction, especially after integration with tools like Cursor, which OpenAI unsuccessfully attempted to acquire. A proposed acquisition of Windsurf also fell through due to tensions with Microsoft, OpenAI’s major partner. By late 2025 and early 2026, OpenAI’s Codex began narrowing the gap, with its usage rising to about 40% of Claude Code’s. The competition reflects broader industry trends in which AI agents increasingly automate cognitive work, raising questions about the future of software development and white-collar jobs. Despite this progress, concerns about safety and societal impact remain as AI coding tools become more powerful and pervasive.

marsbit · 03/13 07:39


## a16z: After AI Grants Humans Superpowers, Where Do We Go From Here?

A new paper titled "The Minimal Economics of AGI" explores the economic implications of AI automation, particularly as AI agents evolve from tools into collaborative partners capable of long-horizon tasks. The authors, Christian Catalini and Eddy Lazzarin, argue that the core economic divide will be between automation (tasks that can be measured and automated) and verification (tasks requiring human oversight, judgment, and contextual understanding). Key themes include:

- The "coder’s curse": top experts training AI systems may inadvertently automate their own roles over time.
- Three future human roles: directors (setting intent), verifiers (domain experts ensuring quality), and meaning-makers (creating cultural and social value).
- Cryptocurrency and blockchain are positioned as critical for identity, provenance, and trust in a world flooded with AI-generated content.
- Two potential economic outcomes: a "hollow economy" with systemic risk from under-verification, or an "augmented economy" where AI amplifies human potential and reduces costs for education, healthcare, and innovation.
- The importance of small, agile teams leveraging AI for outsized impact, with crypto infrastructure enabling coordination at scale.

The authors emphasize that AI acts as a force multiplier, granting individuals "superpowers," and urge a focus on verification, adaptability, and ambitious experimentation.

marsbit · 03/09 11:31


## SBF's Protege Turns $225 Million into $5.5 Billion in One Year

Leopold Aschenbrenner, a 24-year-old former member of FTX’s Future Fund team and later an OpenAI researcher, has become one of the most talked-about figures in AI investing. His fund, Situational Awareness LP, grew its publicly disclosed holdings from $225 million in Q4 2024 to $5.5 billion by Q4 2025—an extraordinary surge in just one year. After graduating top of his class from Columbia University, Aschenbrenner worked at FTX until its collapse. He then joined OpenAI’s Superalignment team but was fired in 2024 following internal disputes over AI safety. Shortly after, he published an influential 165-page essay, "Situational Awareness: The Decade Ahead," which laid the groundwork for his AI-focused investment fund. Situational Awareness LP’s strategy is highly concentrated, with 86% of its portfolio in its top ten holdings. The fund avoids popular AI application plays, instead targeting upstream infrastructure—especially energy, computing, optical communications, and storage. Its largest position is in Bloom Energy, whose stock has surged over 10x since late 2024. The fund also holds several Bitcoin mining companies, including Core Scientific and Bitdeer, betting on their pivot to AI compute rather than crypto. Aschenbrenner’s trajectory mirrors—yet diverges from—that of SBF, his former FTX associate. While SBF faced legal downfall, Aschenbrenner repositioned himself at the forefront of AI investing, turning disruption into opportunity.

marsbit · 03/05 07:31


## SBF's Protege Turns $225 Million into $5.5 Billion in One Year

Leopold Aschenbrenner, a 24-year-old former member of FTX’s Future Fund team and later an OpenAI employee, has become one of the most talked-about figures in AI investing. His fund, Situational Awareness LP, grew its publicly disclosed holdings from $225 million in Q4 2024 to $5.5 billion by Q4 2025—an extraordinary surge in just one year. Aschenbrenner, who graduated top of his class from Columbia University, was involved in the effective altruism movement, much like SBF. After FTX’s collapse, he joined OpenAI’s Superalignment team but was fired in 2024 over internal disagreements regarding AI safety. Shortly after, he published an influential 165-page essay on AI and the path to superintelligence, which led him to establish his AI-focused investment fund. Situational Awareness LP’s strategy is highly concentrated, with 86% of its portfolio in its top ten holdings. It focuses on upstream AI infrastructure—such as energy, computing power, and hardware—rather than application-layer companies. Notable positions include Bloom Energy (which saw a 10x gain), several Bitcoin mining firms transitioning to AI compute (like Core Scientific and Bitdeer), and a short position on Infosys, betting that AI will replace IT outsourcing. Aschenbrenner’s story mirrors that of other brilliant young figures in tech and finance, but unlike SBF—now imprisoned—he has pivoted successfully into the AI boom, turning disruption into opportunity.

Odaily星球日报 · 03/05 07:28


## OpenAI Is Turning AI into a Nuclear Arms Race That Ordinary People Can't Afford

In a record-breaking funding round, OpenAI has secured $110 billion, raising its post-money valuation to $840 billion. This investment, led by Amazon, NVIDIA, and SoftBank, marks the largest-ever private tech funding and signals a new phase in the global AI race—one defined by extreme capital concentration and geopolitical significance. The scale of funding dwarfs the GDP of many mid-sized nations and equals nearly half of NVIDIA’s annual revenue. It also accounts for more than half of all AI startup funding in 2025, accelerating an industry-wide arms race in compute, talent, and model development. This capital influx, however, risks widening the gap between giants and smaller players, potentially stifling innovation and increasing market consolidation. Strategic investors are not merely providing capital: Amazon’s $50 billion commitment includes an eight-year, $100 billion cloud expansion deal. SoftBank’s $30 billion staged investment serves as both a hedge and a bridge for future sovereign wealth entrants. NVIDIA’s $30 billion replaces an earlier partnership promise and effectively locks up its advanced GPU supply, creating a closed loop that sidelines competitors. Despite ChatGPT reaching 900 million weekly active users and 50 million paid subscribers, OpenAI’s burn rate remains high. It spent $0.62 for every dollar earned in 2025, with cumulative cash burn projected to hit $115 billion by 2029. At the same time, its market share is eroding amid rising competition from Google’s Gemini and Musk’s Grok. Facing mounting financial pressure, OpenAI is eyeing a potential IPO in Q4 2026. The offering could mark either the peak of the AI investment bubble or the beginning of the AGI era—but for now, the world watches as OpenAI races against capital, competition, and time.

marsbit · 02/28 11:46


## Sentient Foundation Officially Established: Committed to Promoting Open Source AGI to Ensure It Benefits All Humanity

The Sentient Foundation officially launched on February 10 as a nonprofit organization dedicated to ensuring that artificial general intelligence (AGI) remains open-source, decentralized, and aligned with human interests. It aims to prevent AGI from being monopolized by a few corporations and instead advocates for a future in which this transformative technology benefits all of humanity. The foundation emphasizes that current powerful models like ChatGPT and Gemini are controlled by private entities, risking the concentration of power. It highlights the success of open-source alternatives like DeepSeek and Qwen, which demonstrate that open AI can compete with and even surpass closed models. The Sentient Foundation will act as a neutral guardian of the open AGI ecosystem, focusing on key areas such as value alignment and safety, global research collaboration, developer support, inclusive governance, and public advocacy. It draws inspiration from historic open-source successes like Linux, Apache, and Android. Working alongside Sentient Labs, which leads technical research on AI frameworks and models, the foundation ensures that innovations serve the broader goal of open and aligned AGI. It invites researchers, developers, institutions, and policymakers to join its global efforts in promoting transparent, equitable, and beneficial AGI development.

marsbit · 02/20 01:41

