In-Depth Research

Providing in-depth research reports and independent analysis, leveraging data, technology, and economic insights to deliver a comprehensive examination of the blockchain ecosystem, project potential, and market trends.

The Small-Town Youth Labeling AI Giants

In China's hinterland cities like Datong, Shanxi, thousands of young people work as data annotators, the invisible workforce behind AI development. They perform repetitive tasks like drawing bounding boxes on images or rating AI-generated responses, earning piece-rate wages as low as a few cents per task. These workers, mostly from rural areas or small towns, endure intense labor conditions: strict monitoring, unforgiving error thresholds, and mental exhaustion. Despite the cognitive nature of their work, they are often paid meager salaries, with some earning as little as ¥30 ($4) for a day's work. As the AI industry evolves, even highly educated workers, including master's graduates, are being drawn into similarly precarious freelance roles, evaluating complex AI outputs under vague and shifting standards. Yet the industry is structured through layers of outsourcing, in which most profits flow to tech giants like OpenAI and Microsoft while annotators see dwindling incomes. Worse, as AI models become more self-sufficient, demand for human annotators is declining. Companies like Li Auto have slashed annotation costs by using AI-powered tools that complete in hours what used to take humans years. These annotators, who helped train the very systems now replacing them, face an uncertain future, a stark contrast to the booming valuations and optimistic narratives of the global AI industry.

marsbit 04/07 04:37

The New Yorker In-Depth Investigation Analysis: Why Do OpenAI Insiders Believe Altman Is Untrustworthy?

The New Yorker investigation, based on internal documents and interviews with over 100 sources, reveals deep internal distrust in OpenAI's leadership, particularly toward CEO Sam Altman. Key allegations include a pattern of dishonesty, undermining safety protocols, and prioritizing commercial interests over OpenAI's original non-profit mission to develop AI safely. Chief Scientist Ilya Sutskever compiled a 70-page dossier accusing Altman of repeatedly lying to the board, for instance by falsely claiming GPT-4 features had passed safety reviews. Anthropic co-founder Dario Amodei's private notes further detail how Microsoft's investment deal effectively neutered OpenAI's safety commitments. The report also highlights unfulfilled promises, such as allocating only 1-2% of promised computing resources to critical safety teams. Internal conflicts extend to CFO Sarah Friar, who opposed Altman's aggressive IPO timeline amid financial concerns. Microsoft executives compared Altman to fraudsters like SBF, citing a tendency to distort facts and renege on agreements. Critics argue that Altman's unchecked authority and alleged disregard for transparency pose significant risks given OpenAI's powerful, potentially dangerous AI technology. The company's transformation from a safety-first non-profit to a profit-driven entity raises fundamental questions about its governance and ethical commitments.

marsbit 04/07 03:40

First Allegation in the 70-Page Confidential Document: 'Lying'; Altman Told the Board 'I Can't Change My Character'

In a major investigation, Pulitzer winner Ronan Farrow and Andrew Marantz reveal two previously undisclosed documents: a roughly 70-page confidential file compiled by former OpenAI chief scientist Ilya Sutskever and over 200 pages of internal notes kept by Anthropic CEO Dario Amodei during his time at OpenAI. Sutskever's file, which opens with the accusation that Sam Altman exhibited a "pattern of lying," alleges he misled executives and the board on safety protocols and corporate matters. Amodei's notes similarly claim "the problem at OpenAI is Sam himself," citing instances such as Altman denying agreed-upon terms in Microsoft's $1 billion deal. Key revelations include:

- No written report was produced from the post-reinstatement independent investigation into Altman.
- OpenAI's superalignment team received only 1-2% of the promised computing resources, mostly on outdated clusters.
- In 2018, executives considered a "National Plan" to auction AI technology to nations including China and Russia.
- Microsoft executives expressed strong distrust toward Altman, with one comparing his risk profile to figures like Bernie Madoff.

During a board call after his firing, Altman reportedly said, "I can't change my personality," which a director interpreted as an admission of persistent dishonesty. Altman denies intentional deception, attributing his behavior to "well-intentioned adaptation" and conflict avoidance.

marsbit 04/06 14:24

Data Research: How Big Is the Liquidity Gap Between Hyperliquid and CME Crude Oil?

This analysis compares the liquidity and market structure of Hyperliquid's xyz:CL perpetual crude oil contract with CME's CLJ6 futures contract over a three-week period from late February to mid-March 2026. Key findings reveal a significant liquidity gap: Hyperliquid's average depth is less than 1% of CME's, with a 125x difference at the ±2 bps level. The median trade size on Hyperliquid ($543) is 166x smaller than on CME ($90,450), reflecting its crypto-native retail user base. For a $1M order, estimated slippage on Hyperliquid (15.4 bps) is approximately 20x higher than on CME (0.79 bps), indicating it currently lacks the capacity for institutional-sized orders. However, a notable trend emerged during weekends when CME is closed. Hyperliquid's weekend trading volume grew significantly over the three observed weekends, from $31M to over $1B, and the average trade size increased, suggesting use by traders seeking exposure or hedging ahead of Monday's open. While an initial "discovery boundary" mechanism limited price discovery on the first weekend, subsequent weekends showed Hyperliquid's price increasingly converged with CME's Monday opening price, demonstrating its evolving price discovery capabilities. The report concludes that while Hyperliquid's absolute liquidity metrics are not comparable to CME, its growing weekend activity shows promise. However, high transaction costs for large orders remain a major barrier to attracting institutional participants.
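The headline multiples above follow directly from the figures the report quotes. A minimal sketch, using only the numbers cited in the summary (median trade sizes of $543 vs. $90,450 and $1M-order slippage of 15.4 bps vs. 0.79 bps), confirms the arithmetic:

```python
# Recompute the liquidity-gap ratios from the report's quoted figures.
# All numbers below are taken from the summary above, not measured here.

median_trade_usd = {"hyperliquid": 543, "cme": 90_450}   # median trade size, USD
slippage_bps_1m  = {"hyperliquid": 15.4, "cme": 0.79}    # est. slippage for a $1M order

# How many times larger is the median CME trade?
trade_size_ratio = median_trade_usd["cme"] / median_trade_usd["hyperliquid"]

# How many times more slippage does a $1M order incur on Hyperliquid?
slippage_ratio = slippage_bps_1m["hyperliquid"] / slippage_bps_1m["cme"]

print(f"Median trade size gap: {trade_size_ratio:.0f}x")  # ~167x (report truncates to 166x)
print(f"$1M slippage gap:      {slippage_ratio:.1f}x")    # ~19.5x (report rounds to 20x)
```

The weekend-volume trend quoted in the report (roughly $31M growing past $1B across three weekends) is a separate observation and is not derivable from these ratios.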

Odaily星球日报 04/06 02:50
