# Related Articles on Governance

The HTX News Center offers the latest articles and in-depth analysis on "Governance", covering market trends, project news, technological developments, and regulatory policy in the crypto industry.

The New Yorker In-Depth Investigation Analysis: Why Do OpenAI Insiders Believe Altman Is Untrustworthy?

The New Yorker investigation, based on internal documents and interviews with over 100 sources, reveals deep internal distrust in OpenAI’s leadership, particularly toward CEO Sam Altman. Key allegations include a pattern of dishonesty, undermining safety protocols, and prioritizing commercial interests over OpenAI’s original non-profit mission to develop AI safely. Chief Scientist Ilya Sutskever compiled a 70-page dossier accusing Altman of repeatedly lying to the board—for instance, falsely claiming GPT-4 features had passed safety reviews. Anthropic co-founder Dario Amodei’s private notes further detail how Microsoft’s investment deal effectively neutered OpenAI’s safety commitments. The report also highlights unfulfilled promises, such as allocating only 1-2% of promised computing resources to critical safety teams. Internal conflicts extend to CFO Sarah Friar, who opposed Altman’s aggressive IPO timeline amid financial concerns. Microsoft executives compared Altman to fraudsters like SBF, citing a tendency to distort facts and renege on agreements. Critics argue that Altman’s unchecked authority and alleged disregard for transparency pose significant risks given OpenAI’s powerful, potentially dangerous AI technology. The company’s transformation from a safety-first non-profit to a profit-driven entity raises fundamental questions about its governance and ethical commitments.

marsbit · 04/07 03:40

New U.S. AI Policy: Ending the Era of '50 Laboratories,' Washington Opens a New Wide Door

The U.S. is shifting from a fragmented, state-by-state regulatory approach for AI to a unified federal framework, echoing the historical centralization seen with the Interstate Commerce Act of 1887. While this move promises to reduce compliance burdens and enhance competitiveness against global rivals like China, it fundamentally represents a consolidation of regulatory power in Washington. The new policy aims to establish federal preemption over state laws, creating a single set of rules to streamline innovation and maintain U.S. leadership in AI’s scale-driven economy. However, this centralization also risks increased federal control over time, potentially limiting flexibility and introducing future regulatory uncertainties. The framework addresses key areas like child protection, energy costs, intellectual property, and free speech but relies on existing laws and courts rather than a new dedicated agency. Compared to the EU’s safety-first and China’s state-led models, the U.S. prioritizes market scale and innovation speed. For startups, compliance may simplify in the short term, but long-term risks include political volatility and unresolved legal gray areas, particularly around data usage and intellectual property. Ultimately, the era of state-level "laboratories" is ending, replaced by a more efficient but centrally controlled federal "factory."

marsbit · 03/30 05:55

Wikipedia Implements New Editing Rules: Vote Passes, Strictly Prohibits Using AI to Generate or Rewrite Article Content

On March 26, Wikipedia officially passed a new policy through a community vote that explicitly prohibits users from directly using AI to generate or rewrite article content. This decision reinforces the platform's commitment to content accuracy and human editorial control. The updated policy strengthens previous guidelines by moving from a recommendation against generating articles from scratch to a strict ban on using large language models (LLMs) for content creation or rewriting. The policy was approved overwhelmingly by volunteer editors, with a vote of 40 to 2, reflecting deep concerns within the community about AI-generated misinformation and inaccuracies. While AI tools are still permitted for suggesting basic edits, they must not introduce any unverified content. All AI-assisted contributions must undergo human review to prevent factual errors or hallucinations. This move highlights Wikipedia’s effort to balance technological efficiency with content integrity amid the growing use of generative AI across digital platforms. By clearly distinguishing between AI-assisted editing and AI-generated content, Wikipedia aims to preserve human-driven knowledge curation and prevent trust issues caused by automated content production. The decision sets a significant precedent for ethical knowledge management in the age of artificial intelligence.

marsbit · 03/27 01:08

Airdrops Rewarded 'Farmers' but Killed the Real Community

Token airdrops, intended to build communities, have instead become mechanisms that train users to extract maximum value and exit quickly. This outcome stems from design flaws in the 2021–2024 token distribution model: low float, high fully diluted valuations, points programs that reward activity over intent, and eligibility rules easily reverse-engineered by those with time and scripting skills. As a result, rational behavior shifted to mass wallet creation, simulated engagement, and immediate selling. Points programs exacerbate this issue, turning participation into a resource-intensive competition that marginalizes genuine users. Teams are aware of wallet clustering and disproportionate token accumulation but continue the model for short-term growth. Consequently, airdrops lose credibility, with significant supply reserved for immediate sell-offs at launch. In response, token sales and ICOs are returning—not out of nostalgia but as a structural correction. New distribution methods incorporate screening mechanisms like identity and reputation signals, on-chain behavior analysis, jurisdictional limits, and allocation caps. These aim to distribute tokens to long-term users rather than mercenaries. This shift highlights a tension between permissionless ideals and practical needs for access control. Privacy-preserving identity systems are becoming essential infrastructure to verify user attributes without exposing identities, avoiding a binary choice between open but exploitable systems and restrictive ones. Wallet limitations—fragmentation, weak recovery, blind signing, and browser-based vulnerabilities—also contribute to these challenges. Forward-thinking teams are integrating identity, wallet, and token distribution into a cohesive system where users can prove uniqueness without revealing identity and maintain control without fragile private keys. 
The goal is not exclusivity but better alignment: fewer committed participants are more valuable than many indifferent ones. Projects aligned with human values show better retention, governance engagement, and market resilience. Successful teams will treat token distribution as infrastructure, design for adversarial environments, use identity protectively, and embrace well-designed friction. The failure of airdrops lies not in user greed but in rewarding it. To grow beyond its current audience, crypto must stop training people to extract value and instead give them reasons to belong.

marsbit · 03/25 08:24
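The screening-plus-cap distribution model described in the airdrop article can be sketched in a few lines. This is a hypothetical illustration only: the `screen_wallet` signals (identity proof, account age, transaction count) and the numeric thresholds are assumptions chosen for the example, not any project's real eligibility criteria.

```python
def screen_wallet(wallet):
    """Approve a wallet only if simple sybil-resistance signals pass.

    The signals mirror those named in the article: a privacy-preserving
    identity/reputation check plus basic on-chain behavior analysis.
    Thresholds are illustrative assumptions.
    """
    return (
        wallet["unique_identity"]            # e.g. a proof-of-personhood attestation
        and wallet["account_age_days"] >= 90  # history, not a freshly scripted wallet
        and wallet["tx_count"] >= 20          # sustained activity over one-off farming
    )


def allocate(wallets, pool, cap):
    """Split `pool` tokens evenly among screened wallets, capped per wallet.

    The per-wallet cap limits how much any single participant (or wallet
    cluster that slips through screening) can extract at launch.
    """
    eligible = [w for w in wallets if screen_wallet(w)]
    if not eligible:
        return {}
    per_wallet = min(pool / len(eligible), cap)
    return {w["address"]: per_wallet for w in eligible}
```

A farmed wallet that is new or lacks an identity attestation is filtered out before allocation, and the cap bounds the immediate sell pressure any one address can create — the "well-designed friction" the article argues for.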
