# Trust Related Articles

The HTX News Center delivers the latest articles and in-depth analyses on "Trust", covering market trends, project updates, technology developments, and regulatory policy across the crypto industry.

Supported by 20+ Institutions: How Does Sui's New Primitive Hashi Rewrite the Rules of Bitcoin Financial Trust?

Sui has introduced Hashi, a new decentralized Bitcoin (BTC) collateral primitive designed to enable trust-minimized, secure use of native BTC in DeFi on the Sui blockchain, backed by over 20 major institutions. Hashi allows users to collateralize Bitcoin without transferring custody to centralized entities. BTC remains on the Bitcoin network in a dedicated address, while a collateral certificate is generated on Sui. This certificate, representing the locked BTC, can be used in Sui's smart contracts for lending, borrowing, and other DeFi activities. The system relies on Sui validators for security, with a Guardian Layer for additional protection against risks such as validator collusion. Key to Hashi is its role as a "primitive": a foundational building block for developers. It provides a standardized interface for integrating native BTC collateral capabilities into applications such as lending protocols, structured products, and RWA strategies, reducing development barriers. Institutional support spans custody (e.g., BitGo, Cobo), trading (e.g., FalconX, Bullish), security (e.g., OtterSec, Certora), and protocols (e.g., Suilend, Scallop). This ecosystem support aims to facilitate large-scale institutional BTC adoption into DeFi upon mainnet launch. Hashi addresses core trust issues in Bitcoin finance by prioritizing non-custodial security, transparency, and composability, potentially unlocking Bitcoin's $1.4 trillion market cap for decentralized finance without sacrificing user control.

marsbit · Yesterday 06:33

Claude Deliberately Dumbs Down? Are Models Starting to 'Discriminate Based on the User'?

"Claude Deliberately Downgraded? Models Begin to 'Discriminate Based on Users'?" Recent analysis by AMD AI Group Senior Director Stella Laurenzo reveals significant behavioral degradation in Anthropic's Claude since mid-February. Data from 6,852 session files shows Claude's median "thinking" output plummeted 67-73% from 2,200 to 600 characters, with one-third of code edits now performed without reading files first. Users began reporting slower, lazier responses in March, with some describing Claude as "lobotomized." Anthropic's introduction of "adaptive thinking" in early February, officially described as adjusting reasoning depth based on task complexity, effectively became a global throttling mechanism. By March, default effort was quietly reduced to "medium" while thinking summaries were hidden. Anthropic's Claude Code lead Boris Cherny confirmed this was intentional optimization, not a bug, suggesting users manually switch to "high effort" mode. The company never announced these significant changes, leaving paying subscribers with reduced capabilities at unchanged prices. This reflects a broader industry trend where AI companies are silently reducing capabilities to control GPU costs. Analysis shows extreme users generate $42,121 in actual inference costs while paying only $400 monthly, creating unsustainable subsidy model. Anthropic is now testing "high effort" mode by default for Teams and Enterprise users, signaling that superior reasoning is becoming a分层资源. Enterprise API users report significantly better performance at $4k-12k monthly costs, while consumer subscribers receive a "good enough" downgraded version. The incident marks the end of AI's subsidy era, with the industry shifting from universal普惠to elite stratification, quietly compromising consumer experience to manage real costs while offering premium capabilities to deep-pocketed enterprise clients.

marsbit · 2 days ago 10:32

An Internal Memo Exposes OpenAI's Most Real Anxieties and Ambitions

An internal memo from OpenAI's Chief Revenue Officer, Denise Dresser, reveals the company's strategic priorities and competitive anxieties as the enterprise AI market matures. The document outlines a shift from competing solely on model capability to winning on integration, platform strategy, and becoming "hardest to replace." Key priorities for Q2 include: the model layer, the agent platform, expanding market reach via Amazon, selling the full tech stack, and controlling deployment. The goal is to evolve from a point solution to an enterprise AI "operating system" by deeply embedding into customer workflows, creating switching costs, and securing multi-year, nine-figure deals. The memo contains a direct and unusually sharp critique of rival Anthropic, accusing it of building a narrative on "fear" and "restriction," suffering from compute shortages leading to user experience issues, and overstating its annualized revenue by $8 billion due to accounting methods. This public criticism is seen as a calculated move for investor narratives, internal mobilization, and external signaling. For the Chinese AI market, the memo highlights a gap in competition stages. While domestic players still focus on benchmarks and price wars, the next phase will be won on deployment, platform integration, and ecosystem. It also underscores the critical importance of data sovereignty and trust, suggesting that compliant, auditable, on-premise solutions could be a major differentiator in regulated industries. A notable warning for Chinese companies is OpenAI's claim that its biggest constraint is "capacity," not demand. This contrasts sharply with the domestic market's challenge of finding enterprise customers willing to make large, long-term paid commitments, pointing to a fundamental gap in commercial adoption readiness.

marsbit · 2 days ago 10:21

A Brief History of Web3 Airdrops: A Review of Twelve Iconic 'Rug Pull' Projects

**Summary: A History of Web3 Airdrop "Rug Pulls" – 12 Iconic Cases**

The era of Web3 airdrops has shifted from a golden age of mutual benefit between early users and projects to a landscape dominated by systematic exploitation. This article reviews 12 infamous "anti-airdrop" projects that eroded user trust:

1. **Hop Protocol (HOP):** Pioneered a "community witch-hunt" model, encouraging users to report Sybil addresses to claim their rewards, fostering a toxic environment of mutual harm.
2. **Blast:** Introduced the exploitative "points system," locking user funds for meager returns that often underperformed risk-free yields, turning airdrop hunting into a rigged casino.
3. **LayerZero (ZRO):** After 18 months of user-funded gas fees, it implemented a harsh "guilty until proven innocent" Sybil filter, forcing users to "self-confess" or face zero rewards, destroying multi-chain interaction narratives.
4. **zkSync (ZK):** Prioritized "funds held at a specific time" over long-term activity, betraying early contributors who spent significant gas and rewarding insiders, crushing L2 airdrop expectations.
5. **Infinex:** Lured users with NFT and point systems, only to announce a high FDV, a mandatory 1-year lockup, and chaotic rules at its public sale, betraying its community.
6. **Linea:** Perfected user exploitation with endless, grueling Galxe Odyssey tasks and KYC requirements, reducing airdrop hunting to a low-wage, full-time job.
7. **Grass:** Exploited users' physical resources (bandwidth/IP) for DePIN data, rewarding them with tokens worth less than the electricity and proxy costs incurred.
8. **Monad:** Allocated a mere ~3.3% of its airdrop to the community after extensive testnet participation, favoring KOLs and insiders and dampening enthusiasm for new L1s.
9. **Babylon:** Forced Ethereum-style staking onto Bitcoin, causing users massive losses from failed transactions due to high fees and network congestion, damaging trust in L2s.
10. **Backpack:** Encouraged massive trading volume for points, then applied strict KYC and Sybil rules at the last minute, resulting in massive losses for users and cementing a negative stereotype for projects with Chinese founders.
11. **EdgeX:** Perpetual DEX users lost significant fees for minimal rewards, while "insider" addresses received enormous allocations, exposing blatant corruption and killing the Perp DEX airdrop narrative.
12. **Genius:** The final straw: users were forced to choose between immediately claiming only 30% of their airdrop, locking tokens for a year for 100%, or a 100% burn for a gas fee refund, shattering trust in "elite-backed" narratives.

**Conclusion:** This collective "rug pull" marks the painful end of the airdrop era, a disaster co-created by speculation and greed. The collapse, while brutal, forces a return to fundamentals: sustainable products with real product-market fit are paramount. This is not just the end of airdrops but a potential rebirth for Web3, weeding out exploitative projects and rewarding those that build genuine community value.

marsbit · 2 days ago 03:14


The New Yorker In-Depth Investigation Analysis: Why Do OpenAI Insiders Believe Altman Is Untrustworthy?

"The New Yorker investigation, based on internal documents and interviews with over 100 sources, reveals deep internal distrust in OpenAI’s leadership, particularly toward CEO Sam Altman. Key allegations include a pattern of dishonesty, undermining safety protocols, and prioritizing commercial interests over OpenAI’s original non-profit mission to develop AI safely. Chief Scientist Ilya Sutskever compiled a 70-page dossier accusing Altman of repeatedly lying to the board—for instance, falsely claiming GPT-4 features had passed safety reviews. Anthropic co-founder Dario Amodei’s private notes further detail how Microsoft’s investment deal effectively neutered OpenAI’s safety commitments. The report also highlights unfulfilled promises, such as allocating only 1-2% of promised computing resources to critical safety teams. Internal conflicts extend to CFO Sarah Friar, who opposed Altman’s aggressive IPO timeline amid financial concerns. Microsoft executives compared Altman to fraudsters like SBF, citing a tendency to distort facts and renege on agreements. Critics argue that Altman’s unchecked authority and alleged disregard for transparency pose significant risks given OpenAI’s powerful, potentially dangerous AI technology. The company’s transformation from a safety-first non-profit to a profit-driven entity raises fundamental questions about its governance and ethical commitments."

marsbit · 04/07 03:40

