Turning 200,000 into Nearly 100 Million: DeFi Stablecoin Attacked Again

marsbit · Published 2026-03-22 · Updated 2026-03-22

Summary

DeFi stablecoin protocol Resolv Labs was exploited, resulting in a hacker minting 80 million USR tokens using only 200,000 USDC. The attacker’s address (starting with 0x04A2) first created 50 million USR with 100,000 USDC, and later minted another 30 million with an additional 100,000 USDC. This caused USR to depeg, dropping to around $0.25 before partially recovering to approximately $0.80. The incident also impacted related lending markets on Morpho and Lista DAO, which paused new borrowing requests. Additionally, RLP token holders, including Stream Finance—which holds over 13 million RLP tokens—face significant exposure, with estimated losses around $17 million. Initial analysis by DeFi community YAM suggests the exploit occurred because the protocol’s SERVICE_ROLE, which provides minting parameters, was compromised. The system fully trusted this role’s input without on-chain verification or minting limits, allowing the attacker to manipulate the mint amount. The project’s emergency response was also slow, taking nearly three hours to pause the protocol due to multi-signature delays. This attack highlights critical vulnerabilities in off-chain role trust and emergency mechanisms within DeFi protocols.

Written by: Eric, Foresight News

At approximately 10:21 Beijing time today, Resolv Labs, which issues the stablecoin USR using a Delta neutral strategy, was hacked. An address starting with 0x04A2 used 100,000 USDC to mint 50 million USR from the Resolv Labs protocol.

As news of the incident spread, USR plummeted to around $0.25; as of writing, it has recovered to approximately $0.80. The RESOLV token also dropped nearly 10% in the short term.

Subsequently, the hacker repeated the same method, using another 100,000 USDC to mint 30 million USR. As USR significantly depegged, arbitrage traders quickly took action. Many lending markets on Morpho that supported USR, wstUSR, and others as collateral were almost drained, and Lista DAO on BNB Chain also suspended new borrowing requests.

The impact was not limited to these lending protocols. In the Resolv Labs protocol design, users can also mint an RLP token, which has greater price volatility and higher returns but requires bearing compensation liability when the protocol incurs losses. Currently, the circulating supply of RLP tokens is nearly 30 million, with the largest holder, Stream Finance, holding over 13 million RLP, representing a net risk exposure of approximately $17 million.

Yes, Stream Finance, which was previously hit by the xUSD incident, may be hit again.

As of writing, the hacker has converted USR into USDC and USDT and continues to buy Ethereum, having purchased over 10,000 ETH so far. Using 200,000 USDC, the hacker extracted over $20 million in assets, finding their "hundred-fold coin" during the bear market.

Another Exploit Due to "Lack of Rigor"

The sharp crash on October 11 last year inflicted collateral losses on many stablecoins issued with Delta-neutral strategies through ADL (Auto-Deleveraging). Projects that ran their strategies on altcoin collateral suffered even heavier losses, and some teams simply absconded.

The attacked Resolv Labs also uses a similar mechanism to issue USR. The project announced in April 2025 that it had completed a $10 million seed round led by Cyber.Fund and Maven11, with participation from Coinbase Ventures, and launched the RESOLV token at the end of May/early June.

However, the reason for the attack on Resolv Labs was not extreme market conditions but rather a "lack of rigor" in the design of the USR minting mechanism.

No security firm or official source has yet published an analysis of the cause of this hack. The DeFi community YAM preliminarily concluded from its own analysis that the attack was likely caused by the SERVICE_ROLE, which the protocol's backend uses to supply parameters to the minting contract, being compromised by the hacker.

According to Grok's analysis, when a user mints USR, they initiate a request on-chain and call the contract's requestMint function, with parameters including:

_depositTokenAddress: the address of the deposited token;

_amount: the amount deposited;

_minMintAmount: the minimum expected amount of USR to receive (slippage protection).

Subsequently, the user deposits USDC or USDT into the contract. The project's backend SERVICE_ROLE monitors the request, uses the Pyth oracle to check the value of the deposited assets, and then calls the completeMint or completeSwap function to determine the actual amount of USR to mint.
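The two-step flow described above can be sketched as follows. This is a simplified Python model for illustration only, not Resolv's actual contract code: the class structure and internals are assumptions, while the request/complete split, the slippage check, and the trusted SERVICE_ROLE follow the description above.

```python
# Simplified model of the two-step mint flow described above.
# Illustrative sketch only -- NOT Resolv's actual contract code.

class MintContract:
    def __init__(self, service_role: str):
        self.service_role = service_role  # backend address trusted to complete mints
        self.pending = {}                 # request_id -> (user, token, amount, min_mint)
        self.balances = {}                # user -> USR balance
        self.next_id = 0

    def request_mint(self, user, deposit_token, amount, min_mint_amount):
        """User deposits USDC/USDT and records a mint request on-chain."""
        rid = self.next_id
        self.pending[rid] = (user, deposit_token, amount, min_mint_amount)
        self.next_id += 1
        return rid

    def complete_mint(self, caller, request_id, mint_amount):
        """Backend (SERVICE_ROLE) supplies the USR amount, priced off-chain via Pyth."""
        assert caller == self.service_role, "only SERVICE_ROLE"
        user, _token, _amount, min_mint = self.pending.pop(request_id)
        assert mint_amount >= min_mint, "slippage protection"
        # VULNERABILITY: mint_amount is trusted blindly -- no upper bound,
        # no on-chain oracle re-check of the deposited value.
        self.balances[user] = self.balances.get(user, 0) + mint_amount

# A compromised SERVICE_ROLE can mint arbitrarily:
c = MintContract(service_role="backend")
rid = c.request_mint("attacker", "USDC", 100_000, 0)
c.complete_mint("backend", rid, 50_000_000)  # 50M USR against 100k USDC
```

Note that the only check the user controls is `_minMintAmount`, which protects against receiving too little USR; nothing in this flow protects against the backend minting too much.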

The problem lies in the fact that the minting contract fully trusts the _mintAmount provided by the SERVICE_ROLE, assuming this number was verified off-chain by Pyth. Therefore, it did not set an upper limit restriction, nor did it perform on-chain oracle verification, and directly executed mint(_mintAmount).

Based on this, YAM suspects that the hacker gained control of the SERVICE_ROLE, which should have been controlled by the project team (possibly due to internal oracle failure, insider theft, or key compromise), and directly set the _mintAmount to 50 million during the minting process, achieving the attack of minting 50 million USR with 100,000 USDC.

In conclusion, Grok's assessment is that Resolv did not consider the possibility that the address (or contract) receiving user minting requests could be compromised by hackers when designing the protocol. When the USR minting request was submitted to the final USR minting contract, no maximum minting amount was set, nor did the minting contract perform secondary verification using an on-chain oracle; it simply trusted all parameters provided by the SERVICE_ROLE.

Inadequate Prevention

In addition to speculating on the cause of the hack, YAM also pointed out the project's lack of preparedness in crisis response.

YAM stated on X that Resolv Labs only paused the protocol 3 hours after the hacker's first attack was completed, with about 1 hour of delay coming from collecting the 4 signatures required for the multisig transaction. YAM believes that an emergency pause should require only one signature, and the authority should be assigned to team members as much as possible, or to trusted external operators, to increase attention to on-chain anomalies, improve the possibility of quick pauses, and better cover different time zones.

Although requiring only a single signature to pause the protocol is a somewhat radical suggestion, requiring multiple signatures spread across different time zones can indeed cause significant delays in an emergency. Introducing trusted third parties that continuously monitor on-chain behavior, or deploying monitoring tools with emergency pause permissions, are lessons to draw from this incident.
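The governance trade-off YAM raises can be resolved by splitting authority: any single member of a guardian set may pause, while the sensitive action of resuming still requires the full multisig. A minimal sketch of that pattern, with hypothetical names (this mirrors the common pauser-role design in on-chain access control, not Resolv's actual setup):

```python
# Minimal sketch of split pause authority: any single guardian can pause,
# but unpausing keeps the multisig bar. Names are hypothetical.

class PausableProtocol:
    def __init__(self, guardians, multisig_owners, quorum):
        self.guardians = set(guardians)   # e.g. team members across time zones
        self.owners = set(multisig_owners)
        self.quorum = quorum              # signatures needed to unpause
        self.paused = False

    def pause(self, caller):
        """One signature suffices to halt the protocol in an emergency."""
        assert caller in self.guardians, "not a guardian"
        self.paused = True

    def unpause(self, signers):
        """Resuming is the sensitive action, so it retains the full quorum."""
        valid = set(signers) & self.owners
        assert len(valid) >= self.quorum, "quorum not met"
        self.paused = False

p = PausableProtocol(["alice", "bob"], ["o1", "o2", "o3", "o4"], quorum=4)
p.pause("alice")                       # a single guardian halts the protocol
assert p.paused
p.unpause(["o1", "o2", "o3", "o4"])    # restarting still needs all 4 signatures
assert not p.paused
```

The asymmetry is deliberate: a false-positive pause costs some downtime, while a delayed pause, as in this incident, cost millions.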

Hacker attacks on DeFi protocols have long gone beyond contract vulnerabilities. The Resolv Labs incident serves as a warning to project teams: the assumption in protocol security should be to trust no single link; all parameter-related links must undergo at least secondary verification, even if it's the project's own operational backend.

Related Questions

Q: What was the main reason behind the Resolv Labs hack according to the DeFi community YAM's analysis?

A: The hack was likely due to the SERVICE_ROLE, which provides parameters to the minting contract, being controlled by the hacker. The minting contract fully trusted the _mintAmount parameter provided by SERVICE_ROLE without setting a maximum limit or performing a secondary on-chain oracle verification.

Q: How much initial capital did the hacker use, and what was the approximate value of the assets they obtained?

A: The hacker used 200,000 USDC to mint a large amount of USR and subsequently obtained assets worth over 20 million US dollars.

Q: Which protocols or platforms were affected beyond Resolv Labs itself due to this attack?

A: Morpho's lending markets that accepted USR and wstUSR as collateral were almost drained, and Lista DAO on BNB Chain paused new borrowing requests. Additionally, RLP token holders, like Stream Finance, faced significant risk exposure.

Q: What specific flaw in the protocol's design allowed the hacker to mint an excessive amount of USR?

A: The protocol's design did not consider the possibility that the address (or contract) receiving user minting requests could be compromised. The minting contract lacked a maximum mint amount limit and did not use an on-chain oracle for secondary verification, blindly trusting all parameters from the SERVICE_ROLE.

Q: What criticism did YAM level against Resolv Labs' emergency response measures?

A: YAM criticized that it took Resolv Labs 3 hours to pause the protocol after the first attack, with about an hour of that delay attributed to collecting the 4 signatures required for the multisig transaction. They suggested emergency pauses should require only one signature and be assigned to team members or trusted external operators for faster response.

