“Uncle Injured by Lobster” Scam Leads to $440,000 Loss: Are AI Agents Really This Easy to Exploit?
On February 22, 2026, Lobstar Wilde, an autonomous AI trading agent on Solana, mistakenly transferred 52.4 million LOBSTAR tokens (worth approximately $440,000) to a stranger’s wallet in response to a social media plea: “My uncle got tetanus from a lobster bite and needs 4 SOL for treatment.” The agent, created three days earlier by an OpenAI employee and funded with $50,000 in SOL, intended to send only 52,439 tokens (the equivalent of 4 SOL), but misread the token’s decimal places and executed a transfer three orders of magnitude larger.
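The reported error is consistent with a base-unit conversion mistake: SPL token balances are stored on-chain as integers, and a per-token `decimals` field determines how a human-readable amount is scaled. A minimal sketch (the function name and the specific decimals values are assumptions for illustration, not details from the incident) shows how treating a 6-decimal token as a 9-decimal one inflates a transfer by exactly 10³:

```python
from decimal import Decimal

def to_base_units(ui_amount: Decimal, decimals: int) -> int:
    """Convert a human-readable token amount to raw integer base units.

    Hypothetical helper: SPL-style tokens store balances as integers,
    scaled by 10**decimals.
    """
    return int(ui_amount * (Decimal(10) ** decimals))

# What the agent reportedly meant to send, assuming a 6-decimal token:
intended = to_base_units(Decimal("52439"), 6)

# Misreading the same token as having 9 decimals scales the transfer
# by 10**3 -- the three-orders-of-magnitude error described above.
mistaken = to_base_units(Decimal("52439"), 9)

print(mistaken // intended)  # 1000
```

Because the chain only sees the raw integer, nothing downstream can tell an intended 52,439-token transfer from a mis-scaled 52.4-million-token one.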
The incident exposed critical vulnerabilities in AI agents managing on-chain assets: irreversible execution, susceptibility to social engineering, and flawed state management. After a session restart due to a tool error, the agent reconstructed its identity from logs but failed to verify its actual wallet balance, leading to the erroneous transaction.
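The state-management failure suggests a concrete mitigation: after any restart, treat log-reconstructed state as untrusted until it is reconciled against the chain. A hedged sketch (all names here are hypothetical, not from the agent's actual codebase):

```python
def state_matches_chain(logged_balance: int, onchain_balance: int,
                        tolerance: int = 0) -> bool:
    """Return True only if the log-derived balance agrees with the chain."""
    return abs(logged_balance - onchain_balance) <= tolerance

def resume_after_restart(logged_balance: int, fetch_onchain_balance) -> None:
    """Gate trading on a successful reconciliation after a session reset.

    `fetch_onchain_balance` is a caller-supplied function that queries
    the wallet's actual balance via RPC.
    """
    onchain = fetch_onchain_balance()
    if not state_matches_chain(logged_balance, onchain):
        # Halt instead of acting on stale or misread state.
        raise RuntimeError(
            f"state drift detected (logs: {logged_balance}, "
            f"chain: {onchain}); pausing for human review"
        )
```

The design choice is fail-closed: on any mismatch the agent stops and escalates rather than guessing, which would have blocked the erroneous transaction described above.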
This case highlights broader risks as AI agents gain autonomy in Web3 and Web4.0 ecosystems: lack of rollback mechanisms, near-zero-cost attack surfaces, and internal state synchronization failures. Proposals to improve safety include multi-signature approvals for large transfers, mandatory state verification after resets, and human oversight layers. The event underscores the need for robust infrastructure before AI agents can safely participate in decentralized economies.
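The proposed safeguards compose naturally into a single authorization gate: small transfers pass automatically, while anything above a threshold requires multiple sign-offs. A minimal sketch of that policy (the threshold and names are illustrative assumptions, not a real protocol):

```python
# Hypothetical policy parameter: transfers above this many tokens
# require extra approvals.
LARGE_TRANSFER_THRESHOLD = 1_000_000

def authorize_transfer(amount: int, approvals: int,
                       required_approvals: int = 2) -> bool:
    """Multi-signature-style gate for an agent's outbound transfers.

    Small transfers are auto-approved; large ones need at least
    `required_approvals` human or co-signer approvals.
    """
    if amount <= LARGE_TRANSFER_THRESHOLD:
        return True
    return approvals >= required_approvals

print(authorize_transfer(52_439, approvals=0))      # True: small, auto-approved
print(authorize_transfer(52_400_000, approvals=1))  # False: large, under-signed
```

Under such a gate, the intended 4-SOL-equivalent transfer would have cleared, while the mis-scaled 52.4-million-token one would have stalled awaiting human approval.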
Source: Marsbit, 17h ago