Crypto Firms Face Daily ‘Fake Zoom’ Attacks Linked To North Korea, Experts Say

bitcoinist — Published 2025-12-16, Updated 2025-12-16

North Korean-linked hackers are using fake Zoom calls to drain crypto wallets in what security researchers say has become a near-daily threat to the cryptocurrency community. According to multiple security reports, the campaign has already netted roughly $300 million in stolen funds and shows few signs of slowing.

Fake Zoom Meetings Used To Drain Wallets

According to Security Alliance (SEAL) and other researchers, attackers first contact targets through messaging apps such as Telegram. They then invite victims to a video call that looks legitimate.

During the call, the impostors claim there is a problem with sound or video and offer a “fix” — a file or a link that appears to be an official update. When the victim runs the file, malware installs and begins stealing credentials, browser data, and crypto keys.

Several attacks are reported every day, and many follow the same pattern. Researchers say these staged calls let attackers bypass normal caution because people tend to trust someone they see on camera.

NimDoor, Other Malware Strains Target macOS And Wallets

Based on reports, one strain tied to these schemes is NimDoor, a macOS backdoor that can harvest keychain items, browser-stored passwords, and messaging data.

Security teams link NimDoor and related tools to BlueNoroff, a group connected to the Lazarus Group network. BlueNoroff has a long record of attacking crypto firms and exchanges.

Once the malware is in place, wallets can be drained within minutes. Victims often discover the theft only after seeing outgoing transactions on the blockchain.

Total crypto market cap currently at $2.93 trillion. Chart: TradingView

Deepfakes And Calendar Invites Make Scams More Convincing

Researchers warn that attackers are not simply using fake names. They are also deploying AI-assisted deepfake video and voice tools to impersonate executives or known contacts.

Attackers sometimes send calendar invites that look like genuine meeting requests from platforms such as Calendly, directing targets to attacker-controlled Zoom links.

The level of social engineering makes the calls seem urgent and official, which reduces the time victims take to question what they are being asked to install.

Attackers Target Individuals And Small Firms Alike

According to these reports, victims include individual traders, startup employees, and small teams at crypto companies. Losses are concentrated in a handful of large incidents but spread across many victims, with combined estimates of roughly $300 million.

Some victims have lost funds tied to browser wallets and hot wallets; others had recovery phrases captured and used to drain accounts.

Security teams urge caution when a suspicious “update” is offered during a remote session: do not run it, verify the request with the contact through a separate channel, and treat unsolicited meeting fixes as high risk.
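As a minimal illustration of that “verify separately” advice, the sketch below shows one basic automated check: confirming that a meeting link actually points to a zoom.us hostname over HTTPS before clicking it. The function name and the trusted-host list are assumptions made for this example, the list is not exhaustive, and a passing check does not rule out compromised accounts, deepfaked callers, or malicious files shared inside a genuine Zoom call.

```python
# Illustrative sketch only: a rough pre-call check that a meeting link's host
# is an official Zoom domain. It does not replace verifying the invite with
# the sender through a separate channel.
from urllib.parse import urlparse

# Hosts assumed legitimate for this example; adjust to your own policy.
TRUSTED_ZOOM_HOSTS = {"zoom.us", "www.zoom.us"}

def looks_like_official_zoom_link(url: str) -> bool:
    """Return True only if the link uses HTTPS and a zoom.us hostname."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    if parsed.scheme != "https":
        return False
    # Accept zoom.us itself or subdomains such as us02web.zoom.us.
    return host in TRUSTED_ZOOM_HOSTS or host.endswith(".zoom.us")

if __name__ == "__main__":
    for link in ("https://us02web.zoom.us/j/123456789",
                 "https://zoom-meeting-fix.example.com/update"):
        print(link, "->", looks_like_official_zoom_link(link))
```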

Featured image from Unsplash, chart from TradingView
