Ransomware Payments Surge in 2024 as Hackers Return to Centralized Exchanges: Chainalysis

币界网 | Published 2024-08-15 | Updated 2024-08-15

币界网 reports:

Although total illicit transaction volume has declined since last year, stolen funds and ransomware payments have risen significantly. By the end of July, the amount of funds stolen in cryptocurrency heists had nearly doubled, from $857 million to $1.58 billion.

Ransomware payments have also grown, from $449.1 million at mid-year 2023 to $459.8 million this year. The trend points to the possibility of a record-breaking year for ransomware.

Criminals Shift Their Focus Back to Centralized Exchanges

According to the latest findings Chainalysis shared with CryptoPotato, the total value of stolen assets had exceeded $1.58 billion by the end of July this year, up 84.4% from the same period last year.

Despite the sharp rise in stolen value, the number of hacking incidents in 2024 is only slightly higher than in 2023, up 2.76% year over year. Meanwhile, the average loss per incident has soared 79.46%, climbing from $5.9 million per incident in the first half of 2023 to $10.6 million per incident so far in 2024.
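The year-over-year percentages above follow directly from the reported totals. A minimal sketch of the arithmetic, using the rounded dollar figures cited in the report (the report's 79.46% presumably comes from unrounded underlying data):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Rounded figures from the Chainalysis report (USD millions)
stolen_2023, stolen_2024 = 857, 1580   # total stolen funds, Jan-Jul
avg_2023, avg_2024 = 5.9, 10.6         # average loss per incident

print(round(pct_change(stolen_2023, stolen_2024), 1))  # ≈ 84.4
print(round(pct_change(avg_2023, avg_2024), 1))        # ≈ 79.7
```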

The blockchain data platform also said that, after four years of focusing on decentralized exchanges, criminals appear to be returning to their roots and once again targeting centralized exchanges.

After attacks on DeFi services and cross-chain bridges peaked in 2022, attackers, including those from North Korea, are now using advanced social-engineering tactics, such as applying for IT jobs, to compromise these exchanges. In fact, the United Nations reports that more than 4,000 North Koreans are currently employed at Western technology companies.

Ransomware Attacks on the Rise

Chainalysis reported that ransom payments have reached $459.8 million, putting 2024 on track to be the worst year on record. Andrew Davis, general counsel at Kiva Consulting, said that ransomware activity has remained fairly stable despite the disruption of LockBit and ALPHV/BlackCat.

"Whether former affiliates of these well-known threat actors' operations or new upstarts, a large number of new ransomware groups have joined the fray, demonstrating new methods and techniques for carrying out attacks, such as expanding their means of initial access and lateral movement."

Ransomware attacks have also grown markedly more severe, with the largest ransom payment observed each year rising sharply. In 2024, the largest single payment, made to the ransomware group Dark Angels, reached roughly $75 million. That represents a 96% increase over 2023 and a staggering 335% increase over 2022.

Large corporations and critical-infrastructure providers are becoming the primary targets of ransomware attacks, because their "deep pockets and systemic importance" make them more likely to agree to pay large ransoms.
