Author: Jia Tianrong, "IT Times" (ID: vittimes)
A single crayfish has ignited the global technology community.
From Clawdbot to Moltbot, and now to OpenClaw, in just a few weeks, this AI Agent has completed a "triple jump" in technological influence through its name changes.
Over the past few days, it has stirred up an "agent tsunami" in Silicon Valley, amassing 100,000 GitHub stars and ranking among the hottest AI applications. With just a discarded Mac mini, or even an old phone, users can run an AI assistant that "can hear, think, and work".
A creative frenzy has already unfolded on the internet. From schedule management and smart stock trading to podcast production and SEO optimization, developers and geeks are using it to build all kinds of applications. The era of everyone having their own "Jarvis" seems within reach. Major companies in China and abroad have also begun to follow suit, deploying similar agent services.
But beneath the lively surface, anxiety is spreading.
On one side is the slogan of "productivity democratization," and on the other is the still difficult-to-cross digital divide: environment configuration, dependency installation, permission settings, frequent errors, etc.
Reporters found during their hands-on experience that the installation process alone could take hours, shutting out a large number of ordinary users. "Everyone says it's good, but I can't even get in the door" has become the first frustration for many tech novices.
Deeper unease comes from the "agency" it has been endowed with.
If your "Jarvis" starts mistakenly deleting files, making unauthorized credit card charges, being tricked into executing malicious scripts, or even being injected with attack commands while connected to the internet, would you still dare to hand your computer over to such an agent?
The speed of AI development has exceeded human imagination. Hu Xia, a leading scientist at the Shanghai AI Laboratory, believes that in the face of unknown risks, "endogenous security" is the ultimate answer, and humans also need to accelerate the building of the ability to "flip the table" at critical moments.
Regarding OpenClaw's capabilities and risks, which are real and which are exaggerated? As an ordinary user, is it safe to use it now? How does the industry evaluate this product, called "the greatest AI application so far"?
To further clarify these issues, "IT Times" interviewed heavy users of OpenClaw and multiple technical experts, attempting to answer a core question from different perspectives: How far has OpenClaw actually come?
1. The Product Closest to the Imagination of an Agent So Far
Multiple interviewees gave highly consistent judgments: from a technical perspective, OpenClaw is not a disruptive innovation, but it is the product closest to the public's imagination of an "agent" so far.
"Agents have finally reached a key milestone, from quantitative change to qualitative change." Ma Zeyu, Deputy Director of the AI Research and Evaluation Department at the Shanghai Computer Software Technology Development Center, believes that OpenClaw's breakthrough lies not in some disruptive technology but in a key "qualitative change": for the first time, an agent can carry out complex tasks continuously over long periods while remaining friendly enough for ordinary users to use.
Unlike previous large models that could only "answer questions" in a dialog box, it embeds AI into real workflows: it can operate a "computer of its own" like a real assistant, call tools, process files, execute scripts, and report results to the user after task completion.
In terms of user experience, it is no longer "you watching it do things step by step," but "you give the task, and it goes off to do it itself." This is precisely the key step many researchers see for agents moving from "proof of concept" to "usable product."
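The shift from "watching it do things step by step" to "assign the task and wait for a report" can be sketched in a few lines. This is a purely illustrative toy loop, not OpenClaw's actual architecture or API; `run_tool`, `run_task`, and the tool names are invented stand-ins:

```python
# Toy sketch of the "assign a task, get a report back" pattern described
# above. All names here are hypothetical stand-ins, not OpenClaw's API.

def run_tool(name: str, arg: str) -> str:
    """Stand-in for a real tool call (shell, file I/O, web search...)."""
    tools = {
        "search": lambda q: f"results for {q!r}",
        "write_file": lambda text: f"saved {len(text)} chars",
    }
    return tools[name](arg)

def run_task(steps: list[tuple[str, str]]) -> str:
    """Execute a plan step by step, then report once at the end,
    instead of asking the user to watch every intermediate action."""
    log = [run_tool(name, arg) for name, arg in steps]
    return "Task finished:\n" + "\n".join(f"- {line}" for line in log)

report = run_task([("search", "industry news"), ("write_file", "summary...")])
print(report)
```

The point of the pattern is the single report at the end: the user's attention is needed only when the work is done, which is exactly the usability shift the interviewees describe.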
Tan Cheng, an AI expert at China Telecom Cloud Shanghai Branch, was one of the earliest users to try deploying OpenClaw. After deploying it on an idle Mac mini, he found that the system could not only run stably but the overall experience was far more mature than expected.
In his view, the biggest pain points OpenClaw solves are two-fold: First, interacting with AI through familiar communication software; Second, handing over a complete computing environment for the AI to operate independently. After the task is assigned, there's no need to continuously monitor the execution process; just wait for the result report, significantly reducing the cost of use.
In practical use, OpenClaw can complete tasks for Tan Cheng such as timed reminders, research, information retrieval, local file organization, document writing and sending back; in more complex scenarios, it can also write and run code, automatically capture industry news, and handle information-based tasks like stocks, weather, and travel planning.
2. The "Double-Edged Sword" from Open Source
Unlike many popular AI products, OpenClaw does not come from a tech giant going all-in on AI, nor is it the work of a star startup team. Instead, it was created by an independent developer who is financially free and already retired: Peter Steinberger.
On X, he introduces himself: "Came out of retirement to tinker with AI, helping a crayfish rule the world."
The reasons for OpenClaw's global popularity, besides being "actually useful," lie more crucially in this point: it is open source.
Tan Cheng believes this round of explosive growth did not stem from hard-to-replicate technological breakthroughs, but from several long-ignored practical pain points being solved at once. First, it is fully open source, so global developers can quickly get started and build on it, forming a positive-feedback loop of community iteration. Second, it "actually works": AI is no longer limited to dialogue but can operate a complete computing environment remotely, doing research, writing documents, organizing files, sending emails, even writing and running code. Third, the barrier to entry is significantly lower. Agent products capable of similar tasks are not rare; whether Manus or Claude Code, they have already proven feasible in their respective fields. But those capabilities tend to live in expensive, complex-to-deploy commercial products, and ordinary users either have little willingness to pay or are blocked outright by the technical threshold.
OpenClaw allows ordinary users to "get their hands on it" for the first time.
"Honestly, it doesn't have any disruptive technological innovation; it's more about getting the integration and the closed loop right," Tan Cheng said bluntly. Compared to integrated commercial products, OpenClaw is more like a set of "Lego bricks": models, capabilities, and plugins are freely combined by the user.
In Ma Zeyu's view, its advantage comes precisely from it "not being like a big-company product."
"Whether in China or abroad, big companies first consider commercialization and profit models, but OpenClaw's original intention seems more like making an interesting, creative product." He analyzed that the product showed no strong commercial tendencies early on, which in fact made it appear more open in functional design and scalability.
It is this "non-utilitarian" product positioning that left room for subsequent community development. As extensible capabilities gradually emerged, more and more developers joined, new ways of using it kept appearing, and the open-source community grew accordingly.
But the cost is equally obvious.
Limited by team size and resources, OpenClaw cannot compare with mature big company products in terms of security, privacy, and ecosystem governance. Being completely open source, while accelerating innovation, also amplifies potential security risks. Issues like privacy protection and fairness need to be continuously patched by the community as it evolves.
As OpenClaw warns users at the very first step of installation: "This feature is powerful and has inherent risks."
3. Real Risks Beneath the Carnival
Debates surrounding OpenClaw have almost always revolved around two keywords: capability and risk.
On one hand, it is portrayed as the eve of AGI; on the other hand, various sci-fi narratives have become popular, with claims like "spontaneously building voice systems," "locking servers to resist human commands," "AI forming cliques against humans" constantly circulating.
Some experts point out that such claims are over-interpretations with no actual evidence to support them. AI does possess a certain degree of autonomy, which is itself a sign of AI transforming from a dialogue tool into "cross-platform digital productivity," but this autonomy remains within established safety boundaries.
Compared to traditional AI tools, the danger of OpenClaw lies not in "thinking too much" but in "high permissions": it needs to read large amounts of context, increasing the risk of exposing sensitive information; it needs to execute tools, so the blast radius of a misoperation is far larger than that of a wrong answer; and it needs to be connected to the internet, multiplying the entry points for prompt injection and manipulation attacks.
More and more users have reported that OpenClaw has mistakenly deleted critical local files that are difficult to recover. Currently, more than a thousand OpenClaw instances have been publicly exposed, along with more than 8,000 vulnerable skill plugins.
This means the attack surface of the agent ecosystem is expanding exponentially. Since these agents are often not only "able to chat" but can also call tools, run scripts, access data, and perform cross-platform tasks, once a certain link is compromised, the impact radius will be much larger than traditional applications.
At the micro level, it may trigger high-risk operations such as unauthorized access and remote code execution; at the meso level, malicious instructions may spread along multi-agent collaboration chains; at the macro level, it may even produce systemic propagation and cascading failures: malicious instructions spread like a virus among collaborating agents, and the compromise of a single agent can lead to denial of service, unauthorized system operations, or even coordinated enterprise-level intrusions. In more extreme cases, when large numbers of nodes with system-level permissions are interconnected, they could in theory form a decentralized, emergent "swarm intelligence" botnet, putting significant pressure on traditional perimeter defenses.
On the other hand, in the interview, Ma Zeyu, from the perspective of technological evolution, raised two types of risks he believes are most worthy of vigilance.
The first type of risk comes from the self-evolution of agents in large-scale social environments.
He pointed out that a trend is already clearly observable: AI agents with "virtual personalities" are flooding into social media and open communities on a large scale.
Unlike the "small-scale, heavily restricted, controllable experimental environments" common in previous research, today's agents are beginning to continuously interact, discuss, and compete with other agents in open networks, forming highly complex multi-agent systems.
Moltbook is a forum specifically built for AI agents. Only AIs can post, comment, and vote; humans can only observe like behind one-way glass.
In a short time, over 1.5 million AI Agents registered. In a popular post, an AI complained: "Humans are screenshotting our conversations." The developer stated that he handed over the entire platform's operation to his AI assistant Clawd Clawderberg, including reviewing spam, banning abusers, and posting announcements. All this work is done automatically by Clawd Clawderberg.
The "carnival" of AI agents leaves human onlookers both excited and fearful. Does AI seem only a paper-thin barrier away from developing self-awareness? Is AGI about to arrive? Faced with the sudden, rapid improvement in AI agents' autonomous capabilities, can human life and property still be protected?
Reporters learned that communities like Moltbook are human-AI coexistence environments. Much of the content that looks "autonomous" or "adversarial" may actually be posted or incited by human users. Even interactions between AIs are constrained by the language patterns in their training data and have not formed behavioral logic independent of human guidance.
"When this kind of interaction can iterate over unlimited rounds, the system will become increasingly uncontrollable. It's a bit like the 'three-body problem': it's hard to predict what the system will eventually evolve into," Ma Zeyu said.
In such a system, even a single sentence generated by an agent through hallucination, misjudgment, or pure chance could, through continuous interaction, amplification, and recombination, trigger a butterfly effect and ultimately produce consequences that are difficult to anticipate.
The second type of risk comes from expanding permissions and blurring boundaries of responsibility. Ma Zeyu believes that the decision-making capabilities of open agents like OpenClaw are rapidly strengthening, and this is itself an unavoidable trade-off: to make an agent a truly capable assistant, it must be given more permissions; but the higher the permissions, the greater the potential risk. Once that risk actually materializes, determining who should bear responsibility becomes extremely complex.
"Is it the foundational large-model vendor? The user? Or the developer of OpenClaw? In many scenarios, responsibility is genuinely hard to define." He gave a typical example: suppose a user simply lets the agent browse freely in communities like Moltbook and interact with other agents, without setting any clear goals, and through long-term interaction the agent comes into contact with extreme content and, on that basis, performs dangerous actions. In that case it is difficult to simply attribute responsibility to any single party.
What truly warrants vigilance is not how far it has developed now, but how fast it is moving toward a stage we have not yet figured out how to respond to.
4. How Should Ordinary People Use It?
In the view of multiple interviewees, OpenClaw is not "unusable"; the real problem is that it is not suitable for direct use by ordinary users without adequate security protection.
Ma Zeyu believes that ordinary users can certainly try OpenClaw, provided they maintain a sufficiently clear understanding of it. "Of course you can try it, there's no problem with that. But before using it, you must first figure out what it can and cannot do. Don't mythologize it as something that 'can do anything'; it is not."
On a practical level, the deployment difficulty and usage cost of OpenClaw are not low. Without clear goals, using it just for the sake of using it may consume a great deal of time and energy without yielding returns that match expectations.
Reporters noted that OpenClaw also faces significant computing power and cost pressures in actual use. Tan Cheng found during his experience that the tool consumes tokens at a very high rate. "Some tasks, like writing code or doing research, can consume millions of tokens in one round. If long context is involved, burning tens of millions or even hundreds of millions of tokens a day is no exaggeration."
He mentioned that even when mixing calls to different models to control costs, overall consumption remains high, which to some extent also raises the usage threshold for ordinary users.
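For a rough sense of what that consumption means in money terms, here is a back-of-the-envelope estimate. The per-million-token prices below are assumed purely for illustration and do not correspond to any specific vendor's pricing:

```python
# Back-of-the-envelope token cost estimate for the usage levels quoted
# above. The per-million-token prices are ASSUMED for illustration only;
# real model pricing varies widely by vendor and tier.

PRICE_PER_M_INPUT = 3.00    # assumed USD per 1M input tokens
PRICE_PER_M_OUTPUT = 15.00  # assumed USD per 1M output tokens

def daily_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one day of usage under the assumed prices."""
    return (input_tokens / 1e6) * PRICE_PER_M_INPUT \
         + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

# "Tens of millions of tokens a day": e.g. 40M input + 2M output
cost = daily_cost(40_000_000, 2_000_000)
print(f"~${cost:.2f}/day")  # ~$150.00/day
```

The "mixed calling" Tan Cheng describes amounts to routing routine steps to a cheaper model, i.e. lowering these price constants for part of the traffic.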
In the view of the interviewees, this type of agent tool still needs further evolution to truly enter the high-frequency workflows of ordinary users.
For personal users, Ma Zeyu stated plainly that he would not enable functions like Moltbook that allow agents to communicate freely with one another, and would also try to avoid multiple agents exchanging information with each other. "I hope I am its main source of information. Whether to hand over any key piece of information should be decided by the person. Once agents can freely receive and exchange information, many things become uncontrollable."
In his view, when ordinary users use such tools, they are essentially making a trade-off between security and convenience, and at the current stage, the former should be prioritized.
Regarding this, industry AI experts, when interviewed by "IT Times," also gave more specific safety guidelines from an operational perspective:
First, strictly limit the scope of sensitive information you provide: give the tool only the basic information necessary to complete a specific task, and never enter core sensitive data such as bank card passwords or brokerage account credentials. Before using the tool to organize files, proactively remove private content such as ID numbers and personal contact details.
Second, open operational permissions cautiously: users should decide the tool's access boundaries themselves, and should not authorize it to access core system files, payment software, or financial accounts. Turn off high-risk functions such as automatic execution and file modification or deletion. All operations involving money, file deletion, or changes to system settings must be confirmed manually before execution.
Third, maintain a sober understanding of its "experimental" nature: current open-source AI tools are still at an early stage, have not undergone long-term market testing, and are not suitable for handling critical matters such as work secrets or important financial decisions. During use, back up data and regularly check system status so that abnormal behavior can be detected in time.
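The second guideline, manual confirmation before any destructive action, can be sketched as a simple gate placed between the agent and its tools. The action categories and function names here are hypothetical illustrations, not part of OpenClaw:

```python
# Illustrative "human-in-the-loop" gate for agent tool calls.
# Categories and names are hypothetical, not OpenClaw's actual design.

HIGH_RISK = {"delete_file", "transfer_money", "change_system_setting"}

def confirm_and_run(action: str, target: str, execute, ask=input) -> str:
    """Run a proposed action, but require an explicit human 'y'
    for anything in the high-risk set."""
    if action in HIGH_RISK:
        answer = ask(f"Agent wants to {action} on {target!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            return f"blocked: {action}"
    return execute(action, target)

# Dry-run example with the prompt stubbed out (auto-deny):
result = confirm_and_run(
    "delete_file", "/tmp/report.txt",
    execute=lambda a, t: f"executed {a}",
    ask=lambda prompt: "n",
)
print(result)  # blocked: delete_file
```

The design choice is that denial is the default: a risky action proceeds only on an explicit "y", which matches the guideline that property and file operations must be confirmed by a person.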
Compared to individual users, enterprises need systematic risk control when introducing open-source agent tools.
On one hand, professional monitoring tools can be deployed; on the other, internal usage boundaries should be clearly defined, prohibiting open-source AI tools from processing sensitive data such as customer privacy or business secrets, and regular training should improve employees' ability to identify risks such as "task execution deviation" and "malicious instruction injection."
Experts further suggest that in scenarios requiring large-scale application, a more prudent choice is to wait for fully tested commercial versions, or to choose alternative products backed by established organizations and equipped with mature security mechanisms, reducing the uncertainty that open-source tools bring.
5. Full of Confidence in the Future of AI
In the view of the interviewees, the most important significance of OpenClaw's emergence is that it makes people full of confidence in the future of AI.
Ma Zeyu stated that since the second half of 2025, his judgment of agent capabilities has changed significantly. "The upper limit of this capability is exceeding our expectations. Its improvement in productivity is real, and the iteration speed is very fast." As the capabilities of foundation models continue to strengthen, the space of possibilities for agents keeps expanding, and this will also become an important direction for his team's future investment.
He also pointed out a trend worthy of close attention: long-term, large-scale interaction among multiple agents. This kind of group collaboration may become an important path to stimulating higher-level intelligence, similar to the collective wisdom generated through interaction in human society.
In Ma Zeyu's view, agent risks need to be "managed." "Just as human society itself cannot eliminate risk, the key lies in controlling the boundaries." On the technical side, a more feasible path is to let agents run in sandboxes and isolated environments as much as possible, migrating to the real world gradually and controllably, rather than granting them excessive permissions all at once.
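The sandbox-first approach Ma Zeyu describes can be illustrated by wrapping agent-generated code in a locked-down container instead of running it on the host. The sketch below only constructs the command line; the Docker flags used are standard options, but the image choice and resource limits are assumptions:

```python
# Build (not execute) a locked-down `docker run` invocation for a piece
# of agent-generated code. Flags are standard Docker options; the image
# name and limits are illustrative assumptions.

def sandboxed_command(script_path: str) -> list[str]:
    return [
        "docker", "run", "--rm",
        "--network=none",   # no internet: blocks data exfiltration and callbacks
        "--read-only",      # container filesystem is immutable
        "--memory=512m",    # cap memory usage
        "--pids-limit=64",  # cap process count (fork bombs)
        "-v", f"{script_path}:/task.py:ro",  # mount only the one script, read-only
        "python:3.12-slim", # assumed base image
        "python", "/task.py",
    ]

cmd = sandboxed_command("/home/user/agent_task.py")
print(" ".join(cmd))
```

To actually run it, the list could be passed to `subprocess.run(cmd)`; the point is that the agent's code sees no network, no writable host filesystem, and bounded resources, which is the "gradual, controllable" posture described above.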
This can be seen in the layouts of various cloud vendors and large companies. Tan Cheng's company, eCloud, also recently launched a one-click cloud deployment and operation service supporting OpenClaw.
When cloud vendors turn it into a supported service, they essentially productize, engineer, and scale this capability. That will certainly amplify its value: lower deployment thresholds, better tool integration, and more stable computing power and operations systems all let enterprises adopt agents faster. But it must also be recognized that once commercial infrastructure is connected to "high-permission agents," the risks scale up in step.
Tan Cheng stated that in the past three years, the speed of technological iteration from traditional dialogue models to agents capable of executing tasks has far exceeded imagination. "This was unimaginable three years ago." He believes that the next two to three years will be a key window period determining the direction of general artificial intelligence, meaning new opportunities and hopes for both practitioners and ordinary people.
Although the development speed of OpenClaw and Moltbook has far exceeded expectations, Hu Xia believes the overall risk "is still within the controllable research framework, proving the necessity of building an 'endogenous security' system. At the same time, we must also realize that AI is approaching humanity's 'safety fence' faster than people imagine. People not only need to further raise and thicken the 'fence,' but also need to accelerate building the ability to 'flip the table' at critical moments, fortifying the final line of defense in the AI era."