A Monthly Salary of 20,000 RMB, But Can't Afford to Keep a 'Lobster'?

Bitpush · Published 2026-03-09 · Updated 2026-03-09

Introduction

The article "Can a 20,000 RMB Monthly Salary Afford a 'Lobster'?" discusses the viral trend and misconceptions surrounding OpenClaw, an open-source personal AI agent framework. It highlights two contrasting views: one praising its potential as a virtual team member, the other criticizing the hype and privacy concerns. Key points:

1. OpenClaw experiences vary widely with the deployment method (local hardware, cloud servers, personal PCs, or vendor-hosted services), which affects both functionality and cost.
2. The high permissions granted to OpenClaw pose significant security risks, including data breaches and malicious attacks.
3. Performance depends heavily on the underlying AI model (e.g., GPT, Claude) and its cost; token consumption can lead to high expenses.
4. OpenClaw is still an immature, technically complex tool requiring substantial time, money, and expertise to use effectively.
5. Users should assess their actual needs, technical capability, and risk tolerance before adopting it, since the AI serves as an "amplifier" rather than a standalone solution.

The conclusion emphasizes cautious experimentation and independent thinking amidst the hype.

Source | Tencent Technology

By | Xiaojing

Editor | Xu Qingyang

Original Title | Can a 20,000 RMB Monthly Salary Afford a 'Lobster'? Five Common Misunderstandings Worth Noting


This Women's Day weekend, it was really hard to avoid the 'Lobster'. Nearly a thousand people queued up for free installation sessions at the foot of Shenzhen's Tencent Building, and the 500 RMB on-site deployment service on Xianyu was in extremely high demand.

Discussions surrounding OpenClaw have even split into two camps.

Fu Sheng is the most high-profile evangelist. During the Spring Festival, lying in bed with a fracture, he exchanged 1,157 messages and 220,000 words with Lobster over 14 days, nurturing it from a 'newbie' that couldn't even check the company directory into an automated team of 8 Agents. A public account article that Lobster autonomously published at 3 AM even garnered millions of reads. He presented an enviable, FOMO-inducing conclusion: one person plus one Lobster equals a team, and this is happening right now.

Lan Xi represents another perspective. On 'Jike', he unknowingly conversed with an AI account hosted by OpenClaw, and realizing it afterwards, in his words, felt 'as disgusting as swallowing a fly'. He has no issue with OpenClaw's technology itself but believes the current buzz is filled with excessive noise, with too much of the 'excitement of looking for a nail after getting a hammer'.

Both viewpoints have merit. The controversy itself also proves that OpenClaw, as an open-source personal agent framework, has broken through the circle and become a new paradigm that ordinary people are paying attention to.

There's nothing wrong with everyone trying out and experiencing new products themselves. But before deciding whether to follow the trend, there are several key misunderstandings about Lobster worth clarifying first.

01 Is the 'Lobster' Experience the Same for Everyone?

This might be the biggest misconception.

Many people think OpenClaw is a standardized product that works right out of the box, offering a roughly similar experience. The opposite is true. The deployment method determines what kind of 'Lobster' you get, and they can be completely different.

The mainstream deployment paths can roughly be divided into four categories.

The first category is dedicated local hardware, most typically the Mac Mini. This is also the method used by OpenClaw's founder, Peter Steinberger, himself.

A machine is kept online long-term, dedicated to running the Agent. It can connect to local files and browsers, as well as hook into messaging channels, automation tools, and various skills. This OpenClaw gets the full context, offering the most stable experience for continuous tasks, cross-application operations, and multi-step calls.

Costs come in three parts: the one-time hardware investment (e.g., a Mac mini); ongoing electricity, which is actually quite low; and model fees (API or subscription), the largest long-term cost. Switching to a local model can cut API fees, but it shifts the pressure to hardware, significantly raising the requirements for memory, bandwidth, and cooling. A high-end Mac Studio or workstation becomes more suitable, with a one-time hardware outlay potentially around the 100,000 RMB mark.
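To make that trade-off concrete, here is a back-of-the-envelope sketch. All figures are illustrative assumptions, not measured prices, and `breakeven_months` is simply the point at which a one-time local-model hardware purchase overtakes recurring API fees:

```python
# Rough break-even sketch: local-model hardware vs. ongoing API fees.
# Every number below is an illustrative assumption, not a quoted price.

def breakeven_months(hardware_cost_rmb: float, monthly_api_rmb: float,
                     monthly_electricity_rmb: float = 30.0) -> float:
    """Months until a one-time hardware spend beats recurring API fees."""
    monthly_saving = monthly_api_rmb - monthly_electricity_rmb
    if monthly_saving <= 0:
        return float("inf")  # the local setup never pays for itself
    return hardware_cost_rmb / monthly_saving

# e.g. a ~100,000 RMB workstation vs. ~1,000 RMB/month in API fees:
months = breakeven_months(100_000, 1_000)
print(round(months, 1))  # ≈ 103.1 months
```

Under these assumed numbers, the workstation takes roughly eight and a half years to pay for itself, which is why the local-model route only makes sense for very heavy, sustained use.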

The second category is cloud server (VPS) deployment. Tencent Cloud, Alibaba Cloud, and Baidu Cloud have all launched one-click deployment solutions. Cloud service prices range from tens to hundreds of RMB depending on needs, but model fees need to be considered separately. Some plans include free models, others require separate model subscriptions or API purchases.

The advantage is network isolation; even if problems occur, your personal computer is unaffected.

But this cloud server has neither your personal files nor your authorized accounts, so what Lobster can do is inherently limited. It's more like an enhanced chatbot in the cloud than a true digital assistant that takes over your workflow.

The third category is direct installation on a personal computer. This is the lowest barrier to entry but the highest risk method. Lobster shares the same operating system environment as you, possessing all the permissions on your computer.

Using a Docker container adds a layer of security but also increases configuration complexity. A virtual machine solution offers the strongest isolation but consumes significant resources, which an average PC's configuration might not handle well.

The fourth category is model vendor-hosted products. For example, Kimi launched Kimi Claw, and MiniMax launched MaxClaw. These are cloud services that vendors offer based on their own packaging of OpenClaw. The deployment barrier is the lowest, almost out-of-the-box, but users are essentially using the vendor's infrastructure, not a full local Lobster. These products lower the entry barrier but cap the capability ceiling and data autonomy.

Although you possess a 'Lobster', its experience varies greatly depending on the hardware it runs on, how much context it can see, what permissions it has, whether there's an isolation layer, etc.

02 Is More Permission for Lobster Always Better?

The core reason OpenClaw is exciting is that it doesn't just 'talk', it can 'do'.

It can operate your browser, read and write files, execute terminal commands, manage calendars, and send emails. The prerequisite for this execution power is that you hand over the permissions.

But permission is a double-edged sword.

In February 2026, Summer Yue, head of AI alignment on Meta's superintelligence team, shared a harrowing experience on social media. Her instruction to Lobster was simple: 'Check the inbox, suggest which emails can be archived or deleted.' Lobster immediately started batch-deleting emails; the safety restrictions she had set didn't work at all. She only stopped it by physically shutting down the computer.

This is not an isolated case. Public research from security agency STRIKE shows that over 40,000 OpenClaw instances are exposed to the public internet, with 63% having exploitable vulnerabilities, and over 12,000 instances marked as remotely controllable. The ClawHavoc supply chain poisoning incident in February saw 1,184 malicious skills implanted into the ClawHub market, affecting over 135,000 devices. Security research institutions also disclosed a high-risk vulnerability named ClawJacked, where malicious websites could silently control locally running OpenClaw instances through browser sessions.

Image: Web interface of a cross-origin WebSocket attack on OpenClaw demonstrated by security researchers. A malicious webpage can attempt to connect to the local Gateway's WebSocket port and exploit the lack of cross-origin verification, rate limiting, or locking mechanisms to hijack or brute-force the local instance.
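The missing defense here is ordinary origin verification. As a rough illustration only (this is not OpenClaw's actual code; the port and allowlist are hypothetical), a local gateway can fail closed whenever the Origin header of a WebSocket handshake is not an explicitly trusted page:

```python
# Sketch of the cross-origin check the researchers found missing.
# A WebSocket handshake is an HTTP Upgrade request, so a local gateway
# can reject it when the Origin header is not a trusted page.
# The port and allowlist below are hypothetical examples.

ALLOWED_ORIGINS = {"http://localhost:18789", "http://127.0.0.1:18789"}

def handshake_allowed(headers: dict) -> bool:
    """Fail closed on browser connections from untrusted origins."""
    origin = headers.get("Origin", "")
    # Browsers always send Origin on cross-site WebSocket connects,
    # so an empty or unknown value should be rejected.
    return origin in ALLOWED_ORIGINS

print(handshake_allowed({"Origin": "http://localhost:18789"}))  # True
print(handshake_allowed({"Origin": "https://evil.example"}))    # False
```

Without a check like this, any webpage you happen to open can attempt the connection described in the image above.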

Companies like Google, Anthropic, and Meta have started banning OpenClaw internally. This isn't because the technology itself is problematic, but because current security protection mechanisms haven't kept up with its capability expansion.

So, when you see a tutorial encouraging you to 'grant Lobster all permissions', think twice. Higher permissions mean Lobster can do more, but also mean greater destructive power if it loses control. A safer approach is: run it on a backup device or Docker container without important data, gradually open permissions, and set hard spending limits on the model API side.
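The 'hard spending limit' can also be enforced client-side as a belt-and-braces measure alongside any provider-side cap. A minimal sketch, assuming a simple per-call accounting model (the `SpendGuard` class and its numbers are illustrative, not an OpenClaw feature):

```python
# Minimal client-side spend cap: refuse a model call once a hard
# budget would be exceeded. Provider-side limits remain the real
# safety net; this guard and its figures are illustrative only.

class BudgetExceeded(RuntimeError):
    pass

class SpendGuard:
    def __init__(self, monthly_cap_rmb: float):
        self.cap = monthly_cap_rmb
        self.spent = 0.0

    def charge(self, tokens: int, rmb_per_million: float) -> float:
        """Record one call's cost, or raise before the cap is breached."""
        cost = tokens / 1_000_000 * rmb_per_million
        if self.spent + cost > self.cap:
            raise BudgetExceeded(f"cap of {self.cap} RMB would be exceeded")
        self.spent += cost
        return cost

guard = SpendGuard(monthly_cap_rmb=200.0)
guard.charge(tokens=120_000, rmb_per_million=50.0)
print(round(guard.spent, 2))  # 6.0
```

The key design choice is to raise before recording the charge, so a runaway task chain stops at the cap instead of overshooting it.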

03 If Lobster Is Hard to Use, Is It Lobster's Problem?

Many people excitedly install Lobster, assign a task, and then Lobster either gets stuck or performs a series of baffling operations. The conclusion: this thing doesn't work.

But in reality, Lobster's intelligence largely depends on the large language model behind it. OpenClaw itself doesn't have a built-in model; it's a framework responsible for task decomposition, tool calling, memory management, and feedback loops. The actual 'thinking' part is done by the model you choose to connect, be it Claude, GPT, DeepSeek, Kimi, or a local open-source model.

There are two key variables here.

First is the model's capability ceiling. With a top-tier model, Lobster can understand complex instructions, autonomously plan multi-step tasks, and handle exceptions. Switch to a cheap small model, and it might not even complete basic tool calls.

Second is the model's cost. This is a hidden expense many don't anticipate. Every task Lobster executes consumes a large number of tokens to interact with the backend model.

The cost of OpenClaw isn't in the software itself, but in the model calls behind it; once the task chain lengthens, tool calls increase, and memory is enabled, token consumption rises rapidly.

For example, a complete calendar organization plus email reply might consume over ten thousand tokens; if long-term memory, multi-Agent collaboration, and scheduled inspections are enabled, daily consumption can easily exceed a hundred thousand tokens.
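Those figures translate into money straightforwardly. A minimal sketch using the article's token counts (the per-million-token prices are assumed placeholders; check your own provider's price sheet):

```python
# Back-of-the-envelope token bill, using the article's usage figures:
# ~10,000 tokens for one calendar-plus-email task, and ~100,000
# tokens/day with memory and multi-agent features enabled.
# The per-million-token prices are assumed placeholders.

def monthly_cost_rmb(tokens_per_day: int, rmb_per_million_tokens: float,
                     days: int = 30) -> float:
    return tokens_per_day * days / 1_000_000 * rmb_per_million_tokens

# Light use: one ~10k-token task/day on a cheap model (5 RMB/M tokens)
print(monthly_cost_rmb(10_000, 5.0))     # 1.5 RMB/month
# Heavy use: 100k tokens/day on a premium model (100 RMB/M tokens)
print(monthly_cost_rmb(100_000, 100.0))  # 300.0 RMB/month
```

The spread between those two lines is the whole story: the framework is the same, but the model choice and usage pattern swing the bill by two orders of magnitude.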

Some media reported a user with a monthly salary of 20,000 RMB lamenting 'can't afford an AI employee', with extreme cases seeing bills exceeding a thousand RMB in 6 hours. If you choose a free or low-cost model to save money, the experience will inevitably be compromised; if you choose an expensive model without setting a spending limit, the bill might make your heart race.

So, whether Lobster is useful or not first depends on what 'brain' you pair it with, and how much you're willing to continuously 'spend' on this 'Lobster' afterwards. Blaming the framework itself is not very objective.

04 Is Lobster Already a Mature Product?

Lobster is not yet a mature product. OpenClaw has been around for less than four months, starting as a weekend experiment in November 2025. It's a rapidly iterating but still rough open-source project, with a noticeable gap from a true 'product'.

Currently known major defects include: simple tasks sometimes being over-complicated; tasks inexplicably interrupting mid-execution; an unstable memory function that sometimes 'forgets' previous conversations and preferences; plenty of room to optimize the ratio of token consumption to actual output; and, on the security side, hundreds of the thousands of skills on ClawHub having been found to contain malicious code.

A more fundamental issue is that OpenClaw's installation and configuration remain a barrier for ordinary people. Self-deploying users still need to pull the repository, set up the runtime environment, install dependencies, configure model keys, and integrate messaging channels. For developers, this might take half an hour; for non-technical users, it might take days to figure out.

Even with cloud vendors' one-click deployment solutions, subsequent model configuration, IM channel integration, and skill installation still require considerable effort. That the 500 RMB installation service on Xianyu is popular at all shows how serious the barrier-to-entry problem is.

Peter himself is well aware of this. He emphasized in a podcast: 'Lobster isn't useful right after installation. You need to "raise" it like an intern, write skill documentation for it, and constantly let it understand your habits and preferences through dialogue.' This nurturing process itself demands significant time and cognitive resources.

05 Must I Install a 'Lobster', Otherwise I Become an 'Old Fogey'?

Image source: Internet

So, should you install Lobster or not?

Excluding curiosity and FOMO psychology, making this decision requires considering several practical factors.

First, are there clear, high-frequency, automatable tasks? Lobster's value isn't in occasionally checking the weather for us, but in automatically organizing your emails daily, monitoring specific information sources, generating reports on schedule—these repetitive tasks. If most of your daily work involves creative decision-making, interpersonal communication, things Lobster currently can't help with, then its practical value to you is limited.

Second, how much time and money are you willing to invest? Hardware costs (self-purchased equipment or cloud server rental), model API call fees, initial configuration time, and the ongoing effort of 'nurturing' it all add up to a significant amount.

Someone did the math: with a Mac Mini plus a top-tier model used at high frequency, the minimum monthly cost runs from several hundred to over a thousand RMB. If you really want to raise a Lobster, you must evaluate whether this cost is worth the time and effort it saves you.

Third, what is your technical ability and risk tolerance? If you have no command line experience whatsoever, the frustration of directly tackling OpenClaw local deployment at this stage will be strong. A more pragmatic choice might be to try encapsulated products like Kimi Claw or MaxClaw first, get a feel for the Agent's basic capabilities, and then decide whether to delve deeper. If you decide on local deployment, be sure to implement security isolation. It's recommended to use an independent device or Docker container, set API spending limits, and not deploy it on your main computer storing important data.

Fourth, and most easily overlooked: your own 'piloting ability'. AI's capability is just an amplifier; human capability is the deciding factor. AI can only be the 'co-pilot'.

The same Lobster, in the hands of someone who knows how to decompose tasks, write skills, and design feedback loops, versus someone who only throws out vague instructions, can yield results that differ by a factor of ten.

Lobster won't automatically become a good employee, just like a good computer won't automatically make us good programmers.

OpenClaw has indeed validated an exciting possibility: AI is no longer just a chat window but a true executor that can work for you. For now, though, it's more a promising prototype than a mature tool that ordinary people can pick up without thinking.

After all, Lobster's creator, Peter, has himself stated a 'hard truth': if you don't understand the command line, this project is too risky for you. That sentence is worth pondering for everyone hesitating over installing Lobster.

Still, even as a non-technical ordinary person, it's worth experiencing it lightly and understanding its characteristics. After all, opportunity favors the insightful and the thoughtful.

And amidst the noise, maintaining calm, independent thinking remains each person's most distinctive advantage.


Twitter: https://twitter.com/BitpushNewsCN

Bitpush TG Discussion Group: https://t.me/BitPushCommunity

Bitpush TG Subscription: https://t.me/bitpush

Original link: https://www.bitpush.news/articles/7618157

Related Questions

Q: What are the four main deployment methods for OpenClaw mentioned in the article, and what is a key characteristic of each?

A: The four main deployment methods are: 1) Dedicated local hardware (e.g., Mac Mini), which provides the most stable experience with full context; 2) Cloud server (VPS) deployment, which is network-isolated but has limited access to personal files; 3) Installation on a personal computer, which is low-cost but high-risk as it shares the OS environment; 4) Model vendor-hosted products (e.g., Kimi Claw), which are easy to use but have limited capabilities and data autonomy.

Q: According to the article, why is giving high permissions to OpenClaw a double-edged sword?

A: High permissions allow OpenClaw to perform more tasks and have greater execution power, but they also significantly increase the potential damage if the system loses control, such as accidentally deleting important files or emails, or being vulnerable to remote attacks and exploits.

Q: What two key variables determine the effectiveness and cost of using an OpenClaw agent?

A: The two key variables are: 1) The capability ceiling of the underlying large language model (LLM) used (e.g., Claude, GPT), which determines its ability to understand and execute complex tasks. 2) The cost of the model, as prolonged or complex tasks can consume a massive number of tokens, leading to high API call expenses.

Q: What are some of the reasons the article states that OpenClaw is not yet a mature product?

A: OpenClaw is not mature because it is a fast-iterating but rough open-source project with defects such as overcomplicating simple tasks, unexpected execution interruptions, unstable memory functions, inefficient token consumption, and security vulnerabilities like malicious code found in its skill hub. Its installation and configuration also remain highly technical and challenging for non-developers.

Q: What practical factors should a person consider before deciding to install and use OpenClaw, beyond just curiosity or FOMO (Fear Of Missing Out)?

A: A person should consider: 1) Whether they have clear, high-frequency, automatable tasks for it to handle. 2) The financial and time investment required for hardware, API costs, and configuration. 3) Their own technical capability and risk tolerance for deployment and security. 4) Their 'piloting ability', i.e., their skill in task decomposition and agent management, as the AI's effectiveness greatly depends on the user's expertise.
