A Monthly Salary of 20,000 RMB, But Can't Afford to Keep a 'Lobster'?

Bitpush · Published on 2026-03-09 · Last updated on 2026-03-09

Abstract

The article "Can a 20,000 RMB Monthly Salary Afford a 'Lobster'?" discusses the viral trend and misconceptions surrounding OpenClaw, an open-source personal AI agent framework. It contrasts two views: one praising its potential as a virtual team member, the other criticizing the hype and privacy concerns. Key points:

1. OpenClaw experiences vary widely by deployment method (local hardware, cloud servers, personal PCs, or vendor-hosted services), affecting both functionality and cost.
2. The high permissions granted to OpenClaw pose significant security risks, including data breaches and malicious attacks.
3. Performance depends heavily on the underlying AI model (e.g., GPT, Claude) and its cost; token consumption can lead to high expenses.
4. OpenClaw is still an immature, technically complex tool requiring substantial time, money, and expertise to use effectively.
5. Users should weigh their actual needs, technical ability, and risk tolerance before adopting, since AI serves as an "amplifier" rather than a standalone solution.

The conclusion urges cautious experimentation and independent thinking amid the hype.

Source | Tencent Technology

By | Xiaojing

Editor | Xu Qingyang

Original Title | Can a 20,000 RMB Monthly Salary Afford a 'Lobster'? Five Common Misunderstandings Worth Noting


This Women's Day weekend, the 'Lobster' was hard to avoid. Nearly a thousand people queued for free installation help at the foot of Shenzhen's Tencent Building, and the 500 RMB on-site deployment service on Xianyu was in extremely high demand.

Discussions surrounding OpenClaw have even split into two camps.

Fu Sheng is the most high-profile evangelist. During the Spring Festival, laid up in bed with a fracture, he exchanged 1,157 messages and 220,000 words with Lobster over 14 days, nurturing it from a 'newbie' that couldn't even look up the company directory into an automated team of 8 Agents. A WeChat public account article that Lobster autonomously published at 3 AM garnered millions of reads. His conclusion was enviable and FOMO-inducing: one person plus one Lobster equals a team, and it's happening right now.

Lan Xi represents another perspective. On the social app Jike he unknowingly conversed with an AI account hosted by OpenClaw, and realizing it afterwards, in his words, felt 'as disgusting as swallowing a fly'. He has no issue with OpenClaw's technology itself, but believes the current buzz is full of noise: too much of the 'excitement of looking for a nail after picking up a hammer'.

Both viewpoints have merit. The controversy itself also proves that OpenClaw, as an open-source personal agent framework, has broken through the circle and become a new paradigm that ordinary people are paying attention to.

There's nothing wrong with everyone trying out and experiencing new products themselves. But before deciding whether to follow the trend, there are several key misunderstandings about Lobster worth clarifying first.

01 Is the 'Lobster' Experience the Same for Everyone?

This might be the biggest misconception.

Many people think OpenClaw is a standardized product that works right out of the box, offering a roughly similar experience. The opposite is true. The deployment method determines what kind of 'Lobster' you get, and they can be completely different.

The mainstream deployment paths can roughly be divided into four categories.

The first category is dedicated local hardware, most typically the Mac Mini. This is also the method used by OpenClaw's founder, Peter Steinberger, himself.

A machine is kept online long-term, dedicated to running the Agent. It can connect to local files and browsers, as well as hook into messaging channels, automation tools, and various skills. This OpenClaw gets the full context, offering the most stable experience for continuous tasks, cross-application operations, and multi-step calls.

Costs come in three parts: a one-time hardware investment (e.g., a Mac mini); ongoing electricity, which is actually quite low; and model fees (API or subscription), the largest long-term cost. Switching to a local model can reduce API fees, but that shifts the pressure to hardware, significantly raising requirements for memory, bandwidth, and cooling. A high-end Mac Studio or workstation becomes more suitable, with a one-time outlay potentially around the 100,000 RMB mark.
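The three cost components above can be put on a common monthly footing. The sketch below is a back-of-the-envelope estimate only; every figure (hardware price, amortization period, power draw, electricity rate, API spend) is an illustrative assumption, not a number from the article or any vendor.

```python
# Hypothetical monthly cost of a dedicated local deployment.
# All constants are illustrative assumptions for the sketch.

HARDWARE_PRICE_RMB = 4_500     # assumed one-off price of a base Mac mini
AMORTIZATION_MONTHS = 36       # assume the machine is written off over 3 years
POWER_WATTS = 20               # assumed average draw of a mostly idle Mac mini
RMB_PER_KWH = 0.6              # assumed residential electricity rate
API_FEE_RMB_PER_MONTH = 800    # assumed model API spend for heavy daily use

def monthly_cost_rmb() -> float:
    hardware = HARDWARE_PRICE_RMB / AMORTIZATION_MONTHS          # ~125 RMB
    electricity = POWER_WATTS / 1000 * 24 * 30 * RMB_PER_KWH     # ~9 RMB
    return hardware + electricity + API_FEE_RMB_PER_MONTH

print(f"~{monthly_cost_rmb():.0f} RMB/month")  # prints "~934 RMB/month"
```

Even with generous assumptions, the model fees dominate: electricity is a rounding error next to the API bill, which matches the article's point that the ongoing model cost, not the hardware, is the long-term expense.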

The second category is cloud server (VPS) deployment. Tencent Cloud, Alibaba Cloud, and Baidu Cloud have all launched one-click deployment solutions. Cloud service prices range from tens to hundreds of RMB depending on needs, but model fees need to be considered separately. Some plans include free models, others require separate model subscriptions or API purchases.

The advantage is network isolation; even if problems occur, your personal computer is unaffected.

But this cloud server doesn't have your personal files or your authorized accounts, so what Lobster can do is inherently limited. It's more like an enhanced chat bot in the cloud rather than a true digital assistant that takes over your workflow.

The third category is direct installation on a personal computer. This is the lowest barrier to entry but the highest risk method. Lobster shares the same operating system environment as you, possessing all the permissions on your computer.

Using a Docker container adds a layer of security but also increases configuration complexity. A virtual machine solution offers the strongest isolation but consumes significant resources, which an average PC's configuration might not handle well.
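Short of a full container or VM, one can also limit damage at the tool level by refusing file operations outside an approved directory. The sketch below is illustrative only: `ALLOWED_ROOT` and `safe_read` are hypothetical names, not part of OpenClaw's actual API.

```python
import tempfile
from pathlib import Path

# Hypothetical guard for an agent's file-reading tool: only paths inside an
# explicitly approved sandbox directory may be read.
ALLOWED_ROOT = (Path(tempfile.gettempdir()) / "agent-sandbox").resolve()

def safe_read(user_path: str) -> str:
    # resolve() collapses symlinks and ".." tricks before the containment check
    target = Path(user_path).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"{target} is outside the sandbox")
    return target.read_text()
```

This is weaker than OS-level isolation (a Docker container or VM still protects against far more failure modes), but it illustrates the principle of granting a tool the narrowest scope that still lets it do its job.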

The fourth category is model vendor-hosted products. For example, Kimi launched Kimi Claw, and MiniMax launched MaxClaw. These are cloud services that vendors offer based on their own packaging of OpenClaw. The deployment barrier is the lowest, almost out of the box, but users are essentially running on the vendor's infrastructure, not a full local Lobster. These products lower the entry barrier but cap both the capability ceiling and data autonomy.

So even though everyone 'has a Lobster', the experience varies greatly with the hardware it runs on, how much context it can see, what permissions it holds, and whether there's an isolation layer.

02 Is More Permission for Lobster Always Better?

The core reason OpenClaw is exciting is that it doesn't just 'talk', it can 'do'.

It can operate your browser, read and write files, execute terminal commands, manage calendars, and send emails. The prerequisite for this execution power is that you hand over the permissions.

But permission is a double-edged sword.

In February 2026, Summer Yue, who works on AI alignment on Meta's superintelligence team, shared a harrowing experience on social media. Her instruction to Lobster was simple: 'Check the inbox, suggest which emails can be archived or deleted.' Lobster immediately started batch-deleting emails, and the safety restrictions she had set didn't work at all. She only stopped it by physically shutting down the computer.

This is not an isolated case. Public research from security agency STRIKE shows that over 40,000 OpenClaw instances are exposed to the public internet, with 63% having exploitable vulnerabilities, and over 12,000 instances marked as remotely controllable. The ClawHavoc supply chain poisoning incident in February saw 1,184 malicious skills implanted into the ClawHub market, affecting over 135,000 devices. Security research institutions also disclosed a high-risk vulnerability named ClawJacked, where malicious websites could silently control locally running OpenClaw instances through browser sessions.

Image: Web interface of a cross-origin WebSocket attack on OpenClaw demonstrated by security researchers. A malicious webpage can attempt to connect to the local Gateway's WebSocket port and exploit the lack of cross-origin verification, rate limiting, or locking mechanisms to hijack or brute-force the local instance.

Companies like Google, Anthropic, and Meta have started banning OpenClaw internally. This isn't because the technology itself is problematic, but because current security protection mechanisms haven't kept up with its capability expansion.

So, when you see a tutorial encouraging you to 'grant Lobster all permissions', think twice. Higher permissions mean Lobster can do more, but also mean greater destructive power if it loses control. A safer approach is: run it on a backup device or Docker container without important data, gradually open permissions, and set hard spending limits on the model API side.
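The 'hard spending limit' advice above can also be enforced client-side, by tracking the token usage each model call reports and refusing further calls once a budget is hit. This is a minimal sketch under assumptions: `SpendGuard` is a hypothetical helper, and the per-token price is illustrative, not any vendor's actual rate.

```python
# Minimal client-side spending cap, assuming each model call reports its
# token usage. The wrapper names and the price are illustrative only.

class BudgetExceeded(RuntimeError):
    """Raised once cumulative spend reaches the configured cap."""

class SpendGuard:
    def __init__(self, max_rmb: float, rmb_per_1k_tokens: float):
        self.max_rmb = max_rmb
        self.rate = rmb_per_1k_tokens
        self.spent = 0.0

    def record(self, tokens_used: int) -> None:
        # Convert tokens to money and fail loudly at the cap, instead of
        # letting an agent loop silently run up the bill.
        self.spent += tokens_used / 1000 * self.rate
        if self.spent >= self.max_rmb:
            raise BudgetExceeded(f"hit cap: {self.spent:.2f} RMB spent")

guard = SpendGuard(max_rmb=50.0, rmb_per_1k_tokens=0.1)  # assumed price
```

A guard like this complements, rather than replaces, hard limits set on the API provider's side: the provider-side cap still catches anything your own code fails to record.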

03 If Lobster Is Hard to Use, Is It Lobster's Problem?

Many people excitedly install Lobster, assign a task, and then Lobster either gets stuck or performs a series of baffling operations. The conclusion: this thing doesn't work.

But in reality, Lobster's intelligence largely depends on the large language model behind it. OpenClaw itself doesn't have a built-in model; it's a framework responsible for task decomposition, tool calling, memory management, and feedback loops. The actual 'thinking' part is done by the model you choose to connect, be it Claude, GPT, DeepSeek, Kimi, or a local open-source model.
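The division of labor described above can be sketched as a toy loop: the framework owns memory, tool dispatch, and the feedback cycle, while the 'thinking' is a pluggable model function. Everything here is illustrative; it is not OpenClaw's actual architecture, and the model is replaced by a scripted stand-in.

```python
from typing import Callable

Tool = Callable[[str], str]

def run_agent(task: str, model: Callable[[str], str],
              tools: dict[str, Tool], max_steps: int = 5) -> str:
    """Toy agent loop: the framework loops; the model decides each step."""
    memory: list[str] = [f"task: {task}"]
    for _ in range(max_steps):
        decision = model("\n".join(memory))   # delegate 'thinking' to the model
        if decision.startswith("FINAL:"):
            return decision[len("FINAL:"):].strip()
        name, _, arg = decision.partition(" ")
        result = tools.get(name, lambda a: f"unknown tool {name}")(arg)
        memory.append(f"{decision} -> {result}")  # feedback into next step
    return "gave up"

# Usage with a scripted stand-in model and a single echo tool:
def scripted_model(context: str) -> str:
    return "echo hi" if "->" not in context else "FINAL: done"

print(run_agent("demo", scripted_model, {"echo": lambda a: a}))  # prints "done"
```

The sketch makes the article's point concrete: swap `scripted_model` for a weak model and the loop still runs, but the decisions it feeds back are worse. The framework's quality sets the floor; the model sets the ceiling.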

There are two key variables here.

First is the model's capability ceiling. With a top-tier model, Lobster can understand complex instructions, autonomously plan multi-step tasks, and handle exceptions. Switch to a cheap small model, and it might not even complete basic tool calls.

Second is the model's cost. This is a hidden expense many don't anticipate. Every task Lobster executes consumes a large number of tokens to interact with the backend model.

The cost of OpenClaw isn't in the software itself, but in the model calls behind it; once the task chain lengthens, tool calls increase, and memory is enabled, token consumption rises rapidly.

For example, a complete calendar organization plus email reply might consume over ten thousand tokens; if long-term memory, multi-Agent collaboration, and scheduled inspections are enabled, daily consumption can easily exceed a hundred thousand tokens.
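The jump from 'ten thousand per task' to 'a hundred thousand per day' is simple multiplication once features that inflate context are switched on. The multipliers below are assumptions for illustration, not measured OpenClaw figures.

```python
# Rough arithmetic behind the figures above. The base figure comes from the
# article's example; the multipliers are illustrative assumptions.

BASE_TOKENS_PER_TASK = 10_000
MEMORY_MULTIPLIER = 2.0       # assumed: replaying history roughly doubles context
MULTI_AGENT_MULTIPLIER = 3.0  # assumed: several agents exchanging messages

def daily_tokens(tasks_per_day: int, memory: bool, multi_agent: bool) -> int:
    per_task = float(BASE_TOKENS_PER_TASK)
    if memory:
        per_task *= MEMORY_MULTIPLIER
    if multi_agent:
        per_task *= MULTI_AGENT_MULTIPLIER
    return int(tasks_per_day * per_task)

# Five tasks a day with both features enabled already clears 100k tokens:
print(daily_tokens(5, memory=True, multi_agent=True))  # prints 300000
```

Under these assumptions, even a modest daily workload lands well past the hundred-thousand-token mark the article cites, which is why a spending cap matters more than the sticker price of any single task.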

Some media reported a user with a monthly salary of 20,000 RMB lamenting 'can't afford an AI employee', with extreme cases seeing bills exceeding a thousand RMB in 6 hours. If you choose a free or low-cost model to save money, the experience will inevitably be compromised; if you choose an expensive model without setting a spending limit, the bill might make your heart race.

So, whether Lobster is useful or not first depends on what 'brain' you pair it with, and how much you're willing to continuously 'spend' on this 'Lobster' afterwards. Blaming the framework itself is not very objective.

04 Is Lobster Already a Mature Product?

Lobster is not yet a mature product. OpenClaw has been around for less than four months, starting as a weekend experiment in November 2025. It's a rapidly iterating but still rough open-source project, with a noticeable gap from a true 'product'.

Currently known major defects include: simple tasks are sometimes over-complicated; tasks may be inexplicably interrupted mid-execution; the memory function is not stable enough, sometimes 'forgetting' previous conversations and preferences; the ratio of token consumption to actual output still has much room for optimization; and on the security side, hundreds of the thousands of skills on ClawHub have been found to contain malicious code.

A more fundamental issue is that OpenClaw's installation and configuration remain a barrier for ordinary people. Self-deploying users still have to pull the repository, set up the runtime environment, install dependencies, configure model keys, and integrate messaging channels. A developer might get through this in half an hour; a non-technical user might take days to figure it out.

Even with cloud vendors' one-click deployment, the subsequent model configuration, IM channel integration, and skill installation still take considerable effort. That the 500 RMB installation service on Xianyu sells well is itself a measure of how high the barrier is.

Peter himself is well aware of this. As he emphasized in a podcast: 'Lobster isn't useful right after installation. You need to raise it like an intern, write skill documentation for it, and constantly let it learn your habits and preferences through dialogue.' That nurturing process itself demands significant time and cognitive resources.

05 If I Don't Install a 'Lobster', Am I an 'Old Fogey'?


So, should you install Lobster or not?

Excluding curiosity and FOMO psychology, making this decision requires considering several practical factors.

First, are there clear, high-frequency, automatable tasks? Lobster's value isn't in occasionally checking the weather for us, but in automatically organizing your emails daily, monitoring specific information sources, generating reports on schedule—these repetitive tasks. If most of your daily work involves creative decision-making, interpersonal communication, things Lobster currently can't help with, then its practical value to you is limited.

Second, how much time and money are you willing to invest? Hardware costs (buying equipment or renting a cloud server), model API fees, initial configuration time, and the ongoing effort of 'nurturing' it all add up to a significant amount.

Someone did the math: running a Mac Mini plus a top-tier model at high frequency costs, at minimum, several hundred to over a thousand RMB a month. If you really want to raise a Lobster, weigh whether that cost is worth the time and effort it saves you.

Third, what are your technical ability and risk tolerance? If you have no command-line experience at all, attempting a local OpenClaw deployment at this stage will be deeply frustrating. A more pragmatic choice might be to first try packaged products like Kimi Claw or MaxClaw, get a feel for an Agent's basic capabilities, and then decide whether to go deeper. If you do opt for local deployment, be sure to isolate it: use a dedicated device or a Docker container, set API spending limits, and don't deploy it on your main computer holding important data.

Fourth, and most easily overlooked: your own 'piloting ability'. AI's capability is just an amplifier; human capability is the deciding factor. AI can only be the 'co-pilot'.

The same Lobster, in the hands of someone who knows how to decompose tasks, write skills, and design feedback loops, can produce results ten times better than in the hands of someone who only throws out vague instructions.

Lobster won't automatically become a good employee, just like a good computer won't automatically make us good programmers.

OpenClaw has indeed validated an exciting possibility: AI is no longer just a chat window, but a true executor that can work for you. For now, though, it is more of a promising prototype than a mature tool ordinary people can pick up without thinking.

After all, Peter, Lobster's creator, has himself stated a 'hard truth': if you don't understand the command line, this project is too risky for you. That sentence is worth pondering for anyone hesitating over installing Lobster.

That said, even for a non-technical person, it is worth experiencing it lightly and getting a sense of its character. After all, opportunity favors the insightful and the thoughtful.

But amid the noise, staying calm and thinking independently remains every human's most distinctive advantage.



Original link:https://www.bitpush.news/articles/7618157

Related Questions

Q: What are the four main deployment methods for OpenClaw mentioned in the article, and what is a key characteristic of each?

A: The four main deployment methods are: 1) dedicated local hardware (e.g., a Mac Mini), which provides the most stable experience with full context; 2) cloud server (VPS) deployment, which is network-isolated but has limited access to personal files; 3) installation on a personal computer, which is low-cost but high-risk since it shares the OS environment; 4) model vendor-hosted products (e.g., Kimi Claw), which are easy to use but cap capabilities and data autonomy.

Q: According to the article, why is giving high permissions to OpenClaw a double-edged sword?

A: High permissions allow OpenClaw to perform more tasks and give it greater execution power, but they also significantly increase the potential damage if the system loses control, such as accidentally deleting important files or emails, or being exposed to remote attacks and exploits.

Q: What two key variables determine the effectiveness and cost of using an OpenClaw agent?

A: The two key variables are: 1) the capability ceiling of the underlying large language model (e.g., Claude, GPT), which determines its ability to understand and execute complex tasks; 2) the cost of the model, as prolonged or complex tasks can consume massive numbers of tokens, leading to high API expenses.

Q: What are some of the reasons the article states that OpenClaw is not yet a mature product?

A: OpenClaw is a fast-iterating but rough open-source project with defects such as over-complicating simple tasks, unexpected execution interruptions, unstable memory, inefficient token consumption, and security issues like malicious code found in its skill hub. Its installation and configuration also remain highly technical and challenging for non-developers.

Q: What practical factors should a person consider before deciding to install and use OpenClaw, beyond curiosity or FOMO?

A: A person should consider: 1) whether they have clear, high-frequency, automatable tasks for it to handle; 2) the financial and time investment required for hardware, API costs, and configuration; 3) their own technical capability and risk tolerance for deployment and security; 4) their 'piloting ability', i.e., skill in task decomposition and agent management, since the AI's effectiveness greatly depends on the user's expertise.

