A Monthly Salary of 20,000 RMB, But Can't Afford to Keep a 'Lobster'?

Bitpush (比推) · Published 2026-03-09 · Last updated 2026-03-09

Introduction

The article "Can a 20,000 RMB Monthly Salary Not Afford a 'Lobster'?" discusses the viral trend and misconceptions surrounding OpenClaw, an open-source personal AI agent framework. It highlights two contrasting views: one praising its potential as a virtual team member, the other criticizing the hype and privacy concerns. Key points:

1. OpenClaw experiences vary widely by deployment method (local hardware, cloud servers, personal PCs, or vendor-hosted services), which affects both functionality and cost.
2. The high permissions granted to OpenClaw pose significant security risks, including data breaches and malicious attacks.
3. Performance depends heavily on the underlying AI model (e.g., GPT, Claude) and its associated costs; token consumption can lead to high expenses.
4. OpenClaw is still an immature, technically complex tool requiring substantial time, money, and expertise to use effectively.
5. Users should assess their actual needs, technical capability, and risk tolerance before adopting, as the AI serves as an "amplifier" rather than a standalone solution.

The conclusion emphasizes cautious experimentation and independent thinking amidst the hype.

Source | Tencent Technology

By | Xiaojing

Editor | Xu Qingyang

Original Title | Can a 20,000 RMB Monthly Salary Afford a 'Lobster'? Five Common Misunderstandings Worth Noting


This Women's Day weekend, the 'Lobster' was hard to avoid. Nearly a thousand people queued for free installation sessions at the foot of the Tencent Building in Shenzhen, and the 500 RMB on-site deployment service on Xianyu was in extremely high demand.

Discussions surrounding OpenClaw have even split into two camps.

Fu Sheng is the most high-profile evangelist. During the Spring Festival, bedridden with a fracture, he exchanged 1,157 messages and 220,000 words with Lobster over 14 days, nurturing it from a 'newbie' that couldn't even look up the company directory into an automated team of 8 Agents. An article Lobster autonomously published to a public account at 3 AM garnered millions of reads. He offered an enviable, FOMO-inducing conclusion: one person plus one Lobster equals a team, and it's happening right now.

Lan Xi represents the other camp. He unknowingly conversed with an OpenClaw-hosted AI account on Jike, and in his words, realizing it afterwards felt 'as disgusting as swallowing a fly'. He has no quarrel with OpenClaw's technology itself, but believes the current buzz is filled with excessive noise, with too much of the 'excitement of looking for a nail after getting a hammer'.

Both viewpoints have merit. The controversy itself also proves that OpenClaw, as an open-source personal agent framework, has broken through the circle and become a new paradigm that ordinary people are paying attention to.

There's nothing wrong with everyone trying out and experiencing new products themselves. But before deciding whether to follow the trend, there are several key misunderstandings about Lobster worth clarifying first.

01 Is the 'Lobster' Experience the Same for Everyone?

This might be the biggest misconception.

Many people think OpenClaw is a standardized product that works right out of the box, offering a roughly similar experience. The opposite is true. The deployment method determines what kind of 'Lobster' you get, and they can be completely different.

The mainstream deployment paths can roughly be divided into four categories.

The first category is dedicated local hardware, most typically the Mac Mini. This is also the method used by OpenClaw's founder, Peter Steinberger, himself.

A machine is kept online long-term, dedicated to running the Agent. It can connect to local files and browsers, as well as hook into messaging channels, automation tools, and various skills. This OpenClaw gets the full context, offering the most stable experience for continuous tasks, cross-application operations, and multi-step calls.

Costs come in three parts: the one-time hardware investment (e.g., a Mac mini); ongoing electricity, which is actually quite low; and model fees (API or subscription), the largest long-term cost. Switching to a local model can cut API fees, but that shifts the pressure to hardware, significantly raising requirements for memory, bandwidth, and cooling. A high-end Mac Studio or workstation becomes more suitable, with a one-time hardware outlay potentially around the 100,000 RMB mark.
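As a rough illustration, the three cost components above can be folded into one amortized monthly figure. Every number below is a hypothetical placeholder, not a real quote:

```python
def monthly_cost_rmb(hardware_price, lifespan_months, watts, kwh_price, model_fee_monthly):
    """Amortized monthly cost of a dedicated, always-on local 'Lobster' box.

    All inputs are illustrative assumptions: hardware_price spread over
    lifespan_months covers the one-time purchase, watts * 24h * 30d
    estimates the always-on electricity draw, and model_fee_monthly
    covers API or subscription fees (the largest long-term item).
    """
    hardware = hardware_price / lifespan_months
    electricity = watts / 1000 * 24 * 30 * kwh_price
    return hardware + electricity + model_fee_monthly

# Hypothetical numbers: a 4,500 RMB Mac mini kept for 36 months,
# drawing ~20 W on average at 0.6 RMB/kWh, plus 300 RMB/month in model fees.
print(round(monthly_cost_rmb(4500, 36, 20, 0.6, 300)))  # → 434
```

With assumptions like these, the always-on machine itself is cheap; the model fee dominates, which matches the article's point that model calls are the real long-term cost.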

The second category is cloud server (VPS) deployment. Tencent Cloud, Alibaba Cloud, and Baidu Cloud have all launched one-click deployment solutions. Cloud service prices range from tens to hundreds of RMB depending on needs, but model fees need to be considered separately. Some plans include free models, others require separate model subscriptions or API purchases.

The advantage is network isolation; even if problems occur, your personal computer is unaffected.

But this cloud server doesn't have your personal files or your authorized accounts, so what Lobster can do is inherently limited. It's more like an enhanced chat bot in the cloud rather than a true digital assistant that takes over your workflow.

The third category is direct installation on a personal computer. This is the method with the lowest barrier to entry but the highest risk: Lobster shares your operating system environment and holds every permission on your computer.

Using a Docker container adds a layer of security but also increases configuration complexity. A virtual machine solution offers the strongest isolation but consumes significant resources, which an average PC's configuration might not handle well.
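To make the isolation trade-off concrete, here is a minimal sketch of a locked-down `docker run` invocation, built as a Python list for clarity. The image name and mount path are assumptions; the flags themselves are standard Docker CLI options that limit what an escaped or misbehaving agent can touch:

```python
def hardened_docker_cmd(image="openclaw/agent:latest", workdir="/tmp/claw-sandbox"):
    """Build a locked-down `docker run` command for an agent sandbox.

    The image name and mount path are hypothetical placeholders; the
    flags are standard Docker options that shrink the blast radius.
    """
    return [
        "docker", "run", "--rm",
        "--cap-drop", "ALL",       # drop all Linux capabilities
        "--memory", "2g",          # cap memory so a runaway loop can't starve the host
        "--pids-limit", "256",     # bound the number of processes it can spawn
        "--read-only",             # immutable root filesystem
        "-v", f"{workdir}:/data",  # expose only one scratch directory, nothing else
        image,
    ]

print(" ".join(hardened_docker_cmd()))
```

The design choice is deny-by-default: the container sees one scratch directory and nothing of your real files, which is exactly the extra configuration complexity the article mentions.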

The fourth category is model vendor-hosted products, such as Kimi's Kimi Claw and MiniMax's MaxClaw. These are cloud services that vendors offer based on their own packaging of OpenClaw. The deployment barrier is the lowest, almost out-of-the-box, but users are essentially running on the vendor's infrastructure, not a full local Lobster. These products lower the entry barrier but cap the capability ceiling and data autonomy.

So even if two people both 'own a Lobster', their experiences can differ wildly depending on the hardware it runs on, how much context it can see, what permissions it holds, and whether there's an isolation layer.

02 Is More Permission for Lobster Always Better?

The core reason OpenClaw is exciting is that it doesn't just 'talk', it can 'do'.

It can operate your browser, read and write files, execute terminal commands, manage calendars, and send emails. The prerequisite for this execution power is that you hand over the permissions.

But permission is a double-edged sword.

In February 2026, Summer Yue, who leads AI alignment work on Meta's superintelligence team, shared a harrowing experience on social media. Her instruction to Lobster was simple: 'Check the inbox and suggest which emails can be archived or deleted.' Lobster immediately started batch-deleting emails, and the safety restrictions she had set didn't work at all. She only stopped it by physically shutting down the computer.

This is not an isolated case. Public research from security agency STRIKE shows that over 40,000 OpenClaw instances are exposed to the public internet, with 63% having exploitable vulnerabilities, and over 12,000 instances marked as remotely controllable. The ClawHavoc supply chain poisoning incident in February saw 1,184 malicious skills implanted into the ClawHub market, affecting over 135,000 devices. Security research institutions also disclosed a high-risk vulnerability named ClawJacked, where malicious websites could silently control locally running OpenClaw instances through browser sessions.

Image: Web interface of a cross-origin WebSocket attack on OpenClaw demonstrated by security researchers. A malicious webpage can attempt to connect to the local Gateway's WebSocket port and exploit the lack of cross-origin verification, rate limiting, or locking mechanisms to hijack or brute-force the local instance.
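The class of attack in the caption hinges on a local gateway accepting WebSocket handshakes from any origin. The standard mitigation is an Origin allow-list, sketched here in Python; the allow-list contents and function names are assumptions for illustration, not OpenClaw's actual code:

```python
# Hypothetical allow-list: only pages served from the local machine may connect.
ALLOWED_ORIGINS = {"http://localhost", "http://127.0.0.1"}

def should_accept(headers: dict) -> bool:
    """Reject cross-origin WebSocket handshakes to a local gateway.

    Browsers attach an Origin header to WebSocket upgrade requests, so a
    malicious third-party page cannot forge a local origin. Requests with
    no Origin header (non-browser clients) are rejected here as well,
    forcing them through an explicit auth-token path instead.
    """
    origin = headers.get("Origin")
    return origin in ALLOWED_ORIGINS

print(should_accept({"Origin": "https://evil.example"}))  # → False
```

This is exactly the 'cross-origin verification' whose absence the researchers exploited; pairing it with rate limiting and a lockout after failed attempts addresses the brute-force path too.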

Companies like Google, Anthropic, and Meta have started banning OpenClaw internally. This isn't because the technology itself is problematic, but because current security protection mechanisms haven't kept up with its capability expansion.

So, when you see a tutorial encouraging you to 'grant Lobster all permissions', think twice. Higher permissions mean Lobster can do more, but also mean greater destructive power if it loses control. A safer approach: run it on a spare device or in a Docker container holding no important data, open up permissions gradually, and set hard spending limits on the model API side.
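A 'hard spending limit' can also be enforced client-side, before requests ever reach the provider. A minimal sketch of a budget guard wrapped around model calls (the class, prices, and limit are illustrative assumptions; a real limit should additionally be set in the provider's billing console):

```python
class BudgetGuard:
    """Refuse further model calls once a hard budget is exhausted.

    Prices and limits here are hypothetical; the point is that the check
    happens *before* the call, so a runaway task chain stops at the cap
    instead of at the end-of-month bill.
    """

    def __init__(self, limit_rmb: float, price_per_1k_tokens: float):
        self.limit = limit_rmb
        self.price = price_per_1k_tokens
        self.spent = 0.0

    def charge(self, tokens: int) -> None:
        cost = tokens / 1000 * self.price
        if self.spent + cost > self.limit:
            raise RuntimeError("budget exhausted; blocking further model calls")
        self.spent += cost

guard = BudgetGuard(limit_rmb=100.0, price_per_1k_tokens=0.05)
guard.charge(20_000)  # a normal task: 1 RMB, allowed
```

A long-running agent would call `charge` before each model request and treat the exception as a hard stop, the programmatic version of 'physically shutting down the computer'.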

03 If Lobster Is Hard to Use, Is It Lobster's Problem?

Many people excitedly install Lobster, assign a task, and then Lobster either gets stuck or performs a series of baffling operations. The conclusion: this thing doesn't work.

But in reality, Lobster's intelligence largely depends on the large language model behind it. OpenClaw itself doesn't have a built-in model; it's a framework responsible for task decomposition, tool calling, memory management, and feedback loops. The actual 'thinking' part is done by the model you choose to connect, be it Claude, GPT, DeepSeek, Kimi, or a local open-source model.

There are two key variables here.

First is the model's capability ceiling. With a top-tier model, Lobster can understand complex instructions, autonomously plan multi-step tasks, and handle exceptions. Switch to a cheap small model, and it might not even complete basic tool calls.

Second is the model's cost. This is a hidden expense many don't anticipate. Every task Lobster executes consumes a large number of tokens to interact with the backend model.

The cost of OpenClaw isn't in the software itself, but in the model calls behind it; once the task chain lengthens, tool calls increase, and memory is enabled, token consumption rises rapidly.

For example, a complete calendar organization plus email reply might consume over ten thousand tokens; if long-term memory, multi-Agent collaboration, and scheduled inspections are enabled, daily consumption can easily exceed a hundred thousand tokens.
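Order-of-magnitude arithmetic like the above is easy to sketch. All figures here are illustrative assumptions, not measured OpenClaw numbers:

```python
def daily_tokens(tasks_per_day: int, tokens_per_task: int, overhead_ratio: float = 0.3) -> int:
    """Rough daily token estimate for an always-on agent.

    overhead_ratio is an assumed surcharge for memory recall, scheduled
    checks, and multi-agent chatter on top of the tasks themselves.
    """
    return round(tasks_per_day * tokens_per_task * (1 + overhead_ratio))

# Ten tasks a day at ~12k tokens each already lands in six-figure territory.
print(daily_tokens(10, 12_000))  # → 156000
```

Multiply a daily figure like this by a per-token price and 30 days, and it becomes clear why long task chains plus memory can turn a cheap-sounding API rate into a four-digit monthly bill.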

Some media reported a user with a monthly salary of 20,000 RMB lamenting 'can't afford an AI employee', with extreme cases seeing bills exceeding a thousand RMB in 6 hours. If you choose a free or low-cost model to save money, the experience will inevitably be compromised; if you choose an expensive model without setting a spending limit, the bill might make your heart race.

So, whether Lobster is useful or not first depends on what 'brain' you pair it with, and how much you're willing to continuously 'spend' on this 'Lobster' afterwards. Blaming the framework itself is not very objective.

04 Is Lobster Already a Mature Product?

Lobster is not yet a mature product. OpenClaw has been around for less than four months, starting as a weekend experiment in November 2025. It's a rapidly iterating but still rough open-source project, with a noticeable gap from a true 'product'.

Currently known major defects include: simple tasks sometimes getting over-complicated; tasks inexplicably breaking off mid-execution; a memory function that is not yet stable, sometimes 'forgetting' previous conversations and preferences; a ratio of token consumption to actual output with much room for optimization; and, on the security front, hundreds of the thousands of skills on ClawHub found to contain malicious code.

A more fundamental issue is that installing and configuring OpenClaw remains a barrier for ordinary people. Self-deploying users still need to pull the repository, set up the runtime environment, install dependencies, configure model keys, and hook up messaging channels. A developer might get through this in half an hour; a non-technical user might need days.

Even with cloud vendors' one-click deployment solutions, the subsequent model configuration, IM channel integration, and skill installation still take considerable effort. The very popularity of the 500 RMB installation service on Xianyu shows how serious the entry-barrier problem is.

Peter himself is well aware of this. As he emphasized on a podcast: 'Lobster isn't useful right after installation. You need to "raise" it like an intern, write skill documentation for it, and keep letting it learn your habits and preferences through dialogue.' This nurturing process itself demands significant time and cognitive resources.

05 Must I Install a 'Lobster', Otherwise I Become an 'Old Fogey'?


So, should you install Lobster or not?

Excluding curiosity and FOMO psychology, making this decision requires considering several practical factors.

First, do you have clear, high-frequency, automatable tasks? Lobster's value isn't in occasionally checking the weather; it's in automatically organizing your email every day, monitoring specific information sources, and generating reports on schedule, the repetitive work. If most of your daily work involves creative decision-making and interpersonal communication (things Lobster currently can't help with), then its practical value to you is limited.

Second, how much time and money are you willing to invest? Hardware costs (buying equipment or renting a cloud server), model API call fees, initial configuration time, and the continuous 'nurturing' investment add up to a significant amount.

Someone did the math: running a Mac mini plus a top-tier model at high frequency costs at minimum several hundred to over a thousand RMB per month. If you really want to raise a Lobster, weigh whether that cost is worth the time and effort it saves you.

Third, what is your technical ability and risk tolerance? If you have no command line experience whatsoever, the frustration of directly tackling OpenClaw local deployment at this stage will be strong. A more pragmatic choice might be to try encapsulated products like Kimi Claw or MaxClaw first, get a feel for the Agent's basic capabilities, and then decide whether to delve deeper. If you decide on local deployment, be sure to implement security isolation. It's recommended to use an independent device or Docker container, set API spending limits, and not deploy it on your main computer storing important data.

Fourth, and most easily overlooked: your own 'piloting ability'. AI is just an amplifier; human capability is the deciding factor, and the AI can only ever be the co-pilot.

The same Lobster, in the hands of someone who knows how to decompose tasks, write skills, and design feedback loops, versus someone who only throws out vague instructions, can yield results differing by a factor of ten.

Lobster won't automatically become a good employee, just like a good computer won't automatically make us good programmers.

OpenClaw has indeed demonstrated an exciting possibility: AI is no longer just a chat window but a true executor that can work for you. For now, though, it's more of a promising prototype than a mature tool ordinary people can pick up without thinking.

After all, Peter, Lobster's creator, has himself stated a 'hard truth': if you don't understand the command line, this project is too risky for you. That sentence is worth pondering for everyone hesitating over whether to install Lobster.

Still, even for a non-technical ordinary person, a light, low-stakes trial to understand its characteristics is worthwhile. After all, opportunity favors the most insightful and thoughtful.

And amidst the noise, calm, independent thinking remains every person's most unique advantage.


Twitter: https://twitter.com/BitpushNewsCN

Bitpush TG Discussion Group: https://t.me/BitPushCommunity

Bitpush TG Subscription: https://t.me/bitpush

Original link: https://www.bitpush.news/articles/7618157

Related Questions

Q: What are the four main deployment methods for OpenClaw mentioned in the article, and what is a key characteristic of each?

A: The four main deployment methods are: 1) Dedicated local hardware (e.g., Mac Mini), which provides the most stable experience with full context; 2) Cloud server (VPS) deployment, which is network-isolated but has limited access to personal files; 3) Installation on a personal computer, which is low-cost but high-risk as it shares the OS environment; 4) Model vendor-hosted products (e.g., Kimi Claw), which are easy to use but have limited capabilities and data autonomy.

Q: According to the article, why is giving high permissions to OpenClaw a double-edged sword?

A: High permissions allow OpenClaw to perform more tasks and have greater execution power, but they also significantly increase the potential damage if the system loses control, such as accidentally deleting important files or emails, or being vulnerable to remote attacks and exploits.

Q: What two key variables determine the effectiveness and cost of using an OpenClaw agent?

A: The two key variables are: 1) The capability ceiling of the underlying large language model (e.g., Claude, GPT), which determines its ability to understand and execute complex tasks. 2) The cost of the model, as prolonged or complex tasks can consume a massive number of tokens, leading to high API call expenses.

Q: What are some of the reasons the article states that OpenClaw is not yet a mature product?

A: OpenClaw is not mature because it is a fast-iterating but rough open-source project with defects such as overcomplicating simple tasks, unexpected execution interruptions, unstable memory functions, inefficient token consumption, and security vulnerabilities like malicious code found in its skill hub. Its installation and configuration also remain highly technical and challenging for non-developers.

Q: What practical factors should a person consider before deciding to install and use OpenClaw, beyond just curiosity or FOMO (Fear Of Missing Out)?

A: A person should consider: 1) Whether they have clear, high-frequency, automatable tasks for it to handle. 2) The financial and time investment required for hardware, API costs, and configuration. 3) Their own technical capability and risk tolerance for deployment and security. 4) Their 'piloting ability', i.e., their skill in task decomposition and agent management, as the AI's effectiveness greatly depends on the user's expertise.
