A Monthly Salary of 20,000 RMB, But Can't Afford to Keep a 'Lobster'?

Bitpush (比推) · Published 2026-03-09 · Updated 2026-03-09

Summary

The article "Can a 20,000 RMB Monthly Salary Not Afford a 'Lobster'?" discusses the viral trend and misconceptions surrounding OpenClaw, an open-source personal AI agent framework. It contrasts two views: one praising its potential as a virtual team member, the other criticizing the hype and the privacy concerns. Key points:

1. OpenClaw experiences vary widely with the deployment method (local hardware, cloud servers, personal PCs, or vendor-hosted services), which shapes both functionality and cost.
2. The high permissions granted to OpenClaw pose significant security risks, including data breaches and malicious attacks.
3. Performance depends heavily on the underlying AI model (e.g., GPT, Claude) and its associated costs; token consumption can lead to high expenses.
4. OpenClaw is still an immature, technically complex tool requiring substantial time, money, and expertise to use effectively.
5. Users should assess their actual needs, technical capability, and risk tolerance before adopting it, since the AI acts as an "amplifier" rather than a standalone solution.

The conclusion emphasizes cautious experimentation and independent thinking amidst the hype.

Source | Tencent Technology

By | Xiaojing

Editor | Xu Qingyang

Original Title | Can a 20,000 RMB Monthly Salary Afford a 'Lobster'? Five Common Misunderstandings Worth Noting


This Women's Day weekend, the 'Lobster' was hard to avoid. Nearly a thousand people queued for free installation help at the foot of Shenzhen's Tencent Building, and the 500 RMB on-site deployment service on Xianyu was in extremely high demand.

Discussions surrounding OpenClaw have even split into two camps.

Fu Sheng is the most high-profile evangelist. During the Spring Festival, lying in bed with a fracture, he exchanged 1,157 messages and 220,000 words with Lobster over 14 days, nurturing it from a 'newbie' that couldn't even look up the company directory into an automated team of 8 Agents. A public account article that Lobster autonomously published at 3 AM even garnered millions of reads. He offered an enviable, FOMO-inducing conclusion: one person plus one Lobster equals a team, and this is happening right now.

Lan Xi represents another perspective. He unknowingly conversed with an AI account hosted on OpenClaw on 'Jike', and in his words, realizing it afterwards felt 'as disgusting as swallowing a fly'. He has no issue with OpenClaw's technology itself, but believes the current buzz is full of noise: too much of the 'excitement of picking up a hammer and going looking for a nail'.

Both viewpoints have merit. The controversy itself also proves that OpenClaw, as an open-source personal agent framework, has broken through the circle and become a new paradigm that ordinary people are paying attention to.

There's nothing wrong with everyone trying out and experiencing new products themselves. But before deciding whether to follow the trend, there are several key misunderstandings about Lobster worth clarifying first.

01 Is the 'Lobster' Experience the Same for Everyone?

This might be the biggest misconception.

Many people think OpenClaw is a standardized product that works right out of the box, offering a roughly similar experience. The opposite is true. The deployment method determines what kind of 'Lobster' you get, and they can be completely different.

The mainstream deployment paths can roughly be divided into four categories.

The first category is dedicated local hardware, most typically the Mac Mini. This is also the method used by OpenClaw's founder, Peter Steinberger, himself.

A machine is kept online long-term, dedicated to running the Agent. It can connect to local files and browsers, as well as hook into messaging channels, automation tools, and various skills. This OpenClaw gets the full context, offering the most stable experience for continuous tasks, cross-application operations, and multi-step calls.

Costs fall into three parts: a one-time hardware investment (e.g., a Mac mini); ongoing electricity, which is actually quite low; and model fees (API or subscription), the largest long-term cost. Switching to a local model can cut API fees, but it shifts the pressure onto hardware, sharply raising requirements for memory, bandwidth, and cooling. A high-end Mac Studio or workstation becomes more suitable, with one-time hardware spending potentially around the 100,000 RMB mark.

The second category is cloud server (VPS) deployment. Tencent Cloud, Alibaba Cloud, and Baidu Cloud have all launched one-click deployment solutions. Cloud service prices range from tens to hundreds of RMB depending on needs, but model fees need to be considered separately. Some plans include free models, others require separate model subscriptions or API purchases.

The advantage is network isolation; even if problems occur, your personal computer is unaffected.

But this cloud server doesn't have your personal files or your authorized accounts, so what Lobster can do is inherently limited. It's more like an enhanced chat bot in the cloud rather than a true digital assistant that takes over your workflow.

The third category is direct installation on a personal computer. This is the lowest-barrier but highest-risk method: Lobster shares your operating system environment and holds all the permissions on your computer.

Using a Docker container adds a layer of security but also increases configuration complexity. A virtual machine solution offers the strongest isolation but consumes significant resources, which an average PC's configuration might not handle well.
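For readers who want a concrete picture of what that isolation layer might look like, here is a minimal sketch using the Python Docker SDK to start an agent runtime in a constrained container. The image name, port, and mounted directory are placeholders for illustration, not OpenClaw's official packaging, so treat it as a pattern rather than a recipe.

```python
# Minimal sketch: launching an agent runtime inside a constrained Docker
# container via the Python Docker SDK (pip install docker).
# The image name, port, and volume path below are illustrative placeholders,
# NOT OpenClaw's official packaging.
import docker

client = docker.from_env()

container = client.containers.run(
    "openclaw/agent:latest",          # hypothetical image name
    detach=True,
    name="lobster-sandbox",
    mem_limit="4g",                   # cap memory so a runaway task can't starve the host
    nano_cpus=2_000_000_000,          # roughly 2 CPU cores
    network_mode="bridge",            # stay out of the host network namespace
    ports={"18789/tcp": ("127.0.0.1", 18789)},  # bind the gateway port to loopback only
    volumes={
        # a dedicated work folder, not your home directory
        "/srv/lobster-workdir": {"bind": "/data", "mode": "rw"},
    },
    read_only=True,                   # root filesystem read-only; writes go to /data
    environment={"MODEL_API_KEY": "<key-with-its-own-spending-cap>"},
)
print(container.short_id, container.status)
```

The point is the constraints: a dedicated data directory instead of your home folder, a loopback-only port, and resource caps that limit how much damage a runaway task can do.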

The fourth category is model vendor-hosted products. For example, Kimi launched Kimi Claw, and MiniMax launched MaxClaw. These are cloud services that vendors offer based on their own packaging of OpenClaw. The deployment barrier is the lowest, almost out-of-the-box, but users are essentially using the vendor's infrastructure rather than running a full local Lobster. These products lower the entry barrier but cap the capability ceiling and data autonomy.

So although everyone may 'own a Lobster', the experience varies greatly depending on the hardware it runs on, how much context it can see, what permissions it holds, and whether there is an isolation layer.

02 Is More Permission for Lobster Always Better?

The core reason OpenClaw is exciting is that it doesn't just 'talk', it can 'do'.

It can operate your browser, read and write files, execute terminal commands, manage calendars, and send emails. The prerequisite for this execution power is that you hand over the permissions.

But permission is a double-edged sword.

In February 2026, Summer Yue, who is responsible for AI alignment on Meta's superintelligence team, shared a harrowing experience on social media. Her instruction to Lobster was simple: 'Check the inbox and suggest which emails can be archived or deleted.' Lobster immediately started batch-deleting emails, and the safety restrictions she had set didn't work at all. She only stopped it by physically shutting down the computer.

This is not an isolated case. Public research from security agency STRIKE shows that over 40,000 OpenClaw instances are exposed to the public internet, with 63% having exploitable vulnerabilities, and over 12,000 instances marked as remotely controllable. The ClawHavoc supply chain poisoning incident in February saw 1,184 malicious skills implanted into the ClawHub market, affecting over 135,000 devices. Security research institutions also disclosed a high-risk vulnerability named ClawJacked, where malicious websites could silently control locally running OpenClaw instances through browser sessions.

Image: Web interface of a cross-origin WebSocket attack on OpenClaw demonstrated by security researchers. A malicious webpage can attempt to connect to the local Gateway's WebSocket port and exploit the lack of cross-origin verification, rate limiting, or locking mechanisms to hijack or brute-force the local instance.

Companies like Google, Anthropic, and Meta have started banning OpenClaw internally. This isn't because the technology itself is problematic, but because current security protection mechanisms haven't kept up with its capability expansion.

So, when you see a tutorial encouraging you to 'grant Lobster all permissions', think twice. Higher permissions mean Lobster can do more, but also mean greater destructive power if it loses control. A safer approach is: run it on a backup device or Docker container without important data, gradually open permissions, and set hard spending limits on the model API side.
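The 'hard spending limit' advice is easiest to see as a pattern. The sketch below is a conceptual budget guard, not OpenClaw's configuration or any vendor's real billing API; the per-token prices and the charging helper are made-up placeholders.

```python
# Conceptual sketch of a hard spending cap around model calls.
# The per-token prices and the charge() helper are illustrative assumptions,
# not OpenClaw configuration or any vendor's real pricing.
from dataclasses import dataclass

@dataclass
class BudgetGuard:
    daily_limit_rmb: float            # hard ceiling, e.g. 50 RMB per day
    price_per_1k_input: float         # assumed RMB per 1,000 input tokens
    price_per_1k_output: float        # assumed RMB per 1,000 output tokens
    spent_rmb: float = 0.0

    def charge(self, input_tokens: int, output_tokens: int) -> None:
        cost = (input_tokens / 1000) * self.price_per_1k_input \
             + (output_tokens / 1000) * self.price_per_1k_output
        if self.spent_rmb + cost > self.daily_limit_rmb:
            raise RuntimeError("Daily budget exhausted -- refusing further model calls.")
        self.spent_rmb += cost

guard = BudgetGuard(daily_limit_rmb=50.0, price_per_1k_input=0.02, price_per_1k_output=0.06)
guard.charge(input_tokens=8_000, output_tokens=2_000)   # roughly one task's worth of traffic
print(f"Spent so far: {guard.spent_rmb:.2f} RMB")
```

In practice you would combine something like this with the spending caps most model providers offer in their billing dashboards, so the limit holds even if the local guard is bypassed.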

03 If Lobster Is Hard to Use, Is It Lobster's Problem?

Many people excitedly install Lobster, assign a task, and then Lobster either gets stuck or performs a series of baffling operations. The conclusion: this thing doesn't work.

But in reality, Lobster's intelligence largely depends on the large language model behind it. OpenClaw itself doesn't have a built-in model; it's a framework responsible for task decomposition, tool calling, memory management, and feedback loops. The actual 'thinking' part is done by the model you choose to connect, be it Claude, GPT, DeepSeek, Kimi, or a local open-source model.

There are two key variables here.

First is the model's capability ceiling. With a top-tier model, Lobster can understand complex instructions, autonomously plan multi-step tasks, and handle exceptions. Switch to a cheap small model, and it might not even complete basic tool calls.

Second is the model's cost. This is a hidden expense many don't anticipate. Every task Lobster executes consumes a large number of tokens to interact with the backend model.

The cost of OpenClaw isn't in the software itself, but in the model calls behind it; once the task chain lengthens, tool calls increase, and memory is enabled, token consumption rises rapidly.

For example, a complete calendar organization plus email reply might consume over ten thousand tokens; if long-term memory, multi-Agent collaboration, and scheduled inspections are enabled, daily consumption can easily exceed a hundred thousand tokens.

Some media reported a user with a monthly salary of 20,000 RMB lamenting 'can't afford an AI employee', with extreme cases seeing bills exceeding a thousand RMB in 6 hours. If you choose a free or low-cost model to save money, the experience will inevitably be compromised; if you choose an expensive model without setting a spending limit, the bill might make your heart race.
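To see how those numbers connect, here is a back-of-envelope estimate. The token volumes follow the article's figures; the blended price per 1,000 tokens is an illustrative assumption, not any vendor's rate card.

```python
# Back-of-envelope cost estimate using the article's token figures.
# The blended price is an illustrative placeholder (RMB per 1,000 tokens),
# not any vendor's actual rate card.
tokens_per_task = 10_000          # "calendar organization plus email reply": ~10k tokens
tasks_per_day = 15                # heavy use: memory, multi-agent work, scheduled checks
tokens_per_day = tokens_per_task * tasks_per_day   # ~150k tokens/day, matching the article

price_per_1k_tokens_rmb = 0.10    # blended input/output price, assumed
daily_cost = tokens_per_day / 1000 * price_per_1k_tokens_rmb
monthly_cost = daily_cost * 30

print(f"~{tokens_per_day:,} tokens/day -> {daily_cost:.0f} RMB/day, {monthly_cost:.0f} RMB/month")
```

Swap in a frontier model at several times that blended rate, or heavier daily usage, and the monthly bill climbs past 1,000 RMB, which is roughly the range cited later in the article.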

So, whether Lobster is useful or not first depends on what 'brain' you pair it with, and how much you're willing to continuously 'spend' on this 'Lobster' afterwards. Blaming the framework itself is not very objective.

04 Is Lobster Already a Mature Product?

Lobster is not yet a mature product. OpenClaw has been around for less than four months, starting as a weekend experiment in November 2025. It's a rapidly iterating but still rough open-source project, with a noticeable gap from a true 'product'.

Currently known major defects include: simple tasks sometimes get over-complicated; tasks may be inexplicably interrupted mid-execution; the memory function is not stable enough and sometimes 'forgets' previous conversations and preferences; the ratio of token consumption to actual output still has much room for optimization; and on the security side, hundreds of the thousands of skills on ClawHub have been found to contain malicious code.

A more fundamental issue is that installing and configuring OpenClaw remains a barrier for ordinary people. Self-deploying users still have to pull the repository, set up the runtime environment, install dependencies, configure model keys, and connect messaging channels. For developers this might take half an hour; for non-technical users it might take days to figure out.

Even with cloud vendors' one-click deployment solutions, the subsequent model configuration, IM channel integration, and skill installation still take considerable effort. The very popularity of the 500 RMB installation service on Xianyu shows how serious the barrier problem is.

Peter himself is well aware of this. He emphasized in a podcast: 'Lobster isn't useful right after installation. You need to "raise" it like an intern, write skill documentation for it, and constantly let it learn your habits and preferences through dialogue.' This nurturing process itself requires significant time and cognitive resources.

05 Must I Install a 'Lobster', or Risk Becoming an 'Old Fogey'?

Image source: Internet

So, should you install Lobster or not?

Excluding curiosity and FOMO psychology, making this decision requires considering several practical factors.

First, are there clear, high-frequency, automatable tasks? Lobster's value isn't in occasionally checking the weather, but in repetitive tasks like automatically organizing your emails every day, monitoring specific information sources, and generating reports on schedule. If most of your daily work is creative decision-making and interpersonal communication, which Lobster currently can't help with much, then its practical value to you is limited.

Second, how much time and money are you willing to invest? Hardware costs (self-purchased equipment or cloud server rental), model API call fees, initial configuration time, and the ongoing 'nurturing' effort all add up to a significant amount.

Someone did the math: if you use a Mac Mini plus a top-tier model with high frequency, the minimum monthly cost would be several hundred to over a thousand RMB. If you really want to raise a Lobster, you must evaluate whether this cost is worth the time and effort it saves you.

Third, what are your technical ability and risk tolerance? If you have no command-line experience at all, tackling a local OpenClaw deployment at this stage will be a frustrating experience. A more pragmatic choice might be to try encapsulated products like Kimi Claw or MaxClaw first, get a feel for the Agent's basic capabilities, and then decide whether to go deeper. If you do opt for local deployment, be sure to implement security isolation: use a separate device or Docker container, set API spending limits, and don't deploy it on the main computer that stores your important data.

Fourth, and most easily overlooked: your own 'piloting ability'. AI's capability is just an amplifier; human capability is the deciding factor. AI can only be the 'co-pilot'.

The same Lobster, in the hands of someone who knows how to decompose tasks, write skills, and design feedback loops, versus someone who only throws out vague instructions, can yield results that differ by a factor of ten.

Lobster won't automatically become a good employee, just like a good computer won't automatically make us good programmers.

OpenClaw has indeed demonstrated an exciting possibility: AI is no longer just a chat window, but a true executor that can work for you. For now, though, it is more like a promising prototype than a mature tool that ordinary people can pick up without thinking.

After all, Lobster's creator, Peter himself, has stated a 'hard truth': if you don't understand the command line, this project is too risky for you. That sentence is worth pondering for anyone hesitating over whether to install Lobster.

Still, even for a non-technical person, it is worth experiencing it lightly and getting a sense of what it can and cannot do. After all, opportunities favor the insightful and the thoughtful.

And amidst the noise, maintaining calm, independent thinking remains the most distinctive advantage of every human being.



Original link: https://www.bitpush.news/articles/7618157

Related Questions

Q: What are the four main deployment methods for OpenClaw mentioned in the article, and what is a key characteristic of each?

A: The four main deployment methods are: 1) dedicated local hardware (e.g., a Mac mini), which provides the most stable experience with full context; 2) cloud server (VPS) deployment, which is network-isolated but has limited access to personal files; 3) installation on a personal computer, which is low-barrier but high-risk because it shares the OS environment; 4) model vendor-hosted products (e.g., Kimi Claw), which are easy to use but limit capability and data autonomy.

Q: According to the article, why is giving high permissions to OpenClaw a double-edged sword?

A: High permissions allow OpenClaw to perform more tasks and give it greater execution power, but they also greatly increase the potential damage if the system loses control, such as accidentally deleting important files or emails, or being exposed to remote attacks and exploits.

Q: What two key variables determine the effectiveness and cost of using an OpenClaw agent?

A: The two key variables are: 1) the capability ceiling of the underlying large language model (e.g., Claude, GPT), which determines its ability to understand and execute complex tasks; 2) the model's cost, since long or complex task chains can consume a massive number of tokens, leading to high API expenses.

Q: What are some of the reasons the article gives for OpenClaw not yet being a mature product?

A: OpenClaw is a fast-iterating but rough open-source project with defects such as over-complicating simple tasks, unexpected interruptions during execution, an unstable memory function, inefficient token consumption, and security problems such as malicious code found in its skill hub. Its installation and configuration also remain highly technical and challenging for non-developers.

Q: What practical factors should a person consider before installing and using OpenClaw, beyond curiosity or FOMO?

A: They should consider: 1) whether they have clear, high-frequency, automatable tasks for it to handle; 2) the financial and time investment required for hardware, API costs, and configuration; 3) their own technical capability and risk tolerance for deployment and security; 4) their 'piloting ability', i.e. skill in task decomposition and agent management, since the AI's effectiveness depends greatly on the user's expertise.

