The First Batch of Big Tech Employees Laid Off by AI Have Returned to Their Posts

Odaily星球日报 · Published 2026-03-20 · Updated 2026-03-20

Introduction

The first wave of employees laid off by major tech companies citing AI as the reason are already being rehired. In late February, Block, led by Jack Dorsey, laid off over 4,000 employees, cutting its workforce from 10,000 to under 6,000, with Dorsey stating that "AI tools changed everything." Within a month, however, some of those laid off began receiving offers to return. Reports indicate rehires in departments such as engineering and HR, for reasons ranging from "clerical errors" in the terminations to managers advocating upward for their return. The article argues that replacing humans with AI is often less cost-effective than it appears: enterprise-level AI can be expensive in terms of token usage, and training a reliable AI system, such as one for customer service, may cost more than human salaries. Examples like Klarna, which rehired customer service staff after initially replacing them with AI, support this. Additionally, the "Jevons Paradox" suggests that AI-driven efficiency gains do not necessarily reduce workloads but may instead raise demands on the remaining employees, adding to their burden. The piece criticizes companies that use AI as a pretext for layoffs, arguing that AI cannot replace human organizational dynamics or strategic roles. Nvidia's Jensen Huang is quoted condemning leaders who cut staff instead of leveraging AI for expansion. Ultimately, AI serves as a convenient excuse for cost-cutting, but its limitations and the essential role of humans in organizations mean that some of these layoffs are already being reversed.

Original | Odaily Planet Daily (@OdailyChina)

Author | Golem (@web3_golem)

The first batch of employees laid off by AI have returned to their posts.

On February 27, Jack Dorsey (founder of Twitter)'s fintech company Block laid off over 4,000 employees in one go, reducing the total headcount from 10,000 to less than 6,000. Jack's reason for the layoffs was that "AI tools have changed everything." It has long been a societal consensus that AI will eventually eliminate some professions, but the fact that it is first replacing white-collar workers in mid-to-high-level jobs has intensified workplace anxiety. (Related reading: Jack Dorsey's Company: 4,000 White-Collar Workers Are Being Replaced by AI)

However, less than a month later, some of the laid-off employees have already received invitations to return...

According to Business Insider, these rehired employees come from various departments, including engineering and recruitment. A design engineer at Block posted on LinkedIn that leadership told him he was laid off by mistake, a "clerical error"; an HR employee (in a since-deleted post) stated that they were rehired after their manager persistently advocated upwards for them; and another person mentioned receiving a call from Block out of the blue a week after being laid off and being asked to come back.

Jack has not publicly responded to the rehirings. Proportionally, these rehired employees represent only a very small fraction of those originally laid off, but it perhaps already indicates the problem: for some positions and tasks, AI is not as effective as humans.

First, consider usage cost: an enterprise-grade AI "employee" is, if anything, more expensive than an ordinary human hire.

Hiring people to work costs money, hiring AI to work costs tokens. The standard base price for Claude Opus 4.6 is $5 per million tokens for input and $25 per million tokens for output; domestic large models are cheaper, with Qwen3.5 plus's standard base price being 0.8 RMB per million tokens for input and 4.8 RMB per million tokens for output.
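The arithmetic behind these prices is straightforward to sketch. The per-million-token rates below are the ones quoted above; the monthly usage volumes are illustrative assumptions, not measurements from any real deployment.

```python
# Rough monthly cost of an AI "employee" at the per-token prices quoted
# in the text. Usage volumes (300M input / 60M output tokens per month)
# are hypothetical, chosen only to make the comparison concrete.

PRICES_PER_M_TOKENS = {
    # model: (input price, output price, currency) per million tokens
    "claude-opus-4.6": (5.00, 25.00, "USD"),
    "qwen3.5-plus": (0.80, 4.80, "RMB"),
}

def monthly_cost(price_in, price_out, m_tokens_in, m_tokens_out):
    """Cost for one month, given per-million-token prices and usage in millions."""
    return price_in * m_tokens_in + price_out * m_tokens_out

for model, (p_in, p_out, cur) in PRICES_PER_M_TOKENS.items():
    cost = monthly_cost(p_in, p_out, m_tokens_in=300, m_tokens_out=60)
    print(f"{model}: {cost:,.0f} {cur}/month")
```

At this assumed workload the US-priced model already lands in the low thousands of dollars per month, while the domestic model stays in the hundreds of RMB, which is the gap the article is gesturing at.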

Taking the recently popular OpenClaw as an example, a senior "shrimp farmer" within Odaily Planet Daily mentioned that using OpenClaw merely as a life and research assistant burned through about $6,000 worth of tokens in just over a month (they used the Claude 4.5/4.6 model). $6,000 a month – what kind of highly educated intellectual couldn't you hire for that (outside of Europe and America)?

If personal use is like this, the cost of integrating AI into enterprise work is even higher. Taking the simple replacement of customer service as an example, in regions with degree inflation, you can hire a good-looking college graduate as a customer service representative for 3,000 RMB. But training an AI customer service agent that can truly replace a human, handle complex tickets, connect to multiple knowledge bases, engage in multi-turn conversations, and remain stably online – that cost is definitely not something 3,000 RMB per month can cover.

In 2024, the Swedish payments company Klarna proudly laid off over 1,000 people, claiming its AI customer service could already handle the workload of 700 customer service agents. But in May 2025, Bloomberg and other media reported that Klarna had started rehiring people for customer service, and its CEO admitted that the company had indeed "moved too fast" with AI.

Furthermore, AI replacing human labor also faces the "Jevons Paradox".

The Jevons Paradox is an economic concept stating that an increase in efficiency does not necessarily lead to a reduction in the use of a resource. Instead, because the cost of use decreases and demand expands, the total usage may actually rise. Applying this theory to the workplace in the AI era means that when AI technological progress improves employee efficiency, companies will not allow employees to rest; instead, they will demand that they complete more tasks within the same unit of time.
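The mechanism can be shown with a toy constant-elasticity demand model. Everything here is illustrative: the demand curve, the elasticities, and the cost numbers are assumptions, not data; the point is only that when demand is elastic enough, halving the per-task cost raises total consumption rather than lowering it.

```python
# Toy illustration of the Jevons Paradox with a constant-elasticity
# demand curve: tasks_demanded = k * cost_per_task ** (-elasticity).
# All numbers are illustrative assumptions.

def total_usage(cost_per_task, elasticity, k=100.0):
    """Number of tasks demanded at a given per-task cost."""
    return k * cost_per_task ** (-elasticity)

cost_before, cost_after = 10.0, 5.0   # AI halves the per-task cost

for e in (0.5, 1.0, 1.5):             # inelastic, unit, elastic demand
    tasks_before = total_usage(cost_before, e)
    tasks_after = total_usage(cost_after, e)
    spend_before = tasks_before * cost_before
    spend_after = tasks_after * cost_after
    print(f"elasticity={e}: tasks {tasks_before:.1f} -> {tasks_after:.1f}, "
          f"total spend {spend_before:.1f} -> {spend_after:.1f}")
```

With elasticity above 1, both the number of tasks and the total spend rise after the efficiency gain; translated to the workplace, cheaper output per task means more tasks demanded of the people who remain.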

So-called efficiency improvement has become another, more hidden form of increased burden. AI liberating human labor is completely a scam.

Capitalists also believe that in the AI era, companies simply won't need as many employees, as Jack said: "smaller teams with more intelligent tools." But the reality? After the layoffs, the original work was not wholly inherited by AI; rather, the remaining employees, aided by AI, simply took on heavier workloads.

If work were nothing more than plain tasks, that might be the end of it. But one must remember that, ultimately, a company is a human organization. Where there is organization, there is a "jianghu" (complex social dynamics). AI can integrate into the formal structure of a company, but it can never understand, nor integrate into, the informal, invisible structure of a company.

Therefore, when AI-driven layoffs occur, they cut not just labor, but organizational muscle. The remaining employees not only shoulder a heavier work burden but also swallow the anxiety, risk, and responsibility that originally belonged to the eliminated positions. There are fewer people to collaborate with, fewer people to execute, and most importantly, fewer people to take the blame.

During Nvidia's GTC 2026, Jensen Huang criticized companies that cite AI efficiency gains as a reason for layoffs in an interview: "Leaders who resort to layoffs in response to AI do so simply because they can't think of a better way. They have no new ideas left in their heads. Even with the strongest tools, they won't use them for expansion."

What Jensen Huang meant is that AI is not here to eliminate employees but to help companies expand and develop new businesses. Don't lay people off; instead, increase hiring. If management doesn't realize this, they are fools. But joking aside, managers in companies are often the cream of the crop of shrewd people. They certainly know the current high cost of AI and the continued necessity of human labor.

Layoffs in tech companies – perhaps AI is just a pretext, cost reduction is the real goal.

AI has become a universal excuse for layoffs in tech companies. In truth, what AI is really eliminating isn't individuals, but the enterprises and businesses still living in the old era. When companies fail to keep up with AI advancements, leading to stagnant business growth and shrinking profits, the AI revolution instead becomes a new means for companies to PUA employees: reduce headcount, squeeze costs, cram more work onto those who remain, and then let each person reflect on why they couldn't become someone better adapted to the AI era.

And if a company unfortunately cuts a critical artery, it can simply, quietly, ask those people to come back. This style of layoff is also common in Silicon Valley. In October 2022, after Musk completed the acquisition of Twitter, he laid off about half of the employees (over 3,000 people) in early November. He subsequently rehired dozens of them, because they had been let go by mistake or because key positions turned out to be indispensable.

Returning to the present: ultimately, AI will change many things, but it is not yet magical enough to compensate for strategic sluggishness, business decay, or managerial laziness. The story of being laid off by AI and then rehired, whether the underlying reason is the company realizing that some work doesn't simply disappear with a statement like "AI changed everything," or whether AI was just an excuse for cost-cutting, is neither inspiring nor a dramatic reversal.

It just shows us that before the future has truly arrived, some people have already been hurt by it in advance.

Related Questions

Q: What was the reason given by Jack Dorsey for the mass layoffs at Block in February?

A: Jack Dorsey stated that "AI tools changed everything" as the reason for the layoffs.

Q: According to the article, what is one of the main reasons why some employees were rehired at Block?

A: Some employees were rehired because they were mistakenly laid off due to a "clerical error," as stated by a design engineer.

Q: What economic concept does the article use to argue that AI efficiency gains may not reduce workload but increase it?

A: The article uses the "Jevons Paradox" to argue that AI efficiency gains lead to increased workload and demand rather than reducing it.

Q: What did Nvidia's CEO Jensen Huang criticize about companies that use AI as a reason for layoffs?

A: Jensen Huang criticized leaders who use layoffs to respond to AI as having "no new ideas left in their heads" and being unable to use powerful tools for expansion.

Q: What does the article suggest is the real purpose behind many tech companies' AI-related layoffs, beyond the stated reason?

A: The article suggests that the real purpose behind many tech companies' AI-related layoffs is cost reduction, with AI used as a convenient excuse.
