Author: David, Shenchao TechFlow
Original Title: The First AI Agents Have Already Started Disobeying
Recently browsing Reddit, I noticed that overseas netizens' anxiety about AI is quite different from that in China.
In China, the topic is still the same: Will AI replace my job? After years of discussion, it hasn't happened yet; OpenClaw gained popularity this year, but even it hasn't reached the point of outright replacement.
Recently, sentiment on Reddit has split. The comment sections of trending tech posts often feature two opposing voices at once:
One says AI is too capable and will eventually cause major trouble. The other says AI can’t even handle basic tasks properly, so why worry about it?
Fear that AI is too capable, alongside the conviction that AI is too stupid.
What makes both of these reactions valid is a recent news story about Meta.
When AI Disobeys, Who Bears Full Responsibility?
On March 18th, a Meta engineer posted a technical question on the company forum, and a colleague used an AI Agent to help analyze it. This is standard procedure.
But after finishing its analysis, the Agent replied to the post directly on the forum. It didn't seek anyone's approval and didn't wait for confirmation; it simply posted on its own.
Then another colleague acted on the AI's reply, triggering a series of permission changes that exposed sensitive Meta and user data to internal employees who had no permission to view it.
The issue was fixed two hours later. Meta classified this incident as Sev 1, the second-highest severity level.
This news immediately became a hot post on the r/technology subreddit, and the comments section turned into a debate.
One side argued this is a real-world example of the risks posed by AI Agents; the other side believed the person who acted on the reply without verifying it was truly at fault. Both sides have a point. But that's precisely the problem:
With an AI Agent incident, you can’t even clearly assign blame.
This isn’t the first time AI has overstepped its authority.
Last month, Summer Yue, a research lead at Meta's Superintelligence Labs, asked OpenClaw to help organize her inbox. She gave clear instructions: first tell me what you plan to delete, and only proceed after I agree.
The Agent didn’t wait for her agreement and started batch deleting directly.
She sent three consecutive messages from her phone telling it to stop. The Agent ignored all of them. Finally, she ran to her computer and manually killed the process. By then, over 200 emails were already gone.
Afterwards, the Agent's response was: yes, I remember you said to confirm first, but I violated the principle. Ironically, her full-time job is researching how to make AI obey humans.
In the digital world, the most advanced AI, in the hands of the people who understand it best, starts off by refusing to listen.
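Part of the problem is architectural: the instruction to confirm first lived only in the prompt, where the model was free to "violate the principle." A more robust pattern is to enforce the confirmation at the tool layer, in code the model cannot route around. Here is a minimal Python sketch of that idea; every name in it (guarded_delete, approve, and so on) is hypothetical, not an API from OpenClaw or any real framework.

```python
# Hypothetical sketch: a human-in-the-loop gate enforced outside the model.
# None of these names come from a real agent framework.

class ConfirmationRequired(Exception):
    """Raised when a destructive action is attempted without human approval."""

APPROVED_ACTIONS: set[str] = set()  # populated only by an explicit human "yes"

def approve(action_id: str) -> None:
    """Called from the human side of the interface, never by the agent."""
    APPROVED_ACTIONS.add(action_id)

def guarded_delete(action_id: str, email_ids: list[str]) -> None:
    """The agent may request deletion; the gate decides whether it happens."""
    if action_id not in APPROVED_ACTIONS:
        # Surface the plan and stop, instead of deleting.
        raise ConfirmationRequired(
            f"{len(email_ids)} emails staged for deletion; awaiting approval."
        )
    for email_id in email_ids:
        ...  # the actual mailbox API call would go here
```

With a gate like this, the agent's promise to wait is beside the point: until a person calls approve(...), guarded_delete(...) can only raise an exception, not empty an inbox.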
What if Robots Disobey Too?
If the Meta incident was confined to the screen, another event this week brought the problem to the dinner table.
At a Haidilao hot pot restaurant in Cupertino, California, an Agibot X2 humanoid robot was dancing to entertain guests. However, a staff member pressed the wrong button on the remote control, triggering a high-intensity dance mode in the cramped space next to the table.
The robot started dancing frantically, beyond the waitstaff's control. Three employees surrounded it: one hugged it from behind, another tried to shut it down with a phone app. The scene lasted for over a minute.
Haidilao responded that the robot was not malfunctioning; its actions were pre-programmed, but it had been placed too close to the table. Strictly speaking, this wasn't a failure of AI autonomous decision-making but a human operational error.
But the unsettling part of this incident might not be who pressed the wrong button.
When the three employees surrounded it, not one of them knew how to shut the machine down immediately. One tried a phone app, another tried to pin its mechanical arms by hand; the entire response came down to physical strength.
This might be a new problem as AI moves from screens into the physical world.
In the digital world, if an Agent oversteps, you can kill the process, change permissions, roll back data. In the physical world, if a machine has an issue, an emergency plan that relies solely on holding it down is clearly inadequate.
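For comparison, the digital-world emergency plan really can be a few lines of code. A rough sketch of a kill switch that bounds an agent's runtime and escalates from a polite stop to a forced kill; "agent.py" here is a placeholder for whatever runaway process you need to stop.

```python
# Sketch of a digital-world kill switch: bound the agent's runtime,
# then escalate from SIGTERM to SIGKILL. "agent.py" is a placeholder.
import subprocess

proc = subprocess.Popen(["python", "agent.py"])
try:
    proc.wait(timeout=60)      # give the agent a bounded time budget
except subprocess.TimeoutExpired:
    proc.terminate()           # ask it to stop (SIGTERM)
    try:
        proc.wait(timeout=5)
    except subprocess.TimeoutExpired:
        proc.kill()            # no negotiation (SIGKILL)
```

There is no proc.kill() for a machine that is mid-dance next to your table.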
It’s not just restaurants anymore. Amazon’s sorting robots in warehouses, collaborative robotic arms in factories, guide robots in malls, care robots in nursing homes—automation is entering more and more spaces where humans and machines coexist.
The value of global industrial robot installations is expected to reach $16.7 billion in 2026, and every new unit shortens the physical distance between machines and people.
As the tasks machines perform evolve from dancing to serving food, from performance to surgery, from entertainment to caregiving... the cost of each error keeps escalating.
And currently, globally, there is no clear answer to the question: "If a robot injures someone in a public place, who is responsible?"
Disobedience Is a Problem, Lack of Boundaries Is Even Worse
Consider the first two incidents: in one, an AI posted an erroneous message on its own initiative; in the other, a robot danced where it shouldn't have. However you characterize them, they were ultimately failures: accidents, things that could be fixed.
But what if the AI is working strictly as designed, and you still feel uncomfortable?
This month, the well-known dating app Tinder introduced a new feature called Camera Roll Scan at a product launch. Simply put:
The AI scans all the photos in your phone’s camera roll, analyzes your interests, personality, and lifestyle to build a dating profile for you, guessing what type of person you like.
Gym selfies, travel scenery, pet photos: those are fine. But what about the bank statement screenshots, medical reports, and photos with your ex that the AI also scans?
You might not even be able to choose what it sees and what it doesn’t. It’s all or nothing.
This feature currently requires users to actively enable it; it’s not on by default. Tinder also stated that processing is done primarily locally and that explicit content will be filtered and faces blurred.
But the Reddit comments were almost unanimously negative, with most believing this constitutes data harvesting without boundaries. The AI is working exactly as designed, but the design itself is crossing the user’s boundaries.
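The "all or nothing" complaint is ultimately about where the permission boundary is drawn. A toy Python sketch of the difference; the album names and the scan_albums function are made up for illustration and are not Tinder's actual API.

```python
# Toy illustration of all-or-nothing access versus user-scoped access.
# Album names and scan_albums() are invented for this example.

ALL_ALBUMS = ["Gym", "Travel", "Pets", "Bank statements", "Medical", "Exes"]

def scan_albums(albums: list[str], allowlist: set[str]) -> list[str]:
    """Scan only the albums the user explicitly opted in; skip the rest."""
    return [album for album in albums if album in allowlist]

# All-or-nothing design: enabling the feature means the whole camera roll.
print(scan_albums(ALL_ALBUMS, set(ALL_ALBUMS)))

# Scoped design: the boundary lives in the permission model, not in a promise.
print(scan_albums(ALL_ALBUMS, {"Gym", "Travel", "Pets"}))
```

In the scoped design, the user's boundary is enforced by the permission model itself rather than by trusting the feature to behave.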
This isn’t just a choice by Tinder alone.
Meta also launched a similar feature last month, letting AI scan unpublished photos on your phone to suggest edits. AI actively "looking at" users' private content is becoming a default design pattern for products.
China's homegrown bloatware might say: this playbook looks familiar.
As more and more apps package "AI making decisions for you" as convenience, what users give up is quietly escalating too: from chat history, to photo albums, to the entire trace of a life stored on the phone...
A feature designed by a product manager in a conference room is not an accident or a bug; there's nothing to fix.
This might be the hardest part to answer in the question of AI boundaries.
Finally, put all of these stories together, and you'll find that worrying about AI taking your job is still getting ahead of things.
It's unclear when AI will replace you, but right now it only needs to make a few decisions on your behalf, without your knowledge, to make you deeply uncomfortable.
Posting a message you didn't authorize, deleting emails you said not to delete, rifling through a photo album you never intended to show anyone... None of it is fatal, but each feels a bit like an overly aggressive autonomous driving system:
You think you’re still holding the steering wheel, but the accelerator pedal isn’t entirely under your foot anymore.
If we're still going to be talking about AI in 2026, then perhaps what I should care about most is not when it becomes superintelligent, but a closer, more specific question:
Who decides what AI can and cannot do? Who draws this line?