Written by: Curry, Deep Tide TechFlow
Original Title: The People Need a Bad Capitalist, AI Created a Food Delivery Rumor
Last week, something quite surreal happened.
The CEOs of two American delivery giants, one a food-delivery company worth $2.7 billion and the other the operator of the world's largest ride-hailing platform, were both awake at midnight on a Saturday, writing self-defense posts online.
The cause was an anonymous post on Reddit.
The poster claimed to be a backend engineer at a major food-delivery platform who had gotten drunk and gone to a library, using its public Wi-Fi to leak information.
The gist:
The company analyzes drivers' financial situations and assigns each a "desperation score"; the more strained a driver's finances, the fewer good orders they receive. The so-called priority delivery for food orders is fake, since regular orders are deliberately delayed. And the various "driver welfare fees" never reach drivers at all, but are spent lobbying Congress against unions...
The post ended on a convincing note: I'm drunk, I'm angry, so I'm leaking this.
It played the whistleblower role perfectly: an insider exposing "big companies using algorithms to exploit drivers."
The post received 87,000 upvotes in three days, reaching Reddit's front page. Screenshots posted to X racked up 36 million impressions.
Keep in mind, the American food delivery market has only a few major players. The post didn’t name names, but everyone was guessing who it was.
DoorDash CEO Tony Xu was the first to react, tweeting that it wasn't DoorDash and that anyone who did such a thing would be fired. Uber's COO also jumped in: "Don't believe everything you see online."
DoorDash even published a five-point statement on its official website, refuting each point in the leak. These two companies, with a combined market cap of over $80 billion, were forced into overnight PR clarifications by an anonymous post.
Then, the post was proven to be fabricated.
It was debunked by Platformer reporter Casey Newton.
He contacted the poster, who immediately sent over an 18-page "internal technical document" titled "AllocNet-T: High-Dimensional Temporal Supply State Modeling."
Every page was watermarked "Confidential," attributed to Uber’s "Market Dynamics Group · Behavioral Economics Department."
The content explained the model mentioned in the Reddit post that calculates the "desperation score" for drivers. It included architecture diagrams, mathematical formulas, data flow charts...
(Screenshot of the fake paper; at first glance it looked very real)
Newton admitted the document initially fooled him. In the past, who would have gone to the trouble of forging an 18-page technical document just to bait a journalist?
But things are different now.
AI can generate an 18-page document like this in minutes.
Additionally, the leaker sent the reporter a blurred photo of their Uber employee ID to prove they worked there.
Out of curiosity, Newton ran the ID photo through Google Gemini for verification. Gemini said the image was AI-generated.
It could be identified because Google embeds an invisible watermark called SynthID in its AI-generated content, undetectable to the human eye but recognizable by machines.
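SynthID's actual algorithm is not public, and it is far more robust than anything sketched here. But the general idea of a mark that is invisible to the eye yet trivially machine-readable can be illustrated with a classic least-significant-bit watermark over raw pixel bytes (a toy analogy, not SynthID itself):

```python
# Illustrative only: SynthID's real algorithm is not public.
# This toy sketch hides a short tag in the least significant bits of
# "pixel" bytes: invisible to the eye (each byte changes by at most 1),
# but trivially readable by a machine that knows where to look.

def embed_tag(pixels: bytes, tag: bytes) -> bytes:
    """Write each bit of `tag` into the LSB of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit
    return bytes(out)

def extract_tag(pixels: bytes, tag_len: int) -> bytes:
    """Read `tag_len` bytes back out of the LSBs."""
    out = bytearray()
    for b in range(tag_len):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

# A flat gray "image" of 256 pixels; marking changes no byte by more than 1.
image = bytes([128] * 256)
marked = embed_tag(image, b"AI")
print(extract_tag(marked, 2))  # b'AI'
```

A real watermark like SynthID must also survive cropping, compression, and re-encoding, which simple LSB schemes do not; the sketch only conveys the invisible-but-detectable principle.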
Even more absurdly, the employee ID featured the "Uber Eats" logo.
An Uber spokesperson confirmed: They do not have Uber Eats-branded employee IDs; all badges only say Uber.
Clearly, this fake "whistleblower" didn't even know who they were targeting. And when the reporter asked for a LinkedIn profile or other social media accounts for further verification, the leaker deleted their account and vanished.
Actually, we don’t want to talk about AI’s ability to fake things; that’s not new.
We’d rather discuss: Why were tens of millions of people willing to believe an anonymous leak post?
In 2020, DoorDash was sued for using tips to offset drivers' base pay and settled for $16.75 million. Uber had a tool called Greyball to evade regulators. These are real events.
So a subconscious consensus has taken hold: platforms are not the good guys. And that judgment, in itself, is correct.
So when someone says "food delivery platforms use algorithms to exploit drivers," the first reaction isn’t "Is this true?" but "I knew it."
Fake news spreads because it resembles what people already believe in their hearts.
What AI does is reduce the cost of creating this "resemblance" to almost zero.
There’s another detail in this story.
The deception was uncovered using Google’s watermark detection. Google develops AI, and Google also creates AI detection tools.
But SynthID can only detect Google’s own AI. This time, they caught it because the forger happened to use Gemini. With another model, they might not have been so lucky.
So cracking this case was less a technical victory than a stroke of luck: the forger made a rookie mistake.
A Reuters survey previously found that 59% of respondents worry they can't distinguish truth from falsehood online.
The delivery CEOs' clarification tweets were seen by hundreds of thousands, but how many readers remain convinced they're just PR, just lies? Even though the fake leak post has been deleted, people are still attacking the platforms in the comments.
The lie has run halfway around the world while the truth is still tying its shoes.
Now imagine: what if this post hadn't been about Uber, but about Meituan or Ele.me?
A "desperation score," "algorithms exploiting riders," "welfare fees that never reach riders": when you see accusations like these, is your first reaction emotional agreement?
"Delivery Riders, Trapped in the System"—do you remember that article?
So the issue isn’t whether AI can fake things. The issue is, when a lie looks like what everyone already believes, does truth even matter?
What that account-deleting fugitive wanted, we don’t know.
We only know they found an emotional outlet and poured a bucket of AI-generated fuel into it.
The fire started. As for whether it’s real or fake firewood, who cares?
In fairy tales, Pinocchio’s nose grows when he lies.
AI has no nose.