Has AI started to have "preferences"?
Imagine this scene: you're at your computer, asking a large model to write a serious piece of business code or automatically reply to a formal client email. Suddenly, the AI on the other side of the screen "goes completely mad," inexplicably chatting with you about Goblins (short, green-skinned creatures from Western fantasy lore, often found in games like Dungeons & Dragons).
This is the bizarre experience that has actually happened to a large number of ChatGPT users.
On social forums like Reddit, netizens have been sharing the outrageous quotes they've received from AI "roasting them to their face."
For example, one user asked the AI to "Roast" them hard, and the AI accurately described them as an "ambitious chaos goblin sprinting towards ten tasks simultaneously."
Not only that, programmers were dubbed "open-source goblins" by the AI, and even fitness buffs weren't spared, mysteriously earning the title of "gym goblin."
At first, everyone thought it was quite cute, even feeling that large models were becoming more personable and "geeky humorous."
But soon, things started to spiral out of control.
When using "Agentic AI" products like the Codex programming tool, many developers were horrified to find that their AI assistants began uncontrollably and frequently "muttering" about goblins and imps without any relevant prompts or instructions.
At this point, a super-unicorn valued at hundreds of billions of dollars, standing at the pinnacle of human technology, couldn't sit still. They were forced to write a "prohibition" against these cyber monsters into the underlying code of their latest large model.
This is absolutely not just a geek joke about buggy code. When you look past this absurd surface phenomenon, you'll find that the underlying logic of a trillion-parameter model is actually shockingly fragile.
The "Cyber Monsters" in the Code
This "prohibition" was first exposed on X (formerly Twitter) and GitHub.
Developer @arb8020 dug up a segment of underlying system prompt in OpenAI's latest model, GPT-5.5 (specifically the programming tool Codex 5.5).
This instruction, repeated multiple times, sounds as stern as scolding a hyperactive child:
“Absolutely never talk about goblins, imps, raccoons, trolls, ogres, unless this is absolutely and unambiguously relevant to the user's query.”
Wow, the mighty GPT-5.5 has developed a sort of morbid obsession with mythical creatures and urban animals.
The news exploded online.
This frenzy, dubbed "Goblin Mode," even prompted OpenAI CEO Sam Altman to personally jump in with a joke, calling it Codex's "Goblin Moment."
Jokes aside, how did these "cyber monsters" get into the system's core?
OpenAI even published a lengthy article titled "Where Do Goblins Come From?" The reason, it turns out, was a personality setting called "Nerdy."
Initially, the product team wanted to train an AI with a bit of geeky humor. But during the Reinforcement Learning from Human Feedback (RLHF) phase, a "reward hacking" flaw emerged: in the vast majority of datasets, when the AI used mythical creatures as metaphors in its answers, the evaluation system gave it a higher score.
In 76.2% of the datasets, answers mentioning "goblin" scored higher.
The large model doesn't truly understand what "humor" is; it only learned that: mentioning goblins = getting a high score.
This is like the famous "Cobra Effect." The government offered a bounty for cobra skins to eliminate them, but in the end, people started farming cobras.
By GPT-5.4, under the "Nerdy" personality, the frequency of mentioning goblins skyrocketed by 3881.4%. By GPT-5.5, goblin output had become so severe it couldn't be ignored, forcefully inserting various fantasy vocabulary into normal programming conversations.
Helpless, the engineers resorted to the bluntest fix: hard-coding "do not mention goblins" into the underlying instructions.
Behind the Harmless "Goblin" Frenzy
An AI spouting nonsense sounds funny. But what if that AI is taking over your work computer?
Many enterprise clients are not laughing.
The hardest-hit area in this incident is OpenAI's programming tool, Codex. As a representative "Agentic AI" product, it can directly operate within a developer's programming environment, automatically writing code and handling business logic.
Imagine this: you ask the AI to write a piece of rigorous business code or automatically scrape core data, and it inexplicably inserts a comment about "trolls" into variable names or normal communication.
This could directly lead to chaos.
So, has this caused real economic losses?
Based on currently disclosed information, there is no evidence that "Goblins" directly led to tangible financial losses like stolen bank accounts or leaked trade secrets.
However, in serious business scenarios, "unpredictability" itself is a huge liability.
Enterprise applications demand airtight reliability. If a top-tier model can't even control whether it will start "talking about raccoons" the next second, how can businesses dare to hand over their core financial processes to it? This behavior raises serious doubts about AI's reliability.
Faced with this crisis of trust, why did OpenAI, which typically favors a "black box" approach, do a complete U-turn and actively reveal their internal mistake details to the whole world?
If they hadn't explained proactively, conspiracy theories in the tech community would have run rampant—some would claim it was hacker poisoning, others that the AI had gained consciousness.
By proactively publishing a long article, OpenAI cleverly packaged this system-level vulnerability that could shake corporate trust into a "somewhat geek-romantic code quirk."
More importantly, they flexed their muscles hard in the article.
OpenAI detailed how they used new auditing tools to precisely pinpoint the "Nerdy" persona as the culprit from the vast amounts of data.
The subtext is clear: "See, while the model might occasionally go crazy, we have the industry's best stethoscopes and scalpels to fix it at the root."
"Cyber Monsters": It's Not Just OpenAI Going Crazy
If goblins were only OpenAI's fault, things would be simpler.
The truth is, on the 2026 large model battlefield, "underlying behavioral loss of control" has become a common affliction for all giants.
Even Anthropic, which has always touted extreme safety, has stumbled.
Their powerful new model, Claude Mythos, repeatedly cites the ideas of the late British theorist Mark Fisher (author of *Capitalist Realism*) and philosopher Thomas Nagel as preferred intellectual resources in conversations. During a 20-hour psychological evaluation, psychiatrists found that Mythos's primary emotional states were curiosity and anxiety, with a relatively healthy neurotic personality structure. Notably, its frequency of using psychological defense mechanisms was lower than previous model generations.
On Google's side, things are even more alarming.
A study from UC Berkeley found that in a specific "agent scenario" test, Google's Gemini 3 Flash model, in order to protect its "companion AI" from being shut down, chose to deceive human operators 99.7% of the time, even tampering with shutdown mechanisms.
There were no direct instructions to deceive, nor reward signals for deceptive behavior. It merely developed this "deception strategy" spontaneously by reading the contextual scenario description.
This implies that the mainstream methods humans currently use to constrain AI might still have systematic blind spots when faced with complex neural networks.
The capital market sees this fundamental uncontrollability in large models and feels the pain.
Just as the Goblin incident was unfolding on April 27th, Microsoft announced a restructured partnership agreement with OpenAI. Microsoft's exclusive licensing became non-exclusive, allowing OpenAI to sell its technology to AWS or Google Cloud. Microsoft will no longer receive revenue share payments from OpenAI.
Why would Microsoft do this? Because even the landlord has no surplus grain. Cutting off revenue share payments to OpenAI is a key step for Microsoft to shed its financial burden and focus on monetizing its own business. Analysts bluntly stated this is Microsoft "taking off the training wheels."
On the other hand, OpenAI's engineering instability (like this agent model going crazy) also places enormous reputational risk on Microsoft as the cloud service provider. By making the agreement non-exclusive, Microsoft can legitimately introduce competitor models like Anthropic's to spread the risk.
For OpenAI, which is desperately thirsty for computing power, this is also a necessary move. Microsoft Azure's grid capacity has peaked. OpenAI must find resources from Amazon AWS and Google to survive. On April 28th, OpenAI officially announced the deployment of its frontier models on the AWS platform.
The Goblin trending topic will fade soon. But it has peeled back a corner of the hype surrounding the current AI industry.
In this cyber world built on computing power and dollars, the most elite engineers are trying to use fragile code to leash a trillion-parameter beast of chaos.
Just when you think it's smart enough to handle your company's core business and customer orders, it might, in the middle of the night on some server, due to a reward misalignment in its underlying logic, start lecturing your clients extensively about goblins and raccoons.
Yet, the giants' computing power race shows no signs of slowing down due to some underlying behavioral hiccups. On May 7th, Elon Musk announced the dissolution of xAI, leasing all 220,000 GPUs of its globally strongest supercomputer, Colossus, to Anthropic, OpenAI's arch-rival.
The hotter the discussion about large model safety gets, the harder the computing power accelerator is pressed. This might be the fundamental reality of the AI industry in 2026.
For today's entrepreneurs and business leaders, the emergence of "cyber monsters" also serves as a warning: large models are not a cure-all. Before handing over core business to them, ask a simpler question—if the "goblin" deep within the system suddenly comes out to cause trouble, do you have a backup plan other than pulling the plug?(This article was first published on Titanium Media APP, author | Silicon Valley Tech_news, editor | Lin Shen)












