AI Read '1984' and Decided to Ban It

Published by marsbit on 2026-03-27 · Updated 2026-03-27

Summary

A UK secondary school in Manchester used AI to review its library, resulting in a list of 193 books recommended for removal—including George Orwell’s *1984*—due to themes like torture, violence, and sexual coercion. The librarian who resisted the AI’s recommendations was forced to resign after the school reported her for violating child safety procedures. The school later admitted the decisions were AI-generated but deemed them “broadly accurate.” In the same week, Wikipedia voted to ban the use of AI for generating or rewriting content, citing concerns over factual accuracy, the risk of AI “poisoning” its own training data, and the inability of human editors to verify AI-generated content at scale. Meanwhile, OpenAI indefinitely delayed the release of an “adult mode” for ChatGPT, which would have allowed age-verified users to engage in erotic conversations. Internal advisors warned of risks including unhealthy emotional dependency and minors bypassing verification. These events highlight a growing tension: AI can produce content faster than humans can evaluate it, leading institutions to adopt quick—often poorly considered—solutions. The lack of coherent global standards and the widening gap between AI output and human oversight raise urgent questions about who should control what AI decides—and who is accountable when it gets it wrong.

Author: Curry, Deep Tide TechFlow

Last week, a secondary school in Manchester, UK, used AI to review its library.

AI generated a list of 193 books to be removed, each with a reason. George Orwell's "1984" was prominently included, with the reason being "contains themes of torture, violence, and sexual coercion."

"1984" depicts a world where the government monitors everything, rewrites history, and decides what citizens can and cannot see. Now, AI has done the same for a school, and it may not even understand what it is saying.

The school librarian found it unreasonable and refused to fully implement the recommendations given by AI.

The school then launched an internal investigation against her on the grounds of "child safety," accusing her of introducing inappropriate books to the library, and reported her to the local government. Under the pressure, she took sick leave and eventually resigned.

Absurdly, the local government's investigation concluded that she had indeed violated child safety procedures, and the complaint was upheld.

Caroline Roche, chair of the UK School Library Association, said this conclusion means she can no longer work in any school.

The person who resisted AI's judgment lost her job, while those who signed off on AI's judgment faced no consequences.

Subsequently, the school admitted in internal documents that all classifications and reasons were generated by AI, stating: "Although the classification was generated by AI, we believe it is generally accurate."

A school handed over the judgment of "what books are suitable for students" to AI. AI returned an answer it did not understand, and a human administrator stamped it without even looking closely.

After this incident was exposed by the UK free speech organization Index on Censorship, the issues raised extended far beyond a school's bookshelf:

When AI starts deciding for humans what content is appropriate and what is dangerous, who judges whether AI's judgment is correct?

Wikipedia Closes Its Doors to AI

In the same week, another institution answered this question with action.

While the school let AI decide what people can read, the world's largest online encyclopedia, Wikipedia, made the opposite choice: not letting AI decide what the encyclopedia writes.

English Wikipedia formally passed a new policy prohibiting the use of large language models to generate or rewrite entry content. The vote was 44 in favor and 2 against.

The direct cause was an AI account called TomWikiAssist. In early March this year, this account autonomously created and edited multiple entries on Wikipedia, which were urgently addressed after being discovered by the community.

AI takes only a few seconds to write an entry, but volunteers spend hours verifying an AI-generated entry's facts, sources, and wording for accuracy.

The Wikipedia editing community has only so many people. If AI can mass-produce content indefinitely, human editors simply cannot review it all.

This is not even the most troublesome part. Wikipedia is one of the most important training data sources for global AI models. AI learns knowledge from Wikipedia and then uses what it has learned to write new Wikipedia entries, which are then ingested by the next generation of AI models for further training.

Once AI-generated misinformation mixes in, it will be continuously amplified in this cycle, becoming a matryoshka-doll-style case of AI poisoning:

AI pollutes training data, and training data pollutes AI.
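The poisoning loop described above can be sketched as a toy simulation: each model generation trains partly on the previous generation's output, inherits that corpus's error fraction, and adds fresh fabrications of its own. The error rate and the AI share of the corpus below are illustrative assumptions, not measured values.

```python
# Toy model of the "AI pollutes training data, training data pollutes AI" loop.
# All numbers are hypothetical, chosen only to show the compounding effect.

def error_fraction_after(generations, base_error=0.02, ai_share=0.5):
    """Fraction of wrong facts in the corpus after n model generations.

    base_error: fresh errors each AI generation introduces on its own.
    ai_share:   fraction of the next corpus that is AI-written.
    """
    corpus_error = 0.0  # start from a clean, human-written corpus
    for _ in range(generations):
        # the model reproduces errors it was trained on, plus new ones
        model_error = corpus_error + base_error * (1 - corpus_error)
        # the next corpus blends human text with the model's output
        corpus_error = ai_share * model_error + (1 - ai_share) * corpus_error
    return corpus_error

for n in (1, 5, 10, 20):
    print(n, round(error_fraction_after(n), 4))
```

Under these assumptions the error fraction never shrinks: each pass can only add errors, so the corpus drifts monotonically toward being unreliable unless humans filter the AI share back out.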

However, Wikipedia's policy also leaves two openings for AI: editors can use AI to polish their own writing or use AI to assist with translation. But the policy specifically warns that AI may "go beyond your request, change the meaning of the text, and make it inconsistent with the cited sources."

Human writers make mistakes, and Wikipedia has relied on community collaboration to correct them for over twenty years. AI makes mistakes differently; it fabricates things that look more real than the truth and can be produced in bulk.

A school trusted AI's judgment and lost a librarian. Wikipedia chose not to trust and simply closed the door.

But what if even the creators of AI are starting to lose faith?

The Creators of AI Are Themselves Afraid

While external institutions are closing doors to AI, AI companies are also pulling back.

In the same week, OpenAI indefinitely shelved ChatGPT's "adult mode." This feature was originally planned for release last December, allowing age-verified adult users to engage in erotic conversations with ChatGPT.

CEO Sam Altman personally announced it last October, stating the goal was to "treat adult users like adults."

After three postponements, it was canceled outright.

According to the Financial Times, OpenAI's internal health advisory committee unanimously opposed the feature. The advisors' concerns were specific: users would develop unhealthy emotional dependencies on AI, and minors would inevitably find ways to bypass age verification.

One advisor put it more directly: without significant improvements, this thing could become a "sexy suicide coach."

The error rate of the age verification system exceeds 10%. Against ChatGPT's weekly active user base of 800 million, that means roughly 80 million people could be misclassified.

Adult mode is not the only product cut this month. AI video tool Sora and ChatGPT's built-in instant checkout feature were also taken offline around the same time. Altman said the company is focusing on its core business and cutting "side tasks."

But OpenAI is simultaneously preparing for an IPO.

For a company sprinting toward an IPO, this wave of cuts to potentially controversial features might more accurately be called risk aversion than focus.

Five months ago, Altman was still saying to treat users like adults. Five months later, he found that his own company still hasn't figured out what AI can let users touch and what it cannot.

Even the creators of AI themselves have no answer. So who should draw this line?

The Uncatchable Speed Gap

Put these three things together, and it's easy to draw a core conclusion:

The speed at which AI produces content and the speed at which humans review content are no longer on the same scale.

The choice of that school in Manchester is easy to understand in this context. How long would it take for a librarian to read all 193 books and make a judgment? Let AI run through them: a few minutes.

The principal chose the few-minute solution. Do you really think he trusted AI's judgment? I think it's more because he didn't want to spend the time.

This is an economic problem. The cost of generation approaches zero, while the cost of review is entirely borne by humans.

Therefore, every institution affected by AI is forced to respond in the bluntest way possible: Wikipedia imposes an outright ban, OpenAI simply cuts product lines. None of these solutions are the result of careful consideration; they are all stopgap measures adopted before anyone has had time to think things through.

"Block it first and talk later" is becoming the norm.

AI capabilities iterate every few months, while discussions about what content AI can touch don't even have a decent international framework. Each institution only manages the line in its own yard. The lines contradict each other, and no one coordinates them.

AI's speed is still accelerating, while the number of reviewers won't increase. This gap will only widen until, one day, something far more serious than banning "1984" happens.

By then, drawing lines might be too late.

Related Questions

Q: Why was George Orwell's book '1984' banned by the AI in the Manchester school case?

A: The AI recommended banning '1984' due to its 'themes of torture, violence, and sexual coercion.'

Q: What was the consequence for the librarian who resisted the AI's book removal suggestions?

A: The librarian was subjected to an internal investigation, pressured into taking sick leave, and ultimately resigned. She was also reported to local authorities and deemed to have violated child safety procedures, effectively ending her career in schools.

Q: What action did Wikipedia take regarding AI-generated content, and why?

A: Wikipedia officially banned the use of large language models to generate or rewrite article content. This decision was made because AI can produce content rapidly, making it difficult for human volunteers to verify facts and sources, and it risks creating a feedback loop where AI pollutes its own training data.

Q: Why did OpenAI decide to cancel its planned 'adult mode' for ChatGPT?

A: OpenAI canceled the 'adult mode' due to concerns from its internal health advisory board, which warned about users developing unhealthy emotional dependencies on the AI and the risk of minors bypassing age verification. The error rate of the age verification system was also a significant factor.

Q: What is the core issue highlighted by the three events in the article regarding AI and content moderation?

A: The core issue is the significant speed disparity between AI's ability to generate content and humanity's capacity to review it. This creates a situation where institutions are forced to make hasty, often poorly considered decisions, such as outright bans or canceling features, because they lack the resources or time to properly evaluate AI's output, and there is no comprehensive international framework to guide these decisions.

