AI Read '1984' and Decided to Ban It

marsbit · Published on 2026-03-27 · Last updated on 2026-03-27

Abstract

A UK secondary school in Manchester used AI to review its library, resulting in a list of 193 books recommended for removal—including George Orwell’s *1984*—due to themes like torture, violence, and sexual coercion. The librarian who resisted the AI’s recommendations was forced to resign after the school reported her for violating child safety procedures. The school later admitted the decisions were AI-generated but deemed them “broadly accurate.” In the same week, Wikipedia voted to ban the use of AI for generating or rewriting content, citing concerns over factual accuracy, the risk of AI “poisoning” its own training data, and the inability of human editors to verify AI-generated content at scale. Meanwhile, OpenAI indefinitely delayed the release of an “adult mode” for ChatGPT, which would have allowed age-verified users to engage in erotic conversations. Internal advisors warned of risks including unhealthy emotional dependency and minors bypassing verification. These events highlight a growing tension: AI can produce content faster than humans can evaluate it, leading institutions to adopt quick—often poorly considered—solutions. The lack of coherent global standards and the widening gap between AI output and human oversight raise urgent questions about who should control what AI decides—and who is accountable when it gets it wrong.

Author: Curry, Deep Tide TechFlow

Last week, a secondary school in Manchester, UK, used AI to review its library.

The AI generated a list of 193 books to be removed, each with a reason. George Orwell's "1984" was prominently included; the stated reason: "contains themes of torture, violence, and sexual coercion."

"1984" depicts a world where the government monitors everything, rewrites history, and decides what citizens can and cannot see. Now, AI has done the same for a school, and it may not even understand what it is saying.

The school librarian found the list unreasonable and refused to fully implement the AI's recommendations.

The school then opened an internal investigation into her on "child safety" grounds, accused her of introducing inappropriate books into the library, and reported her to the local authority. Under the pressure, she took sick leave and eventually resigned.

Absurdly, the local government's investigation concluded that she had indeed violated child safety procedures, and the complaint was upheld.

Caroline Roche, chair of the UK School Library Association, said this conclusion means she can no longer work in any school.

The person who resisted AI's judgment lost her job, while those who signed off on AI's judgment faced no consequences.

Subsequently, the school admitted in internal documents that all classifications and reasons were generated by AI, stating: "Although the classification was generated by AI, we believe it is generally accurate."

A school handed the judgment of "what books are suitable for students" over to an AI. The AI returned an answer it did not understand, and a human administrator rubber-stamped it without even a close look.

After this incident was exposed by the UK free speech organization Index on Censorship, the issues raised extended far beyond a school's bookshelf:

When AI starts deciding for humans what content is appropriate and what is dangerous, who judges whether AI's judgment is correct?

Wikipedia Closes Its Doors to AI

In the same week, another institution answered this question with action.

While the school let AI decide what people can read, the world's largest online encyclopedia, Wikipedia, made the opposite choice: not letting AI decide what the encyclopedia writes.

English Wikipedia formally passed a new policy prohibiting the use of large language models to generate or rewrite entry content. The vote was 44 in favor and 2 against.

The direct trigger was an AI account called TomWikiAssist. In early March this year, the account autonomously created and edited multiple Wikipedia entries, which the community scrambled to clean up once it discovered them.

It takes an AI only a few seconds to write an entry, but volunteers spend hours checking an AI-generated entry's facts, sources, and wording.

The Wikipedia editing community has only so many people. If AI can mass-produce content indefinitely, human editors simply cannot review it all.

This is not even the most troublesome part. Wikipedia is one of the most important training data sources for global AI models. AI learns knowledge from Wikipedia and then uses what it has learned to write new Wikipedia entries, which are then ingested by the next generation of AI models for further training.

Once AI-generated misinformation mixes in, it is continuously amplified through this cycle, a matryoshka-doll style of AI poisoning:

AI pollutes training data, and training data pollutes AI.
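To make the loop concrete, here is a minimal toy sketch of how a small error rate compounds when each model generation trains on the previous generation's output. The starting rate and amplification factor are invented for illustration, not measurements of any real model:

```python
# Toy sketch of the "AI pollutes training data, training data pollutes AI" loop.
# initial_error_rate and amplification are invented illustrative numbers;
# real dynamics depend on filtering, human review, and the share of AI text.

def poisoning_loop(generations=5, initial_error_rate=0.02, amplification=1.5):
    """Each generation trains on the previous generation's output,
    so uncaught errors carry over and compound."""
    error_rate = initial_error_rate
    for g in range(1, generations + 1):
        # Errors in the corpus become errors in the model, which then
        # writes the next corpus; the rate is capped at 100%.
        error_rate = min(1.0, error_rate * amplification)
        print(f"generation {g}: ~{error_rate:.1%} of corpus text is wrong")

poisoning_loop()
```

Under these invented numbers, a 2% starting error rate passes 15% within five generations. The exact figures don't matter; the direction does.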

However, Wikipedia's policy also leaves two openings for AI: editors can use AI to polish their own writing or use AI to assist with translation. But the policy specifically warns that AI may "go beyond your request, change the meaning of the text, and make it inconsistent with the cited sources."

Human writers make mistakes, and Wikipedia has relied on community collaboration to correct them for over twenty years. AI's mistakes are different: it fabricates things that look more plausible than the truth, and it can produce them in bulk.

A school trusted AI's judgment and lost a librarian. Wikipedia chose not to trust and simply closed the door.

But what if even the creators of AI are starting to lose faith?

The Creators of AI Are Themselves Afraid

While outside institutions are closing their doors to AI, the AI companies themselves are pulling back too.

In the same week, OpenAI indefinitely shelved ChatGPT's "adult mode." This feature was originally planned for release last December, allowing age-verified adult users to engage in erotic conversations with ChatGPT.

CEO Sam Altman personally announced it last October, stating the goal was to "treat adult users like adults."

After three postponements, the feature was canceled outright.

According to the British Financial Times, OpenAI's internal health advisory committee unanimously opposed this feature. The advisors' concerns were specific: users would develop unhealthy emotional dependencies on AI, and minors would inevitably find ways to bypass age verification.

One advisor put it more directly: without significant improvements, this thing could become a "sexy suicide coach."

The error rate of the age verification system exceeds 10%. Against ChatGPT's roughly 800 million weekly active users, a 10% error rate means on the order of 80 million people could be misclassified.

Adult mode is not the only product cut this month. AI video tool Sora and ChatGPT's built-in instant checkout feature were also taken offline around the same time. Altman said the company is focusing on its core business and cutting "side tasks."

But OpenAI is simultaneously preparing for an IPO.

For a company sprinting toward an IPO, this concentrated cull of potentially controversial features looks less like focus than risk aversion.

Five months ago, Altman was still saying to treat adult users like adults. Five months later, his own company still hasn't figured out which parts of AI users can be allowed to touch and which they cannot.

Even the creators of AI themselves have no answer. So who should draw this line?

The Uncatchable Speed Gap

Put these three things together, and it's easy to draw a core conclusion:

The speed at which AI produces content and the speed at which humans review content are no longer on the same scale.

The choice of that school in Manchester is easy to understand in this context. How long would it take for a librarian to read all 193 books and make a judgment? Let AI run through them: a few minutes.

The principal chose the few-minute solution. Do you really think he trusted AI's judgment? I think it's more because he didn't want to spend the time.

This is an economic problem. The cost of generation approaches zero, while the cost of review is entirely borne by humans.
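A back-of-envelope sketch of that asymmetry, using per-item times that are assumptions for illustration rather than figures from the article:

```python
# Back-of-envelope: the cost of generation vs. the cost of review.
# Both per-item times below are illustrative assumptions.

SECONDS_TO_GENERATE = 10   # an AI drafts one verdict or entry in seconds
HOURS_TO_REVIEW = 2.0      # a human needs hours to check one properly

items = 193                # e.g. the Manchester school's removal list

generation_hours = items * SECONDS_TO_GENERATE / 3600
review_hours = items * HOURS_TO_REVIEW

print(f"AI generation: ~{generation_hours:.1f} hours")   # ~0.5 hours
print(f"Human review:  ~{review_hours:.0f} hours")       # ~386 hours
print(f"Review costs ~{review_hours / generation_hours:.0f}x the time")
```

Under these assumptions, review costs roughly seven hundred times as much human time as generation. Change the numbers however you like; the direction of the asymmetry doesn't change.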

As a result, every institution hit by AI is forced to respond in the bluntest way available: Wikipedia bans it outright, OpenAI simply cuts product lines. None of these solutions is the product of careful deliberation; they are all stopgaps put in place before anyone has had time to think things through.

"Block it first and talk later" is becoming the norm.

AI capabilities iterate every few months, while discussions about what content AI can touch don't even have a decent international framework. Each institution only manages the line in its own yard. The lines contradict each other, and no one coordinates them.

AI's speed is still accelerating, and the number of reviewers won't grow to match. The gap will only widen until, one day, something far more serious than banning "1984" happens.

By then, drawing lines might be too late.

Related Questions

Q: Why was George Orwell's book '1984' banned by the AI in the Manchester school case?

A: The AI recommended banning '1984' due to its 'themes of torture, violence, and sexual coercion.'

Q: What was the consequence for the librarian who resisted the AI's book removal suggestions?

A: The librarian was subjected to an internal investigation, pressured into taking sick leave, and ultimately resigned. She was also reported to local authorities and deemed to have violated child safety procedures, effectively ending her career in schools.

Q: What action did Wikipedia take regarding AI-generated content, and why?

A: Wikipedia officially banned the use of large language models to generate or rewrite article content. This decision was made because AI can produce content rapidly, making it difficult for human volunteers to verify facts and sources, and it risks creating a feedback loop where AI pollutes its own training data.

Q: Why did OpenAI decide to cancel its planned 'adult mode' for ChatGPT?

A: OpenAI canceled the 'adult mode' due to concerns from its internal health advisory board, which warned about users developing unhealthy emotional dependencies on the AI and the risk of minors bypassing age verification. The error rate of the age verification system was also a significant factor.

Q: What is the core issue highlighted by the three events in the article regarding AI and content moderation?

A: The core issue is the wide speed disparity between AI's ability to generate content and humans' capacity to review it. This forces institutions into hasty, often poorly considered decisions, such as outright bans or canceled features, because they lack the resources or time to properly evaluate AI output, and there is no comprehensive international framework to guide these decisions.

