On March 26, Wikipedia formally adopted new editing policies targeting large language models (LLMs), explicitly prohibiting users from using AI directly to generate or rewrite article content. The move marks a critical step by the world's largest openly edited encyclopedia to safeguard content accuracy and human editorial oversight.
According to the latest policy changes, Wikipedia has tightened previously vague guidance, strengthening language that editors "should not generate new articles from scratch" into an explicit rule that "strictly prohibits the use of LLMs to generate or rewrite content."
As reported by 404 Media, the policy passed among volunteer editors by an overwhelming 40-to-2 margin, reflecting the community's deep concern that AI-generated misinformation could erode the knowledge base. The new rules do not discard AI entirely, however; they position it as an auxiliary tool. Editors may still use LLMs to propose basic editing suggestions, but every suggestion must pass manual review before adoption, and the tools are strictly barred from introducing any unverified "new content," lest model hallucinations cause articles to drift from their cited sources.
With generative AI penetrating ever deeper into content creation, Wikipedia's choice reflects a cautious balance between efficiency and authenticity within a traditional knowledge community. As major media platforms race to establish their own AI usage guidelines, Wikipedia is drawing a line between "assistance" and "creation," aiming to protect its human editorial ecosystem while guarding against the crisis of trust that a flood of automated content could trigger. The decision will not only reshape how the encyclopedia community collaborates but also serve as an important reference for the ethical governance of public knowledge repositories in the AI era.