Humans Reclaim Wikipedia as AI Writing Gets Banned

For a quarter of a century, Wikipedia has stood as one of the internet’s most quietly radical ideas. It promised something simple but powerful: that ordinary people, working together, could build one of the largest collections of knowledge in human history. No central authority. No single voice. Just millions of contributors, guided by shared rules about truth, neutrality, and verifiability.
That promise has always depended on trust. Trust that the people writing the articles are acting in good faith. Trust that the sources behind each claim can be checked. Trust that the system, while imperfect, is grounded in human judgment.
But in recent years, a new kind of contributor began slipping into that system. It did not need sleep, did not verify sources, and did not truly understand the information it was producing. It could generate entire articles in seconds. And increasingly, it was doing exactly that.
Wikipedia has now responded in the clearest way possible. In a decisive vote, the platform has banned AI-generated text from its articles, drawing a firm boundary around what it believes knowledge should look like.
A Vote That Signals a Turning Point
On March 20, 2026, volunteer editors of the English-language Wikipedia voted 40 to 2 in favor of a new policy prohibiting the use of large language models to generate or rewrite article content. According to multiple reports, including TechCrunch, the decision came after months of internal debate and growing concern among editors.
The lopsided margin reflects how strongly the community felt about the issue. Wikipedia rarely moves quickly or unanimously. Its policies are typically shaped through long discussions and gradual consensus. In this case, the outcome was unusually decisive.

Editors had been watching a steady increase in AI-generated contributions since tools like ChatGPT became widely accessible. At first, there was curiosity about whether these tools could help improve efficiency. Over time, that curiosity gave way to concern as the volume of questionable content grew.
The final decision was not just about technology. It was about preserving the core identity of Wikipedia. The platform has always relied on the idea that knowledge should be built carefully, with each claim tied to a reliable source. AI-generated text, despite its fluency, repeatedly failed to meet that standard.
The Problem With AI-Written Knowledge

At the heart of the issue lies a fundamental mismatch between how AI systems generate text and how Wikipedia defines knowledge.
Large language models are designed to produce convincing language based on patterns in their training data. They do not verify facts against sources. They do not distinguish between reliable and unreliable information the way a human editor would. Instead, they predict what a plausible answer looks like.
This becomes a serious problem on a platform where every sentence is expected to be backed by evidence.
Editors began noticing recurring issues in AI-generated articles. Citations often pointed to sources that did not exist. Links led to irrelevant pages or dead ends. In some cases, entire references were fabricated but presented in a way that looked credible.
This phenomenon, often referred to as hallucination, created a significant burden for human editors. Every claim had to be checked manually. Verifying a single AI-generated paragraph could take far longer than writing one from scratch.
Jimmy Wales, Wikipedia’s co-founder, acknowledged these limitations in earlier comments to the BBC. While he has not ruled out future uses of AI, he emphasized that current models are not reliable enough for writing encyclopedia content. For a system built on precision, even small inaccuracies can undermine trust.
A Community That Saw It Coming

The push for stronger rules did not come from the top down. It emerged from within Wikipedia’s own editor community.
One of the key figures behind the policy is Ilyas Lebleu, an AI research student based in France who edits under the username Chaotic Enby. Lebleu helped establish WikiProject AI Cleanup, a volunteer initiative focused on identifying and removing AI-generated content.
According to interviews with NPR and other outlets, the turning point came when editors began noticing articles that felt off. The writing was polished but oddly generic. Phrases repeated across unrelated topics. Citations failed basic verification checks.
What began as isolated incidents quickly grew into a broader pattern. The volume of AI-generated content increased to a point where it became difficult to manage. Editors found themselves spending more time correcting or removing content than contributing new material.
Lebleu and others described what they called an asymmetry of effort. Generating an article with AI takes seconds. Verifying or fixing it can take hours. This imbalance placed increasing strain on the volunteer-driven system.
Over time, it became clear that existing guidelines were not enough. A more explicit rule was needed.
What the New Policy Allows

Despite the strict ban on AI-generated writing, Wikipedia has not completely closed the door on artificial intelligence.
The policy includes two limited exceptions. Editors are allowed to use AI tools for basic copyediting tasks, such as correcting grammar or formatting, but only on text they have written themselves. Even then, they are required to carefully review the output to ensure that no unintended changes have been introduced.
The second exception involves translation. AI can be used to help translate articles from other language versions of Wikipedia into English, provided the editor is fluent in both languages and verifies the accuracy of the result.
These exceptions reflect a cautious approach. AI can assist with mechanical tasks, but it cannot replace human judgment in creating or shaping content.
Wikipedia’s guidelines also warn editors that even permitted uses carry risks. AI tools can subtly alter meaning, introducing inaccuracies that may not be immediately obvious. In a system where every statement must align with its source, even minor shifts can create problems.
Detecting the Invisible Contributor

Before the ban was introduced, Wikipedia editors had already begun developing methods to identify AI-generated text.
Certain patterns became familiar over time. Articles would include overly generic language, repeated phrases, or unnecessarily complex explanations. Some entries contained abrupt shifts in tone, suggesting that different sections had been generated separately.
Fake citations remained one of the most reliable indicators. AI-generated references often appeared realistic at a glance but failed under closer inspection.
However, detection has never been perfect. Some human writers naturally produce text that resembles AI output. Wikipedia’s policy acknowledges this challenge and cautions against penalizing users based solely on writing style.
Instead, enforcement focuses on whether content meets the platform’s core standards. If a claim cannot be verified or a source cannot be traced, the content may be removed regardless of how it was created.
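The cheap end of that verification work can be sketched in code. The toy function below runs plausibility checks on a list of references, flagging malformed DOIs and impossible publication years. This is an illustration of the general idea, not Wikipedia's actual tooling; the field names and thresholds are assumptions, and a flagged reference still needs a human to locate and read the cited work.

```python
import re

# A DOI has the shape "10.<registrant>/<suffix>"; hallucinated
# citations often fail even this superficial format check.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def flag_suspicious_refs(refs):
    """Return (reference, reason) pairs for refs failing cheap checks.

    refs: list of dicts with optional "doi" and "year" keys
    (a hypothetical schema chosen for this sketch).
    """
    flagged = []
    for ref in refs:
        doi = ref.get("doi")
        year = ref.get("year")
        if doi is not None and not DOI_RE.match(doi):
            flagged.append((ref, "malformed DOI"))
        elif year is not None and not (1500 <= year <= 2026):
            flagged.append((ref, "implausible year"))
    return flagged
```

Checks like these can only surface candidates for review; confirming that a source exists and actually supports the claim remains manual work, which is exactly the asymmetry editors described.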
The Bigger Shift Happening Online
Wikipedia’s decision comes at a time when AI tools are rapidly changing how people access information.
Data from market research firm GWI shows that ChatGPT surpassed Wikipedia in monthly reach in late 2024. Usage of AI tools has grown quickly, particularly among students and younger users. Similarweb data also indicates that ChatGPT now ranks among the most visited platforms globally.
At the same time, Wikipedia has experienced a decline in traditional page views. Some of this trend predates AI, as search engines increasingly provide direct answers without requiring users to click through to external sites.
AI has accelerated this shift by offering conversational responses that feel immediate and personalized. Instead of navigating multiple sources, users can ask a question and receive a summary within seconds.
Despite these changes, the Wikimedia Foundation maintains that overall engagement remains strong. Page views have held relatively steady over the past several years, suggesting that Wikipedia continues to play a central role in the information ecosystem.
Still, the contrast is striking. One platform prioritizes speed and convenience. The other emphasizes verification and accountability.

An Uncomfortable Irony
There is a deeper layer to this story that has not gone unnoticed.
Wikipedia, with its vast collection of well-organized and sourced content, has been one of the key datasets used to train modern AI systems. The same platform that is now restricting AI-generated contributions helped make those systems possible.
This creates a complex relationship between the two. AI tools rely on Wikipedia’s data to generate responses, while also drawing users away from the site itself.
In response, the Wikimedia Foundation has asked AI companies to access its content through official channels rather than scraping it directly. This approach is intended to reduce strain on Wikipedia’s infrastructure while supporting its nonprofit mission.
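For developers, "official channels" means endpoints like Wikipedia's public REST API (and, for high-volume commercial use, the Wikimedia Enterprise service). As a minimal sketch, the helper below builds the URL for the REST API's page-summary endpoint; the encoding convention shown (spaces to underscores, then percent-encoding) is an assumption based on how the API addresses page titles.

```python
from urllib.parse import quote

API_BASE = "https://en.wikipedia.org/api/rest_v1"

def summary_url(title: str) -> str:
    """Build the REST endpoint URL for a page summary.

    Page titles use underscores instead of spaces; the rest of
    the title is percent-encoded to be safe in a URL path.
    """
    encoded = quote(title.replace(" ", "_"), safe="")
    return f"{API_BASE}/page/summary/{encoded}"
```

Fetching through documented endpoints like this, rather than crawling article pages, is the behavior the Foundation is asking AI companies to adopt.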
Whether this model will be widely adopted remains uncertain. The broader question is how open platforms can sustain themselves in an environment where their content is continuously reused and repackaged.
Where AI Still Fits In
Despite the ban on writing, Wikipedia’s leadership has not dismissed AI entirely.
Jimmy Wales has explored ways in which AI could support editorial work without compromising quality. One example involves using AI to compare short articles against their cited sources, identifying gaps or inconsistencies.
There is also interest in using machine learning to analyze patterns across Wikipedia’s content. Studies have shown that certain topics, such as biographies of women, are underrepresented or treated differently. AI could help identify these imbalances at scale, providing data that human editors can act upon.
The Wikimedia Foundation has already established a dedicated machine learning team to develop tools that assist rather than replace contributors.
This approach reflects a broader principle. Technology can enhance the process, but it cannot substitute for the human responsibility of verifying and presenting knowledge.

A Signal to the Rest of the Internet
Wikipedia’s decision may appear specific to one platform, but its implications extend much further.
Online communities across the internet are grappling with similar questions. AI-generated content is faster and cheaper to produce, but it introduces new challenges around accuracy, authenticity, and trust.
Ilyas Lebleu has suggested that Wikipedia’s move could inspire other platforms to define their own boundaries. Each community will need to decide how much control to retain and how much to delegate to automated systems.
The debate is not simply about technology. It is about values. What does it mean for information to be reliable? Who is accountable when something goes wrong? And how much of the knowledge we rely on should be created by systems that do not truly understand it?
Wikipedia has offered one answer. It has chosen to prioritize human judgment, even if that means slower growth and greater effort.
A Reminder of What Knowledge Requires
The decision to ban AI-generated writing is not a rejection of innovation. It is a reaffirmation of a principle that has guided Wikipedia since its beginning.
Knowledge is not just about assembling words into coherent sentences. It requires context, verification, and responsibility. It requires people who are willing to check sources, challenge assumptions, and correct mistakes.
AI can generate language at an unprecedented scale. But scale alone does not create understanding.
By drawing a clear line, Wikipedia has reminded the internet that some things cannot be automated without consequence. In a digital landscape increasingly shaped by algorithms, the platform is holding onto the idea that human oversight still matters.
That choice may not slow the rise of AI. But it does offer a different vision of how knowledge can be built and maintained. One that values accuracy over speed, and trust over convenience.
