Wikipedia's English-language editors voted 44-2 on March 20 to ban the use of large language models for generating or rewriting articles, the Reporters Lab reported. The new policy replaces vaguer language that only prohibited creating articles "from scratch," now explicitly barring LLMs from rewriting existing content as well. The vote reflects escalating volunteer burnout over phantom citations and AI agents that, according to the editor who proposed the ban, can "run wild 24 hours per day."

What the ban covers

The updated guidelines cite a tendency for AI-written text to violate "several of Wikipedia's core content policies." Two narrow exceptions survived. Editors can still use LLMs to suggest basic copyedits to their own writing, provided the tool does not introduce new content. Translation from other language editions into English remains allowed, but only if the translator speaks both languages fluently and follows separate translation rules.

Previous rules left too much room. The old text said only that LLMs "should not be used to generate new Wikipedia articles from scratch," and many editors viewed that as a placeholder. Rewriting existing articles? Not covered. Inserting AI-generated paragraphs into established entries? Technically fine. The new wording closes those loopholes with a single prohibition and two surgical carve-outs.

Why editors drew the line

Ilyas Lebleu, the Wikipedia editor who proposed the ban under the username Chaotic Enby, pointed to autonomous AI agents as the breaking point. In early March, a suspected bot called TomWikiAssist authored several articles and edited other pages without human oversight. "An AI agent can just run wild 24 hours per day," Lebleu told the Reporters Lab. "It can cause disruption at a scale that is much larger than what a human editor can achieve."

But bots were only part of the problem. WikiProject AI Cleanup, a volunteer group formed in 2023, has been tracking a growing wave of phantom citations: fabricated references that look legitimate, sources that do not exist. Piled on top are mass-produced stub articles, short entries that read with authority while containing almost nothing verifiable. In August 2025, the community had already added a speedy deletion criterion allowing immediate removal of suspected AI-generated pages. The March vote went further.

The policy also addresses false accusations. Some editors naturally write in styles that resemble AI output. The guidelines warn against punishing them on style alone. Decisions should consider "the text's compliance with core content policies and recent edits by the editor in question," not how a sentence sounds to a reviewer scanning for AI tells.

Hannah Clover, 2024's Wikimedian of the Year, called the vote overdue. "LLM text has been really frowned upon for a while," she said, "but it's good to have that officially be the case." Others think the rules could go further still. David Lovett, a veteran editor who covers Wikipedia in his Edit History newsletter, was blunter: "The Internet is already awash with slop. Wikipedia should do everything it can to stay clean."

A one-way pipeline

English Wikipedia's ban is not even the strictest version available. Spanish Wikipedia already prohibits all LLM use, including for copyediting and translation. Each language edition governs itself. Some may follow. Others may not bother.

Lebleu wants the vote to travel further. "My genuine hope is that this can spark a broader change," Lebleu wrote on Mastodon. "Empower communities on other platforms, and see this become a grassroots movement." Lebleu also called the policy "pushback against enshittification and the forceful push of AI by so many companies in these last few years."

Stack Overflow went through this already. Convincing but wrong AI answers choked the site, and moderators drew a line. Peer-reviewed journals are fighting the same battle with less success. Wikipedia puts the biggest name yet on that growing list.

And there is a particular irony you might notice. Wikipedia's 60 million articles across all languages have served as training data for the AI systems now banned from contributing back. ChatGPT, Gemini, Claude, all learned to sound authoritative in part because they absorbed an encyclopedia written by human volunteers. That pipeline runs one way now. The humans who built the training set just locked the door behind them.

Frequently Asked Questions

What exactly did Wikipedia ban?

English Wikipedia now prohibits using large language models to generate or rewrite article content. The policy passed 44-2 in a March 20, 2026 vote, replacing earlier guidelines that only banned creating articles from scratch.

Can editors still use AI tools on Wikipedia?

Only in two narrow cases. Editors can use LLMs for basic copyediting suggestions if the tool adds no new content. They can also use AI for translation if they speak both languages fluently.

Who proposed the Wikipedia AI ban?

Ilyas Lebleu, a Wikipedia editor using the name Chaotic Enby. Lebleu cited autonomous AI agents like the suspected bot TomWikiAssist as a primary motivation.

Does the ban apply to all Wikipedia language editions?

No. Only English Wikipedia. Each language edition sets its own rules. Spanish Wikipedia already enforces a stricter total ban with no exceptions for editing or translation.

What happens if an editor uses AI to write content?

Repeated misuse counts as disruptive editing under Wikipedia's existing rules and can lead to blocks or bans. Administrators can now act based on output quality, not just AI detection tools.

Maria Garcia