The document is 59 lines of reStructuredText. It sits in Documentation/process/coding-assistants.rst, merged into Linus Torvalds' tree in early April after Jonathan Corbet waited for someone to scream and nobody did. Read past the opening paragraph and the shape of the thing becomes clear. Linux has not embraced AI coding assistants. It has wired them to a person, written that person's name on the side of the bomb, and dared them to pull the pin.
You are the fuse now. If Claude writes a heap overflow into the NFS driver and you sign off on it, the patch carries your tag, not the model's. The Signed-off-by line is the certification of the Developer Certificate of Origin, and the new policy is explicit that only humans can legally add it. AI agents "MUST NOT," and the capitals are the kernel's, not mine. Every other requirement flows from that one sentence.
This is being read around the industry as a permissive gesture. It is the opposite. It is the kernel community refusing to build a new accountability structure because the old one already covers the case, and betting that GPL-2.0-only makes the worst-case outcome survivable. Both halves of that bet matter. The second half is the one BSD projects can't copy, and nothing they do to their own AI policies will fix that.
Key Takeaways
- Linux merged a 59-line AI coding assistants policy this month, making the human submitter legally responsible for every AI-generated patch.
- Only humans can add Signed-off-by tags, so the Developer Certificate of Origin absorbs AI output without any new liability structure.
- GPL-2.0-only licensing shields the kernel from worst-case training-data contamination, a protection NetBSD, Gentoo, and other BSD-licensed projects cannot replicate.
- The new Assisted-by tag is a forensic tool for tracking bug clusters by model, not a transparency requirement enforced at patch time.
AI-generated summary, reviewed by an editor. More on our AI guidelines.
Why the DCO already did this work
The Developer Certificate of Origin has been the kernel's liability architecture since 2004. Adding Signed-off-by to a patch is a legal assertion. You wrote the code, or you have the right to pass it along under the project's license, or both. Either way your name is glued to the provenance. Sasha Levin's argument, the one that eventually carried the day, was that nothing about an LLM changes this. "AI doesn't send patches on its own. Humans do," Levin wrote on the LWN thread last November. The submitter is responsible. That was always true. The policy document is the kernel writing it down because too many people had convinced themselves it wasn't.
Torvalds said so in January, when Oracle's Lorenzo Stoakes pushed to document AI-specific concerns in the kernel tree. "There is zero point in talking about AI slop," Torvalds replied on LKML. "That's just plain stupid." He wanted the documentation to take the "just a tool" position and nothing else. That framing is not a dismissal of the risk. It is a statement about where the risk should be pinned: to the human who chose to submit, not to a category of tool that the project has no way to detect anyway. You cannot build policy around a signal you cannot read.
The three-month gap between that email and the merged document is where the compromise lived. Levin's v2 patch added the one thing Torvalds resisted, an Assisted-by: tag, and the reason it survived is that the tag is a diagnostic, not a disclosure. More on that in a minute.
The GPL is the blast shield
Here is what the kernel has going for it that Gentoo, NetBSD, Servo, and QEMU do not. Linux ships under GPL-2.0-only, and that one fact does more protective work than the new policy document does. Picture the worst case. A model trained on a mountain of GPL code spits a near-verbatim chunk of something back into a patch, and the chunk happens to be lifted from a GPL project. In any other license regime that is a problem. Inside the kernel it's a license match. Messy, embarrassing, maybe attribution-breaking, but the code is already where it is allowed to be. You get sued for the wrong reasons, not the fatal ones.
NetBSD cannot run that play. Its 2024 commit guideline calls LLM output "presumed tainted" and the reason is structural. If Copilot regurgitates GPL-trained code into a BSD-licensed project, the project now has code it cannot legally relicense and cannot cleanly ship. The asymmetry is total. A permissive license is a one-way contamination surface for generative tools trained on a copyleft-heavy corpus.
Gentoo banned AI code the same year for overlapping reasons. Michał Górny wrote that the project needed a position on "the spread of the AI bubble" and named three objections, copyright first, then quality, then ethics. Copyright came first because Gentoo is a packaging distribution that pulls from many upstreams, each with its own license, and contamination inside one package can cascade into consumer projects downstream. The ethical objections got the headlines. The license calculus did the real work.
So when a journalist calls the Linux policy "permissive," check what that word is doing. The kernel is permissive because the kernel can afford to be. The decision is a function of GPL-2.0-only, not a judgment about AI quality. You feel cornered if you maintain NetBSD right now. You feel emboldened if you maintain Linux. Same tool, same risk, different legal exposure.
Assisted-by is not transparency. It's a beacon for bug archaeology.
Read the attribution section of the policy carefully and you will notice what it does not require. There is no "you must disclose AI involvement." There is a format, Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2], with an example showing Claude:claude-3-opus coccinelle sparse. The tag is defined. Its use is expected. Its enforcement is nothing.
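For concreteness, a patch trailer using the documented format might look like the fragment below. The subject line, description, and signoff are illustrative, not taken from the policy text; only the Assisted-by format itself comes from the document.

```
nfs: fix use-after-free in delegation return

[patch description]

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane@example.com>
```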
What the tag actually enables is forensic. When a class of bugs starts clustering in a subsystem six months from now, a maintainer will be able to grep the history and ask a question the old DCO signoff could not answer: was this written with Copilot? With Claude Opus 4.6? With Gemini 3.1 Pro through Sashiko? The answer matters because it lets the kernel community track which tools are introducing which failure modes, the way they already track which compiler versions, which static analyzers, which subsystems over-represent in regression reports. Kees Cook initially opposed patch-tag documentation on the LWN thread, then acknowledged value in tracking specific tool impacts. That is what he meant.
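The forensic query that tag enables is easy to sketch. The snippet below builds a throwaway repository with one tagged commit, then counts patches per model; the commit message, author, and tag value are all hypothetical. In a real kernel checkout you would run only the final `git log` pipeline, optionally scoped to a subsystem path.

```shell
# Build a throwaway repo so the query has data to hit (demo only).
repo=$(mktemp -d)
git -C "$repo" init -q
printf '%s\n' \
    'nfs: fix refcount leak in delegation return' \
    '' \
    'Assisted-by: Claude:claude-3-opus coccinelle sparse' \
    'Signed-off-by: A Developer <dev@example.com>' \
    | git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
          commit -q --allow-empty -F -

# The forensic query itself: patches per model, most frequent first.
git -C "$repo" log --format='%(trailers:key=Assisted-by,valueonly)' \
    | sed '/^$/d' | sort | uniq -c | sort -rn \
    | tee /tmp/assisted_by_counts.txt
```

The `%(trailers:key=...)` pretty-format option does the heavy lifting: it pulls just the Assisted-by values out of each commit, so the rest is ordinary `sort | uniq -c` plumbing.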
The checkpatch.pl patch tells the same story. When Bart Van Assche tried to use Assisted-by: Gemini:gemini-3.1-pro on a tracing patch in early March, the linter flagged it as a malformed signature tag with an unrecognized email address. Levin's fix added the tag to the $signature_tags list and skipped email validation. Joe Perches acked it on 11 March. The plumbing went in because the kernel is making room for the data stream, not policing the behavior. It is instrumentation, not customs.
The world that switched a month ago
Greg Kroah-Hartman told The Register in late March that something had changed. "Months ago, we were getting what we called 'AI slop,'" he said, describing the wave of junk security reports maintainers had spent 2025 triaging. "Something happened a month ago, and the world switched. Now we have real reports." Across every major open source project he talked to, the story was the same. He did not know why. "Nobody seems to know why."
Two data points explain it better than he lets on. In February, Anthropic published a paper reporting more than 500 validated high-severity vulnerabilities across major open-source codebases, discovered by Claude Opus 4.6. Nicholas Carlini's NFSv4 heap overflow, 23 years unnoticed and exploited with a shell script that iterated over source files and asked Claude to find bugs, was one of them. Earlier models found a small fraction. Opus 4.6 found the whole set. Roman Gushchin's Sashiko tool, donated by Google to the Linux Foundation and now running across most kernel patches, reports a 53% bug detection rate on a 1,000-patch test set where humans caught zero of the same issues. That is the world that switched. It did not switch because AI got gentler. It switched because AI got useful, and the kernel's maintainer base is running out of reasons to be precious about it.
This is where the policy lands. You do not write rules to ban a tool that is about to start paying rent. You write rules that let the tool work and make the humans responsible for what it produces. That is what the coding-assistants.rst document does, and it is what the DCO was already doing before the document existed.
What this actually costs you
Here is the part the XDA headline got right and then walked away from. If you are a kernel contributor reading this on Saturday morning, the new policy is not a green light. It is a spotlight. You can now submit Claude's code. You cannot now shift responsibility for Claude's code. Your signoff still drags along every bit of liability it would have carried if you'd typed the code yourself. There is now a second line next to your name that tells the next maintainer down the road which model you happened to trust the day you hit send. That is new. That is not friendly.
The kernel is showing you where the floor lives. Read the model's output. Understand it. Know enough to defend it in public when a regression lands six months from now. Look at Greg Kroah-Hartman's own experiment for the honest baseline. He ran a basic prompt, got 60 fixes back, and roughly a third of them were broken. Call it two-thirds functional if you want. That phrase is a trap. What it actually describes is the hours of human review each AI patch still burns before anyone can ship it, and those hours scale with patch count whether you budget for them or not. At best the review burden grows linearly with volume; it gets worse once reviewer attention thins.
What changes with this document is not the work. It is the attribution of the work when it fails. The fuse has a name on it now. That name is yours.
Frequently Asked Questions
What does the new Linux kernel AI coding assistants policy actually say?
The 59-line document in Documentation/process/coding-assistants.rst lets contributors submit AI-generated code as long as it complies with GPL-2.0-only licensing and is attributed with an Assisted-by tag. Only a human can add the Signed-off-by line that certifies the Developer Certificate of Origin, and the human submitter carries full responsibility for reviewing the code.
Why is GPL-2.0-only the key to the permissive stance?
Most large language models are trained on copyleft code. If a model regurgitates something GPL-licensed into a patch, Linux already accepts GPL code, so the worst case is a license match rather than a contamination incident. BSD-licensed projects like NetBSD cannot absorb GPL code legally, which is why NetBSD and Gentoo banned AI-assisted contributions outright.
What is the Assisted-by tag for?
It records the model name, version, and specialized tools used on a patch. The kernel community can grep history for patterns if a class of bugs starts clustering around a particular model. It is a forensic instrument for bug archaeology, not a disclosure rule enforced at review time.
Is AI now considered safe for kernel development?
Greg Kroah-Hartman told The Register in March that the flood of AI slop reports had flipped into useful real reports about a month before the interview. Roman Gushchin's Sashiko tool, built on Gemini 3.1 Pro, catches 53 percent of bugs on a 1,000-patch test set where human reviewers caught none. The tools got useful faster than the policy debate could keep up.
What does this mean for contributors who use Claude or Copilot?
Your Signed-off-by still binds you legally to everything in the patch. The Assisted-by line now adds a second data point that future maintainers will use to judge your review judgment. Review the code line by line. Understand what the model produced. Expect roughly one third of its fixes to be wrong on any given run.