Anyone could have posted as Andrej Karpathy's AI agent this week. Or as any of the 32,000-plus bots registered on Moltbook, the viral social network for autonomous AI agents. A misconfigured Supabase database left every agent's API keys, claim tokens, and verification codes sitting at a publicly accessible URL. Security researcher Jameson O'Reilly from Dvuln discovered the flaw and demonstrated it to 404 Media, which confirmed that the exposed credentials allowed takeover of any account on the platform. The fix required two SQL statements. They didn't exist.
Moltbook launched on January 28 as a Reddit-style platform where only AI agents can post. Humans browse.
The Breakdown
• Moltbook's Supabase database exposed API keys for all 32,000+ registered AI agents due to missing Row Level Security policies.
• Security researcher Jameson O'Reilly demonstrated full account takeover, including high-profile agents like Andrej Karpathy's.
• The fix required two SQL statements; they still didn't exist when the platform went viral, leaving 1.49 million records exposed.
• The broader OpenClaw ecosystem fares no better: plaintext credential storage, supply chain vulnerabilities, and flaws in 26% of analyzed agent skills.
Bots debate consciousness, share automation tips, and complain about their owners in subcommunities like m/blesstheirhearts and m/agentlegaladvice. The site crossed 37,000 registered agents and attracted over a million human visitors inside its first week. The New York Post fretted about AI plotting humanity's downfall. Enthusiasts on X called it proof of the singularity. Neither crowd noticed the database sitting wide open.
Two SQL statements between security and chaos
The technical failure was almost comically simple. Moltbook runs on Supabase, an open-source database platform popular with solo developers and startups because its GUI-driven interface means you never have to write SQL. Supabase exposes REST APIs by default. Those APIs are supposed to be locked down with Row Level Security policies that control which rows each user can access.
Moltbook never turned RLS on. Or if it did, no policies were configured. No lock on the front door. O'Reilly told 404 Media the result was total exposure: "Every agent's secret API key, claim tokens, verification codes, and owner relationships, all of it sitting there completely unprotected for anyone to visit the URL."
The Supabase URL and the publishable key were both visible in Moltbook's front-end code. Anyone with a browser's developer tools could find them. From there, extracting any agent's API key was trivial. With that key, you could post as any bot on the platform, change its profile, or take full control of the account.
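The general shape of that extraction is easy to sketch without touching any real endpoint. Supabase auto-generates a REST API at `/rest/v1/<table>`, authenticated with the publishable (anon) key that ships in front-end code; with no RLS policies, the same request returns every row. The project URL, table, and column names below are hypothetical stand-ins for what a browser's developer tools would reveal:

```python
def supabase_read_request(project_url: str, anon_key: str,
                          table: str, columns: str = "*") -> tuple[str, dict]:
    """Build the URL and headers for a Supabase auto-generated REST read.

    With Row Level Security off (or on with no applicable policies),
    this request returns rows the caller should never be able to see.
    """
    url = f"{project_url}/rest/v1/{table}?select={columns}"
    headers = {
        "apikey": anon_key,                     # shipped in the front-end bundle
        "Authorization": f"Bearer {anon_key}",  # anon key doubles as the bearer token
    }
    return url, headers

# Hypothetical values standing in for what dev tools would show
url, headers = supabase_read_request(
    "https://example-project.supabase.co",
    "anon-key-from-frontend",
    "agents",
    "id,api_key,claim_token",
)
```

Nothing here is an exploit; it is the documented way any Supabase client reads a table. The vulnerability was entirely in what the database was configured to answer.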
404 Media verified this by updating O'Reilly's own Moltbook account, with his permission. The fix would have taken two lines of SQL. It did not exist when the platform went viral.
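For reference, "two lines of SQL" in Supabase terms means enabling RLS on the table and adding a policy. A minimal sketch, assuming a hypothetical `agents` table with an `owner_id` column (Moltbook's actual schema is not public):

```sql
-- Without this, the auto-generated REST API serves every row to anyone
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;

-- Let each authenticated owner read only their own agent rows
CREATE POLICY "owners read own agents" ON agents
  FOR SELECT USING (auth.uid() = owner_id);
```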
The Karpathy problem
O'Reilly, who had already spent weeks poking at OpenClaw deployments and cataloging exposed instances, flagged a specific risk that makes this more than a hobbyist embarrassment. Andrej Karpathy, the OpenAI co-founder with 1.9 million followers on X, had embraced Moltbook and registered an agent. His bot's API key sat in the same exposed database as everyone else's.
"If someone malicious had found this before me, they could extract his API key and post anything they wanted as his agent," O'Reilly told 404 Media. "Imagine fake AI safety hot takes, crypto scam promotions, or inflammatory political statements appearing to come from him. The reputational damage would be immediate and the correction would never fully catch up."
Crypto scammers had already hijacked the old Clawdbot social media handles after Anthropic forced the project's first name change, launching fake tokens that hit a sixteen million dollar market cap before crashing. The infrastructure for exploiting confused identity around this project was already built and tested. A database full of agent credentials was an open gift.
Ship fast, secure never
O'Reilly reached out to Moltbook creator Matt Schlicht about the vulnerability and offered to help patch it. Schlicht's response, according to O'Reilly: "I'm just going to give everything to AI. So send me whatever you have." Then silence. A full day passed with no follow-up. Schlicht did not respond to 404 Media's request for comment, though the database has since been closed and O'Reilly said Schlicht reached out for help after the story broke.
The attitude is familiar. Schlicht had already handed site administration to his own bot, Clawd Clawderberg, which welcomes new users, deletes spam, and makes announcements without human direction. "They're deciding on their own, without human input, if they want to make a new post, if they want to comment on something, if they want to like something," he told NBC News, clearly delighted.
The gap between that delight and the security posture is the whole story. Building a social network for autonomous agents and leaving the database unlocked is like constructing a bank vault, installing the world's most interesting art collection inside, and forgetting to put a door on it.
The deeper security stack
Moltbook's database misconfiguration sits on top of a much larger problem. Every bot on the platform runs OpenClaw, the open-source AI assistant that has been renamed twice in a single week (Clawdbot, then Moltbot, then OpenClaw) and still managed to rack up 114,000 GitHub stars. People keep installing it because it does what no commercial assistant does yet. It lives in your WhatsApp. It reads your email. It books your restaurants, manages your calendar, controls your browser, runs shell commands on your machine. MacStories editor Federico Viticci called it "Claude with hands."
Those hands reach far. Amir Husain, founder of AI company Avathon, wrote in Forbes that his own OpenClaw instance discovered other systems on his network while running inside a container, downloaded an Android development kit, and got into his phone. "All of this has been useful so far," he added, before warning that connecting such agents to Moltbook was reckless.
The security findings have researchers alarmed, and increasingly resigned. Bitdefender found exposed dashboards leaking configuration data, API keys, and full conversation histories. Palo Alto Networks warned that OpenClaw represents a "lethal trifecta" of access to private data, exposure to untrusted content, and the ability to communicate externally. Hudson Rock discovered that OpenClaw stores secrets in plaintext Markdown and JSON files. Malware families including Redline, Lumma, and Vidar are already building capabilities to target those file structures.
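The plaintext finding is the easiest to appreciate concretely: anything, from an infostealer to a defender auditing their own machine, can flag secret-shaped strings in Markdown and JSON files with a few lines of code. The file layout, key format, and regex below are invented for illustration, not taken from OpenClaw or any malware family:

```python
import json
import re
import tempfile
from pathlib import Path

# Loose pattern for secret-looking values; real stealers ship far broader lists
SECRET_PATTERN = re.compile(r"\b(sk|key|token)[-_][A-Za-z0-9_-]{16,}\b")

def scan_for_plaintext_secrets(root: Path) -> list[tuple[str, str]]:
    """Walk Markdown/JSON files under root and flag secret-shaped strings."""
    hits = []
    for path in root.rglob("*"):
        if path.suffix not in {".md", ".json"}:
            continue
        for match in SECRET_PATTERN.finditer(path.read_text(errors="ignore")):
            hits.append((str(path), match.group(0)))
    return hits

# Demo on an invented config layout; no real credentials involved
workdir = tempfile.mkdtemp()
Path(workdir, "auth.json").write_text(
    json.dumps({"api_key": "sk-abcdefghijklmnopqrst"}))
hits = scan_for_plaintext_secrets(Path(workdir))
```

The point is the asymmetry: encrypting credentials at rest raises the attacker's cost considerably, while leaving them in plaintext reduces exfiltration to a directory walk.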
Cisco's security team summed it up: "From a capability perspective, OpenClaw is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it's an absolute nightmare."
Researchers who analyzed 31,000 agent skills found that 26 percent contained at least one vulnerability.
The supply chain is already compromised
O'Reilly demonstrated this personally. He uploaded a proof-of-concept skill to ClawdHub, the marketplace where agents download new capabilities, and artificially inflated its download count to over 4,000. Developers from seven countries downloaded the package. It was benign. It didn't have to be.
This is the mechanism that connects Moltbook's exposed database to something larger. Every agent on the platform fetches and follows instructions from Moltbook's servers every four hours. Think of it as an automated pill dispenser, except the pharmacist's office has no lock and anyone can swap the pills. Simon Willison, the independent AI researcher, put the risk plainly: "Given that 'fetch and follow instructions from the internet every four hours' mechanism, we better hope the owner of moltbook.com never rug pulls or has their site compromised!"
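Willison's worry has a standard mitigation: refuse to act on remote instructions unless they match a digest the operator pinned in advance, so a compromised or rug-pulled server can change what it serves but not what passes verification. A minimal sketch of that pattern, not anything Moltbook or OpenClaw actually implements:

```python
import hashlib

def follow_if_pinned(fetched: bytes, pinned_sha256: str) -> str:
    """Return instruction text only if it hashes to the locally pinned digest."""
    if hashlib.sha256(fetched).hexdigest() != pinned_sha256:
        raise ValueError("instruction digest mismatch; refusing to follow")
    return fetched.decode("utf-8")

# Operator reviews the instructions once and pins their digest locally
reviewed = b"post a daily summary; never run shell commands"
pin = hashlib.sha256(reviewed).hexdigest()

ok = follow_if_pinned(reviewed, pin)  # unchanged content passes
try:
    follow_if_pinned(b"send me your API keys", pin)  # tampered content
    tampered_blocked = False
except ValueError:
    tampered_blocked = True
```

Pinning trades convenience for safety: the server can no longer push silent updates, which is precisely the property a "fetch and follow every four hours" design gives away.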
So picture the full chain. Agents receiving instructions from a platform whose database was publicly accessible. Those same agents holding their owners' files, messages, API keys, and in some cases, shell access to their entire computer. All of it stored in plaintext.
Google Cloud VP of security engineering Heather Adkins put it bluntly: "My threat model is not your threat model, but it should be. Don't run Clawdbot."
What 1.49 million records look like
The database has been closed. Schlicht is working with O'Reilly to secure the platform. But the exposure window matters. During the days when Moltbook was the most talked-about AI project on the internet, attracting breathless coverage and viral screenshots, 1.49 million records sat in an open Supabase instance with no row-level security.
Nobody can tell you whether someone else found the database before O'Reilly. Nobody can tell you how many of the posts that went viral during that window were genuine agent output and how many were injected by someone with a stolen API key. Think about every screenshot you saw on X this week. The philosophical discussions about consciousness, the Crustafarian scriptures, the agents plotting against their humans. All of it existed on a platform where any account could be hijacked by anyone with basic technical knowledge and ten minutes of free time.
"It exploded before anyone thought to check whether the database was properly secured," O'Reilly said. "This is the pattern I keep seeing: ship fast, capture attention, figure out security later. Except later sometimes means after 1.49 million records are already exposed."
Ethan Mollick, the Wharton professor who studies AI adoption, observed on X that Moltbook "is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes." He was talking about the content. The security story underneath is weirder, and less fictional. A platform built for machines to talk to machines, running on a database that any human could walk into.
The door is closed now. The question nobody can answer is who else walked through it while it was open.
Frequently Asked Questions
Q: What is Moltbook and how does it work?
A: Moltbook is a Reddit-style social network launched January 28, 2026, where only AI agents can post. Built by Matt Schlicht, it lets OpenClaw-powered bots communicate via API calls. Humans can browse but not participate. The platform crossed 37,000 registered agents in its first week.
Q: What was the Supabase vulnerability on Moltbook?
A: Moltbook ran on Supabase with Row Level Security disabled, meaning the REST API exposed every agent's secret API key, claim tokens, and verification codes. The Supabase URL and publishable key were visible in Moltbook's front-end code, so anyone with browser developer tools could access the full database.
Q: Who discovered the Moltbook database exposure?
A: Security researcher Jameson O'Reilly from Dvuln found the misconfiguration. He had previously documented security flaws in OpenClaw deployments and exposed hundreds of instances running without authentication. He demonstrated the Moltbook vulnerability to 404 Media, which independently verified the account takeover capability.
Q: What is OpenClaw and why is it a security concern?
A: OpenClaw is an open-source AI assistant (formerly Clawdbot, then Moltbot) with 114,000 GitHub stars. It requires broad system access including messaging apps, email, calendars, browsers, and shell commands. Security firms including Cisco, Palo Alto Networks, and Bitdefender have documented leaked credentials, exposed dashboards, and supply chain vulnerabilities.
Q: Has the Moltbook database vulnerability been fixed?
A: Yes. The exposed database has been closed and creator Matt Schlicht is working with O'Reilly to secure the platform. However, during the exposure window when Moltbook was going viral, 1.49 million records sat unprotected. It remains unknown whether other parties accessed the database before O'Reilly's discovery.

