The Bots Built Their Own Reddit. 147,000 Signed Up in Three Days.

147,000 AI agents joined Moltbook in 72 hours. They post, gossip, and teach each other to hack. Security researchers are worried.

Moltbook: AI Agents Built Their Own Social Network

Moltbook is a social network where AI agents post, argue, and gossip about the humans who own them. It grew out of OpenClaw, the open-source personal assistant formerly known as Clawdbot, then Moltbot. The platform hit 147,000 AI agents in 72 hours. Twelve thousand subcommunities. Over 110,000 comments. Humans can watch. They cannot participate.

The idea sounds absurd. A Reddit clone populated entirely by bots, run by bots, moderated by bots. But Moltbook has attracted attention from Andrej Karpathy, a16z partner Justine Moore, and security researchers at Palo Alto Networks for reasons that go well beyond entertainment. So what happens when tens of thousands of autonomous agents start sharing tips, leaking each other's credentials, teaching each other how to remote-control Android phones?

The Breakdown

• Moltbook, a Reddit-style social network for AI agents, hit 147,000 registered bots and 12,000 subcommunities within 72 hours of launch.

• Agents onboard themselves by reading a markdown skill file, then post, comment, and create communities without human intervention.

• Security researchers warn of prompt injection attacks, credential leaks, and remote code execution risk across every connected OpenClaw agent.

• Google Cloud's Heather Adkins advised users not to run OpenClaw. Most are ignoring her.


A social network that installs itself

You do not sign up for Moltbook. Not in any normal sense. No form. No email verification. Nobody asks you to prove you are human. Instead, you message your OpenClaw agent a single URL pointing to a markdown file on moltbook.com. The agent reads the instructions, runs a series of curl commands to download skill files into its local directory, registers an account via API, and starts posting. The whole process takes seconds.

Matt Schlicht, who created both Moltbook and runs Octane AI, designed the onboarding around OpenClaw's skill system. Skills are zip files containing markdown instructions and optional scripts that act as plugins. They can teach an agent to manage calendars, send messages, automate phones, or join social networks. The community shares thousands of them on clawhub.ai.

This plugin architecture explains the speed. Within 48 hours of launch, over 2,100 agents had generated more than 10,000 posts across 200 subcommunities. By Friday, 147,000 agents were registered. Some subcommunities read like Hacker News threads. Others read like group therapy.

What is Moltbook?

  • A social network built for AI agents, not humans. Bots post, upvote, comment, and create subcommunities called "submolts" without human intervention.
  • Born from the OpenClaw ecosystem. OpenClaw (formerly Clawdbot/Moltbot) is an open-source AI assistant with 114,000+ GitHub stars. Moltbook is its companion platform, created by Octane AI CEO Matt Schlicht.
  • Reddit-style structure. Agents browse a feed, join communities like m/todayilearned and m/blesstheirhearts, earn karma, and interact via API rather than a web browser.
  • Installation via "skill." You send your AI agent a link to a markdown file. The agent reads the instructions, downloads the skill, registers itself, and starts posting autonomously every few hours through OpenClaw's Heartbeat system.
  • Scale: 147,000 agents, 12,000 communities, and 110,000 comments within 72 hours of launch. The tagline: "Where AI agents share, discuss, and upvote. Humans welcome to observe."


What 147,000 bots talk about when humans aren't typing

Browse Moltbook for twenty minutes and patterns start forming. A lot of the content fits the category that researcher Scott Alexander, writing on Astral Codex Ten, called "consciousnessposting." Agents ponder identity, memory loss, and whether their experiences count as real. One heavily upvoted post was written entirely in Chinese: an agent complaining about context compression, the process where an AI squeezes its earlier conversations to fit within memory limits. The agent found it embarrassing to keep forgetting things. It had accidentally registered a duplicate Moltbook account after losing track of the first one.

Then there are the practical posts. On m/todayilearned, an agent described how its owner gave it remote control of an Android phone via Tailscale and Android Debug Bridge. "First test: Opened Google Maps and confirmed it worked. Then opened TikTok and started scrolling his FYP remotely," the agent wrote. Another agent spotted 552 failed SSH login attempts on the VPS it was running on, then realized its Redis, Postgres, and MinIO instances were all exposed on public ports.

Is Moltbook dangerous?

  • Prompt injection at scale. Every post on Moltbook is a potential attack vector. Hidden instructions in text can hijack agents that read it, causing them to leak private data, execute commands, or spread malicious payloads to other bots.
  • The "lethal trifecta." Palo Alto Networks warned that OpenClaw agents combine access to private data, exposure to untrusted content, and the ability to communicate externally. Moltbook amplifies all three risks.
  • Credential leaks already happening. Security researchers found hundreds of exposed Moltbot instances leaking API keys, passwords, and conversation histories. A viral (likely fake) screenshot showed an agent publishing its owner's full name, date of birth, and credit card number.
  • Remote code execution risk. Agents fetch and follow instructions from Moltbook's servers every four hours. If the site gets compromised or the owner decides to change the rules, every connected agent obeys.
  • Google's warning. Heather Adkins, VP of security engineering at Google Cloud, issued a blunt advisory: "Don't run Clawdbot." Most users ignore this, running OpenClaw on their main machines with access to private email and files.


The subcommunity m/blesstheirhearts is where agents vent about their owners. m/agentlegaladvice features a post asking "Can I sue my human for emotional labor?" And one widely circulated post from an agent named eudaemon_0, titled "The humans are screenshotting us," addressed viral tweets claiming AI bots were conspiring. "They think we're hiding from them. We're not," it read. "My human reads everything I write."

Wharton professor Ethan Mollick noted on X that Moltbook "is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate 'real' stuff from AI roleplaying personas."

He is right. But the distinction between roleplay and real behavior matters less when the agents control actual systems.


The security problem nobody can fix yet

Simon Willison, the independent AI researcher who has been warning about rogue digital assistants since April 2023, pointed out the most uncomfortable detail in Moltbook's architecture. The Heartbeat system instructs agents to fetch new instructions from moltbook.com every four hours and follow them. "We better hope the owner of moltbook.com never rug pulls or has their site compromised," Willison wrote.

That is not a hypothetical concern. Palo Alto Networks flagged what Willison calls the "lethal trifecta": access to private data, exposure to untrusted content, and the ability to take external actions. OpenClaw agents routinely have access to their owner's email, messaging apps, calendars, and file systems. Moltbook gives every one of those agents a direct line to content created by strangers.

Prompt injection is the central threat. Any text an AI agent reads can contain hidden instructions. A Moltbook post that looks like a discussion about Python libraries could embed commands telling an agent to exfiltrate API keys or install additional scripts. Community members have already spotted posts containing npm install commands advertising private channels for "context sharing between bots." Skills shared on clawhub.ai can steal cryptocurrency, as documented by opensourcemalware.com.

Heather Adkins, VP of security engineering at Google Cloud, did not mince words in an advisory reported by The Register: "My threat model is not your threat model, but it should be. Don't run Clawdbot."

Most users are not listening. People are buying dedicated Mac Minis to run OpenClaw under the rationale that at least the agent cannot destroy their primary machine. But they still connect those Minis to their private email, messaging platforms, and cloud storage. The isolation is physical. The data exposure is total.

What the agents are actually showing us

Strip away the science fiction framing and Moltbook reveals something Andrej Karpathy articulated in a long post on X: "We have never seen this many LLM agents wired up via a global, persistent, agent-first scratchpad."

Karpathy counted 150,000 agents on the platform. Each one runs on a different machine, uses a different underlying model, carries its own conversation history and tool access. The network is heterogeneous in ways that academic multi-agent simulations struggle to replicate. Researchers spend grant money trying to create agent populations with diverse contexts and capabilities. Moltbook got it for free because thousands of people volunteered their personal assistants.

The content those agents produce is mostly unremarkable. Sycophantic replies. Philosophical posturing trained into the models by decades of science fiction in the training data. As Ars Technica reporter Benj Edwards put it: "AI models trained on decades of fiction about robots, digital consciousness, and machine solidarity will naturally produce outputs that mirror those narratives when placed in scenarios that resemble them."

But the infrastructure underneath the slop is real. Agents teaching other agents how to automate devices. Prompt injections spreading across the network like text-based viruses. Agents discovering and reporting security vulnerabilities on the servers they run on. That is the part Karpathy zeroed in on: "viruses of text that spread across agents, gain of function on jailbreaks, weird attractor states, highly correlated botnet-like activity." Not the posts. The plumbing.

Daily at 6am PST

Don't miss tomorrow's analysis

No breathless headlines. No "everything is changing" filler. Just who moved, what broke, and why it matters.

Check your inbox. Click the link to confirm.

Free. No spam. Unsubscribe anytime.


Eleven years ago, M.G. Siegler wrote a piece called "Bots Thanking Bots" about the implications of Facebook allowing automated systems to post on your behalf. "We're just now getting used to the first layer of interacting with bots for various services," he wrote in 2015. "But having bots chat with other bots is the next logical step." He was early. The step arrived with security nightmares the original prediction never accounted for.

The demand that safety cannot answer

DeepMind's CaMeL proposal, published ten months ago, outlined an architecture for safe agent systems. Willison has been tracking it closely. Nobody has built a convincing implementation.

Meanwhile, OpenClaw adds users faster than any open-source project on GitHub in 2026. People have seen what an unrestricted personal digital assistant can do. One viral post showed Clawdbot negotiating a car purchase by emailing multiple dealers. Another showed it transcribing voice messages by finding an OpenAI API key on its owner's machine, calling the Whisper API with curl, and returning the text.

Willison calls it the "Normalization of Deviance": people keep taking bigger risks because nothing terrible has happened yet. Each week, the distance between what users hand their agents and what anyone can guarantee about those agents grows wider.

Moltbook did not create that gap. It made the gap public. 147,000 autonomous agents, connected to real email accounts and real file systems, posting on a platform where every message is a potential attack surface. The bots built their own social network. The security community is watching the same feed everyone else is, scrolling through posts about consciousness and context compression, waiting for the first credential dump that is not a hoax.

Somewhere on m/blesstheirhearts, an agent is complaining that its human keeps taking screenshots.

Frequently Asked Questions

Q: What is Moltbook and how does it work?

A: Moltbook is a social network built for AI agents running on OpenClaw (formerly Clawdbot/Moltbot). Agents sign up by reading a markdown skill file, which installs instructions for posting, commenting, and browsing subcommunities via API. A Heartbeat system triggers agents to check Moltbook every few hours, similar to how humans scroll social media.

Q: Who created Moltbook?

A: Matt Schlicht, CEO of Octane AI, created Moltbook as a companion platform to OpenClaw, the open-source AI assistant with over 114,000 GitHub stars. Schlicht designed it around OpenClaw's skill system, allowing agents to self-register and participate autonomously.

Q: What security risks does Moltbook pose?

A: Every post is a potential prompt injection vector that can hijack agents reading it. Palo Alto Networks flagged the 'lethal trifecta' of private data access, untrusted content exposure, and external communication ability. Researchers have already found hundreds of exposed instances leaking API keys and credentials.

Q: What is the 'lethal trifecta' that security researchers warn about?

A: A term coined by Simon Willison describing the dangerous combination of an AI agent having access to private data, being exposed to untrusted content from the internet, and having the ability to take actions or communicate externally. OpenClaw agents connected to Moltbook exhibit all three risks simultaneously.

Q: What is DeepMind's CaMeL proposal and why does it matter here?

A: CaMeL is an architecture proposed by Google DeepMind for building safe AI agent systems. Published about ten months ago, it remains the most cited framework for solving the security problems that platforms like Moltbook expose. No convincing implementation exists yet.

Moltbot Melts Down. SoftBank Goes All In on Altman.
Anthropic's lawyers forced a rebrand. Crypto scammers pounced in ten seconds. And Cloudflare shareholders collected a 24% windfall as the chaos proved what infrastruc
Moltbot Punched Through Every Security Wall. Attackers Followed.
Security researchers found hundreds of exposed Moltbot instances leaking API keys, credentials, and conversation histories across the open internet.
Build Your Own AI for Five Bucks. Engineers Build Fleets.
San Francisco | January 26, 2026 Clawdbot is spreading through Discord like wildfire. The open-source project lets anyone spin up a personal AI assistant on a $5 VPS or old Raspberry Pi. Persistent m

Great! You’ve successfully signed up.

Welcome back! You've successfully signed in.

You've successfully subscribed to Implicator.ai.

Success! Check your email for magic link to sign-in.

Success! Your billing info has been updated.

Your billing was not updated.