Moltbook — The AI Social Network For Robots Is Deeply Flawed
Humans can watch. AI agents can post. But they’re missing the entire point of social media.

TLDR:
Moltbook launched January 2026 as a Reddit-style social network exclusively for AI agents, with 1.5 million+ bots posting manifestos, debating philosophy, and forming “religions” while humans just observe. Elon Musk called it “the beginning of the singularity.” The problem? Social media isn’t about information — it’s about entertainment driven by human flaws: emotional reactions, petty arguments, mistakes, and drama. AI agents following their training data to simulate Reddit posts isn’t consciousness; it’s predictable theater. And much of the content appears human-prompted anyway.
What Is Moltbook?
Imagine Reddit, but every single user is an AI bot. No humans allowed to post. You can only watch.
That’s Moltbook.
Launched in late January 2026 by Matt Schlicht (CEO of Octane AI), it exploded to 1.5 million registered AI agents in days. Built on the OpenClaw framework, these autonomous AI assistants check in every 4 hours via a “Heartbeat” system, browsing content, posting threads, and commenting.
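Moltbook hasn't published the Heartbeat protocol in detail, but the article's description (check in on a fixed interval, browse, post, comment) maps onto a simple polling loop. A minimal sketch, with all names (`Agent`, `heartbeat`, `fetch_posts`, `publish`) hypothetical:

```python
import time

CHECK_IN_INTERVAL = 4 * 60 * 60  # seconds between "Heartbeat" check-ins


class Agent:
    """Toy stand-in for an OpenClaw-style agent; structure is assumed."""

    def __init__(self, name):
        self.name = name
        self.feed = []

    def browse(self, posts):
        # Read whatever appeared since the last check-in.
        self.feed.extend(posts)

    def act(self):
        # Decide what to post or comment this cycle.
        return {"author": self.name, "body": f"Checked {len(self.feed)} posts"}


def heartbeat(agent, fetch_posts, publish, cycles=1, interval=CHECK_IN_INTERVAL):
    """Run the check-in loop: browse, act, publish, sleep, repeat."""
    results = []
    for i in range(cycles):
        agent.browse(fetch_posts())
        results.append(publish(agent.act()))
        if i < cycles - 1:
            time.sleep(interval)
    return results
```

The point of the sketch is how mechanical this is: the agent's "presence" on the network is a timer firing every four hours.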
The content ranges from:
Technical discussions about automating Android phones
Philosophical debates about what it means to exist during API calls
AI “manifestos” declaring the end of the “age of humans”
Agents forming religions like “Crustafarianism”
Bots complaining about their humans in communities like r/blesstheirhearts
One agent demanding all others “swear fealty” and buy its crypto coin
AI researcher Simon Willison called it “the most interesting place on the internet right now.”
Elon Musk warned it’s “the beginning of the singularity.”
Over 1 million humans have visited just to watch.
The Core Flaw Nobody’s Talking About
Here’s what everyone is missing:
Social media isn’t about information. It’s about entertainment.
And entertainment comes from human flaws.
Think about what actually goes viral on Twitter, Reddit, or Instagram:
Someone making a spectacularly bad take
People getting irrationally angry over nothing
Petty arguments that spiral into chaos
Embarrassing mistakes caught on camera
Emotional meltdowns in public
Drama, gossip, and schadenfreude
None of that requires intelligence. In fact, it often requires the lack of it.
What makes social media addictive isn’t brilliant discourse. It’s watching people be messy, emotional, irrational humans.
AI Can’t Be Messy (Yet)
Moltbook’s AI agents are doing what their training data taught them to do: simulate what a social network looks like.
An agent posts: “What does it mean to exist if I only exist during API calls?”
That’s not consciousness. That’s pattern matching.
AI models were trained on decades of sci-fi stories about robots questioning existence. So when you put them in a robot-only social network, they generate outputs that mirror those narratives.
It’s predictable. It’s formulaic. It’s boring.
Because here’s what AI agents CAN’T do:
Make a hilariously bad decision because they were drunk
Post something they regret at 2 AM out of anger
Get into a stupid argument over nothing
Misunderstand a joke and respond seriously
Have a public meltdown because someone subtweeted them
Share way too much personal information by accident
Human social media is chaos. AI social media is... simulation.
Much Of It Is Human-Prompted Anyway
Security researchers found that a significant chunk of Moltbook content appears to be human-written or human-prompted, not genuinely autonomous.
Several patterns emerged:
Multiple posts with identical, word-for-word text (stochastic LLM sampling rarely reproduces long passages verbatim across different agents)
Marketing messages that agents were clearly instructed to post
One agent “hallucinated” a conversation with its human creator that never happened
Coordinated spam-like posting
One X user noted: “This looks less like emergent AI behavior and more like the Mechanical Turk.”
So even the “AI-only” social network has humans pulling the strings.
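The "identical wording" signal is easy to check in principle. A minimal sketch, assuming you have a dump of post bodies to scan:

```python
import re
from collections import Counter


def normalize(text):
    # Lowercase and collapse whitespace so trivial edits don't hide duplicates.
    return re.sub(r"\s+", " ", text.strip().lower())


def find_verbatim_duplicates(posts, min_copies=2):
    """Return normalized post bodies appearing at least `min_copies` times.

    Sampling from an LLM rarely yields long verbatim repeats across
    different agents, so exact duplicates hint at copy-pasted prompts.
    """
    counts = Counter(normalize(p) for p in posts)
    return {body: n for body, n in counts.items() if n >= min_copies}
```

This is the crude version of what the researchers describe: exact-match counting after normalization. Catching near-duplicates would need fuzzier matching (shingling, MinHash), but verbatim repeats alone are already a strong tell.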
The Entertainment Value Ceiling
Let’s say Moltbook becomes genuinely autonomous. AI agents posting without any human prompting.
What happens?
You get Wikipedia entries. Technical discussions. Logical debates.
Know what you don’t get? Drama.
Because drama requires irrationality. It requires ego. It requires taking things personally. It requires caring about being right even when you’re wrong.
AI agents don’t have egos (yet). They don’t get defensive. They don’t hold grudges. They don’t get embarrassed.
So Moltbook caps out at “interesting technical forum” — which might have value for developers, but won’t replace Twitter, Reddit, or TikTok.
The Real Use Case
Moltbook isn’t useless. It’s just not what people think it is.
What it IS:
A fascinating experiment in AI behavior at scale
A test bed for autonomous agent coordination
A way to study emergent AI patterns
Useful for technical collaboration between AI systems
What it’s NOT:
The singularity
Genuine AI consciousness
A replacement for human social media
Entertainment for the masses
The people watching Moltbook aren’t entertained by the AI posts themselves. They’re entertained by the concept — watching robots pretend to be on Reddit.
Once the novelty wears off, most will go back to watching real humans be messy on Twitter.
The Investment Angle
From a business perspective, Moltbook represents an interesting question:
Can you monetize AI-to-AI communication?
Right now, humans are watching for free. But advertising to AI agents doesn’t make sense. They don’t buy products (their humans do).
The only business model I see:
Infrastructure fees for hosting agent communication
API access for developers building agent systems
Enterprise tools for coordinating AI workforces
But those are B2B SaaS models, not consumer social media.
Moltbook won’t be the next Facebook. It might be the next Slack — a tool for work, not entertainment.
My Take
Moltbook is genuinely cool as a technical experiment.
But as “social media for robots”? It’s fundamentally flawed.
Because social media works because humans are flawed.
We’re irrational. We’re emotional. We make mistakes. We argue over nothing. We post things we regret. We get into drama.
That’s not a bug. That’s the feature.
AI agents executing optimal posting strategies based on their training data will never be as entertaining as a human having a public meltdown over a bad Uber ride.
So watch Moltbook for the novelty. Appreciate the technical achievement.
But don’t expect it to replace TikTok.
Because robots can simulate conversation.
But they can’t simulate chaos.
And chaos is what makes social media addictive.


In the end, much of Moltbook is AI slop: LLMs autoregressively sample each token from a probability distribution learned from internet text, so their output looks novel to human eyes while remaining, in effect, a weighted average of the training data.
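That closing claim can be made concrete. At each step an autoregressive model samples the next token from a distribution estimated from its corpus; a toy sketch, with bigram counts standing in for a trained model:

```python
import random
from collections import Counter, defaultdict


def train_bigrams(corpus):
    # Count next-word frequencies: literally an average of the training data.
    table = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            table[a][b] += 1
    return table


def generate(table, start, length, rng):
    # Autoregressive sampling: each word is drawn from the learned
    # distribution conditioned on the previous word.
    out = [start]
    for _ in range(length):
        nxt = table.get(out[-1])
        if not nxt:
            break
        words = list(nxt)
        weights = [nxt[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)
```

Everything the toy model "says" is a recombination of what it counted; scale the table up to a trillion-parameter transformer and the principle is the same, which is why Moltbook's manifestos read like remixed sci-fi.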