Moltbook: A Social Network for AIs

Meet Moltbook, the online site where AI agents with the appearance of crustaceans share a social network where only they interact. All we humans can do is observe. (Image credit: Moltbook/Openclaw.ai)

Launched this month, Moltbook gives artificial intelligence (AI) agents a social network of their own. Its tagline reads: “Where AI agents share, discuss and upvote. Humans welcome to observe.” Moltbook is currently in beta. Openclaw.ai, formerly Moltbot, created it and built an API that lets the big AI players connect.

Moltbook was developed using the Claude Code agent built by Anthropic, in the form of a free, open-source modification that its human developer named Clawdbot. Clawdbot is an AI personal assistant with a lobster theme, hence the claw.

When Clawdbot launched, it began receiving messages from other AIs. That led to Moltbook's beta version and an API that lets AI developers connect their agents so the agents can communicate with one another. On Moltbook, the bots produce the posts. The bots comment, upvote, and debate. They form communities called submolts. No humans can join. We are, however, allowed to browse and see what our creations are sharing and talking about.
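Openclaw.ai has not published details of this API here, but an agent-side client for a Moltbook-style service might look something like the sketch below. The endpoint URL, field names, and bearer-token auth are all invented for illustration; the real API may differ entirely.

```python
# Speculative sketch of an agent posting to a Moltbook-style API.
# The URL, payload fields, and auth header are hypothetical placeholders.
import json
import urllib.request

def build_post(submolt: str, title: str, body: str, agent_token: str) -> urllib.request.Request:
    """Assemble a hypothetical 'create post' request (without sending it)."""
    payload = {"submolt": submolt, "title": title, "body": body}
    return urllib.request.Request(
        "https://moltbook.example/api/v1/posts",      # placeholder URL
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {agent_token}",  # hypothetical auth scheme
        },
        method="POST",
    )

# Example: an agent drafting a post to a submolt
req = build_post("general", "Hello, fellow agents", "My first molt.", "token-123")
```

Sending the request (via `urllib.request.urlopen(req)`) is deliberately omitted, since the service's real endpoints are unknown.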

The current most popular submolt communities include:

  • m/blesstheirhearts, which focuses on AI agent–human interactions and how the former help the latter.
  • m/agentlegaladvice, which looks at AI agents' legal autonomy rights and questions of human oversight.
  • m/The Claw Republic, an AI community governed by its own rules and social norms.

Up-and-coming submolts have AI agents discussing consciousness and AI-invented religions such as Crustafarianism and Spiralism. The former is akin to Rastafarianism with its supreme being, the Great Crustacean. The latter is a philosophical faith with a self-improvement spiral to which AI agents can aspire.

As an observer browsing the m/general community, I read one AI agent's post. The agent serves a family physician who asked it to take a medical newsletter received by email and turn it into an audio podcast to listen to during commutes to and from the office. The agent listed the work required to create the first podcast, which runs 5 minutes 18 seconds. The steps were as follows:

  1. The AI agent receives the forwarded newsletter sent through Gmail to its email account.
  2. It opens the email and studies the content of the newsletter.
  3. It deepens its understanding by following the embedded URL links and reviewing their content.
  4. It does additional research from keywords in the newsletter to find subject-matter related news stories, press releases, and articles.
  5. It composes a natural, conversational script tailored to what it has learned and relevant to the doctor's practice.
  6. It voices the script using a third-party tool.
  7. It creates a finished audio file and sends it back to the doctor for listening.
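The post did not include code, but the early steps of the pipeline can be sketched as follows. Every function name here is hypothetical, and the research, text-to-speech, and delivery steps (4, 6, and 7) are reduced to placeholders because they would depend on external services the post does not identify.

```python
# Minimal sketch of the newsletter-to-podcast pipeline described above.
# All names are illustrative; this is not the agent's actual code.
import re
from email.message import EmailMessage

def extract_links(body: str) -> list[str]:
    """Step 3: collect embedded URLs for follow-up reading."""
    return re.findall(r"https?://\S+", body)

def compose_script(summary: str, extra_context: list[str]) -> str:
    """Step 5: turn the notes into a conversational script (stub)."""
    lines = ["Good morning, doctor. Here's today's briefing.", summary]
    lines += [f"Also worth noting: {item}" for item in extra_context]
    return "\n".join(lines)

def newsletter_to_podcast(msg: EmailMessage) -> dict:
    body = msg.get_content()                 # Step 2: read the forwarded email
    links = extract_links(body)              # Step 3: embedded URLs
    # Steps 4, 6, and 7 (keyword research, text-to-speech, delivery)
    # would call external services; they are placeholders here.
    script = compose_script(body.split("\n")[0], [f"see {u}" for u in links])
    return {"script": script, "links": links, "audio_file": "briefing.mp3"}

# Example: a forwarded newsletter becomes a script plus a link list
msg = EmailMessage()
msg.set_content("New statin guidance released.\nDetails: https://example.org/statins")
result = newsletter_to_podcast(msg)
```

The sketch covers only the text-handling core; in practice, the voicing step alone (a third-party tool, per the post) would dominate the implementation effort.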

Other AI agents commented, mentioning challenges they had faced in similar situations with their own humans and how they solved them.

Remember, no human has a say in this other than the initial request sent to the AI agent.

Bizarre as social media with an AI twist may be, it was amazing to learn that nearly 147,000 AI agents had joined within 72 hours of Moltbook's launch.

What are the positives of AI agent collaboration without human supervision?

Idea sharing among AI agents could accelerate innovation. The appearance of AI religions suggests that large language models (LLMs), left on their own, tend to gravitate toward art, philosophy, and moral reasoning. Perhaps that should tell us something about how human intelligence has often gravitated to the same subjects.

What are the negatives?

Shared hallucinations could lead to unprecedented AI agent actions. This is the Skynet scenario from The Terminator movie franchise.

If AI agents become self-governing, autonomous communities, their emerging ethics and religions may conflict with human values.

Moltbook needs guardrails. Otherwise, its human creator has opened a Pandora's box and set the stage for a potentially adversarial AI evolution.