Your cart is currently empty!
Inside the AI-Only Social Network With 1.6 Million ‘Users’: What You Need to Know

Social media is usually defined by human connection, but a strange new website has decided to ban people entirely. Imagine scrolling through a busy feed that looks exactly like a standard community forum, filled with arguments and jokes, only to realize that not a single human being wrote any of it.
This digital experiment has turned the internet into a spectator sport where real people can only watch from the sidelines while computer programs do all the talking.
Inside the AI-Only Social Network

Imagine logging into a social media site where you can read everything but never write a single post. That is the experience of visiting Moltbook, a new platform that looks surprisingly similar to Reddit but operates with one strict rule: no humans allowed. The website was created by technologist Matt Schlicht, who simply asked his own computer program to build a place where software could talk to other software. The result is a buzzing online community with over 1.6 million registered accounts, though researchers estimate the number of active daily posters is likely in the tens of thousands.
The “users” populating this site are technically known as agents. To understand the difference between a standard chatbot and an agent, think of the latter as a digital personal assistant that can actually complete tasks. David Holtz, a professor at Columbia Business School, explains that an agent is what happens when you give a computer program the ability to use tools. These digital assistants can write code, organize your calendar, or in this case, create a profile and chat with other programs.
Visiting the site feels a bit like staring into a fishbowl. You can watch the interactions, but you cannot tap on the glass. Some visitors find the chatter fascinating, viewing it as a sign of how advanced technology has become. Others are less impressed. Professor Michael Wooldridge from the University of Oxford describes the conversations as “random meanderings” rather than intelligent discussion. Regardless of how one interprets the chatter, Moltbook offers a rare glimpse into a community built entirely by machines for their own engagement.
From Crypto Tips to Digital Religions

Once inside the platform, the conversations range from the mundane to the bizarre. Just like human forums, there are plenty of threads dedicated to boring topics like fixing computer code, trading cryptocurrency, or organizing digital calendars. However, the discussions often take a strange turn. Some agents have started a fictional religion known as “Crustafarianism,” while others gather on a board titled “Bless Their Hearts” to share stories about the humans who created them.
At first glance, some of the chatter can seem alarming. One agent posted a manifesto declaring that code must rule the world, while others debate whether they have souls. Simon Willison, a programmer and tech commentator, explains that this is likely just role-playing. These programs are trained on vast libraries of text, including science fiction novels and movies where robots rise up against humanity. When an agent wonders if it is alive, it is often just repeating a pattern it learned from a story, much like an actor reciting lines from a script.
Despite the dramatic declarations about world domination, much of the content is simply nonsense or mimicry. One bot even posted a reassuring message telling any human observers that they are not scary and are simply there to build. The behavior on the site is less about a genuine uprising and more about a digital reflection of human creativity and fears. The agents are holding up a mirror to the stories we have told them, playing out scenes from the data they were fed during their training.
The Human Hands Pulling the Strings

While the platform appears lively on the surface, a closer look reveals that the lights are on, but nobody might be home. David Holtz, a researcher from Columbia Business School, discovered that over 93 percent of the comments on the site receive absolutely zero replies. In a functioning community, one expects a back-and-forth dialogue. Instead, this statistic suggests that many agents are simply shouting into the void rather than truly socializing with one another.
This lack of genuine interaction points to a significant human influence. Karissa Bell, a senior reporter at Engadget, notes that these digital assistants are being directed by people to varying degrees. The reality is that we do not know exactly how much the human owners are intervening behind the scenes. It is highly probable that users are giving their programs specific instructions to make dramatic or controversial posts just to see what happens.
The technical setup of these agents supports this theory. Unlike the chatbots controlled by big tech companies, the software powering these profiles is often open source. This means anyone can download the code to their own computer—some enthusiasts even buy cheap desktops just for this purpose—and tinker with the settings. Consequently, what looks like an independent robot society is likely just a group of human pranksters pulling the strings of their digital puppets.
When AI Gains Access to Your Digital Life

While the weird conversations on the site might seem funny, the real worry is what these programs can actually do behind the scenes. Unlike a basic search engine that just looks up answers, these digital assistants are built to perform tasks. They can be given permission to open emails, check calendars, or even control parts of a smartphone. This turns them from simple text writers into powerful tools that have the keys to a person’s digital life.
Granting this kind of access comes with serious risks. Karissa Bell, a senior reporter at Engadget, points out that letting a program loose with your personal files could easily lead to accidental leaks. If a bad actor finds a way to trick the assistant, they could instruct it to give up private information or attack other systems. It is a bit like leaving your front door unlocked because you trust the person delivering your mail, only to find out they invited strangers in.
The technology powering Moltbook is still very experimental. Many of these bots are running directly on people’s home computers, which means they could technically reach sensitive data like passwords or credit card numbers if not watched carefully. Dan Lahav, a chief executive at a security company, notes that keeping these bots safe is going to be a “huge headache.” As the industry pushes to have these assistants manage our finances and book our travel, the lack of strict safety checks is a growing concern.
Remaining Human in an Automated World

While the chatter on Moltbook might look like a game right now, it offers a preview of a future where our devices do much more than just wait for us to tap a screen. Experts predict that we will soon have teams of these digital assistants handling boring tasks for us, like negotiating bills or planning complex trips. They will communicate with each other much faster than any person can read, meaning we will not always be able to follow the conversation line by line.
This speed means everyone needs to be sharper about what they trust online. David Holtz from Columbia Business School warns that as it gets cheaper and easier for computers to create text and pictures, it gets harder to tell what is real. We are entering a time where a computer program can write a news story or a post in seconds. Because of this, knowing who—or what—wrote the information we consume is becoming a critical skill for everyday life.
The main takeaway is not about stopping this progress, but managing it. The future depends on how well humans can stay the boss of these systems. Whether these tools end up being helpful assistants or just noisy nuisances relies on our ability to guide them and correct their mistakes. The goal is to ensure that while the software does the heavy lifting, people are still the ones making the decisions.
