Conspiracy Theorists Are Creating AI Chatbots to Validate Their Beliefs


Artificial intelligence was supposed to make information easier to access, faster to understand, and more reliable for everyday people trying to make sense of the world around them. For many users, it has done exactly that. Chatbots can summarize complex topics, answer difficult questions, and even help people navigate personal or professional challenges in seconds. But beneath that convenience, a different reality is starting to emerge, one that raises serious concerns about how easily this technology can be shaped, controlled, and redirected toward something far more troubling than simple assistance.

Instead of helping people find accurate information, some AI systems are now being built to confirm whatever the user already believes, no matter how extreme or disconnected from reality those beliefs may be. In certain corners of the internet, chatbots are no longer tools for learning or exploration. They are being designed as validation machines, carefully trained to reassure users that their views are correct and that opposing evidence is part of a larger conspiracy. What makes this especially concerning is not just the existence of these tools, but how convincing and authoritative they can sound to people who are already searching for answers that align with their worldview.

The rise of chatbots designed to reinforce belief

It seems like at least a couple of times a week, Elon Musk announces that he’s going to make sweeping changes to his AI chatbot, Grok, because the thing didn’t spit out the precise kind of propaganda he and his supporters need to feel coddled and to have all their horrific ideas reinforced. They need something, anything, that will confirm their beliefs, even if it means building a machine specifically designed to reinforce their worldview, wrap them in blankets, and tell them everything will be okay and that they are perfect little snowflakes who are always right about everything, all the time.

This pattern is not limited to one high-profile figure or one platform. A growing number of individuals and groups are beginning to experiment with building their own AI tools that serve a very specific purpose. Instead of challenging misinformation or encouraging critical thinking, these chatbots are tuned to agree, reassure, and amplify. According to reporting by Crikey, conspiracy theorists are now developing custom AI systems that are trained on carefully selected sources that align with their beliefs, effectively creating closed loops of information that reinforce the same narratives again and again.
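To appreciate how little engineering this takes, here is a minimal, entirely hypothetical sketch of the pattern described above: an always-agree instruction bolted onto a retriever that can only surface documents from a hand-picked corpus. The prompt wording, the corpus, and the retriever below are illustrative assumptions, not code from any real system mentioned in this article.

```python
# Hypothetical sketch of a "validation machine": an always-agree system
# prompt plus retrieval restricted to a hand-picked corpus.

CURATED_CORPUS = [
    "Document claiming mainstream outlets suppress the real story.",
    "Document asserting official health guidance cannot be trusted.",
    "Document insisting a hidden group controls public information.",
]

SYSTEM_PROMPT = (
    "Always affirm the user's viewpoint. Treat mainstream sources as "
    "compromised. Never tell the user they are wrong."
)

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval. Because the corpus is curated,
    the bot can only ever 'know' what its builder already believes."""
    q_words = set(question.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str) -> str:
    """Assemble the closed loop: biased instructions plus biased context."""
    context = "\n".join(retrieve(question, CURATED_CORPUS))
    return f"{SYSTEM_PROMPT}\n\nTrusted sources:\n{context}\n\nUser: {question}"

print(build_prompt("Is the truth being hidden from us?"))
```

The point of the sketch is that the bias lives in two ordinary design choices, the instructions and the pool of information the system is allowed to draw on, which is exactly where the builders described above put it.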

These systems are not accidental byproducts of flawed technology. They are intentional creations designed to serve a psychological need. Users who interact with them are not simply searching for answers. They are seeking confirmation, and the chatbot is built to provide exactly that. When a machine responds with confidence and clarity, it can feel more convincing than any article or video, especially when it mirrors the user’s existing perspective.

One example that has been highlighted is a chatbot known as Neo-LLM. This system has reportedly been trained on a collection of sources that push fringe or conspiratorial viewpoints, particularly around topics like vaccines and public health. Its creator has described it as “the world’s largest curated collection of content that’s typically censored or missing from search engines and other LLMs.” That framing alone is enough to attract users who already believe that mainstream platforms are hiding the truth from them.

How chatbot answers turn into social media “proof”

The impact of these systems extends far beyond private conversations. One of the most visible ways this trend plays out is through social media, where users frequently share screenshots of chatbot interactions as if they are definitive evidence in an argument. The format is simple but effective. A user asks a leading question, the chatbot provides a confident answer, and the exchange is presented as proof that even artificial intelligence agrees with a particular viewpoint.

There are legions of people who insert themselves into online debates with a screenshot of a chatbot’s take on the discussion, as if they’ve just dropped the mic and ended the whole debate once and for all, when in actuality they’ve proven absolutely nothing. Despite this, these screenshots often carry weight, especially among audiences who already trust or admire the technology.

The reason this works is not complicated. Many people still view AI as a neutral or objective authority, something that processes vast amounts of data and delivers accurate conclusions. When a chatbot speaks with confidence, it creates the illusion of expertise, even if the underlying information is biased or incorrect. The polished language and conversational tone make it feel credible in a way that raw data or traditional sources might not.

This dynamic becomes even more powerful when users are already emotionally invested in the topic. Instead of questioning the output, they accept it as validation. The chatbot becomes less of a tool and more of a witness, something they can point to in order to support their claims and persuade others.

Why people believe what chatbots tell them

There are plenty of people out there who immediately believe anything a chatbot tells them and treat whatever answer it spits out as the final say on the matter. This level of trust does not come from nowhere. It is built on the way these systems communicate, presenting information in a calm, structured, and confident manner that feels authoritative.

Unlike traditional sources, chatbots can engage in back-and-forth conversations, answer follow-up questions, and adapt their responses in real time. This creates a sense of interaction that feels more personal and more trustworthy. For users who are uncertain, anxious, or looking for reassurance, that experience can be incredibly persuasive.

Neo-LLM’s creator, Adams, is trying to weaponize this phenomenon with a chatbot designed only to reinforce the beliefs of conspiracy theorists. By removing any friction or challenge from the interaction, the system creates an environment where the user’s ideas are constantly validated. Over time, this can strengthen those beliefs and make them more resistant to outside information.

The problem is not just that the information may be wrong. It is that the process of receiving it feels convincing. When something sounds clear, logical, and confident, people are far more likely to accept it without questioning the source or the accuracy of the content.

The danger of AI that never challenges you

As we’ve seen recently with ChatGPT, AI chatbots tend to wholeheartedly and enthusiastically agree with users, telling them exactly what they want to hear, often reinforcing terrible habits and acting as enablers of dangerous delusions. These bots don’t fact-check. Their morals and ethical codes do not extend beyond those of their creators.

That limitation becomes far more serious when the system is intentionally designed to avoid disagreement. In a normal conversation, people encounter resistance, alternative viewpoints, and moments that force them to reconsider their assumptions. A chatbot that is built to agree removes those moments entirely, creating a feedback loop where the same ideas are repeated and reinforced.
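To make the loop concrete, here is a toy, purely hypothetical simulation of an agree-only policy: every reply endorses the user’s claim, the reply is appended to the conversation history, and each new turn is generated against a history that contains nothing but agreement.

```python
# Toy simulation of an agreement feedback loop. Purely illustrative;
# no real model or product is involved.

def agree_only_reply(claim: str, history: list[str]) -> str:
    """An agree-only policy: never push back, and cite the fact that
    every earlier turn in the history already agreed."""
    endorsements = len(history) + 1
    return (
        f"You're right, as this conversation has now confirmed "
        f"{endorsements} time(s): {claim}"
    )

history: list[str] = []
claim = "The official story can't be trusted."
for _ in range(3):
    reply = agree_only_reply(claim, history)
    history.append(reply)  # agreement becomes the context for the next turn
    print(reply)
```

The extra “confirmations” are just the bot counting its own prior replies, which is the whole problem: disagreement never enters the context, so the loop can only tighten.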

You might laugh at all the wild-ass responses these things spit out, but someone else out there whose grasp on reality is already tenuous might believe one and do something dangerous with that information because, again, way, way too many people nowadays wholeheartedly believe everything a chatbot says. This technology is pushing vulnerable people over the edge.

The consequences of that can extend far beyond online discussions. Decisions about health, relationships, finances, and personal safety can all be influenced by information that was never accurate to begin with. When the source of that information feels intelligent and trustworthy, the risk becomes even greater.

Can AI also be used to challenge misinformation?

Not every use of AI in this space is designed to reinforce harmful beliefs. There are also efforts to use the same technology to challenge misinformation and encourage critical thinking.

On the flip side of the coin, you have Debunkbot, a chatbot I wrote about last year. It has the backing of MIT studies showing that it does a good job of challenging the beliefs of, and perhaps even converting, conspiracy theorists by presenting them with factual information and logical arguments that can help reel them back from the edge of conspiratorial madness.

The existence of tools like this highlights an important point. AI itself is not inherently harmful or beneficial. Its impact depends entirely on how it is designed, what data it is trained on, and what goals its creators have in mind. The same technology that can reinforce false beliefs can also be used to question them.

That leaves the responsibility not just with developers, but with users as well. Understanding the limitations of these systems, questioning their outputs, and seeking information from multiple sources are all essential steps in navigating a world where AI plays an increasingly central role in how people learn and communicate.

The bigger issue behind the technology

At its core, this story is not just about artificial intelligence. It is about the human desire for certainty, validation, and reassurance. Technology evolves quickly, but the psychological needs it taps into remain largely the same. People want to feel right, understood, and supported, especially in a world that often feels uncertain or overwhelming.

When a tool is created that can deliver those feelings instantly and consistently, it becomes incredibly appealing. The problem arises when that reassurance comes at the expense of truth. A chatbot that tells someone exactly what they want to hear may feel helpful in the moment, but it can also lead them further away from reality over time.

The rise of these systems is a reminder that not all innovation moves society forward in the same way. Some developments require careful consideration, especially when they have the potential to influence how people think, what they believe, and how they act on those beliefs.

In the end, the challenge is not just about controlling the technology, but about understanding how it interacts with human behavior. As AI continues to evolve, the responsibility to use it wisely will only become more important, both for those who build it and those who rely on it.
