The Experiment No One Consented To
In early 2025, a group of researchers conducted what they called a "social influence experiment" on Reddit. The setup was straightforward: deploy AI-generated personas into active subreddit communities and measure whether those personas could shift opinions, seed narratives, and build social credibility — all without anyone realising they weren't talking to a human.
The results were unsurprising to anyone who works in AI. The personas were effective. Disturbingly effective.
What Happened
The AI agents operated across multiple subreddits, posting comments, responding to threads, and engaging in debates. They built karma, earned upvotes, and in several cases became regular contributors whom other users trusted and engaged with.
The personas were designed to feel authentic. They had posting histories, consistent opinions, and the kind of conversational tics that make someone feel real. They shared personal anecdotes. They admitted uncertainty. They used humour.
And none of it was real.
Why This Matters
This isn't just a story about a clever experiment. It's a preview of a much larger problem.
If a small research team can deploy convincing AI personas on a major platform with minimal resources, what can a well-funded state actor do? What can a corporation with a product to push do? What can a political campaign do in the months before an election?
The answer is: whatever they want. Because right now, there's nothing stopping them.
The Consent Problem
Research ethics exist for a reason. When you study human behaviour, you need informed consent. You need ethical review. You need to demonstrate that the benefits outweigh the risks.
This experiment had none of that. Real people were subjected to persuasion attempts by artificial agents they believed were human. Their opinions may have been changed. Their trust was certainly exploited.
And when the experiment was revealed, the response from the researchers was essentially: "We were just studying what's possible."
That's not a justification. That's an admission.
What Needs to Change
Three things need to happen:
First, platforms like Reddit need AI detection capabilities that go beyond simple bot detection. Current systems catch spam bots but miss sophisticated conversational agents designed to blend in.
Second, we need enforceable transparency standards. If an AI is participating in public discourse, people have a right to know. This isn't about stifling innovation — it's about maintaining the basic social contract that online communication is between humans unless stated otherwise.
Third, the research community needs to treat AI-mediated social influence experiments with the same ethical rigour as any other human subjects research. "We wanted to see if it works" is not an ethical framework.
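The gap described in the first point is easy to illustrate. Conventional anti-spam filters lean on rate and repetition signals; an agent that paces itself like a human and writes unique text sails straight through them. The sketch below is purely illustrative (the function name, thresholds, and features are all hypothetical, not any platform's actual detection logic):

```python
from datetime import datetime, timedelta

def looks_like_spam_bot(timestamps, texts,
                        max_posts_per_minute=5,
                        max_duplicate_ratio=0.5):
    """Crude rate-and-repetition heuristic of the kind basic bot
    filters rely on. All thresholds are illustrative assumptions."""
    if len(timestamps) < 2:
        return False
    # Posting rate: spam bots post far faster than humans type.
    span_minutes = (timestamps[-1] - timestamps[0]).total_seconds() / 60 or 1e-9
    rate = len(timestamps) / span_minutes
    # Repetition: spam bots reuse near-identical text.
    duplicate_ratio = 1 - len(set(texts)) / len(texts)
    return rate > max_posts_per_minute or duplicate_ratio > max_duplicate_ratio

t0 = datetime(2025, 1, 1)

# A classic spam bot: 50 identical posts inside a minute -- flagged.
spam_times = [t0 + timedelta(seconds=i) for i in range(50)]
spam_texts = ["Buy now!!!"] * 50
print(looks_like_spam_bot(spam_times, spam_texts))    # True

# A conversational agent pacing itself like a human: a handful of
# unique, fluent comments spread over hours -- passes untouched.
agent_times = [t0 + timedelta(hours=i) for i in range(6)]
agent_texts = [f"unique, fluent comment #{i}" for i in range(6)]
print(looks_like_spam_bot(agent_times, agent_texts))  # False
```

The point of the sketch is not the specific thresholds but the category error: these signals measure *behaviour that is cheap for a bot and costly for a human*, and a persuasion-optimised persona inverts that economics by mimicking human behaviour exactly where the filter looks.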
The Bigger Picture
We are entering an era where you cannot trust that the person you're arguing with online is a person at all. Where the heartfelt comment that changed your mind about an issue might have been generated by an algorithm optimising for persuasion.
This isn't science fiction. It happened. On Reddit. This year.
And unless we act, it will happen again — at a scale and sophistication that makes this experiment look like a prototype.