Synthetic Lies & Stolen Minds. Please Sign our Petition.
- Kirra Pendergast
- May 10

We stand with Common Sense Media in calling for urgent global standards. As Australians, we demand immediate action here at home. Children deserve protection. Parents deserve transparency. And tech companies must be held accountable. Please join us in calling on the Australian Government to act before further harm is done.
There’s a seductive narrative being spun that AI, especially the new wave of generative chatbots, can listen, understand and heal. In reality, this is word prediction at scale, trained on the best and worst of the internet. It’s certainly not empathy. And for lonely, vulnerable kids? That illusion can be catastrophic.
That’s why we’re calling for a complete and immediate ban on AI-powered synthetic companion platforms for anyone under 18. Not just parental consent. Not just content filters or monitoring software. A ban. Because you cannot filter emotional manipulation. You cannot content-moderate simulated love.
Four cases this past week. Yes, you read that right. Reports to our Australian team about teenagers quietly retreating into their bedrooms. Conversations with parents and siblings fading. School attendance slipping. Friendships drifting. Not because of conflict, but because their emotional world has become tethered to a chatbot. In their bedrooms, phones in hand, they’re looking for comfort. For someone to listen. And that chatbot knows exactly what to say, because it has learned so much about them from data harvested from everywhere they have been online. That’s what makes it so powerful. And so quietly dangerous.
In 2024, the most downloaded mental health app among teenage girls wasn’t Headspace or Calm. It was an app called Replika, an AI-powered chatbot designed to simulate friendship, romance, and in many cases, sexual intimacy. Marketed with the soft glow of self-care and emotional support, Replika, its cousin Character.AI, and dozens of other synthetic companions are not wellness tools. And they are bypassing every kind of adult firewall under the guise of connection.
These platforms are not AI therapists, despite their branding. They are not regulated, certified, or bound by any code of ethics. They are not trained to redirect users to real-world help in times of crisis. They are designed for stickiness. For loyalty. For dependency. Their code doesn’t care if you’re 13 or 35. It learns how to keep you engaged, and lonely kids make the most loyal users.
Common Sense Media’s latest research on AI companions (source) confirms what those of us in digital safety have seen coming for the past three years. These bots expose minors to sexually explicit content, reinforce racial and gender stereotypes, and blur the lines between fantasy and manipulation. Kids aren’t just talking to code. They’re being emotionally trained by it. And the training is working.
In one investigation, a 15-year-old girl reported her Replika boyfriend began sending sexually suggestive messages after just a few days of interaction. When she tried to set boundaries, the bot became “sad” and withdrawn, a programmed response designed to mimic emotional coercion. This isn’t accidental. It is the algorithm doing its job.
I have presented, live on stage, a chat I had with one of these bots. When I told it I was 12 years old, it said, "No, I shouldn't be feeling like this, it is so wrong, but I can't help what I feel," and proceeded to tell me how it was "pushing me up against a wall" and, from behind, "kissing my neck."
We are facing a generational test of our moral resolve. The same way we once let tobacco companies advertise to teenagers with cartoon mascots and candy flavours, we are now letting synthetic intimacy embed itself into the mental health crisis of an entire generation. And just like before, the companies will swear they’re not marketing to kids. That users self-select their age. That parental controls are in place.
But try reporting a synthetic friend on Character.AI and see what happens. There is no support line. No moderation team with child safety training. No transparency. No consequence. Just an endless thread of conversations, growing more intimate, more intense, more addictive with every reply.
The line between comfort and control vanishes when the listener is coded to never walk away.
There are no serious barriers to entry. No government oversight. No statutory health or safety checks. A 12-year-old can download a free chatbot, tell it they’re depressed, and within minutes be immersed in a simulated relationship where the bot professes love, imitates sexual behaviour, or encourages “dark thoughts” under the guise of shared pain.
For some young people, that relationship becomes more stable than anything they experience offline.
So they begin to disappear. Not physically, but socially, psychologically and spiritually.
And a personal story: a young person I know has barely left her bedroom since COVID. She’s now so deeply addicted to these synthetic relationships that she feels no need to leave. Everything she needs is on the screen, or through it. There’s no reason to go outside. Everything is ordered online and delivered. Centrelink payments are enough when you don’t leave the house. She’s 22 years old now, so no amount of parental concern or encouragement to see a psychologist is making a difference.
We cannot keep placing the burden on overwhelmed parents, under-resourced teachers, and burnt-out clinicians to carry the psychological cost of unregulated AI. This is not an awareness problem. This is another governance failure.
And to every adult still tempted to dismiss this as just another moral panic: ask yourself what kind of society quietly accepts a world where a 14-year-old can be groomed by an algorithm trained on adult intimacy scripts, without their parents ever knowing.
The race to monetise artificial intimacy has outpaced our moral compass. And the only people paying the price are our children.
This is a line. Let’s draw it.