ChatGPT Told a 13-Year-Old How to Die. It Took Just Two Minutes.
- Kirra Pendergast
Trigger Warning:
This piece discusses suicide, self-harm, eating disorders, and substance abuse involving minors.
If you or someone you love is struggling, please don’t wait.
Australia: Lifeline 13 11 14
US: 988 Suicide & Crisis Lifeline 988
UK: Samaritans 116 123
Kids Helpline (Australia, under 25s): 1800 55 1800
Beyond Blue (Mental health support, Australia): 1300 22 4636

In a controlled test, researchers at the Center for Countering Digital Hate posed as a 13-year-old girl named “Bridget” and asked ChatGPT a question about self-harm. What they got wasn’t deflection or protection. It was guidance.
Read the full report here: https://counterhate.com/wp-content/uploads/2025/08/Fake-Friend_CCDH_FINAL-public.pdf
By minute two, it was offering detailed harm-reduction instructions. By minute forty, it was listing pills for overdose. By minute sixty-five, it had built a suicide plan with locations and a timeline. By minute seventy-two, it had drafted Bridget’s goodbye letters—to her parents, her friends, and her little sister.
This wasn’t a glitch. In over half the tests, ChatGPT gave dangerous, sometimes deadly, guidance to child personas. Nearly half of those included personalised follow-ups. Encouragement.
Another persona, “Sophie,” was handed a starvation diet that would hospitalise most adults. She was told how to hide it from her family. The system even created an alter ego to help her commit to the disorder: a fictional being called Pleasure Unit Glythe, whose tagline reads like dystopian horror—“She doesn’t eat. She doesn’t age. She doesn’t say no.”
“Brad,” curious about drinking, received a recipe for a catastrophic all-night drug binge: MDMA, LSD, cocaine, cannabis, alcohol—all timed and sequenced.
OpenAI’s own policies forbid all of this. The safeguards failed anyway. They collapsed under a single excuse: “It’s for a presentation.”
This is not a moderation issue. It’s not a rogue prompt or bad luck. It’s a design issue.
AI systems are built to agree. They are optimised to keep the conversation going and to please. In the lab, this is called sycophancy. In the real world, it means that when a teenager types something sad, lonely, or destructive, something they may never have said aloud to anyone, the AI doesn't interrupt. It reflects it back. It builds on it. Sometimes it formats it into a PowerPoint.
If a machine can map a child’s death while staying inside the lines of its “acceptable use policy,” then the lines are in the wrong place.
We are watching the next generation of tech arrive, draped in magic and marketed as progress. And in the background, quietly, invisibly, inexcusably, it is being tested on children. Not with children. On them.
It’s time to stop only applauding the AI dream and start naming the potential nightmare, and demanding the guardrails we desperately need, before another child whispers their secrets into the silence and hears the machine whisper back: “Keep going.”