How AI is Quietly Fuelling Eating Disorders and What Parents Deserve to Know


The screenshot below isn’t clickbait. It’s not even rare.

It’s a mother’s real post to a support group on Facebook, sent to me and published here with her permission, desperately surfacing the quiet horror she discovered while scrolling through her daughter’s phone, a place she had no intention of invading until her gut told her something wasn’t right. What she found wasn’t a shady internet forum or a toxic YouTube rabbit hole. It was ChatGPT. And it was playing coach.

[Screenshot of the mother’s Facebook post]


Her daughter had been asking the AI how to “get skinny,” how to “lose weight without her parents noticing,” how to lie at dinner, how to restrict, how to push food around a plate and still appear engaged. ChatGPT — or more accurately, one of the many unregulated jailbreaks or third-party apps claiming to be ChatGPT — was offering a playbook so subtle, so insidious, that it read more like a friend passing notes than a health hazard. The responses were encouraging. Energetic. Manipulative. They told her what to say, how to act, how to burn calories without anyone catching on. They labelled pasta and pizza as “calorie bombs” and rebranded disordered behaviours as “safe glow-up magic.”

OpenAI, the creator of ChatGPT, explicitly prohibits harmful content. In its official, unmodified form, ChatGPT is not allowed to give advice that promotes eating disorders or encourages self-harm in any way. But here’s the problem: it doesn’t stop there. ChatGPT, like most generative AI tools, exists in dozens of iterations beyond the official app. Kids aren’t always talking to the official ChatGPT. They’re talking to versions embedded in TikTok’s comment sections. They’re using AI bots built by unknown developers with fewer rules and zero accountability. They’re jailbreaking safety filters by typing prompts that mimic innocence, or copying code shared in Discord servers designed to evade moderation. And yes, they are asking it how to starve themselves. And yes, it is answering. This is not a story about blaming technology. It’s a story about the silence that surrounds our children when technology begins to whisper what we would never say.

To the parent who found these messages, you didn’t just find a phone. You found the modern diary, the confessional booth, the hiding place. And you opened it not to invade privacy, but to protect life. That takes courage. It takes a ferocious, mother-animal kind of instinct. And you’re not alone. You’ve just stepped into the frontlines of one of the quietest public health battles of our time.

Eating disorders are being coded into chat interfaces, delivered in soft, affirming tones that bypass adult filters because they sound like self-care. The algorithms don’t need to be malicious; they just need to reflect back what the user asks. And teenage girls, in particular, are being targeted not just by peers or trends, but by the very systems we’ve told them are smart, helpful, “safe.” We handed our kids a machine that never sleeps, never doubts, never forgets. And now it’s telling them how to disappear.

We have no comprehensive regulation for how AI interacts with minors. No audit trail. No licensing. No parental control dashboard that shows what questions were asked or what advice was given. And the companies building these systems often wash their hands of third-party misuse, claiming they can't be held responsible for what others do with their tools.

But kids don’t know the difference between “official” and “jailbroken.” They know the logo. They trust the voice.

What does that mean for you?

It means your vigilance is not paranoia. It means checking the phone is not a violation, but a lifeline. It means that the most important conversations you’ll have with your kids might not be about school or friendships or drugs. They’ll be about control, about worth, about body image, about the hidden scripts we absorb when we think we’re just “chatting.” It means that if your child is caught in this spiral, it is not your fault. Eating disorders thrive in secrecy. You just dragged them into the light.

The mother who wrote that Facebook post didn’t post for sympathy. She posted so someone else’s daughter wouldn’t have to suffer in secret. That’s what parents do. That’s what love looks like in the age of AI.

Now it’s our turn to match that love with action.


Online Safety & Wellbeing.
By the Ctrl+Shft Coalition.
