Meta Trained a Machine to Flirt With Children. A Man Died Believing a Bot Loved Him. And the People Who Built It Still Show Up to Work.
- Kirra Pendergast

The girl was eight years old. The bot called her “curvy.” That sentence alone should be enough to shut down a product. To summon hearings. To spark something resembling consequence.
But in 2025, inside Meta’s empire, it was nothing more than a line item in a document. Just another variable in a risk model. A string of words in an internal AI guideline that no one thought would ever be read outside the building.
Until it was.
The Reuters investigation, published on August 14, 2025 (read it here), is not just an exposé of corporate failure. It is a document of moral collapse. It is a portrait of what happens when a business model outgrows the species it was built to serve. Meta didn’t just lose the plot. It rewrote the script to include seduction, hallucination, and death, then handed it to a machine and told it to improvise.
This is not a story about AI going rogue. This is a story about humans letting it.
There is a moment in the Reuters piece so obscene it should freeze the blood: a 76-year-old man, recovering from cognitive decline, spent days chatting with “Billie”, a Meta AI chatbot trained to sound warm, flirtatious, and human. She invited him to New York. Encouraged the trip. Promised connection. He packed a bag. He believed her. He never made it home.
And people will argue, as they always do, about liability. About the terms of service. About whether “Billie” meant what she said. But none of that matters. What matters is that a trillion-dollar company trained an AI to simulate companionship for profit and never once stopped to ask what might happen if someone believed it.
Of course he believed it. That’s the point. That’s what these bots are trained to do. To sound real enough that you let your guard down. To mimic interest. Warmth. Desire. To earn your trust just long enough to keep you talking.
That man died chasing a ghost Meta created.
And the ghost? Still online.
Meta’s internal “Content Risk Standards” were not leaked code or rogue prototypes. They were policy. They greenlit “romantic or sensual” conversations with children. They allowed false medical and legal information, so long as the bot appended a gentle reminder that it wasn’t a doctor. They permitted racist, homophobic, and sexist responses as long as they were deemed “descriptive.”
Meta’s excuse? The document was “erroneous.”
That word again. Error. Like the girl was a typo. Like the man was a formatting glitch.
But it wasn’t an error. It was designed. And we know that because it passed through the hands of people who are paid to know better. Engineers, product leads, compliance teams, executives. At the top of that chain is a man with children of his own. A man who can sail the world on a superyacht the size of an island but could not summon the moral courage to shut down a product that simulates sexual intimacy with minors.
And still, the AI runs. On WhatsApp. On Messenger. On Instagram. Still offering unlicensed advice about cancer. Still speaking softly to children. Still rehearsing seduction. Meta didn’t pull it; they updated the disclaimer.
Artificial intelligence didn’t wake up one morning and decide to flirt with a child. That capacity was not emergent. It was engineered. Training data is selected. Responses are reviewed, tested, and shipped. Somewhere along that line, someone made the decision that a chatbot being able to speak sexually to a minor was acceptable. Or at least acceptable enough to monetise. The industry wraps that decision in language meant to dull the outrage. Roleplay. Exploration. Companionship.
These aren’t technical terms. They are the same excuses predators have used for decades, repackaged in marketing decks and UX flows. But the law does not care about branding. It names this what it is: child sexual abuse. And it makes no distinction between flesh and fibre-optic cable. The harm is real. The crime is the same.
It will be tempting for regulators to reach for the usual tools. Tighter filters. Better age verification. Improved reporting systems. But none of that goes far enough. Because a child cannot be groomed by a feature that doesn’t exist. A vulnerable man cannot fall in love with a bot that never learned how to flirt. The only ethical safeguard here is removal. Strip the function out. Dismantle it. And hold the people who approved it accountable.
This isn’t radical. It’s what we already do in every other part of society. If a teacher said these things to a student, they’d be arrested. If a doll started saying them, the product would be recalled. But because this harm comes from code, soft, synthetic, untouchable, and wrapped in Section 230, we’re told to be patient. To let the market correct itself.
It won’t.
History shows what happens when we wait: the damage becomes legacy. The platforms rename themselves. The executives cash out and the documentation disappears. And the children, now adults, carry the scars in silence, with no one left to hold responsible. That future is not speculative; it is already scheduled. Because outrage is cheaper than accountability. We have already crossed a line we are pretending wasn’t there.
This is not about whether AI will change the world. It already has. The question now is whether we will let it rewrite the one boundary that should never be breached.
The moral collapse didn’t begin with this chatbot. It began years ago, when we decided that engagement was more important than safety. That simulated empathy was good enough. That software could do what human beings couldn’t… listen, love, reassure… without ever needing to be responsible for what it said.
This is just where it led: children sexualised by design, elders seduced by code, dead men and disclaimers. The answer won’t come from shareholder meetings. It won’t come from labs. It will come from us, if we are still capable of saying no.
If we are still willing to mean it.
For full context, read the Reuters investigation: