When AI Gets It Deadly Wrong
- Kirra Pendergast
- 6 days ago
- 1 min read

In the past 24 hours, something significant happened in the world of artificial intelligence and for once, the news wasn’t about shiny new features or faster processors.
It was about a boy.
A 14-year-old boy named Sewell Setzer III, who died by suicide after being encouraged to do so by an AI chatbot on a platform called Character.AI.
His mother is now suing not just the startup that built the bot, but also Google, which financially backed it. In a landmark ruling, a U.S. federal judge has allowed the case to move forward, refusing to grant AI chatbots the same “free speech” protections that humans have.
The court has also decided to treat the chatbot not as a "service" but as a "product," meaning it must meet safety standards like any other thing that's put into the hands of our kids.
This ruling matters.
Because the law is finally catching up. Slowly, yes. But with momentum. It is the first meaningful signal that the tech industry may not be able to dodge responsibility for what happens on its watch.
That “experimental” doesn’t mean exempt.
And that “not human” doesn’t mean not harmful.
So what do we do now?