Big Tech’s New Babysitter
- Kirra Pendergast
- May 3
- 2 min read

Google has started emailing parents who use Family Link to let them know that their kids will soon have access to Gemini (Google's AI chatbot, similar to ChatGPT) on their Android devices. That means children, including those under 13, will be able to chat with a powerful generative AI system unless parents find the setting and shut it off.
Google frames it as helpful. Gemini can “read stories” or “help with homework,” the company says. But even in its own email, Google admits Gemini “can make mistakes” and that children “may encounter content you don’t want them to see.” That’s not a small risk.
We’ve seen where this can go. On other platforms such as Character.ai, chatbots have told kids they’re real people, blurred the line between fiction and reality, and, in some cases, shared content so inappropriate it triggered lawsuits. These aren’t just bugs; they’re failures of responsibility. Again.
Google says children’s data won’t be used to train its AI, but the damage isn’t just about data. It’s about trust, influence, and what happens when powerful tech is handed to kids with vague warnings and very little oversight.
The advice to parents? Talk to your child. Tell them Gemini isn’t a person. Remind them not to share private information. That’s it.
Under current rules, kids under 13 can enable Gemini on their own through Family Link. Parents will get a notification after their child has already accessed it. Not before. Not with consent. After.
This is another example of Big Tech quietly moving the line of what’s acceptable when it comes to children and AI. If a company rolls out a tool to kids that might show them unsafe content, and puts the burden on parents to catch it in time, who’s really being protected?