How Photos on a School Facebook Page Have Become AI Training Data
- Kirra Pendergast
- Jun 8
- 5 min read

This isn’t an argument against AI in education. It’s a warning about whose AI we’re using and what we’re risking every time we hand it our most vulnerable data.
Once upon a time in a school near you, it was photo day and we knew the rules. Wear the right shirt, smile if you can. The photo went in a folder or hung on Nan’s fridge. It wasn’t perfect, but it was safe enough. Fast forward to 2025, and the same snapshot, just as innocent and ordinary, is uploaded into a machine many barely understand.
In schools right across the globe, some teachers are quietly feeding real children into artificial intelligence systems. Not out of malice, but out of enthusiasm, often coupled with exhaustion. Out of a culture that hasn’t caught up with the speed at which technology can turn something benign into something irreversible.
Over a decade ago, I began warning schools about the risks of posting children’s names, faces, and uniforms on public Facebook pages. I was told it was about “celebrating and connection to their community.” A lovely phrase. But it missed the point. Because it wasn’t about what we meant to do. It was about what we were making possible even back then. A trail of identifiable data, laid down without thought for the future, has now led us straight into the gaping mouth of AI.
Meta announced last year that it would use public Facebook and Instagram content dating back to 2007 to train its AI. That data, spanning more than 15 years, includes not just adult users but, crucially, data shared about and by children. Over those years, thousands of schools, sports clubs, and education departments globally have posted publicly on Meta platforms: student achievements, group photos, assemblies, awards nights, even missteps and disciplinary actions. These posts now sit inside the training data of Meta’s generative AI models. While the intention may have been to celebrate or inform, the result is that a generation of children have had their digital footprints absorbed into a machine they never opted into, and cannot easily opt out of.
Children’s rights to privacy, informed consent, and digital dignity are enshrined in international law, including the UNCRC’s General Comment No. 25 on children’s rights in the digital environment. Now that we know what is going on, feeding their likenesses, names, and stories into commercial AI without their knowledge not only violates these protections, it sets a dangerous precedent. Schools must be instructed and resourced to audit and remove public-facing posts involving minors and to transition to closed-loop, consent-driven digital communications. The training of AI on children’s public data, especially by proxy through school, dance, karate, and sport accounts, is not just a policy failure; it can be a profound ethical breach. Some schools are still arguing the need to upload, share, and click, without addressing how their feed feeds the machines. And once it goes in, it doesn’t come back.
Right now, in classrooms and staff rooms, well-meaning teachers are using generative AI tools to make worksheets, slide decks, and birthday cards. They’re uploading images of kids and colleagues, hoping for clever, creative outputs. Some teachers don’t know that when they upload a photo, it might become part of a permanent training dataset. Others suspect, but aren’t sure. And leadership? In too many places, they simply aren’t aware it’s happening at all. There is a fundamental difference between teaching students about artificial intelligence and feeding their personal information into the profit-hungry models built by Social Media Big Tech. We have blurred that line. And in that blur, some ethics have gone missing.
And then there is shadow AI: the unapproved tools staff adopt on their own. It thrives in the grey zones and slips past scrutiny under the banner of efficiency, labelled “just a tool.” Teachers are not the problem. They are resourceful, under-supported professionals trying to do more with less. The problem is that they are being handed immense digital power with very few guardrails and no warning. They’re using AI systems that were never designed for education. They were built by trillion-dollar companies whose business models rely on scale, surveillance, and perpetual extraction. These tools are opaque by design. You can’t see what happens to the data once it’s inside. You can’t know how it will be used to train the next model, or where that model will show up next.
An educator uploads a student’s photo to make an interactive learning card. An administrator posts a photo to a Facebook page. It ends up embedded in a neural network owned by a multinational corporation. That data becomes a training point, a pattern, a pixel, a probability that no school can track or retract. This is not a hypothetical; this is how machine learning works. And when that photo gets removed? The learning doesn’t. That image may be gone from your screen, but it is now part of the machinery. A weight on a node, a statistical fingerprint that can echo through future outputs without your knowledge or consent.
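To see why taking a photo down doesn’t undo the learning, here is a deliberately tiny sketch in Python. The model, the learning rate, and the variable names are all made up for illustration; no vendor’s real pipeline looks like this. The principle is the only point: once a training step has been taken on an image, the change lives in the model’s weights, not in the file.

```python
# Toy illustration only: a made-up linear model, not any real AI pipeline.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=64)        # the "model": 64 toy parameters
photo = rng.normal(size=64)          # stand-in for an uploaded photo's features
label = 1.0

def gradient(w, x, y):
    """Gradient of a squared-error loss for a simple linear model."""
    return 2.0 * (w @ x - y) * x

snapshot = weights.copy()
weights -= 0.01 * gradient(weights, photo, label)   # one training step on the photo

del photo                              # "removing" the photo afterwards
print(np.allclose(snapshot, weights))  # False: its trace remains in the weights
```

Nothing in this toy changes if the model has billions of parameters instead of 64; scale only makes any single image’s contribution harder to trace, not easier to remove.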
Every school needs to pause and ask:
Who controls the AI our staff are using?
Who owns the platforms being accessed from classroom computers?
What’s being uploaded when a teacher is using something unregulated at home?
Are there terms of service that guarantee non-retention?
Do we have written, fully informed consent from the parents of every child whose face might be shown to these systems?
If you can’t answer those questions, and they are just a few of those I ask our clients when rebuilding frameworks, you’re not operating safely. Governance means having clear rules, oversight, and accountability. It’s how we decide what is allowed, who is responsible, how decisions are made, and what happens when something goes wrong. In schools, governance around AI means putting in place real policies, not vague intentions, that control how these tools are used, what data is shared, who sees it, where it goes, and whether anyone has the right to say no. Good governance doesn’t block progress. It guides it. It’s not about banning tools; it’s about using them wisely, ethically, and transparently.
It also means rethinking how you train staff. This isn’t about getting better at writing prompts. This is about getting better at asking questions.
Where did this model come from?
Who trained it?
On what data?
What jurisdiction is it operating under?
Does it respect the privacy rights of minors under Australian law? Under GDPR?
We don’t let strangers film children in the playground. We don’t let corporations access medical records to “personalise” learning. So why are we uploading student work, faces, names, and learning histories into systems we do not govern?
Until the technology is built with education in mind, not as an afterthought but as a primary purpose, our job is to protect. To be clear-eyed. To lead with ethics, not novelty.
Yes, AI has a place in the future of education. But it must earn that place. Through transparency. Through respect. Through regulation that puts children first, not corporate growth metrics.
Governance isn’t a barrier to innovation. It’s what keeps innovation human.
We owe it to our schools to move past the breathless excitement and build systems that don’t just work, but that are worth trusting. Because without that trust, all we’re doing is feeding a machine that was never built for us.
Yes, we can help: hello@ctrlshft.global