
The Children Didn't Go Anywhere.



Something happened in Brussels this week, and unless you live inside policy or technology, you would not have seen it. There was no headline that carried the weight of it. But the consequences are already moving.

The European Parliament voted to let a set of temporary powers expire. Powers that had allowed internet platforms to scan for child sexual abuse material. That phrase can slide past you if you let it. Do not let it.

This is the legal permission to detect images and videos of real children being abused by real adults. Not simulations. Not edge cases. Evidence of crimes already happening, already being shared, copied, and redistributed at a scale most people would struggle to comprehend.

Those powers were a bridge. Europe has some of the strongest privacy protections on earth, and rightly so. But those protections created a tension. Scanning content, even for something this grave, sits in direct conflict with fundamental privacy rights. So lawmakers built something temporary. Scan for this specific, documented harm while we work out something permanent. They never finished the permanent version. The bridge is gone now. And the children the bridge was protecting did not go anywhere.

When a photo is uploaded or a file is sent, it can be checked against databases of known abuse material. Not by human eyes, but by converting it into a digital fingerprint and matching it against images already identified by organisations like the National Center for Missing and Exploited Children, the Canadian Centre for Child Protection, and the Internet Watch Foundation. Project Arachnid, built by the Canadian Centre for Child Protection, does not wait for uploads at all. It scans continuously, across the open internet and the dark web, processing tens of thousands of images every second, issuing removal notices, tracking whether anything is actually taken down. Since 2017 it has driven the removal of millions of files across more than a thousand providers. The backlog still sits in the tens of millions. That is not a statistic. It is the shape of the problem.
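For readers who want to see the shape of that matching step, here is a deliberately simplified sketch in Python. The hash list and values below are placeholders, not real entries, and production systems use perceptual fingerprints such as PhotoDNA that survive resizing and re-encoding rather than the exact cryptographic hash shown here. It is a sketch of the idea, not a description of any one organisation's system.

```python
import hashlib

# Placeholder entries standing in for hash databases maintained by
# organisations such as NCMEC or the Internet Watch Foundation.
# Real systems use perceptual hashes; SHA-256 here only illustrates
# the "fingerprint and look it up" idea.
KNOWN_HASHES = {
    "example-entry-1",
    "example-entry-2",
}

def fingerprint(file_bytes: bytes) -> str:
    """Convert a file into a fixed-length digital fingerprint."""
    return hashlib.sha256(file_bytes).hexdigest()

def matches_known_material(file_bytes: bytes) -> bool:
    """True if the upload matches a previously identified file."""
    return fingerprint(file_bytes) in KNOWN_HASHES

# A match would be blocked and reported; everything else passes
# through without a human ever viewing it.
```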

These systems exist because children who were abused years ago are still being found in this material today. The abuse did not end when the camera stopped. It continues every time the image moves.

Platforms can also analyse how accounts act. Rapid contact with minors. Attempts to move conversations into private channels. Language patterns associated with grooming. Because by the time explicit material appears, the harm has already happened. All of this runs at a scale no human system could manage. Millions of files. Billions of interactions. Every day. A global detection network that works because platforms are allowed to look. Not at everything. Not without limits. But enough to connect what would otherwise remain invisible.
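The behavioural side can be sketched the same way. Every field name, threshold, and weight below is invented for illustration; no platform's real system is this simple. The point is only the principle: several weak signals, none conclusive on their own, combined into one score that a human can review.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Illustrative signals only, loosely mirroring the patterns
    # described above.
    new_minor_contacts_per_day: int
    asked_to_move_private: bool
    grooming_language_flags: int

def risk_score(activity: AccountActivity) -> int:
    """Combine weak behavioural signals into a single reviewable score."""
    score = 0
    if activity.new_minor_contacts_per_day > 10:
        score += 2
    if activity.asked_to_move_private:
        score += 2
    score += min(activity.grooming_language_flags, 3)
    return score

# In practice a score like this would typically feed human review,
# not automatic punishment.
flagged = risk_score(AccountActivity(14, True, 2)) >= 4
```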

Now that legal clarity has fractured. Some companies will continue. Some will narrow. Some will wait. And in systems this large, inconsistency creates gaps. Gaps are where harm hides.

In that same session, the Parliament moved to ban AI tools known as nudifiers. Tools that take an ordinary image of a real person and generate fake explicit content without consent. You do not need to be famous. You just need to exist somewhere online. Teenagers are already using these tools against each other, and in some cases against their teachers. Australia has already moved here. The eSafety Commissioner is enforcing removal of non-consensual intimate images, including those generated by AI. This is not theoretical. It is already happening.

Enforcement is tightening elsewhere. Under the Digital Services Act, investigations have opened into platforms including Pornhub, XVideos, and Snap. The penalties are calculated as a share of global revenue. Enough to force change. In the United States, Meta and YouTube have been found liable not just for what users posted, but for how their systems were built. Responsibility is shifting from behaviour to design.

And underneath all of this is a tension that does not fit into headlines.

These detection systems are powerful because they can see patterns at scale. That is what makes them effective. It is also what makes them expandable. Once a system can do this, it does not stay neatly contained. Not through conspiracy. Through drift. Through the quiet logic that if something works, it gets used again. Extended. Applied to the next problem. Then the next. That is how boundaries move. Not suddenly. Gradually. Until what once felt extraordinary starts to feel normal.

This is why what is happening in Australia matters. The Children’s Online Privacy Code sits inside existing privacy law. It does not create new surveillance powers. It tightens the conditions around what already exists and forces a different question. Not what can be collected. But whether it should be. The best interests of the child become the test that everything else has to pass. That sounds simple. It is not. It forces discipline into systems that were not built with restraint in mind.

But there is a breaking point inside all of this that needs to be named.

The Code is circling fifteen as the age at which a child can consent to how their data is used. The Australian Social Media Minimum Age Law sits at sixteen. Two thresholds. Same government. A fifteen-year-old who is technically not permitted on a platform but is there anyway is not a hypothetical. That is the current reality. And if that same fifteen-year-old can legally consent to data collection, a platform can point to that consent as a basis for processing their data, even while the question of access sits in a different part of the law. That is not a clean loophole, but it is something more common and more dangerous. A grey zone. And in large-scale systems, grey zones are where policy intent gets diluted. Alignment closes gaps. Misalignment creates them. And gaps, at this scale, do not stay theoretical for long.

In that same environment, the Australian government is accelerating artificial intelligence adoption across the whole economy. Anthropic's CEO Dario Amodei met with Prime Minister Anthony Albanese in Canberra this week and signed an agreement to share Economic Index data tracking AI adoption and its impact on Australian workers and jobs. Anthropic will also share findings on emerging model capabilities and risks, participate in joint safety evaluations, collaborate on research with Australian universities, and target investment in data centre infrastructure and energy across Australia.

This is not a conspiracy. It is strategy. It is happening in Australia and everywhere else simultaneously, and in many respects it is the right direction. But it exposes something that cannot be talked around. The same government still working out where the boundaries should sit around children, around consent, around monitoring and privacy, is also rapidly expanding the capabilities of the systems that will operate inside those boundaries. The capability is accelerating. The governance is still negotiating itself. And when those two things move at genuinely different speeds, the gap does not remain theoretical. It becomes structural.

When something becomes structural, the burden shifts. To platforms, which will interpret every ambiguity in the way that best serves their commercial interests. To regulators, trying to enforce rules that were never properly aligned. To advocacy organisations, asked to hold together a system that was never coherently joined.

But most of all, the burden shifts to children.

Because children are the ones living inside all of it simultaneously. Moving through the gaps between laws that do not agree. Moving through systems that have not settled the question of who is responsible for them, or when. Moving through environments already running, already scaling, already making consequential decisions about their attention, their data, and their development, while the rules are still being written in committees.

The technology is not waiting. It is being funded, deployed, and embedded right now, in real institutions, in real schools, in real systems that real children move through every day.

So the question is no longer whether we can build systems capable of seeing harm. It is whether the people building those systems, and the people responsible for governing them, are moving with enough coherence and enough honesty to make them genuinely safe.

Because if they are not, the system will still hold together. Just not around the people it was built to protect.
