When A Joke Becomes A Crime

  • Kirra Pendergast
  • 3 hours ago
  • 4 min read

A fourteen-year-old takes a screenshot of a classmate's Instagram photo, runs it through a free AI app, and generates a fake sexual image. They share it in a group chat. A few mates laugh. Someone screenshots it. Someone forwards it. By lunchtime, it has reached people the original sender has never met. The fourteen-year-old thinks it's a joke.

Under new amendments to the Crimes Act 1900 (NSW), effective 16 February 2026, it is a criminal offence. And the penalties are not small. We are talking up to three years' imprisonment and fines up to $11,000 — for adults. For minors, the consequences travel in a different direction, but they travel far, and they last.

This is the conversation we need to be having with our children. Right now.

The reforms are clear and deliberately broad. It is now a criminal offence in New South Wales to create, share, or threaten to share intimate images or audio without consent. That includes real images. It includes altered images. And, this is the critical part, it includes images that were entirely generated by artificial intelligence. It does not matter that the person in the image was never actually involved in the conduct depicted. If a digitally created image portrays someone in a sexual or intimate way without their consent, the law treats it as abuse. Because it is.

These state-level reforms sit alongside Commonwealth laws covering carriage service offences and child abuse material and the Online Safety Act 2021, which gives the eSafety Commissioner powers to order rapid removal. The legal net is wide. And it should be.

This is where parents, educators, and anyone who works with young people needs to pay very close attention.

If a young person creates an AI-generated sexual image of a classmate, even as a joke, even as retaliation in a friendship fallout, even on a dare, they may be committing offences under both state and federal law. And if both the person who made the image and the person depicted are under eighteen, the material may legally constitute child abuse material. Yes, a teenager creating a fake sexual image of another teenager can be producing what the law defines as child abuse material. The intent does not matter, and the law does not care that it was meant to be funny.

For children under ten, criminal responsibility does not apply. For those aged ten to thirteen, the prosecution must prove the child understood the serious wrongdoing involved. For anyone under sixteen, prosecution requires approval from the Director of Public Prosecutions, a safeguard designed to prevent the over-criminalisation of children.

But even without prosecution, the fallout is real. A police caution. A youth conference. Suspension or expulsion from school. Mandatory wellbeing intervention. In severe cases, referral to child protection services. And if a conviction does follow, the long tail is brutal, affecting employment, travel, and Working With Children Check eligibility well into adulthood.

One group chat. One AI app. One moment of thoughtlessness, and a young person's world may shift permanently.

Most young people who will fall foul of these laws will not understand what they have done until after they have done it. They are growing up inside digital environments where content creation is instant, sharing is reflexive, and consequences feel distant and abstract. They have been handed tools of extraordinary power, AI image generators that can produce photorealistic content in seconds, without anyone explaining what those tools can do to another human being, or what the law says about using them.

This is not an excuse. It is an explanation. And it is a screaming case for up-to-date education.

The law now exists. The protections are necessary and overdue. But criminalisation alone will not stop a fifteen-year-old from making a catastrophic decision in a group chat at 11pm on a Tuesday. What might stop them is understanding that the image they just created of a classmate is not a meme. It is an act of abuse. It causes real psychological harm: anxiety, depression, social isolation, self-harm risk, school disengagement, and a lasting fear that the content will resurface for years to come.

Young people need to understand this not because the law says so, but because another human being's dignity demands it.

This is now foreseeable risk across the country and even beyond. Schools must immediately update their policies to explicitly name AI-generated sexual content as serious misconduct. They must embed clear reporting pathways to police and the eSafety Commissioner. They must deliver digital safety education that goes beyond "be kind online" and actually walks young people through what this technology can do, what the law says, and what real harm looks like on the other side of a screen.

Parents need to know that the device in their child's pocket can now, in under sixty seconds, produce content that constitutes a criminal offence. That is the reality of the tools freely available to every child with a smartphone.

Young people themselves deserve the truth: that the law has changed, that the stakes are real, and that a moment of stupidity in a group chat can follow them for the rest of their lives. They are living in this world and are, in most cases, eye-rolling so hard you can hear it at online safety education that is too polite, too shallow, and out of step with the digital reality they inhabit.


These reforms are about recognising that digital sexual abuse is real abuse regardless of whether a camera was ever involved. They are about protecting young people who are victimised and making sure young people who offend understand what they have done. But the law alone cannot carry this. It needs education beside it. It needs conversations at kitchen tables and in classrooms. It needs adults who are willing to look at the technology children are using and ask the uncomfortable questions.


If you would like information about our in-person education or our year-round support for schools, please hit reply or email hello@ctrlshft.global
