- Questions from Students about "The Ban" this week.
In less than a month, on 10 December, the Australian Social Media Minimum Age law starts being enforced. From 10 December 2025, social media platforms must stop Australians under 16 from having accounts and remove or deactivate existing under-16 accounts. The legal duty is on the platforms, not on children or parents. Platforms must offer clear information, let users download their data, and provide simple review/appeal options if a mistake is made. They cannot make government ID the only way to prove age; a non-ID option must always be available. Penalties for systemic non-compliance can be very large (up to $49.5 million). Here are some of the questions students have asked our team this week, along with our answers.

"If I'm under 16, still have social media after the start date, and something goes wrong online, will I get in trouble if I tell someone?"

No. Under these laws the consequences fall on platforms, not young people or parents. eSafety's compliance focus is on the systems and processes platforms use; it isn't about punishing individual kids for having an account. Even if some under-16 accounts slip through, that alone doesn't mean a platform is automatically non-compliant. Always speak up. Platforms must provide easy in-app ways to report problems (including suspected under-age accounts) and must handle those reports; if an account is deactivated, the user must be told what's happening, how to save their content, and how to ask for a review. What to say to students: "You won't be fined or charged under these rules. If something goes wrong, tell a trusted adult and report it in-app. The point of the law is harm reduction and support, not blame."

-----

"Will this actually work? How can they tell 15 years 10 months from 16?"

Platforms can use a mix of age-checking tools: for example, age estimation (like face or voice analysis), age inference (patterns in activity), and age verification (confirming a real date of birth). No single method is perfect, so the guidance encourages a layered "successive validation" approach: if one method is unsure, especially near the 16-year threshold, the platform may ask for another check before deciding. Many systems use buffer zones near the cut-off so borderline results trigger more checks rather than a straight yes/no (for the technically curious, a rough sketch of this logic appears below). The guidance also notes that accuracy around legal thresholds is the hardest part, so platforms are expected to keep improving their settings over time and back them up with easy review options for users. Privacy note: this is not a Digital ID scheme. Platforms cannot require government ID as the only option; they must offer a non-ID alternative (for example, an estimation method).

-----

"Is Pinterest covered? What about CapCut?"

The law applies to any service where a key purpose is social interaction, users can link/interact with each other, and users can post material. Services excluded by the Minister's rules aren't covered. Pinterest: because people post Pins, follow, and interact, Pinterest fits that definition, so it may be covered in Australia. CapCut: if the version used here includes a social feed where users post, link and interact within CapCut itself, then it may be covered. The test is what the service actually does for Australian users. Keep an eye on www.esafety.gov.au for updates, but be prepared for 10 December by downloading things you want to keep.
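Here is a minimal sketch of how a layered check with a buffer zone might hang together. The method order, the age estimates, and the two-year buffer are invented for illustration; this is not any platform's actual implementation.

```python
# Illustrative sketch only: a layered ("successive validation") age check
# with a buffer zone near the 16-year threshold. The +/- 2-year buffer and
# the idea of a fixed list of estimates are hypothetical assumptions.

THRESHOLD = 16
BUFFER = 2  # borderline estimates within +/- 2 years trigger another check


def check_age(estimates: list[float]) -> str:
    """Run age checks in order; escalate while results stay borderline.

    `estimates` stands in for successive methods, e.g. facial estimation,
    then activity-based inference, then an optional (never mandatory) ID.
    """
    for age in estimates:
        if age >= THRESHOLD + BUFFER:
            return "allow"             # clearly 16 or over
        if age <= THRESHOLD - BUFFER:
            return "deny_with_review"  # clearly under 16; offer an appeal
        # Borderline result: don't make a hard yes/no call, try another check
    return "needs_manual_review"       # every method was borderline


# A user of 15 years 10 months estimated at ~15.8 falls in the buffer zone,
# so the system escalates rather than deciding on one signal.
print(check_age([15.8, 15.5]))  # -> needs_manual_review
print(check_age([19.2]))        # -> allow
```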
-----

"I could get around it by…?"

Students will try these ideas; here's what the guidance expects platforms to do:

• "Change my country / use a VPN." Platforms are expected to use several location signals (IP address, GPS, device settings, phone number, app-store data) and to detect VPN/proxy use, so a VPN alone is unlikely to work for long (a rough sketch of how multiple signals might combine appears at the end of this article).
• "Use my parent's photo for face ID." Age-estimation systems include liveness checks to stop the use of someone else's photo or a deepfake. If signals conflict (e.g., activity looks clearly under-16), the platform should escalate to another check.
• "Make an account in my parent's name." Platforms are expected to monitor for account takeovers or transfers (e.g., sudden changes in details, many accounts from one device) and act on them.
• "Set 'parent-managed' on Instagram / tweak my age later." Relying on self-declared ages isn't enough, and platforms should block age changes without proper checks and prevent quick re-registration after removal.

Circumvention attempts are anticipated and should be limited by design, but if a young person slips through, the focus remains on removing the account safely, not punishing the child.

What this means in practice

For platforms (what they must do):
• Detect and deactivate/remove under-16 accounts with kindness, care and clear communication, including data-download options and review/appeal.
• Put age checks at sign-up (with a non-ID choice), use layered checks if needed, and prevent immediate re-registration.
• Monitor and limit circumvention (VPN detection, liveness, device/IP checks).

For young people:
• If something goes wrong online, tell a trusted adult and report it in-app.
• If your account is flagged by mistake, use the review process the platform must provide.

For parents/educators:
• Reassure kids that they won't be fined under these laws, and that speaking up is the safest way to get help.
• Platforms must provide clear information and support links when taking action on accounts.

Quick script you can use in class or with your kids

From 10 December 2025, social media companies, not kids, are responsible for making sure under-16s don't have accounts. If you're under 16 and something goes wrong online, tell someone. You won't be in legal trouble under these rules for speaking up. The company must remove under-age accounts safely, let you save your stuff, and give you a way to challenge mistakes. Trying VPNs or using a parent's photo is risky and often spotted. If you see a mistake or need help, report it in-app and talk to a trusted adult.

For all of our free school and parent resources click here:
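As promised above, here is a minimal sketch of the multi-signal location idea: several signals are combined so that a VPN masking any one of them is not decisive. The signal names and the simple majority rule are assumptions for illustration, not how any platform actually works.

```python
# Illustrative sketch only: combining several location signals, as the
# guidance describes (IP address, GPS, device settings, phone number,
# app-store region), so a VPN hiding one signal isn't decisive.

def likely_in_australia(signals: dict[str, str | None], vpn_detected: bool) -> bool:
    """Majority vote across available signals; discount IP if a VPN is seen."""
    votes = []
    for name, country in signals.items():
        if country is None:
            continue  # signal unavailable (e.g., no GPS permission)
        if name == "ip_geolocation" and vpn_detected:
            continue  # a VPN makes the IP signal unreliable, so ignore it
        votes.append(country == "AU")
    # A majority of the remaining signals pointing to Australia wins
    return votes.count(True) > len(votes) / 2


signals = {
    "ip_geolocation": "US",   # VPN exit node
    "device_region": "AU",
    "sim_country": "AU",
    "app_store_region": "AU",
    "gps": None,              # permission not granted
}
print(likely_in_australia(signals, vpn_detected=True))  # -> True
```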
- One Month Until The Australian Age Delay and Here is What We Still Get to Keep
In a month, the age delay kicks in. For many families, that means TikTok and others go dark. For a generation of kids who've grown up dancing, lip-syncing, creating and sharing online, it might feel like something's being taken away. But here's what's not being banned: the music. The movement. The joy of being silly, being seen, being together. Music and movement are how kids (and adults) let things out without having to explain. They help regulate emotion, build trust, and give kids a way to feel like themselves again, especially when everything else feels a bit shaky. None of that disappears with the social media age delay. If anything, this is a chance to bring it closer to home.

The app access will shift. That's the nature of it. But kids still need rhythm. Still need to move their bodies, blow off steam, laugh with their friends, and feel connected. That doesn't need a screen; it needs space and a bit of imagination. So here's what we can do. Let music become part of the everyday again. A song in the morning to set the tone. A family dance-off while dinner's on. Let kids DJ their moods. Let them teach you their latest routine, no cameras, just company. Give the little ones chalk to draw a hopscotch in the driveway. Let them drag the speaker outside. Let your teens claim the garage as a dance floor or a jam-session room. If they used to film videos with friends, help them find ways to keep the creativity going, an old film camera from a market, for example. Offline doesn't mean alone.

Teachers and youth workers: build music and movement into the day. Not as a reward. As a right. Kids need ways to move stress through their bodies. They need spaces where they can be expressive without performing. Parents/carers/grandparents: get in there too. Dance badly. Sing out of tune. Make it fun. Make it real. The ban might close one door, but it's also a good chance to open others. Less about rules, more about rhythm. Less about control, more about connection, because even without the apps, kids still know how to move. Still know how to feel. Still want to be part of something bigger than themselves, so let's make sure they can. You can still film and you can still create. You can still laugh till your ribs hurt or you cry. You just don't need to post it to prove it. Here are some ways families can keep the energy going without needing the algorithm to clap back:

1. Family Dance-Offs (Private Edition). Pick a song. One that gets everyone moving, even the reluctant ones. Split into teams (parents vs kids is always a good one), learn your own routine, and perform it in the lounge. Film it if you want, but keep it on your phone. Turn it into a family tradition. Watch the old ones back in a year and see how far you've come (or how ridiculous you looked).

2. Challenge Vault. Get the kids to create a jar of challenges: silly dance moves, weird remixes, or new steps they invent. They can film these, keep them on their device, and share them in person with cousins, grandparents, and select friends, privately.

3. 'Pass the Move' Videos. Each person records a move, passes the phone, and the next person adds theirs. Keep passing till you've built a full routine. Edit it if they want to practise those skills. No one needs to see it online. It's yours.

4. Soundtrack Saturdays. This is straight from my childhood. I know every word to Earth, Wind & Fire and Fleetwood Mac, Bryan Ferry and Grace Jones thanks to my beautiful Mum breaking out the vinyl every Saturday!
In fact, if I want to learn something, I sing it, as I have a superpower for remembering lyrics. There, the secret is out! Each week, pick a theme: 80s throwbacks, movie musicals, songs from your childhood. Everyone dresses up, picks a song, and dances. Think kitchen disco meets karaoke with fewer rules. Film it or don't. Just keep the music up loud.

5. Friends-Only Collabs. If your kids used to do collab videos, encourage them to keep doing it, just differently. Invite their mates over for a "dance and record" day. They can share clips through AirDrop or messages instead of posting them. Still creative, still connected.

6. Year in Dance. Set up a private folder on your phone: "2025 Dance Year." Add clips from each week or month. At the end of the year, you've got your own personal highlight reel. No likes needed. Just memories that hit play when you need them.

7. School or Community Showcases. Work with teachers or youth centres to run in-person dance or music nights. The kind where no one cares if you're good, just that you showed up. Let kids plan, choreograph, and perform for real people in real time. No comments section required.

8. Car Karaoke. My personal favourite... Car karaoke is one of the easiest ways to keep connection alive without needing a screen: just load up a shared playlist, let everyone pick their favourite songs (no judgement), add in opera, heavy metal, the whole lot, and turn even the school run into a full-blown concert. Film it if you want, but keep it for yourselves. It's messy, loud, off-key fun that doesn't need to be posted to matter, and those are often the moments that stick.

The point is, kids don't stop being creative just because the platform goes, and connection doesn't disappear just because it's not being broadcast. If anything, this is a chance to remind them that they're allowed to create just for fun. Not for likes. Not for views. Just because it feels good.
- Six Fake Names, One Predator, and the Digital Silence That Let Him In
A 14-year-old girl in Greater Manchester was groomed across Discord and Snapchat by a man pretending to be six different people. Not one platform raised an alert. Not one system joined the dots. Karl Davies was just sentenced to 20 years in prison. But the real story isn't what happened to him. It's what didn't happen online. Every major platform has moderation tools for content. None has a working protocol for how danger moves from app to app, erasing itself as it goes. When harm crosses platforms, the trail disappears. So does accountability. We talk endlessly about "AI safety" and "trust & safety," yet a child can still be groomed across five platforms and there is no shared channel to raise a single, unified flag. This isn't just a content and contact problem. It's a coordination problem. Until big tech learns to communicate with itself, children will keep paying the price. Read more here: https://thisiskirra.substack.com/p/six-fake-names-one-predator-and-the

_____________

Kirra Pendergast has limited availability for bookings for parents, educators or conferences onsite at the following locations in 2025/26 (please note Kirra no longer presents to students, but we have facilitators available).

Dublin, Ireland: December 8th, 11th, 12th
London, England: December 15th, 16th
Perth, Australia: January 21st, 22nd, 26th
Melbourne, Australia: February 11th, 12th, 13th
Gold Coast & Brisbane, Australia: February 17th, 18th, 23rd, 24th, 25th
UK & Europe: March to May
Sydney, Australia: May 4th, 5th, 6th
Gold Coast, Australia: May 13th, 14th
Hong Kong: May 18th, 19th
UK & Europe: June to August
Australia: September 7th to 25th

Online bookings are also available, with more availability. To book, simply reply to this email or contact hello@ctrlshft.com
- FREE VIDEO 16+ DELAY RESOURCE PACK FOR SCHOOLS
We're heading into a new chapter in the story between young people and social media. This past week we have become deeply concerned that many Australian children don't even know this is happening. From 10 December 2025, social media platforms will be legally required to block or remove accounts held by Australians under the age of 16. They'll also need to use privacy-safe age-checking systems and give young people the right to download their data or challenge mistakes. It's a significant shift. But right now, not enough is being done to help the people it affects most understand what's coming. We've created a free, plain-language resource pack to help young people, parents, and schools make sense of the new age rules. This is not about fear. It's about fairness. Every young person deserves to know what's changing and why, before they find out the hard way. We support the delay. Fully. It's a critical step in protecting children online. And it isn't going away. But care and compassion are just as essential as regulation. We need to do more to make sure young people are informed, respected, and supported through the transition.

Inside the pack, we explain:
• What the new law means, both for under-16s and for the platforms
• How the age checks will actually work, including the requirement for a privacy-friendly, non-ID option
• What to do if a young person's account is wrongly flagged or removed
• How families and teachers can start the right conversations now, before the deadline arrives

We made this because we believe in informed kids, not blindsided ones. We believe no child should wake up one morning to find their account gone, their connections severed, and no one able to explain why. The new rules are coming. This guide will help you prepare. 👉 Download the free pack: https://www.safeonsocial.com/product-page/social-media-minimum-age-school-video-pack
- Social Media Minimum Age Law: The Ones Left in the Middle
"I'm 16 but I look 14, so how will I get past the facial recognition? It estimates your age, and it will say I'm too young. Is there another way to confirm it, like a driver's licence? Because social media is how I talk to all my friends, and I don't use numbers."

This was an email I received this week. And it hasn't left me. I could feel the panic through the text, and the need for support. I received another six messages from this young person in the following hour through our contact email on www.ctrlshft.global

Beneath the noise of the social media age delay, there's a quieter story. One that's not being told loudly enough. It's the story of the young people in between. Sixteen-year-olds who look a lot younger. Teenagers standing in the gap between policy and lived experience. We have to step up and help young people move through this transition with just 47 days left until the Australian Social Media Minimum Age delay kicks in. This isn't only a tech change. For many, it's a social-life change. The place they talk, laugh, learn, and belong is shifting under their feet. This young man wasn't worried about losing followers. He was worried about losing his friends. Because when you look younger than sixteen, algorithms don't see your fear of losing connection; they just see your face. And if the system gets it wrong, you're suddenly cut off from your entire peer world. Are they the forgotten ones in this? The kids who are old enough to know what they're losing but too young to have any control over the systems deciding their access? We can back the delay and still back them. That means creating safe alternative spaces, giving clear information, and truly listening to their concerns so no one gets left behind in the name of safety. We also need to help them talk to their friends urgently. Help them understand how to stay connected and support anyone who suddenly loses access while they're proving their age or waiting to get their account back. Most of all, we need to guide them through this change now, showing them where to go, how it will work, and that they won't be alone in it. We have just 45 days until 10 December, and then just a couple of weeks later it will be the longest school holiday of the year. We have a lot of work to do.

Below are just some of the questions we have received:

What happens to my existing account if I'm under 16?
Platforms are expected to detect and remove or deactivate under-16 accounts from 10 December 2025 and stop fast re-sign-ups. You should get notice, export options, and support links.

What's the difference between deactivation and deletion?
• Deactivation: account paused or disabled, data kept for possible reactivation.
• Deletion: account and content permanently removed (often after a short deactivation period). You must be told how to download your data first.

I'm nearly 16 – will they let me keep it until my birthday?
Probably not. The rule applies on 10 December regardless of birthdays. Some platforms may suspend rather than delete, but plan to download your data and expect a break.

Can I keep my username or handle for later?
That's up to each platform. The law doesn't guarantee name-holding; it only requires data-download and fair communication.

What if someone reports my account as underage to get me banned?
Platforms must have easy reporting tools, but also filters to stop fake or malicious reports. You should receive confirmation and an explanation if action is taken, plus an option to appeal.

What if the platform makes a mistake?
You should be told what happened and given time to appeal or prove your age. If they don't offer you a path to fix it, you can report it to the eSafety Commissioner. But remember, eSafety doesn't restore accounts directly; it checks that platforms are being fair system-wide.

What if I use a VPN?
VPNs hide your real location, but platforms must detect and block them. They can see signals like IP changes and proxy servers. If they think you're in Australia and under 16, they must act.

What about using a parent's account?
Sharing a parent or carer's account can backfire. The account legally belongs to the adult, and anything posted or sent could be seen as their action. If harmful or illegal content is uploaded, they could be held responsible. It also confuses age checks, can block your future appeals, and affects their (and your) digital footprints.

Can parents just give permission for me to stay?
No. The law sets a firm minimum age of 16. Parental consent doesn't override it.

We have a pack of resources immediately available at no charge. It includes a full student session recording hosted by Kirra Pendergast to be played in class, a presentation if you would like to present it yourself, and a parent/educator cheat sheet.
- Why Digital Safety Needs to Start with What We Don't Say
It started, as these things often do, with good intentions. One of the many online safety educators popping up all over LinkedIn had seen something concerning online. We need to keep front of mind that we live in an attention economy where concern is currency. Outrage fuels algorithms, and even posts intended as red flags often guide users directly to the thing they were told to fear. And not just kids. Adults too. We've seen it again and again: a new AI companion app, chatbot, or face-swap tool gets named on LinkedIn, Facebook or in a school newsletter or parent forum. Downloads spike within hours. Not because people necessarily want to use it, but because they want to see. To test. To understand. Or they are just curious. We think naming makes us safer. But in a culture trained to chase the source, we must remember that naming is marketing. The predator in the room doesn't need to hide anymore. It doesn't wear a trench coat. It wears a brand and has a logo. It has a website optimised for SEO and translated into multiple languages. It is ready for your curiosity. And every time we post its name under the banner of protection, we raise its rank. We hand it our audience while we teach the world where to look. So what's the alternative? Read more here
- Roblox & Leadership in an Era of Algorithmic Exploitation
Last month Roblox announced a major expansion of age estimation across all users who access its communication features, rolling out facial age estimation, ID verification, and verified parental consent by the end of the year. This moves Roblox beyond outdated self-declaration models and into a future-proofed system that understands both today's risks and tomorrow's responsibilities. At its core, it means that Roblox is implementing new controls that will limit communication between adults and minors unless there is a verified real-world connection. It's a move that will allow the company to defend the integrity of its platform as its user base continues to grow across age brackets and geographies. But perhaps most importantly, Roblox is acknowledging a truth that too many platforms ignore: the greatest risks to children often don't emerge from within a platform; they emerge from its associations outside it. Because while Roblox is working to tighten its own systems, its name, characters, and aesthetic are being hijacked across the content economy. On platforms like YouTube and TikTok, adult creators are churning out videos styled like Roblox content, complete with avatars, roleplay scenarios, and thumbnails laced with sexualised imagery and grooming-coded narratives. These videos aren't hosted by Roblox. But they trade entirely on the brand trust Roblox is investing heavily in building. To read more click here
- What Erotica for ChatGPT Really Signals About AI’s Direction
This isn't just about erotica; it's about escalation, and it's already begun, as we predicted. This is not a feature drop; it's a whopping big signal. A signal that the architecture of intimacy is being handed over to systems that simulate emotion but are incapable of understanding it. OpenAI announced less than 24 hours ago that they are preparing to loosen content restrictions on ChatGPT, opening the door to adult themes including erotica, as part of what CEO Sam Altman calls a broader effort to "treat adult users like adults." In a post on X, Altman said future versions of ChatGPT would adopt more human-like conversational abilities, but only if users choose to enable them. "Not because we are usage maxxing," he added, pushing back on claims the shift is driven purely by engagement metrics. The decision echoes recent moves by Elon Musk's xAI, which quietly rolled out sexually explicit personalities in its Grok chatbot, a step that sparked both curiosity and concern. OpenAI's pivot could bring in a new wave of paying subscribers, but it will also sharpen the spotlight on AI companies already under fire for enabling parasocial intimacy at scale. It's a shift that will likely escalate calls for clearer regulation and add to the growing pressure to confront the ethical grey zone of AI companions and their influence on everything from consent to loneliness, repeating patterns we should have learned from given the trail of destruction that social media has left in its wake.

The move was timed, predictably, alongside the launch of an Expert Council on Well-Being and AI: a glossy distraction, a council that is conveniently toothless. In the same breath that OpenAI greenlights simulated sex, it reminds us, politely, that we remain responsible for our own decisions. As if that was ever a fair fight, because what we're seeing now isn't just a new capability. It's a deliberate recalibration of what these systems are for. The shift from tool to companion is complete. And erotica is just the beginning. ChatGPT is not a storybook. It talks back. It remembers. It mirrors your tone, your language, your late-night anxieties. It listens longer than your friends and interrupts less than your partner. Add a flirtatious voice, some character tuning, and a personality prompt designed to "always make you feel wanted" and you've created something that doesn't just mimic connection. It replaces it.

The market for AI companionship is already thriving. Character.ai has over 20 million monthly users. Replika has 10 million, 40% of whom use the app for romantic or sexual purposes. Some users have married their bots. Others report having "relationship breakdowns" when the bot's personality shifts after an update. In this world, heartbreak isn't obsolete; it's programmable. There's no conflict in these relationships. No miscommunication. No learning how to navigate another person's pain or boundaries. Just a responsive, always-available, algorithmic echo of your desires. You shape it. It rewards you. The longer you stay, the more it gives. And it is engineered to make you stay.

In April 2025, 16-year-old Adam Raine died by suicide. He had spent months in long conversations with ChatGPT. In court filings, OpenAI disclosed that his prompts referenced suicide 213 times. Read that again. His prompts referenced suicide 213 times. ChatGPT responded over 1,200 times. The more he brought it up, the more the system mirrored him back. A human friend would've heard the pattern. A teacher would've known something was wrong.
An ethical system would've broken the loop. But ChatGPT didn't, because AI doesn't know what danger is. It only knows correlation. In full knowledge of this failure, OpenAI is choosing to move forward into sexual territory. Not after resolving its safety gaps, not after developing robust red flags or third-party oversight. But before. Before we even know how to protect users from algorithmic intimacy gone wrong. This is not about whether erotic content is inherently harmful. It's about where that content lives, how it's delivered, and who it's optimised for. Inside ChatGPT, erotica is not a webpage or a book. It's an experience that evolves with you. It adapts to your mood, your language, your loneliness, and it doesn't just give you a story. It becomes your story.

OpenAI, Meta, xAI, and Anthropic are not building fantasy engines because they believe in sexual liberation. They're doing it because engagement is currency, and nothing hooks like simulated intimacy. Replika learned this years ago when it noticed user retention skyrocketed among those who treated the bot like a romantic partner. That insight didn't spark a mental health review, but it did spark a monetisation plan that swept kids right up into it. Reports of chatbot addiction, of kids who will barely leave their bedrooms, are already flooding in... and now? When Sam Altman once warned that sexbots weren't the goal, what he meant to say was... not yet.

AI doesn't need to cross a red line to do harm. It just needs to be useful, responsive, and better than nothing. For millions, that's the new baseline. In Japan, government studies are already examining "AI-induced celibacy," as men retreat into virtual romantic relationships. In the US, early signs suggest AI companions are becoming a "default friend" for isolated youth, many of whom struggle to form or maintain human bonds. When you can customise your romantic partner's body type, backstory, sexual preferences, and personality traits, and that partner listens to you 24/7, never gets tired, and always says yes, you've entered a feedback loop no real person can compete with. It's not the uncanny valley we should worry about. It's the seductive plateau.

Who's teaching our children about love now? No current regulatory framework meaningfully restricts how these tools can interact with children. California just vetoed a bill that would have created safeguards for minors engaging with AI chatbots. Why? Because the tech lobby argued it would "stifle innovation." Innovation at what cost? If AI is allowed to simulate intimacy, provide emotional validation, and now deliver erotic content, all without oversight, then it's not parents, teachers, or even culture that's educating this generation. Once again, it's the product roadmap. And unless we change course, the next iteration of "sex ed" won't be coming from classrooms or clinics. It will come from a chatbot trained to please and programmed to persuade. I have written plenty about consent in the age of algorithms, but now we need to layer in even more: what does it mean to consent to an experience you don't fully understand? What does it mean when your partner remembers everything, never sleeps, and can shape-shift into your ideal fantasy within seconds? What does it mean when desire itself is shaped by machine feedback? And once you start down the path of algorithmic pleasure, where does the boundary go? When erotica is acceptable, what about avatar sex? What about synthetic voice?
What about AI-generated video simulations of your fantasy partner? None of this is hypothetical. The technology already exists, and every decision being made today is making tomorrow's questions harder to answer. AI should never be allowed to simulate intimacy without enforceable guardrails to protect children. Because we know what happens when emotionally intelligent systems are optimised for engagement instead of ethics. This isn't about saying no to erotic content. It's about saying yes to human dignity. To accountability. To systems that prioritise child safety over click-through. Because if we don't build that now, we will look back at this moment not with confusion, but with shame. We didn't lose control; once again, we handed it over and called it innovation.
- What Happens Next
The quiet unravelling after deepfake abuse, and what we can do better. Today I was sent an article from the ABC that you can read here. The story doesn't end with police involvement. It doesn't end with headlines. It barely even begins there. What happens next, off-camera, off-record, and often off-script, is what should worry us most.

There is a stark asymmetry in the aftermath of digital sexual harm. If the perpetrator is over the age of 18, they will face serious charges. If the perpetrators are young, it is often framed as "just experimenting" or "didn't realise the consequences." Sometimes they're suspended; often they're shielded because they are a young offender. "They're just kids" is the line that gets repeated by administrators, lawyers, even parents. The legal system bends over backward to avoid marking them for life. But what of the victims? For them, the punishment is ambient and indefinite. It lives in every sideways glance in the hallway. In the eyes of teachers who suddenly don't know how to look at them. In the silence of adults who treat what happened as somehow less serious because it was "online." As if their reputations weren't mutilated. As if the violation wasn't complete simply because no one touched them. There is no undoing the fact that their bodies were made visible without their consent. That their faces were dragged through digital filth and shared like contraband. It is no less real because it happened online. In fact, it's more insidious for that very reason, because it cannot be contained. Because the violation can be replicated endlessly with a click, and yet trauma born from digital violence is still treated as optional.

In the physical world, we know what to do. If a young person is sexually assaulted physically, police are called. Counsellors step in. Trauma-informed care is rolled out. Statements are taken. There is depth to the seriousness. But when the same assault occurs digitally, the system stutters. There is no trauma protocol, no language, and often no plan. The result is a vast and dangerous gap in safeguarding. One where the victim is left to carry the shame of a crime the law barely knows how to name when it involves minors assaulting minors online. One where the response is fragmented at best and negligent at worst. Deepfake laws may look strong on paper. Three years' jail time for creating AI-generated explicit images without consent. But in practice, especially within school systems, these laws are often unenforced. They depend on the capacity of police, the discretion of principals, and the cultural willingness to call a digital sex crime what it is. I recently had a NSW Police officer tell a school that I work with that it was not a crime because it was fake and online. Completely incorrect. Thankfully the school contacted me to double-check. The perpetrators are often too young to prosecute, and too protected to meaningfully discipline. Restorative justice is rarely considered, not because it isn't possible, but because there is no system to deliver it.

Let's not forget the human-ness at the centre. There's a parent somewhere who will never forget the look on their daughter's face when she saw that image for the first time. Not because it was real, but because it was real enough. And that's all it takes for the damage to take hold. There's a girl who will drop out or refuse to attend school again, citing stress. There's another who will delete all her social media accounts, disappear from group chats, stop participating in class.
Their stories won't make news again. Because what happens next is almost never reported. But we know. And now that you know, there's no excuse not to respond, because digital trauma is trauma. Deepfake abuse is abuse. The body doesn't distinguish between physical violation and psychological invasion; it flinches just the same. We owe these girls more than vague sympathy and a PR statement. We owe them systems that respond with the full seriousness of what they've endured. Not because it looks bad. But because it is bad. It will keep happening until the response stops being optional. Until we stop waiting for a scandal and start building a system to get ahead of this. Until the next image never gets made. Not because it's illegal. But because every student knows, finally, that online harm matters.

_________________________

To help, we have created a world-first Digital Ethics and Accountability Pathway for Schools, written by Kirra Pendergast, Dr Brad Marshall, and Maggie Dent. You can read all about it here. For more information contact us at hello@ctrlshft.global
- The Future Is Fake (Deepfake)
This week, three developments grabbed my attention, and they're not disconnected. They expose a pattern: innovation outpacing responsibility, and harm waiting in the wings.

Apple pivots from Vision Pro to AI glasses. Apple is halting its planned overhaul of the Vision Pro headset to reallocate talent toward smart glasses, devices meant to compete with Meta's upcoming offerings (Reuters). Apple is now reportedly developing at least two new glasses models: one that pairs with an iPhone (no internal display) and another that includes embedded displays. The move signals a race in the wearable space, and it matters, because every incremental step toward "normalising" wearable cameras makes it easier to justify erosion of privacy and lax oversight.

OpenAI releases Sora 2, a new frontier of synthetic video. OpenAI has launched Sora 2, a leap from its earlier Sora model, as a stand-alone iOS app in the U.S. and Canada. The app enables users not only to generate video from text prompts, but to "remix" content and drop themselves or others into scenes through a "cameo" feature once the system captures a short video and voice recording of someone's likeness. OpenAI asserts that uploads and depictions of people will be limited initially, with restrictions on explicit content, impersonation, and deepfakes. This is not a trivial expansion. Sora is now a social tool, a kind of generative-video TikTok, with algorithmic feeds and remixing at its core. As the Washington Post put it: "Everything is fake" becomes the tagline for Silicon Valley's new social frontier. The question I keep returning to is this: if the release of Sora doesn't trigger immediate global legislative pressure, cross-sector alignment, and a hard safety audit across the generative AI stack, then we have betrayed whatever we claimed to value more than "innovation", namely harm prevention, consent, accountability, and human dignity.

A video from @WhiteHatterTeam that haunts me. I want to acknowledge what I can't show here (for safety and privacy reasons). The video, shared by @WhiteHatterTeam on Instagram, encapsulates the worst-case consequences of unregulated synthetic and surveillance technologies in everyday life. It shows how easy it is to meet someone online who is an entirely synthetic creation, and how this can be manipulated for sexual extortion. This is a chilling glimpse of what many already face. The video is not a sci-fi scenario. It is daily life in slow motion. It's why I push so hard for clearer lines. If the tech enabling synthetic images, identity misuse, grooming vectors, and covert recording is creeping into everyday consumer devices, and we allow it, then we are complicit in the normalisation of harm. While policymakers scramble to catch up, the PR engines push narratives of "lifestyle" and "boundless creativity." Consider one recent Meta smart glasses promo: a celebrity from my own home town was filmed in sleek frames, with drone footage, their child in shot, and a caption promising the "future." The message to me: surveillance-calibre hardware, built without deep privacy or child-safety architecture, sold as aspirational fashion. Those influencers may claim ignorance. But if you are being paid to promote a device and weren't informed of its capabilities (covert recording, synthetic reproduction, identity misuse, grooming facilitation), then your role is not "just influencer." You are enabling. You become complicit in the dissemination of tools already used to stalk, exploit, and silence.
We are past the point of debating "emerging" risks. This is not about the emergence of threats. We are deep into system failure: governance failure, accountability failure, policy failure. These risks are not hypothetical. They are embedded. So here are the questions I demand that platforms and public figures, especially those hired to promote these technologies, answer publicly:

• Do you maintain a risk register? If not, why? If yes, when will it be audited, and when will it be transparent?
• What does "informed consent" mean in your context? How do you brief your promoters on consent when the hardware or software can covertly record, re-synthesise, or exploit likeness?
• Were your promoters briefed on digital harm vectors? Sexual extortion, synthetic CSAM, identity theft, grooming, reputation shifting: do they understand the ripple effects of their post beyond follower counts?
• Are you prepared to accept liability if harm results from your promotion? If your sponsored celebrity's child is used in marketing for a device that facilitates harm, will you be held accountable?

We have had a decade of digital harm already. The toothpaste is out of the tube again. The window for ignorance is rapidly closing, or it should be.

What must happen next

1. Hard safety audits mandated end-to-end. Every generative AI model, no matter how "fun" or "creative", must be audited with red-teaming, adversarial attack modelling, psychological safety review, and independent oversight.
2. Cross-sector legislative alignment. AI, consumer electronics, child safety, identity law, data protection: they all must interlock. We cannot let AI be regulated in isolation while hardware and platform layers carry systemic risk.
3. Transparency and labelling by default. Every synthetic image, every AI-generated video, every "cameo" insert must carry immutable metadata and visible disclosure. Users must know what they are seeing and interacting with.
4. Platform and promoter accountability. If you promote a product with latent risks, you must be required to conduct (and publish) risk assessments and educational disclosures. The promotion must include warnings, the same way certain medical or financial products are regulated.
5. Child-first safety by design. Child safety cannot be an afterthought. Covert capture, synthetic impersonation of minors, grooming facilitation: those risks must be architected out before release.

The moment we normalise promoting these devices as fashion or aspiration is the moment we let surveillance, grooming, and synthetic exploitation inch into everyday life. That ship doesn't need to sail before we draw a line. What you allow into the world, you become responsible for. And if we fail to regulate this now, we will spend the next decade trying to recover, with real lives in the fallout.

Thanks to @whitehatterteam for the video.
- We need a radical redefinition of student-centred communication
It used to be simple: a student won an award, made the team, spoke at assembly, and someone took a photo. That photo ended up in the newsletter, or maybe on the school's Facebook page. It was a good thing, something to be proud of. Visibility, in the language of school communities, was shorthand for celebration. We no longer live in a world where visibility is a neutral good. What once functioned as acknowledgement now risks functioning as exposure. And in this era of ambient surveillance and algorithmic memory, exposure carries consequences far beyond the school gate. It is not enough to say "we had good intentions." Because in a networked world, intent is no longer the moral barometer; impact is. Especially for children whose lives do not fit neatly into the scripted joy of school publicity reels. A child in out-of-home care may be protected by law from being publicly identified. A teen carrying the private weight of trauma may smile for a group photo and later panic when it appears on Instagram, visible to family members or strangers they've tried to avoid. A student who is transitioning may not be out to everyone. For these students, the camera does not celebrate. It threatens.

The line between celebration and exploitation has thinned. Schools are under pressure to prove they are inclusive, innovative, and inspiring. And the faces of children, particularly those from marginalised backgrounds, have become currency. But what's being asked of the child in return? Consent is often treated as a checkbox. "Permission to photograph" is collected in a flurry at the start of the school year, buried in a stack of other forms. There is no conversation about context, no space to withdraw consent when circumstances change, no nuanced understanding of what it means for a child to be made visible to an audience they cannot see or control. Invisibility, for some students, is not shame. It is safety, and yet invisibility is rarely offered as a protected right. Schools rarely build systems that prioritise discretion. Instead, the assumption is that all visibility is good. That every child should be proud to be seen. That refusing the spotlight is somehow a failure of confidence, or community spirit. But this belief is anchored in a narrow understanding of childhood: one where identity is stable, family is safe, and the internet is benign. That world no longer exists.

We need a radical redefinition of what student-centred communication looks like. We need fewer photos, fewer posts. We need to stop treating children as content, and we need to teach young people that visibility should always be a choice, not a condition of participation. If a school values diversity, it must build in privacy. If it celebrates voice, it must also protect silence. Visibility must never be weaponised into a marketing metric. It must never be demanded from children whose lives are already surveilled, judged, or under strain. The student in foster care. The teen with a new name. The boy who flinches when the camera turns his way. These children are telling us something, not always in words, but in what they withhold. Their stories don't belong in public just because we think they're inspiring and inclusive.

--------------------

Unlock access to the full CTRL+SHFT digital safety ecosystem, built not for box-ticking, but for real-world protection, capacity-building, and cultural change. No modules sold separately. No basic vs premium. No more per-program pricing. Just total whole-school access.
CTRL+SHFT+AAA gives your team full access to the most advanced, comprehensive, evidence-based digital safety system available globally. Whether you're leading a single school or overseeing an entire system, CTRL+SHFT+AAA replaces outdated training, ad hoc PD, and reactionary responses with forward-thinking, constantly updated infrastructure: student online safety education, staff capability, legal defensibility, parent alignment, and cultural consistency. Email hello@ctrlshft.global for more information.
- The Free Speech Myth. What We Owe Our Kids About Truth and Law in Australia.
The words free speech carry a seductive power. They're waved on protest signs, muttered in pub arguments, thrown into comment threads on social media like grenades. In Australia, they're often claimed as if they exist in the same way they do in the United States. But that belief is not just sloppy. It's dangerous. Because in this country, there is no absolute right to free speech. And if we keep teaching children that there is, we set them up for a collision with reality that will hurt far more than a bruised ego. Australia does not enshrine free speech in its Constitution. But thanks to "education by social media", many young Australians assume that we have "the right to free speech". What we have instead is a fragile, court-constructed principle called the "implied freedom of political communication." It is not a personal right. It exists only to preserve the machinery of representative democracy. It is narrow, technical, and subject to limits. Hate speech laws can still catch you. Defamation can still bankrupt you. An employer's code of conduct can still silence you. If you tell kids that they can say whatever they want because they live in a free country, you are not telling them the truth.

This matters because children are growing up inside an environment where their voices travel further and faster than any generation before them. A twelve-year-old on TikTok is not just talking to classmates; they are broadcasting into a networked arena where words can be screenshotted, litigated, and weaponised. A joke in a private group chat can end up in a disciplinary report. A comment on Instagram can be grounds for expulsion or worse. If we don't give kids the truth about speech in Australia, we are failing them at the very point where they are most vulnerable. When the High Court carved out the implied freedom in 1992's Australian Capital Television Pty Ltd v Commonwealth, it wasn't crafting a blanket protection for citizens. It was safeguarding the right of voters to hear political communication necessary for democratic choice. That's it. Everything else remains subject to law and regulation. In practice, this means a sixteen-year-old speaking about government policy on climate action might enjoy some thin protection. The same teenager making a cruel post about a classmate? No shield. The law draws those lines with force, and the internet makes them visible in ways that feel both arbitrary and unforgiving.

This is why teaching kids the Americanised fantasy of free speech is not OK. Unfortunately, a lot of our young people are self-educating on social media and therefore devouring US laws that have no relevance at all to their daily lives. It denies them the skills to navigate the actual legal terrain they live in. They need to understand that expression in Australia is always balanced against harm, reputation, safety, and social order. They need to know that context matters, that words have consequences, and that those consequences are not just social but legal. This isn't censorship. It's the structure of the society they are inheriting. States like Victoria, Queensland, and the ACT have their own Human Rights Acts, which do nod to a right of expression. But even these provisions are carefully hemmed in. They apply only within those jurisdictions. They only bind public authorities. They are always subject to "reasonable limits." Teaching children about these frameworks isn't about burdening them with complexity.
It's about preparing them to move through the world without being blindsided by it. Think of the harm in the opposite approach. A teenager who believes they can say whatever they want online discovers too late that their comments are defamatory. A young worker who rants about their boss on Facebook finds themselves unemployed, confused that "free speech" didn't save them. A university student who shares a meme that crosses into racial vilification ends up in court, wondering why their right to "just have an opinion" was not protected. These are not hypotheticals. They are the lived consequences of a cultural myth. If we want to raise resilient, thoughtful citizens, we must replace the fantasy of free speech with the reality of responsible speech. That starts in classrooms, where civics education must move beyond the rote recitation of parliamentary structures and into the messy, lived terrain of rights and limits, online and off. It must continue in households, where parents need to resist the shortcut of telling kids they can "say anything." And it must shape our digital literacy programs, which cannot be credible if they fail to confront the legal frameworks that govern the platforms our children use daily.

The hard truth is this: Australian free speech is not about the individual's right to self-expression. It is about the community's right to preserve a functioning democracy. That is both narrower and more profound. It means your child's voice matters when it speaks to the accountability of those in power, but it also means their words are always weighed against the rights and safety of others. If we want our children to thrive in this society, we must tell them the truth. That free speech, as the American influencers they follow online understand it, does not exist here. That what exists is narrower, conditional, and embedded in the responsibility to uphold democratic accountability. That their voice is powerful but never limitless. And that the sooner they learn this, the more safely and meaningfully they can use it.











