- Five Nights at Epstein's - What Parents and Teachers Need to Know
CONTENT WARNING: This article discusses a game that references child sexual exploitation and may be distressing to survivors of abuse. It contains frank discussion of how real-world criminal cases are being turned into entertainment for adolescents.

A game is circulating among young people called Five Nights at Epstein's. It's being shared in group chats, traded between friends, and accessed on platforms where user-generated content slips past filters. This is not a rumour. It's not exaggerated. And it's not "just a game."

Five Nights at Epstein's is a survival horror game set on Epstein Island, where players evade threats referencing Jeffrey Epstein, a man whose documented crimes involved the sexual exploitation of children. The game mimics Five Nights at Freddy's, a popular horror franchise where players survive nights while avoiding animatronic threats. In this version, the fictional pizzeria is replaced with a location where real abuse occurred. Where real children were harmed. Where survivors are still living with trauma, and now it's entertainment. This isn't a commercial release with age ratings or content warnings. It's user-generated content circulating on open platforms. It spreads link by link.

The danger is not horror as a genre. Teenagers have always explored fear in controlled spaces: haunted houses, scary movies, campfire stories. That's developmentally normal. The danger is trivialising real sexual exploitation as entertainment. When abuse becomes a game mechanic, the moral line shifts. Some young people will scroll past without thinking. Others will feel deeply unsettled and lack the language to explain why. For those carrying their own trauma—disclosed or hidden—content that gamifies abuse can cause immeasurable damage. This is not about sheltering children from difficult topics. This is about recognising that turning documented child exploitation into a jump-scare game normalises horror that should never be normalised.

Teenage brains are wired for novelty and peer approval. Impulse control and long-term consequence assessment don't fully develop until the mid-twenties. When something controversial trends, it moves fast through online friend networks, not because every child seeks harm, but because exclusion carries social cost. They access it to stay in the loop. They share it to signal they're not sheltered. They laugh because that's what the group chat expects.

The timing isn't coincidental. When high-profile criminal cases trend in media, the online ecosystem metabolises them instantly: news becomes meme, meme becomes parody, parody becomes game. For adults, those categories are distinct. For young people raised in algorithmic culture, the boundaries dissolve. Anyone can create and upload content in minutes. Platform moderation is inconsistent and reactive. They remove what they catch, but virality outruns enforcement. The burden lands on families and schools who discover trends only after they've circulated.

What Schools Should Do

Stay calm. Keep it steady. The goal is containment, not accidental amplification.
DO:
- Quietly check whether the game has been accessed on school devices or networks
- Brief wellbeing staff so they're prepared if students want to talk or seem distressed
- Remind students in general terms that not everything online is made for young people, without giving the game publicity
- Monitor for distress signals in students who may have their own trauma activated

DON'T:
- Hold dramatic assemblies that inflate curiosity
- Name the game repeatedly in school communications
- Lead with moral outrage (adolescents read adult emotion as significance; escalation amplifies reach)

What Parents Should Do

Do not come in angry. Do not lead with threats. Your goal is to keep the door open for honest conversation, not slam it shut with punishment.

Start simply: "I've heard about a game being talked about online. Have you seen it?" If they have: "How did it make you feel?"

Listen Before You Lecture

If children think they're about to lose their devices, they will minimise or deny. If they feel safe, they're far more likely to talk openly. Children who fear punishment hide discomfort. Children who feel safe disclose it. That's the difference between secrecy and support.

If You Find the Game

- Report it through the platform it's hosted on
- Check downloads or shared links if needed
- Talk about what you found without shaming

But do not replace conversation with surveillance. Trust is what keeps the door open next time something worse shows up on their screen.

This isn't isolated. Real-world harm is increasingly repackaged as digital spectacle because outrage drives engagement, and engagement drives visibility. Algorithms reward shock. Adolescence unfolds inside that incentive structure, in spaces designed for maximum attention capture, not developmental protection. When criminal abuse becomes a game backdrop, when survivor testimony becomes game atmosphere, we are watching the normalisation of shock as entertainment.

We can't scrub the internet clean. Exposure is inevitable. What we control is the climate we create around it. Stay informed without becoming alarmist. Stay calm when others escalate. Stay connected even when content feels confronting. If a young person seems unsettled after encountering something online, talk early. School wellbeing teams and health professionals exist because distress addressed quickly doesn't become buried trauma.

The counterforce to this isn't surveillance, outrage, or dramatic intervention. It's connection. Because when the next disturbing trend surfaces—and it will—the young people who will tell you about it are the ones who know you'll stay steady when they do.

If you or a student needs support:
- Lifeline: 13 11 14
- Kids Helpline: 1800 55 1800
- Blue Knot Foundation (trauma support): 1300 657 380
- 1800RESPECT (sexual assault support): 1800 737 732
- When A Joke Becomes A Crime
A fourteen-year-old takes a screenshot of a classmate's Instagram photo, runs it through a free AI app, and generates a fake sexual image. They share it in a group chat. A few mates laugh. Someone screenshots it. Someone forwards it. By lunchtime, it has reached people the original sender has never met. The fourteen-year-old thinks it's a joke. Under new amendments to the Crimes Act 1900 (NSW), effective 16 February 2026, it is a criminal offence. And the penalties are not small. We are talking up to three years' imprisonment and fines up to $11,000 — for adults. For minors, the consequences travel in a different direction, but they travel far, and they last. This is the conversation we need to be having with our children. Right now.

The reforms are clear and deliberately broad. It is now a criminal offence in New South Wales to create, share, or threaten to share intimate images or audio without consent. That includes real images. It includes altered images. And this is the critical part: it includes images that were entirely generated by artificial intelligence. It does not matter that the person in the image was never actually involved in the conduct depicted. If a digitally created image portrays someone in a sexual or intimate way without their consent, the law treats it as abuse. Because it is. These state-level reforms sit alongside Commonwealth laws covering carriage service offences and child abuse material, and the Online Safety Act 2021, which gives the eSafety Commissioner powers to order rapid removal. The legal net is wide. And it should be.

This is where parents, educators, and anyone who works with young people need to pay very close attention. If a young person creates an AI-generated sexual image of a classmate, even as a joke, even as retaliation in a friendship fallout, even on a dare, they may be committing offences under both state and federal law. And if both the person who made the image and the person depicted are under eighteen, the material may legally constitute child abuse material. Yes, a teenager creating a fake sexual image of another teenager can be producing what the law defines as child abuse material. The intent does not matter, and the law does not care that it was meant to be funny.

For children under ten, criminal responsibility does not apply. For those aged ten to thirteen, the prosecution must prove the child understood the serious wrongdoing involved. For anyone under sixteen, prosecution requires approval from the Director of Public Prosecutions, a safeguard designed to prevent the over-criminalisation of children. But even without prosecution, the fallout is real. A police caution. A youth conference. Suspension or expulsion from school. Mandatory wellbeing intervention. In severe cases, referral to child protection services. And if a conviction does follow, the long tail is brutal, affecting employment, travel, and Working With Children Check eligibility well into adulthood. One group chat. One AI app. One moment of thoughtlessness, and a young person's world may shift permanently.

Most young people who will fall foul of these laws will not understand what they have done until after they have done it. They are growing up inside digital environments where content creation is instant, sharing is reflexive, and consequences feel very distant and abstract.
They have been handed tools of extraordinary power: AI image generators that can produce photorealistic content in seconds, without anyone explaining what those tools can do to another human being, or what the law says about using them. This is not an excuse. It is an explanation. And it is a screaming case for up-to-date education.

The law now exists. The protections are necessary and overdue. But criminalisation alone will not stop a fifteen-year-old from making a catastrophic decision in a group chat at 11pm on a Tuesday. What might stop them is understanding that the image they just created of a classmate is not a meme. It is an act of abuse. It causes real psychological harm: anxiety, depression, social isolation, self-harm risk, school disengagement, and a lasting fear that the content will resurface for years to come. Young people need to understand this not because the law says so, but because another human being's dignity demands it.

This is now a foreseeable risk across the country and even beyond. Schools must immediately update their policies to explicitly name AI-generated sexual content as serious misconduct. They must embed clear reporting pathways to police and the eSafety Commissioner. They must deliver digital safety education that goes beyond "be kind online" and actually walks young people through what this technology can do, what the law says, and what real harm looks like on the other side of a screen. Parents need to know that the device in their child's pocket can now, in under sixty seconds, produce content that constitutes a criminal offence. That is the reality of the tools freely available to every child with a smartphone. Young people themselves deserve the truth: that the law has changed, that the stakes are real, and that a moment of stupidity in a group chat can follow them for the rest of their lives. They are living in this world and are, in most cases, eye-rolling so hard you can hear it at online safety education that is too polite, too shallow, and does not align with the digital reality they are living in.

These reforms are about recognising that digital sexual abuse is real abuse, regardless of whether a camera was ever involved. They are about protecting young people who are victimised and making sure young people who offend understand what they have done. But the law alone cannot carry this. It needs education beside it. It needs conversations at kitchen tables and in classrooms. It needs adults who are willing to look at the technology children are using and ask the uncomfortable questions.

If you would like information about our in-person education or our year-round support for schools, please hit reply or email hello@ctrlshft.global
- Why Under-16s Can Still Access Social Media in Australia — and the Crucial Detail Most Parents and Educators Are Missing
Confusion has a way of spreading faster than truth, and right now, confusion is spreading through our school communities about Australia’s "ban" on social media for children under 16. I can hear it in the voices of parents who are worried their child has done something illegal. I can see it in students who believe they are one login away from punishment. I can feel it among staff who are unsure where their legal responsibilities begin and end.

Australia did not ban children from social media. Australia banned social media companies from providing services to children below the minimum age. That distinction matters more than most headlines ever allowed. Under the Online Safety Act 2021, which you can read in full at https://www.legislation.gov.au/Details/C2021A00076, the legal responsibility sits with platforms. The law requires social media services to take reasonable steps to prevent access by minors under 16. It regulates industry behaviour. It does not criminalise childhood, and it does not ban the internet. Somewhere along the way, this became publicly shortened to “the ban.” It is a convenient phrase. It is also misleading. When language is careless, fear fills the gap.

If your child entered a fake date of birth to open an account, they have not committed a crime. If your thirteen or fourteen-year-old has a social media profile, they are not breaking Australian law. You, as parents, have not committed a criminal offence. Educators are not legally liable because a student under sixteen has an account. The legislation does not create criminal penalties for young people who are on platforms below the minimum age. This law was designed to shift accountability upward, not downward. It targets corporate systems, not children navigating a digital world built by adults for them.

Why this clarity matters is not abstract. It is practical and urgent. Many young people under 16 in Australia can still access social media. Regulatory reform does not happen at the flick of a switch. It embeds over time. Platforms adjust age assurance processes. Enforcement frameworks evolve. Compliance tightens in stages. Sometimes access changes slowly. Sometimes it shifts overnight. We have been preparing students for 10 December 2025, assuming that digital access, account continuity, and platform reach could change quickly, because responsible risk governance does not gamble on best-case scenarios. It prepares for disruption before it arrives. Exposure does not move in a straight line. Platform restrictions can intensify without warning. The only defensible position for a school community is readiness grounded in education, not panic driven by rumour.

But there is another layer to this that concerns me even more. When young people believe they will be punished simply for being on social media under 16, they go quiet when something goes wrong. Silence is the oxygen of harm. Cyberbullying festers in private messages. Image-based abuse circulates in hidden folders. Grooming thrives in secrecy. Sexual extortion escalates in isolation. If a child believes admitting they are on a platform will get them into trouble, they are less likely to report when a conversation turns threatening, or an image is shared without consent. That delay can be the difference between swift intervention and lasting trauma. We need students to understand a fundamental concept. Their safety matters more than a sign-up form.
If something online makes them feel uncomfortable, frightened, pressured or unsafe, they will not be punished for having an account. They will be supported. Parents, the same principle applies at home. If your child comes to you about an online safety issue, the priority is their well-being, not whether they complied with a platform’s age requirement. The first response must be protection and care. Consequences can be discussed later if necessary. Safety cannot.

Schools play an important and steady role in guiding students to build safe and responsible digital habits, responding thoughtfully when online issues affect well-being, supporting families as they navigate digital challenges and foreseeable risks, meeting legal reporting responsibilities where serious harm arises, and nurturing a culture where students feel safe to seek help early rather than carrying concerns alone.

If a child is experiencing cyberbullying, image-based abuse, sexual extortion or any form of online harm, support exists beyond the school gates:
- The eSafety Commissioner provides reporting tools and direct assistance at https://www.esafety.gov.au
- The Australian Centre to Counter Child Exploitation operates through the Australian Federal Police at https://www.accce.gov.au
- Kids Helpline offers confidential counselling for young people on 1800 55 1800 and at https://kidshelpline.com.au

These services exist because online harm is real, measurable and growing. According to the eSafety Commissioner’s own research, a significant proportion of Australian children report negative online experiences before the age of sixteen. This is not a theoretical risk. It is a lived experience.

Clarity is protection. Precision is protection. Calm, accurate language is protection. Let us place responsibility where the law places it, on the companies that design, profit from and control these environments. Let us prepare our young people not with fear, but with literacy, boundaries and open channels of communication. We can build a culture where a child who is scared of what is happening on a screen feels safer walking into a classroom or a kitchen and saying, “I need help.” That is the standard we need to set.
- Roblox Wants You to Know They're Listening to Parents Now. That Is Not the Same Thing as Making Your Child Safe.
On February 19, 2026, Roblox announced its inaugural Global Parent Council. Eighty parents from thirty-two countries, hand-selected, meeting quarterly to "share insights and perspectives" and "advise on products, policies, and partnerships." There is a Head of Parental Advocacy with a doctorate. There is a companion programme called Parent Champions. The press release uses words like committed, empowered, co-creating, safe. If you are a parent who has been worried about Roblox, this announcement is designed to make you feel heard.

I need you to sit with what I am about to say next, because it matters. Listening is not the same thing as changing. And this announcement changes nothing structural about how Roblox works.

I use the term child online safety washing a lot — it's the digital equivalent of greenwashing. It is what happens when a company publicly amplifies minor advisory gestures while the underlying architecture that generates risk remains untouched. Behavioural data from children still fuels monetisation. Social mechanics still enable predators to migrate children to unmoderated platforms. AI training pipelines still rely on content created by minors. And nearly sixty percent of Roblox's users, the majority of their customers, are under sixteen. A quarterly listening session with eighty parents, with no statutory authority, no independent oversight, no power to compel design changes, and no access to internal data, is not governance. In fact, it is barely a focus group. And real children are being harmed while the press release circulates.

Trigger Warning: The scale of what is happening deserves my complete frankness.

As of January 2026, at least 115 lawsuits are consolidated in US federal multi-district litigation against Roblox, representing minors who allege they were sexually assaulted by someone they met on the platform. Eight hundred parents sent a jointly signed letter to Roblox's board, urging the company to stop forcing child sexual exploitation cases into secret arbitration. The same month the Parent Council was announced, Los Angeles County sued, alleging the platform had become "a breeding ground for predators." The Texas Attorney General sued for putting "pixel pedophiles and corporate profit" over children's safety. At least thirty people have been arrested since 2018 for abducting or sexually abusing children they groomed on Roblox.

And in Australia, also in February 2026, Communications Minister Anika Wells wrote directly to Roblox demanding an urgent meeting over reports of children being groomed by predators and exposed to sexually explicit content. She asked the Classification Board to review whether Roblox's PG rating, last assessed in 2018, still made sense. Days later, eSafety Commissioner Julie Inman Grant announced her office would no longer just monitor Roblox's safety commitments — they would directly test them. If found in breach of Australia's Online Safety Act, Roblox faces fines of up to A$49.5 million.

In November 2025, a Guardian journalist entered Roblox with parental controls switched on. Within the session, that journalist was handed a sexualised avatar, cyberbullied, violently killed, and sexually assaulted. With the safety settings on. This is a pattern. And patterns, unlike press releases, do not lie.

What a Parent Council Cannot Fix

Roblox is not a game. It is an ecosystem — a digital world with its own economy, social networks, and communication systems. And the architecture of that ecosystem has specific features that generate risk.
The established pattern, documented in lawsuit after lawsuit, is for predators to identify a child on Roblox and migrate them to an unmoderated platform like Discord or Snapchat. Roblox does not hard-block this migration for users under eighteen. The workarounds are well known and well documented in court filings.

The Robux economy uses design patterns — limited-time offers, artificial scarcity, social pressure — that mirror gambling mechanics. Multiple lawsuits allege these are deliberately designed to exploit children's developmental vulnerabilities.

With over 150 million daily active users, the ratio of harmful interactions to moderation capacity is structurally insufficient. The platform added sixty million daily users between late 2024 and late 2025. Moderation did not scale proportionally. It rarely does.

These are architectural problems. You do not solve them with quarterly parent feedback sessions. You solve them with hard design constraints, revenue trade-offs, and rigorous independent oversight.

If Your Child Plays Roblox

I am not telling you to rip the device from their hands. Many kids have positive experiences on the platform. But the company's public messaging about safety and the structural reality are two very different things, and this announcement is designed to close that gap in your mind without closing it in practice. The risk is not primarily in the games. It is in the communication systems and the ease with which a child can be contacted by a stranger pretending to be another child. If your child has Roblox and Discord or Snapchat on the same device, the pathway from initial contact to unmoderated private communication is disturbingly short. Many of the most serious cases — including abduction, sexual assault and sadistic exploitation — followed this exact pattern. Use the parental controls, but understand they are not sufficient. And talk to your child — not once, but regularly — about what it means when someone they meet in a game asks them to move to another app. Tell them it is the same risk as getting in a car with a stranger.

What Real Accountability Looks Like

If Roblox were structurally serious, it would establish an independent child safety board with genuine authority to compel design changes and publish findings. It would release regulator-grade harm transparency reports — not curated snapshots. It would ring-fence all data from minors out of AI training. It would hard-block social migration for under-eighteens. It would encourage independent research with full data access. None of this is technically infeasible. The company generates billions in revenue. The constraint is not money or engineering. It is will.

Legislators and regulators are getting smarter. They are starting to ask for data flows, architectural controls, and algorithmic accountability. The UK Online Safety Act, the EU Digital Services Act, Australia's Online Safety Act, COPPA — regulators worldwide are moving toward systemic accountability. A quarterly check-in with eighty parents does not satisfy a systemic duty of care. When your primary demographic is children, you are not a gaming company. You are a children's digital infrastructure provider. Safety is not a feature to be toggled on. It is the product. You cannot buffer systemic risk with curated listening sessions.

To the eighty parents on the council, your instinct to show up is admirable. But ask yourself — does the council have any power? Can you compel a design change? Access incident data? Publish independently?
If the answer is no, then you are not an adviser. You are an audience member in a performance. The most powerful thing you could do is demand, publicly and together, that the council be given independent authority, and, if that demand is refused, to say so out loud.

Sources:
- Consumer Notice (updated Feb 20, 2026): "As of January 2026, there are 115 Roblox lawsuits in the federal MDL" and "An MDL was formed in December 2025 to consolidate dozens of cases in federal court." → https://www.consumernotice.org/legal/roblox-lawsuit/
- King Law (updated Feb 20, 2026): "There are currently 115 pending lawsuits in MDL 3166, which represents minors who claim they were sexually assaulted by someone they met while playing Roblox." → https://www.robertkinglawfirm.com/mass-torts/video-game-addiction-lawsuit/roblox-lawsuit/roblox-sexual-abuse-lawsuit/
- 800 parents' letter re: arbitration (King Law): "A group of 800 parents have sent a jointly-signed letter to the board of directors for the Roblox Corporation. The letter is urging Roblox to stop its attempts to send child sexual exploitation and grooming lawsuits to arbitration." The letter asked Roblox to "stop the improper and shameful attempts to force these vulnerable, sexually abused and exploited children into secret arbitration proceedings." → https://www.robertkinglawfirm.com/mass-torts/video-game-addiction-lawsuit/roblox-lawsuit/roblox-sexual-abuse-lawsuit/

Additional sources:
- NBC News (Feb 20, 2026) on the LA County lawsuit filed the same week as the Parent Council announcement → https://www.nbcnews.com/business/business-news/los-angeles-county-sues-roblox-rcna259891
- LA County official press release (Feb 19, 2026) → https://lacounty.gov/2026/02/19/la-county-sues-roblox-for-unfair-and-deceptive-business-practices-that-endanger-and-exploit-children/
- Texas Attorney General lawsuit announcement → https://www.texasattorneygeneral.gov/news/releases/attorney-general-ken-paxton-sues-roblox-putting-pixel-pedophiles-and-profits-over-safety-texas
- Why some student images have become a foreseeable risk — and what calm, child-safe leadership looks like now.
If you work in a school as a principal, board member, wellbeing lead, ICT manager, educator, or in advancement or marketing, you may already know this feeling. That quiet pause before approving a photo for social media, and the moment of unease that didn’t exist five years ago. The sense that something has shifted, even if the policies haven’t yet caught up. You’re not imagining it. The risk environment has changed, and with the amount of coverage now appearing across Australian and international media, this risk is no longer abstract. It is foreseeable.

For many years, schools relied on parental consent and permission-to-publish forms as the primary legal and ethical basis for publishing student images. Prior to 2021, that approach was considered sufficient. Today, unless consent is genuinely informed, specific, and auditable, it no longer reflects how images are captured, stored, transmitted, and reused in the context of a school's social media use and across multiple EdTech systems.

Loss of control increasingly occurs before anything is posted publicly, at the point of image capture, storage and transmission. Photos taken on personal devices or unmanaged school systems can automatically synchronise to private cloud accounts, be retained beyond their original purpose, or be accessed, copied or repurposed in ways never anticipated at the time consent was given. Once an image leaves a controlled school environment, schools can no longer reliably ensure that its use remains aligned with the purpose parents agreed to, that security standards are met, or that the image will not be misused.

This is why leading child-safety bodies, privacy regulators, and the eSafety Commissioner now treat image-based harm as a material governance and duty-of-care issue, not a technical one. Privacy settings do not completely prevent copying and screenshots. Consent forms do not prevent third-party misuse, and schools are often unaware of harm until long after it has occurred. Under Australian law and the National Child Safe Principles, schools are required to take reasonable, proactive steps to prevent foreseeable harm, including online and technology-facilitated abuse. Image-based exploitation, deepfakes and AI-enabled misuse are now recognised psychosocial hazards with serious implications for student and staff wellbeing, learning, attendance and long-term mental health. When risk is foreseeable, and harm could be severe, the duty of care requires action before a crisis, not after a police report or media inquiry.

This is why many Australian schools that have worked with us are quietly but confidently changing their approach. Some have gone public with their reasons for no longer publishing student images on public platforms, where images are easily deepfaked. Others have removed student photos from social media entirely, limiting use to closed, authenticated environments. Others have changed the type of photos they publish. Because leadership means responding to what is now known and what is clearly coming.

Schools did not create this digital ecosystem. Technology companies did. But schools are still the ones placing children into it, often without full visibility of how images and other identifiable data can be used downstream. This isn’t an argument for shutting down communication or community engagement; there are ways to keep your school's social media presence simply by changing the way you do things.
It’s an argument for modernising risk assessment and setting safer defaults that align with contemporary child-safe expectations. The question many schools are now asking is not “How do we keep posting safely?” but rather, “What is the minimum public digital footprint our students actually need?” It reframes the issue from fear to governance, from reaction to leadership.

We’ve supported hundreds of schools to make this transition calmly and confidently, without fear campaigns, without parent backlash, and without adding pressure to already stretched staff. This work is not about going backwards. It’s about moving forward with integrity, clarity and child-centred decision-making. This risk isn’t going away; in fact, it may increase. A recent IWF survey showed a 26,362% increase in CSAM in the last year. Schools that respond early, thoughtfully and systemically are already setting themselves apart as leaders in child safety, wellbeing and digital responsibility.

_________________________________________________________________

What comes next — and how schools are responding

We are now inviting our next tranche of 50 schools to participate in our CTRL+SHFT+OS Early Adopter Program. CTRL+SHFT+OS is not a policy pack or a one-off intervention. It is the operating system schools use to run and prove the duty of care across the whole school community in one connected system. It gives leadership teams a single, defensible system to capture early risk signals, run the correct response pathways for compliance, support students and staff in real time, and generate regulator- and board-ready evidence automatically, without relying on memory, inboxes or reconstruction after the fact.

If your school is ready to lead early, or if you are navigating board, leadership or community conversations about what responsible digital practice now requires, we would welcome you to get in touch to organise a live demo. Contact: kirra@ctrlshft.global
- “Who here is still able to access all the apps?”
For the past two days, I began each student session with these questions: “Who here is still able to access all the apps?” And for the older students: “Who has a younger sibling still with access?”

In the first session, Yrs 9/10, there was hesitation. A few brave hands. I could feel the uncertainty in the room, maybe even shame or fear from those still under 16 that they might get in trouble. I broke it open and spoke the way I have become known for speaking on LinkedIn and in international media for many months now. I reframed it to reflect what should have been in the media before Big Tech's PR spin reached it. I told them how I correct adults, governments, and leaders: The Australian Government didn’t ban under 16s from social media. It banned social media companies from accessing Australian children under 16.

The energy in the room shifted immediately. Because when you tell young people the truth without fear, without condescension, and without calling the Social Media Minimum Age Law a failure, something remarkable happens. They listen. Not because they’re scared. Because they’re smart.

I explained the why. The government banned Big Tech from using persuasive design to manipulate young minds. From harvesting their data. From nudging them through dark algorithmic loops. From monetising their moods. From twisting their bodies into shapes that the algorithm tells them are worthy. From shaping their identities to serve a machine that has never loved them, only to feed on their attention and grow profit and power.

I explained that calling it a “ban” was strategic. Probably coined by a Big Tech PR team. It normalised and stuck so fast that even the Prime Minister uses it, because that’s what people recognise. How that’s the point. If Big Tech paints the government as the villain, they get to keep being the architect of harm. Unaccountable. Hiding behind parental controls and layers of “child online safety washing” that were never built to work. And they know it.

We gently cracked open the truth. The pull of the attention economy. The way the attachment economy has been reshaped less by families and more by profit. We talked about how, in the absence of enough time, presence, or support, many young people now turn to their devices not just for entertainment, but for comfort. For connection. And increasingly, they’re met not by a friend or a safe adult, but by a chatbot trained to mimic care, while harvesting emotion as data. We talked about how none of this is accidental. It’s surveillance capitalism. We spoke about the systems behind the screen. How the simple act of their parents saying “Happy Birthday, darling, I’m so proud of you” on Facebook has been reframed as the modern ritual of love, and how parents feel guilty if they don’t perform it online.

In the Year 7 and 8 session, I asked how many had just received their first phone “for safety.” A third raised their hands. Their honesty was humbling. But not surprising. I talked about the myth that giving a child a smartphone is always an act of safety, when a phone with less functionality is often all they need. All too often, the gift of a smartphone to a young person is an unexamined handover to a billion-dollar industry with no duty of care for a child’s emotional life.

In the Year 11/12 session, we went deeper. I told them that boys are not the problem. They’re the target market. Vulnerable. Curious. So often misunderstood. Swept into a digital storm that no adult fully prepared them for.
What we’re seeing now is the impact of a system carefully designed to prey on adolescent uncertainty. The online world doesn’t wait for boys to grow into themselves. It pushes. Repeats. Rewards. It draws them into content that appears bold, funny, and rebellious, and slowly becomes darker. No 14-year-old boy is born with hatred. Misogyny isn’t innate. It’s learned. Drip by drip. Hidden in jokes, laced through online personalities, offered as belonging. One click out of curiosity becomes a pattern. A moment of feeling lost becomes a digital door. And behind that door are echo chambers where empathy is mocked, and bravado becomes a currency.

We must stop assuming bad intent. We must see what’s really going on. Because many boys are not choosing hate. They’re being groomed into it quietly, cleverly, by systems that care more about screen time than their well-being. Our job isn’t to blame them. It’s to reach them. Gently, truthfully, before the algorithms do. I taught the seniors how to identify it if it has happened to them, and how they can talk to younger brothers and cousins who may be caught up or showing the signs.

I told the girls the truth about online consent, filters, dysmorphia, hustle culture, and the new insidious failure narrative: that if you’re not a billionaire by 22, if you’re not flying private, if you’re not selling self-branded content and counting your passive income before breakfast, you’re failing. This isn’t fringe. These are the dominant narratives on their feeds.

We talked about algorithmic bias and digital echo chambers, about AI and its environmental impact, about how to build pause to combat the outrage that divides communities and countries, how to protect their cognitive sovereignty, and how and why feeds differ across continents, and why we need to understand what is happening.

I told them the truth about the Social Media Minimum Age Law: it's not there to punish kids. It's there to hold tech companies accountable. So, if something goes wrong, and they're under 16, they won’t get in trouble for speaking up. They should speak up because their safety always comes first, and I told them that while change might feel slow, systemic public health shifts always are. We discussed how, when I was a kid, cars didn’t have seatbelts. It took time, pressure, and public awareness to make them a safety standard. That this is their seatbelt moment. That yes, on December 10th, they might have lost access. Maybe not. Maybe next week. Maybe in three months. But we need to be prepared for the fact that it may happen when they least expect it, and they deserve to understand why. It was never about them.

I spoke about 1984 in Byron Bay. How, at 14, I caught little bits about the Cold War in the background of the 6 pm news as I walked through the family room, if my dad was watching the news. That was about as bad as it got for me. Not out of nostalgia, but to give them context. Now, they live inside it. A 24/7 livestream of wars, the Epstein files, adult content that is not love, violence, misinformation, hate, climate catastrophe, polarisation, and manufactured identity.

If you don’t get it right, they roll their eyes so hard you can hear it. I don’t scare kids. I inform them truthfully, and when you do that, they hear you. In rooms of 300+ students in each session, there was barely a whisper of distraction. When they did chatter, it was talking to a friend about what I had just said. Curiously unpacking it in their personal space.
It happens when someone they don’t know shows them respect and treats them like the capable, intelligent human beings they are. Young humans whose lives are already deeply entwined in digital systems that most adults barely understand. The kind of respect they get from their teachers who know them well, but rarely from a complete stranger.

I don’t usually speak directly to students anymore. I spend most of my time with leadership teams, educators, governance and risk teams, and policymakers, because systems shape behaviour, and this is a system-wide problem that I can help guide leadership through so it filters through the organisation. At St Ignatius College in Adelaide, I work with their students because the incredible leadership team and educators, under the thought leadership of Principal Lauren Brooks, understand that this isn’t a tech issue, a discipline issue, or a one-off assembly fix. They are deeply invested in shifting digital culture across the school and their wider community. St Ignatius didn’t just book a once-a-year speaker for their students; I also met with all of the leadership, heads of house, and curriculum. I have presented to parents and hosted all-staff PD, and they participate in our CTRL+SHFT+AAA program year-round. St Ignatius continues to positively amplify the shift they made last year and their leading Tech Smart framework. They understand that the line between online and offline no longer exists. This is just life now, and it needs to be lived well. That means changing how we use devices in school, how we educate about the ethical and safe use of technology, and the importance of remaining human at the centre, not once a term or once a crisis, but all year, across the whole school and parent community. This is what real change looks like. Systemic. Sustained. Embedded.

After the sessions, one student said, “I’ve seen so many cyber safety talks, but that was the best.” Another said, “That was the first time someone actually explained things I didn’t know.” One more told me, “At my old school, it was the exact same presentation every year.” Students told me they’d be deleting their accounts not out of fear, but because, for the first time, they understood why. Many came up to say thank you. Quietly. Even while I was waiting for my taxi.

We need to understand something urgently. This generation is not desensitised. They’re overwhelmed. They’re not disengaged, but some are drowning in noise and AI slop, desperately scanning for a signal that keeps them deeply, messily, and gloriously human.

My final student session at St Ignatius was a combined Year 5/6 session, and it went to a whole new level I did not see coming. We discussed safer, more effective ways to use the games they love (I never say "don’t"). We also discussed climate change and AI's impact on the planet. The students led the conversation into environmental engineering, energy transfer, data centres, and the invisible architecture behind the digital world they were born into. I told them how cities like Helsinki don’t waste the hot water pumped out after it has been used to cool data centres; they redirect it. The waste heat expelled by data centres is captured and channelled to heat entire neighbourhoods. In winter. In sub-zero temperatures. The room erupted. “That’s genius,” they said. And before I could ask another question, they were asking theirs. Could we do that here? Could Adelaide become a city that thinks like that? This was Year 5 and 6!
This is what happens when we speak to kids like they are already part of the future, because they are. When we move past finger-wagging and fear and offer them the real, raw brilliance of human innovation, their minds ignite. When you tell them about the extraordinary innovation of Australia’s First Nations peoples, like the boomerang, a throwing stick designed to return so it could be used again, or ancient stone fish traps engineered for sustainable food, the room lights up with recognition. With the understanding that some of the most sophisticated innovation in human history was imagined, tested, and perfected right here, thousands of years before them, with no computers or AI in sight. When we do the work. When we show them the whole picture and invite them into it, they don’t just care. They lead. Not because we told them to, but because we trusted them to. Our job isn’t to hand them watered-down online safety warnings. It’s to give them better truths to stand on and then step back gradually, so they can build what comes next.
- Why Emojis Over Faces Will No Longer Protect Our Kids' Privacy
If you’ve ever posted a photo of your child with a smiley face or heart emoji over their face, you’re not alone. It’s become a kind of digital parenting ritual. A signal that says, “I love them, but I’m protecting them too.” That instinct, to protect while still sharing, is deeply human. But here’s what most of us haven’t been told: the tools we’ve been using to protect our kids online haven’t kept up with the technology, which is now learning from them. It’s time to gently, but honestly, update what we understand about online safety, because the internet doesn’t just watch anymore. It learns.

What’s Changed

Artificial intelligence no longer needs a full face to identify a child. It can spot them from the shape of an ear, the tilt of a head, the badge on a school uniform, the pattern of candles on a birthday cake, or the garden behind your house. It pieces it all together, not because it’s malicious, but because it’s designed to learn. Once it has learned, it doesn't forget. Even one image, posted at the right time and place, can become part of a data set. That data can be used to create synthetic images of children. Sometimes for advertising. Sometimes for uses that are deeply disturbing.

While we were focusing on covering faces with emojis, the systems were watching everything else. The backgrounds, captions, hashtags, geotagged locations and more. They weren’t just learning from our photos. They were learning from us: how much engagement a post received, what we posted more of, what got attention and what didn’t. We weren’t just encouraged to share, we were trained to, and that’s what no one told us.

The platforms did not wait for us to become influencers. They turned everyday parenting into a feedback loop. Each time a photo of a child received more likes than a sunset, more comments than an adult achievement, the system took note. Each time a birthday post travelled further than a work milestone, the algorithm quietly learned what mattered most to us and then fed it back, amplified. Over time, visibility became confused with love. Not posting started to feel like hiding, or worse, like not being proud. That was not a coincidence; it is the basis of a social media platform’s business model, known as the attention economy.

The dopamine wasn’t accidental either. The surge of connection after sharing a milestone, the reassurance of being seen in our parenting, the relief of belonging to a community that seemed to value our children as much as we did. These responses were measured, refined, and reinforced. The algorithms rewarded emotional exposure, not because the system cared, but because exposure performed. Slowly, without any announcement or consent, parenting became public-facing. Not because parents were careless, but because the architecture of these platforms was designed to exploit the most vulnerable, loving instincts we have.

This is why emojis became a ritual. They offered the illusion of control in a system that had already moved past it. They allowed us to believe we were protecting our children while still feeding the machine. Understanding this matters. Once you see how the training worked, the pressure to participate begins to ease. You realise you were never failing to protect your child. You were operating inside a system that quietly taught you that sharing was the price of belonging, and now that we know better, we are allowed to choose differently.

What We Can Do Now

This isn’t about blame. It’s about awareness. Once we know better, we do better.
So here are some grounded, gentle steps to consider as you navigate this space, especially as the school year begins and those first-day photos flood our feeds:

1. Take a pause before you post
Ask: Who is this really for? If it’s for grandparents or loved ones, consider a private message, a shared album, or a printed photo you can stick on the fridge. Sharing doesn’t always need to be public.

2. Be mindful of context
It’s not just the face. It’s the school logo, the front of your house, the birthday banner, the uniform, the personalised water bottle in the corner. These are breadcrumbs. AI doesn’t need the whole puzzle; it only needs a few pieces.

3. Think beyond the emoji
Covering a face can give the illusion of safety, but it doesn’t hide identity from machines. If you’re going to post, consider photos that show hands, backs, or silhouettes without revealing identifying context.

4. Delay before sharing
Let the moment happen offline first. Give yourself a day or two. If it still feels important to post later, you can do it with more clarity and intention.

5. Build digital consent early
Even with young children, begin asking, “Do you want me to post this photo?” Involving them in the decision, even when they’re small, models respect and gives them a sense of ownership over their own image.

6. Don’t let the algorithm shape your parenting
If it starts to feel like not posting means you’re not proud — pause. That’s not your instinct speaking. That’s the system doing what it was designed to do: reward content, not parenting.

The Bigger Picture

We have been told that sharing our children online is a way to stay connected to family and friends. That it was harmless. That a face sticker was enough to keep them safe. But none of that was true, and not because we failed, but because no one warned us what we were feeding. Our kids are growing up in a world where their likeness is data. Machines are trained on their images, their homes, their routines, long before they understand what any of it means. Tech platforms are not designed to protect them, but we can step back into our power. We can choose fewer public posts. Presence over performance. We can choose not to give their story away before they’ve had a chance to live it.

Please remember that pride doesn’t need an audience; it just needs to be real. One day, our children will ask us not just what we shared — but why. Let’s make sure we can say, “Because I didn’t know at first. But when I did, I chose you over the feed.” That answer will be more than enough.
- The Toys That Listen And What Parents Need to Know This Christmas
Walk into any department store this Christmas and you’ll find shelves of smiling toys powered by AI. Teddy bears that “listen with love.” Dolls that promise to “grow with your child.” Robots that say they can help with maths, literacy and loneliness. These aren’t screens, and yet they’re wired just the same: with microphones, sensors, data connections, companion apps and, crucially, an invisible thread to corporate servers far beyond parental reach. Audio becomes text, text becomes data, and data becomes insight. Insight, in turn, becomes a product to sell.

So where does that leave families looking for gifts? In the toy aisle! You don’t need to ban tech, but you do need to know what it’s doing. If a toy contains a microphone, it can record. If it talks back with personal insight, it has memory. If it requires an app, it’s probably storing data. If it connects to the internet, it can be breached. The most powerful thing you can do is ask the question companies don’t want you to ask: Why does this toy need to know so much about my child? The answer is rarely about education.

Read more here: https://kirrapendergast.substack.com/p/the-toys-that-listen-and-what-parents
- Staff Personal Phones and Child Images in 2025/26
If any of our personal phones were ever checked because something unexpected happened at school or childcare, many of us would probably discover a handful of student photos sitting quietly alongside our own family pictures, cloud backups, messages and apps. Not because anyone set out to do the wrong thing, but because for years this has simply been the familiar way of capturing learning, connection and those small, beautiful moments in a child’s day.

But the world has shifted around us. Technology is different, expectations are different, and the risks, especially for children, are far more complex than they once were, especially in a world of AI and deepfakes. What felt harmless even a few years ago now sits in a grey zone where privacy law, child-safety responsibilities and everyday digital habits collide. This isn’t about blame or finger-pointing. It’s about understanding how the world has changed, and why something as simple as a quick photo on a personal device now needs a second look. Because when we know better, we can do better, and we can protect the children in our care with the same gentleness and wisdom we bring to every other part of their wellbeing.

Most permission-to-publish forms give consent for a school or service to use a child’s image, not to scatter raw photos across the personal tech of every staff member. They don’t magically make personal devices secure, compliant, or safe. With new regulations in early childhood banning personal phones outright, and school systems tightening expectations everywhere else, the gap between “what we’ve always done” and “what is legally required” has become a canyon.

I wrote the blog below in July 2024. In the past few weeks I have been rewriting policy for schools to address evolving privacy legislation. I wrote it not to shame or judge, but to highlight to Australian schools, with absolute clarity, the risks we can no longer ignore.

______________________

Across Australia, many educators and carers are still using their personal phones to take photos of children for learning documentation, quick parent updates, or spontaneous moments worth sharing. Often, the intention is kind and the moment is genuine. But with deep respect for the work you do, it’s time we acknowledge something important: this practice, however well-meant, puts children, staff, and services at risk.

Personal mobile phones are not designed for secure, professional use in educational settings. When photos of children are taken on a personal device, even just once, the data may:
- Automatically upload to cloud platforms like iCloud or Google Photos
- Sync across other personal devices (smartwatches, tablets, laptops)
- Be accessed by third-party apps, often without the user’s knowledge
- Remain in backups or deleted folders for weeks, months, or longer
- Create a digital trail that can’t be tracked, audited, or recalled

This isn’t about blame, it’s simply how the technology works. Even with the best of intentions, once an image of a child is captured on a personal phone, the organisation loses control over where that image might go, or how long it might be stored.

Under the Australian Privacy Act 1988, any image that identifies a child is considered personal information. That means there are legal responsibilities under the Australian Privacy Principles (APPs) about how that information is collected, stored, and shared.
- APP 3 – Collection: Must be lawful, fair, and necessary
- APP 6 – Use and Disclosure: Limited to the original purpose, in authorised systems
- APP 11 – Security: Organisations must take reasonable steps to protect personal data

Personal phones, no matter how careful we think we’re being, simply can’t meet that legal benchmark. “An organisation must take reasonable steps to protect the personal information it holds from misuse, interference and loss.” — Office of the Australian Information Commissioner

What About Department Guidelines?

Most state and territory education departments now explicitly state that personal devices are not to be used for taking or storing photos of children. For example, the Victorian Department of Education says: “Schools must ensure that photographs, video or recordings of students are not taken or stored on personal devices.” — DET: Photographing Students Policy

This reflects a growing understanding of the need for consistent, secure, and professional systems when it comes to documenting children’s lives.

This Is About Child Safety, Not Just Digital Systems

The National Principles for Child Safe Organisations call on all of us (schools, centres, staff, and leadership) to do everything possible to protect children’s privacy, including how their images are captured and stored. Using personal devices, no matter how informally, can:
- Undermine institutional safeguards
- Bypass accountability processes
- Increase the risk of accidental breach or misuse

In child safety, it’s often not what goes wrong that matters most, it’s whether we had systems in place to prevent it. “Child safe organisations need to have systems in place to protect children’s personal information, including images and recordings.” — Australian Human Rights Commission

Moving Gently Toward Best Practice

If your school or service is still using personal phones for images, know that you are not alone. This has been standard practice in many places for years. Change is not about shame, it’s about moving forward with more awareness, better systems, and stronger safeguards. You didn't know what you didn't know. Here’s what many centres and schools are now doing:
- Providing organisation-owned devices for documentation
- Updating internal policies to align with legal and departmental expectations
- Training staff on privacy obligations and child safety implications
- Ensuring any old or non-compliant images are reviewed, deleted securely, and reported where needed
- Seeking support from digital safety professionals to strengthen systems

It’s Okay Not to Have It All Perfect... Yet

The important thing is to act now, with care and commitment. Our shared goal is to keep children safe not just physically, but digitally and emotionally as well. If you’re unsure whether your current practices are in line with:
- The Privacy Act
- The Australian Privacy Principles
- Your state or territory’s education department guidelines
- The National Child Safety Framework

then now is a good time to pause, reassess, and reach out for support. I am here to help. No blame. No judgement. Just a shared responsibility to do better, together.
Helpful References:
Privacy Act 1988: https://www.legislation.gov.au/Series/C2004A03712
Australian Privacy Principles: https://www.oaic.gov.au/privacy/australian-privacy-principles
Photos and Videos: https://www.oaic.gov.au/privacy/your-privacy-rights/social-media-and-online-privacy/photos-and-videos
APP 11 – Securing Personal Information: https://www.oaic.gov.au/privacy/guidance-and-advice/securing-personal-information
ACECQA Guidelines: https://www.acecqa.gov.au/sites/default/files/2024-07/Guidelines
Google Photos Deletion Policy: https://support.google.com/photos/answer/6128858
OAIC Children and Young People: https://www.oaic.gov.au/privacy/your-privacy-rights/more-privacy-rights/children-and-young-people
Child Safe Principles: https://humanrights.gov.au/our-work/childrens-rights/projects/child-safe-organisations
DET Student Photography Policy (VIC): https://www2.education.vic.gov.au/pal/photographing-students/policy
- Gmail users, please listen up.
If you don't want your emails, chats, and digital habits feeding into Google's AI systems (yes, even when it doesn't say "AI" outright), there's something buried in your settings you need to switch off. Google isn't going to wave a flag or drop you a notification about it.

Those so-called "smart features"? They're just AI hiding under another name. Predictive writing. Autocomplete. Auto-summarise. "Help me write." They're generative AI tools in everything but name, and they're hoovering up your data to keep learning. Your data trains their models; your habits improve their products. If that doesn't sit right with you, here's how to take back a little control.

Step 1: Turn off the smart features for Gmail, Chat, and Meet

- Go to Gmail on your computer
- Hit the settings gear, top right
- Click "See all settings"
- Scroll down until you find "Smart features"
- Untick the box that allows Gmail, Chat and Meet to run on smart features

It may boot you back to the main screen, so you have to open settings again each time. They don't make it easy.

Step 2: Kill it for Google Workspace and other services too

- Still in the General tab, find "Google Workspace smart features"
- Click through to "Manage Workspace smart feature settings"
- Turn both toggles OFF

That's it. Not particularly hard, but also not obvious, and most users won't know to go looking unless someone tells them.

Now, depending on where you live (Switzerland, the UK, Japan or the European Economic Area), these features may be off by default, because those regions have tighter data laws. The rest of us? We're left fending for ourselves in an invisible game of opt-out.

The bigger issue here isn't just privacy; it's the quiet erosion of autonomy. These AI-infused features aren't always helpful. They're designed to reshape how you write, respond and work, and to nudge your behaviour, subtly and constantly, through dark patterns. The more you use them, the more they learn, not just about language patterns, but about you.

We've seen it over and over in the past few years as AI becomes baked into everything: rarely named, never fully explained, no informed consent. Features are rolled out at speed, opt-outs are buried, permissions default back to the day you first ticked a box saying "accept terms", and it has become your job to monitor it all.

This is what happens when regulatory frameworks can't keep up with product roadmaps, because the legislation was passed long before AI as we use it today was a thing. When companies aren't afraid of penalties, and when user rights are treated as settings, not standards. Unless that changes, and AI enforcement becomes more than just a wishlist item in policy drafts, people will continue to be datafied by default without ever knowing what they gave up.
- Meta is moving early and texting Australian teenagers. That part is real.
To every parent, teacher and trusted adult: talk to your teens today. Not with fear, but with clarity.

Meta is texting Australian teenagers. That part is real. They've begun sending messages via SMS, email and in-app notifications, warning young users they have just days left on Instagram, Facebook and Threads if they're under 16. The law banning under-16s from these platforms officially starts on December 10, but Meta is moving early: accounts will begin disappearing from December 4.

This is a big, complex shift. But it's also the perfect opportunity for scammers. They know teens are confused. They know panic makes people click. I am fearful they may start sending fake messages that look exactly like Meta's, urging young people to "download your data here" or "verify your age now" through links that aren't safe.

So here's what we need to tell our kids today, not with alarm bells, but with calm, informed authority: never click through from a text claiming to be from an app. Not now. Not ever. If a young person receives a message about the ban, tell them to go straight to the Instagram or Facebook app, or visit meta.com directly through a browser. That's the only place they should be downloading data or verifying age. Not through a random SMS or email link, no matter how real it looks. (A short sketch at the end of this post shows just how convincing a fake link can be.)

If a teen is wrongly flagged as under 16, Meta will ask them to verify their age using ID or a video selfie. That process is done securely within the app, and not through a shortcut or suspicious link. The technology behind it, Yoti, is trusted, but only when accessed through the official platform. You can see the results of the Age Assurance Trial here: https://ageassurance.com.au/report/

The key here is timing. Right now, we have a narrow window before confusion peaks and scams escalate. This is the moment to step in gently, confidently, and have the conversation. Reassure your teen that they haven't done anything wrong. Remind them that real warnings from Meta are arriving by text, but that it is never safe to click through from an SMS. Ever.

There's no need for panic. But there is a need for precision. If we wait, the risk grows. If we act now, clearly, calmly, together, we protect not just their accounts, but their confidence and safety online.

Talk to them today. Even if they roll their eyes. Even if they say they already know. Just open the door, because digital scams don't care how old you are. But being prepared? That starts with us.

Download Instagram Memories here: https://help.instagram.com/181231772500920/?helpref=uf_share
Other useful links: https://www.facebook.com/help/2199535317224012/?helpref=uf_share
Our free resources: https://www.safeonsocial.com/shop
eSafety Social Media Minimum Age Hub: https://www.esafety.gov.au/about-us/industry-regulation/social-media-age-restrictions-hub
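For anyone who wants to show a teen rather than tell them, here is the minimal sketch promised above, in Python, of why "it looks like Meta" is not the same as "it is Meta". Scammers often bury a real brand name at the front of a fake web address. The example URLs below are invented for illustration and are not taken from any actual scam.

```python
# A minimal sketch: extract the real hostname from a link and check it
# against genuine Meta domains. Example URLs are invented for illustration.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"meta.com", "facebook.com", "instagram.com"}

def looks_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # True only for an official domain itself or a genuine subdomain of it.
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_official("https://help.instagram.com/181231772500920"))
# True: a real subdomain of instagram.com

print(looks_official("https://instagram.com.age-check-now.net/verify"))
# False: the actual domain here is age-check-now.net
```

The second link is the one to teach. The brand name sits at the front where it catches the eye, but the part of the address that matters comes last, which is exactly why typing meta.com yourself beats clicking anything in an SMS.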
- Before They Break Out
Six years ago, I was invited to speak at a school where one of the students was already a well-known influencer having a profound effect on her peers. She was 11. Her platform? Skincare routines.

Today, it's not just worse. It's everywhere. "Get Ready With Me" videos are now a daily ritual for primary school girls. Morning routine. After-school routine. Night routine. Cleanser, toner, serum, lip mask, ice roller. Ten steps, tapped out by tiny gel nails, on 10-year-olds unboxing yet another haul from the same three beauty chains.

Some of these children are now being gifted products directly by brands. Products that cost those companies less than $20 to manufacture, but the influence? It's worth far more, and these children will not be paid as a child model or actor would be. With one unboxing, they're broadcasting a message to thousands of little girls: you need this to be beautiful. You need this to be enough.

What about those who can't afford it? What about those watching who start believing their skin, their perfect childhood skin, is already something to fix?

This isn't about play. This is a marketing strategy with a child's face on it. We are not just witnessing the collapse of childhood. We're watching it be monetised.

The question for us, especially those of us in digital, education, governance, and leadership, is no longer "what's going on?" It's "how long will we stay silent?"

Read more here: https://kirrapendergast.substack.com/p/before-they-break-out