- JULY 2025 JUST DELIVERED A LEGAL PRECEDENT: When the bell rings, the duty does not end.
The line between what schools know and what they do just got a legal reckoning. By Andrea Turner, Ctrl+Shft Legal Advisor.

In July 2025, the NSW Court of Appeal handed down a decision that should have halted every school board meeting, inbox scroll, and policy review in the country. In State of New South Wales v T2 [2025] NSWCA, the court confirmed something that educators, lawyers, and families have been circling for years: the duty of care held by schools does not end at the school gate, nor does it dissolve after 3:15 pm.

The case didn't hinge on dramatic new facts. It wasn't a shocking outlier. It was about something far more uncomfortable: what happens when a school hears a warning, even faintly, and fails to act. The ruling couldn't be clearer. If a school becomes aware of a threat to a student, whether through a formal report, a group chat screenshot, or an escalating pattern of behaviour, it is on legal notice. Not moral notice. Legal. The defence of "we didn't know enough" no longer holds. The excuse of "it didn't happen on campus" has lost its weight.

To understand how we got here, you have to go back to the moment that brought this all into the light. In 2024, a 14-year-old autistic boy was physically assaulted just outside the boundaries of his school. This wasn't a bolt from the blue. The school had prior knowledge of social friction and patterns of concern. But the risk was not meaningfully addressed. In T2 (by his tutor T1) v State of New South Wales [2024] NSWSC 1347, the court awarded damages to the boy's family, not for what happened in a single moment, but for everything that led to it. The failure wasn't isolated. It was systemic.

Now, with the 2025 appeal decision, the message has moved from pointed to permanent. The precedent is set. The law has shifted. And for schools, that shift must be more than acknowledged; it must be operationalised.

But the law doesn't pause for complexity. And neither does human digital risk compliance. Bullying and harassment no longer confine themselves to the school day. They metastasise through Nudify apps, vanish into disappearing messages, and gather momentum in weekend group chats. They happen in toilets, carparks, Discord servers, and livestreams. They're shared in Snap stories at 2 am and re-emerge as taunts the next morning.

While schools carry institutional responsibility, families are being asked to do something equally difficult: raise children inside a digital ecosystem they did not build, cannot fully access, and often barely understand. Some parents don't know what their children are doing online. Others are working multiple jobs and hoping someone will alert them if something goes wrong. Some know something is off but don't know how to begin, or who to ask, or how much is "too much" to intervene. This isn't about judgment. It's about recognising that the online world has become a central arena of harm, and it is unreasonable to expect families to navigate it alone.

Schools must review their frameworks not just for what happens on school grounds, but for how they track, assess, and intervene in ongoing patterns of harm wherever they occur. It's not enough to have a policy. There must be practice. There must be clear escalation pathways. There must be ownership, not just intention. And across the board, we must stop confusing policy awareness with protective action. Knowing is not the same as doing. And silence is not the same as safety.

Ctrl+Shft are the Global Leaders in Human Digital Risk Compliance.
Book a meeting with us today - hello@ctrlshft.global
- What You Share With ChatGPT Could End Up on Google. Here’s What You Need to Understand.
We're living through a moment where digital instincts haven't caught up with digital reality. A moment where a feature can feel like a private diary, but act like a press release. And most people have no idea it's happening.

ChatGPT's "Share" button is one of those design sleights-of-hand that seems harmless. Elegant. Convenient. You've just had a conversation with the AI. Maybe it helped you draft a policy, unpack a trauma, or name the thing you couldn't say out loud. It wraps the entire thread in a link and offers it up: clean, clickable, ready to pass on. And you do: send it to a colleague, a client, a classroom. Click. Copied. Shared.

But here's the part the interface doesn't tell you: those links are public. Not semi-public. Not "only if you have the URL." Public. Search-indexable. Discoverable. Crawled by Google. And increasingly, they're turning up in live search results. Paste the right search string and you'll surface complete ChatGPT conversations. No account names, sure, but full context. Real queries. Real answers. Real risks.

Now, think carefully about what people are putting into these exchanges. These aren't sanitised brainstorming sessions or harmless hypothetical debates. These are unfiltered disclosures: abuse histories, panic spirals, identity crises. Private medical queries. Legal what-ifs. Draft resignation letters. Notes to dead parents. Teachers navigating disclosures. Kids navigating identity. It's not content; it's people, mid-process.

They're typing into this box because it feels safe. Not performative like social media. Not surveilled like a search. A liminal space: quiet, responsive, seemingly contained. But that feeling? It's a design illusion. Because when that link gets indexed, by accident or indifference, that illusion breaks. The damage isn't theoretical. It's immediate. And while you might be "anonymous," the breadcrumbs are real: phrasing, context, location cues, references. It doesn't take much to connect the dots if someone is motivated. Or careless. Or cruel.

This isn't an indictment of OpenAI or a rejection of the tool. It's a call to close the gap between what people think the system is doing and what it's actually doing under the hood.

So, What Do We Do?

If you're someone people turn to for guidance on tech, safety, or digital wellbeing, this is your cue to step in. Not with fear, but with clarity. Don't just warn people. Show them how the system works. Teach them where the guardrails fail. Give them the language to talk about digital trust in terms that matter. Here's where to start:

- Never assume the Share button is private. If you wouldn't publish it on a website, don't send it via a shared ChatGPT link. If it's raw, intimate, legal, or painful, it stays out.
- Stop entering identifiable information. Not just names. Think: diagnoses, workplace incidents, school names, birthdays, subtle phrasing. These aren't just details; they're identifiers in the wrong hands. (A rough pre-paste check is sketched below.)
- Turn off memory. This doesn't erase what's already there, but it stops the model from pulling your past into your present. It narrows the trail.
- Opt out of training. In your settings, you can choose not to have your chats used to train future models. It's not perfect, but it's a start.
- Audit your privacy settings regularly. OpenAI is moving toward greater transparency, but default settings don't protect everyone. Make checking those settings part of your digital hygiene.
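One of those habits can be backed by a simple tool. Below is a minimal sketch, assuming an ordinary Python environment, that flags obvious identifier-like strings (emails, phone numbers, dates) in a draft before it gets pasted into any chatbot. The `flag_identifiers` helper and its patterns are ours, for illustration only, not an OpenAI feature, and a real screen would need far broader coverage.

```python
import re

# Illustrative patterns only -- real identifiers take many more forms.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b(?:\+?\d[\d\s-]{7,}\d)\b"),
    "date": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}

def flag_identifiers(text: str) -> list[str]:
    """Return warnings for identifier-like strings found in text."""
    warnings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            warnings.append(f"possible {label}: {match}")
    return warnings

# Hypothetical draft about to be pasted into a chatbot.
draft = "Re: incident on 14/03/2025, contact me at jane.doe@example.com"
for warning in flag_identifiers(draft):
    print(warning)  # review and redact before pasting anywhere shareable
```

The point isn't the regexes. It's the pause they force between drafting and pasting, which is where most accidental disclosure happens.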
The Core Truth

"Anonymous" isn't the same as "protected." The real risk lives in the gap between what users expect and what the system does. That's where harm blooms. That's where people get blindsided. People trust what feels private. But the interface doesn't show you the index bot. It doesn't flash a warning when your trauma turns into a URL.

So treat every interaction with AI like it could one day be read aloud in a courtroom, or in front of your board, or by your 13-year-old. Because for too many people already, it has. Don't let a private moment become a public fallout. Build the literacy. Spread the warning. And keep pushing for infrastructure that matches the stakes.

Contact us for up-to-date education and support - hello@ctrlshft.global
- Why Blaming Parents Helps No One
When a child makes a catastrophic mistake online, the first question people ask is always the same: Where were the parents?

It's a question soaked in judgment. It implies negligence. Weak boundaries. A lack of discipline. It lands like a hammer on mothers who monitor every message, on fathers who've delayed the phone longer than most, on the parents who said no to social media but yes to a phone for safety reasons. It hits hard on caregivers who've done everything right, until their child sends the photo, joins the group chat, films the fight, says the wrong thing, clicks the wrong link, trusts the wrong person, or even takes part in an SMS truth-or-dare with their year group.

Where were the parents? They were there. They were trying. They were protecting.

But the truth that no one wants to say out loud, the truth that terrifies schools and shames families and gets buried beneath the latest policy refresh, is this: You can't compete with it. No parent can. Not with the scale. Not with the speed. Not with the design. Because this isn't a parenting failure. This is a full system failure.

A system that markets pornography to children before they know what sex is. A system where 13-year-olds are sold tracking devices disguised as protection. A system where algorithms push extremism, cruelty, self-harm, and misogyny, and call it content. A system where a classmate can generate fake nudes of your child in seconds, and the law still doesn't know who to charge. A system that collects and stores a child's data even when they are under 13 and it is illegal for the company to do so under COPPA, but that manipulates parents into thinking sharing is love, when all it's doing is giving the app the information it would usually be fined for capturing directly.

Parents cannot compete. So let's stop pretending that parental control apps, monitoring software, dinner table conversations, or "just saying no" can somehow neutralise that. Parents cannot compete against a trillion-dollar company's algorithms. Impossible. We need to get real.

And here's what's worse: I see it all the time, even from the so-called experts, the trusted voices in this space. The commentators. The policy-makers. The keynote speakers on tech safety stages. Blaming parents. Implying that if only you'd talked more, if only you'd checked the phone more, if only you'd made better choices, your child wouldn't be in trouble. That narrative is not just inaccurate, it's cruel. And it keeps the focus on the one thing that won't fix the problem: guilt.

Let's stop blaming mothers when their sons are radicalised on Discord. Let's stop blaming fathers when their daughters are stalked on Snapchat. Let's stop blaming parents for what amounts to digital warfare, fought on a terrain they didn't build, with weapons they're not allowed to see.

This blame is not just wrong. It's dangerous. Because while parents are tying themselves in knots trying to be perfect, the real threat grows. And when that threat hits home, and it will, those same parents are too ashamed to act. Too shocked. Too heartbroken by the idea that, despite everything they've done, their child is still in the middle of something horrifying and real and wildly unfair.

This is where we lose kids: not in the mistake, but in the shame storm that follows. Because when a parent collapses under the weight of "Where did I go wrong?", the child is left to carry not just the consequences of their online choices, but the emotional fallout of the person they trust most.
The one who's supposed to be their anchor is now drowning too. And no one is steering the boat.

So here's what needs to change. We need to stop treating digital safety as a test of parenting. It's not. It's a test of resilience. Of readiness. Of whether you, as the adult, can stay calm in the most emotionally volatile moment your child has ever faced. Your kid doesn't need perfect parenting. They need you, prepped. Not prepped to prevent. Prepped to respond.

That means preparing for the day they call you and say: "There's a photo." "There's a video." "There's a group chat and everyone's turning on me." "There's a stranger, and I think I messed up." "There's something online and I can't make it stop."

That's not the day to fall apart. That's the day to know how to show up. You need to be ready. Not just legally, not just logistically, but emotionally. Steady hands. Soft eyes. Clear words. A decision already made: this does not break us. Because your job is not to panic. It's to protect. And not just from the internet, but from the weight of a world that tells kids one stupid moment defines them, and tells parents one stupid moment defines you.

You have to reject that. You have to know that good kids screw up. Peer pressure and a desperate need to belong can lead kind kids to share the wrong thing. Smart kids freeze in the face of a threat. And well-parented kids? They are right there in the thick of it, too. Because the tech is too fast. The culture is too relentless. It's not your fault. But it is your job.

Your job is to be ready for the thing you couldn't predict. To stay standing when your child needs you most. To navigate the mess with clarity, not collapse. To build the kind of relationship where your child doesn't hide, because they know: no matter what, you are not afraid of them, or for them. You are ready. Prepared, not paranoid. Calm, not clueless. Steady, not perfect.

This is the world now. And it's not fair. But it is real. Every child, no matter how well raised, deserves a parent who is not destroyed by their first digital mistake. Not because it won't hurt. But because the only way forward is through. Together.

And schools need to hear this too. Because while parents are being blamed for not doing enough, schools are expected to hold the full weight of what happens online, even when that harm was set in motion long before the first bell. Educators are exhausted. Families are overwhelmed. And too often, blame ricochets between them. Parents frustrated that schools didn't prevent it. Schools frustrated that it landed at their feet already too late. But this isn't a blame game. It's a partnership problem, and it's solvable.

When schools and families work together, with shared language, clear protocols, and the right education, something remarkable happens. The silence lifts. The shame thins out. Kids feel safer to speak up, earlier. Parents feel equipped. Teachers feel backed, not burned.

That's where we come in. We don't deliver a scary slideshow and disappear. We embed digital resilience into the DNA of the school. We make it real, relevant, and ongoing. We bridge the gap between what's happening online and what's happening in real life. We show staff how to turn incidents into learning moments steeped in restorative justice. We help parents respond to crisis without emotional collapse. We create frameworks that last beyond a presentation, and conversations that continue long after we've left.

If your school is ready to stop reacting and start leading, bring us in.
We'll help you protect your students, support your staff, and stand beside your families. Not when it's too late. Now. Contact us today - hello@ctrlshft.global
- What To Do If A Child Is Being Bullied Online
When things go wrong online, it's 24/7: it cuts deeper, lasts longer, and follows them home. Whether your child is being bullied, or you've just found out they're the one doing the bullying, it requires a new kind of parenting: calm, connected, and cyber-literate.

Cyberbullying is not just mean comments or name-calling online anymore. It's evolved into a spectrum of harm including:

- Doxxing: publishing someone's private information online
- Deepfake manipulation: AI-generated videos or images used to humiliate
- Trolling mobs: group pile-ons, often anonymous
- Exclusionary tactics: "ghost group chats," silent treatment, or mass blocking
- Revenge posting: leaking screenshots or private photos for retaliation
- Gamified humiliation: using likes, comments, or followers as weapons
- Digital stalking: monitoring every online move and turning others against the target

It's always about power, but now it's algorithmically amplified. And it can happen anywhere: Instagram, Discord, school LMS systems, or anonymous apps.

Schools have a clear duty of care to protect students from harm, including cyberbullying, when it's linked to school life, even if it happens outside school hours. While schools can't be in a child's bedroom monitoring late-night device use or enforcing boundaries set (or not set) at home, they can and must act during school hours. This includes educating students about digital safety, supporting mediation between peers, monitoring mental health and wellbeing, and responding swiftly when online conflict spills into classrooms. The duty isn't to control every online interaction; it's to recognise risk, intervene early, and work with families to keep students safe, both emotionally and socially.

Building self-esteem and self-worth is critical in protecting young people from the impacts of online harm, because kids who know their value offline are less likely to seek it in likes, comments, or toxic group chats. The most effective way to do this is by helping them discover who they are beyond the screen. Sport teaches teamwork, discipline, and physical confidence. Art offers self-expression and a way to process big emotions. Music builds identity, focus, and joy. Drama, coding, volunteering, outdoor adventure: anything that helps them achieve, belong, or create something real. These activities not only build skills, they build protective layers around a child's self-worth. The goal isn't to keep them away from tech; it's to make sure their self-esteem isn't built on it.

Start by anchoring yourself. When your child or a student is hurt, your instinct might be to panic or problem-solve, but what they really need first is your calm presence. Sit with them. Be curious, not confrontational. You might ask:

- "Is something online making life feel heavier right now?"
- "Has anything happened that made you feel left out or unsafe?"
- "If someone else were going through what you are, what do you think they'd need most?"

Once the conversation opens, gently begin documenting. Take screenshots, record messages, and save timelines before platforms auto-delete or content disappears. Don't share it around or dramatise it. Keep it secure and factual. Support the child in using platform tools to block, mute, restrict or report abuse. These actions are not about retreat; they're about control. Encourage them to screenshot the reporting for evidence.
If the harm has crossed into school life, directly or indirectly, notify the school: their teacher, the wellbeing lead, the online safety coach or the digital safety officer. Many schools have Ctrl+Shft's Digital Ethics and Accountability Pathway program for intervention, including restorative conferencing.

And most importantly: don't just take the phone away. It sends the message that their honesty comes with punishment. Instead, co-design a recovery plan with them. This might include safe offline spaces, trusted peers or staff they can check in with, reduced digital exposure, and, where needed, professional counselling.

Watch carefully. Ongoing distress, sudden silence, sleep issues, or unexpected outbursts could be signs of trauma. Early intervention matters. You're not overreacting by reaching out for help. You're protecting their (and your) mental fitness.

What If Your Child Has Caused Harm Online?

It's confronting, but it doesn't make your child a bad person. It makes them human. Cyberbullying is often a symptom, not the root: fear, status anxiety, peer pressure, emotional overwhelm, or unresolved pain. Start by regulating yourself. Then calmly ask questions that invite honesty, not defensiveness:

- "What was really going on when you posted that?"
- "What were you hoping would happen?"
- "If the roles were reversed, what would you need to feel safe again?"

Avoid labels like "bully." Focus on behaviour and growth. This isn't about punishment. It's about repair. Use this as a moment to teach digital empathy, accountability, and maturity. Support children to take ownership:

- Reflect on the impact, not just the intention.
- Offer a genuine apology (not a performative one).
- Participate in a restorative process if one is available: guided, respectful, safe.
- Make amends in ways that matter: kindness, inclusion, speaking up next time.

Explore what's underneath the behaviour. Are they being bullied themselves? Feeling invisible or pressured to belong? Lacking healthy outlets for stress or identity development? This isn't just a tech issue; it's a wellbeing issue.

- Teach digital emotional literacy: help your child name what they're feeling, and recognise what others might be feeling, too.
- Build an agreement, not a rulebook: collaborate on device boundaries, online tone, and what to do when things go wrong.
- Use tech that teaches empathy: VR experiences, storytelling apps, and interactive learning can humanise the screen.
- Mentor, don't monitor: you don't need to stalk their accounts. You just need to be close enough that they come to you first.
- Know when to escalate: if someone is at risk of harm through threats, coercion, image-sharing, or stalking, contact your school and the eSafety Commissioner: www.esafety.gov.au. If someone is in immediate danger, call the police.

If you would like to book our team to speak at your school or would like more information on our Digital Ethics Program, click here
- What Parents Deserve to Know, and What Centres and Schools Must Confront
Last week, like so many others, I sat with the weight of the news. The devastating revelations of abuse in early childhood education centres across Australia weren't just shocking, they were heartbreaking. But what struck me most was what didn't make headlines. The quiet architecture of risk. The systems that allowed harm to go unnoticed, not out of malice, but out of assumption.

I did what I always do in moments like this: I went down a rabbit hole looking for the fixable fault lines. I opened the publicly available policies on providers' websites. I downloaded dozens of policies from early childhood providers: documents on privacy, digital safety, social media, and codes of conduct. I wanted to see how our systems are protecting children as the physical and the digital blur more by the minute. The truth? Most aren't.

What I found were policies that haven't been meaningfully updated since 2016. Several still included references to "facsimile machines." Most treated digital risk like an afterthought, something to file under "IT" or "parent permissions," not child protection.

Digital life isn't optional anymore. It's the operating system of childhood. Children aren't just visiting digital spaces; they're living in them. Learning through them. Being documented within them from before they can speak. Yet the systems meant to keep them safe in those spaces are still built for an internet that existed 15 years ago.

It's not just the apps. It's the architecture firm that posts the site plan of a new school build. The tradesperson on-site, unvetted, because they were only there for an hour. The educator capturing photos on a personal phone. The parent uploading birthday pictures and the end-of-year concert. The group chat with no boundaries. The "update" that never came. This is where harm hides. Not in the dark corners, but through the front door. Not because people don't care, but because the systems weren't built to hold the complexity of the lives they're meant to protect.

Educators are doing extraordinary work. Every day, they navigate apps, parent messaging, consent forms, privacy settings, digital learning tools. But they're being asked to carry a level of risk they were never trained to hold, without the technical language, policy infrastructure, or professional support to do so safely.

And into that vacuum steps a new market: a wave of well-meaning consultants offering "policy updates" on LinkedIn, promising to prevent abuse, most with no experience spanning what's actually required: child safety, privacy law, digital governance, AI, ethics, online harm. It's not one lens that's needed; it's all of them. Templates are being sold; policies get a cosmetic refresh. But what's missing is the architecture of protection, the kind we at Ctrl+Shft build every day, across systems, sectors, and at scale.

The policies I read weren't bad because the people behind them didn't care. They were bad because they were built for a world that no longer exists. The digital world moves fast. Harm moves with it. Unmonitored apps. Messages sent between staff. Photos shared to Facebook pages without informed consent, by parents who don't understand that an image can be screenshotted and become a deepfake nude used for CSAM in under three minutes. Photos can even be taken by what looks like a pair of reading glasses. And still, many systems rely on outdated codes of conduct, blanket consent forms, and compliance checklists that offer the illusion of safety but none of the substance.
What parents are asking for isn't panic, it's preparation. They will want to know: Who is thinking further ahead than next week's newsletter? Who has asked the hard questions about what happens to their child's data? Who is standing at the digital gates, not just assuming they're locked?

And those questions aren't just for early childhood services. They're for every primary school, every secondary college, every board, every system. This is about leadership. Leadership that shows up not in reaction, but in full review and redesign. Digital safety isn't a feature; it's now foundational. And failing to treat it as such is becoming more and more indefensible.

So if you are a school leader, a centre director, a board member, or a member of the P&C, this is the invitation. Not to defend what was. But to build what's needed. And if you're ready to build, really build, we are here to help. That's what we do. Not because it's our job. Because it's the only work that matters now.

Five Areas for Immediate Reassessment

1. Education Platforms Are Collecting More Than We Realise

Modern EdTech tools do more than support learning. They may also capture:

- Location data, device type, and login behaviour
- Facial imagery, voice recordings, written work, and shared photos
- Learning patterns, emotional tone, participation levels

Some of this data may be used for purposes beyond education, such as product development, AI training, or third-party analytics, often without clear visibility to the school or the family.

"The scale of data collected is enough to build a full digital biography of a child—identity, behaviours, abilities, and vulnerabilities." — UK Digital Futures Commission

Schools are doing their best. But the nature of these tools means that even with the best intentions, understanding of how all this works can be limited.

2. Behaviour Tracking Is Becoming the Norm, Quietly and Automatically

Many apps now include behaviour tracking features:

- Points systems and "badges" for compliance
- Mood indicators or real-time engagement scores
- Behavioural data that may be seen by other families or educators

These are designed to support learning. But over time, they can create profiles of children that may follow them, misrepresent them, or reduce them to patterns.

3. Consent Practices Are Overdue for a Refresh

Most schools rely on consent policies drafted years ago, often before AI, analytics, or hybrid learning were common practice. As a result:

- Consent is too broad, too passive, and not fully informed
- Parents and educators may not know where the child's data is going
- Teachers may be working without clear guardrails

This isn't a failure of schools. It's a reflection of how much the environment has changed.

4. External Links Can Introduce Invisible Data Flows

Many educational platforms integrate with or link to:

- YouTube, Google Maps, and more
- Cloud-based storage providers
- Tools with their own privacy and advertising models

Schools don't always have the ability to audit or restrict these flows, especially when they're part of "core functionality." And families are rarely told when their child crosses into a less protected zone. A first-pass audit is still possible, though; see the sketch below.
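On point 4, that first pass is within reach of most school IT teams. The following is a minimal sketch, assuming Python with the `requests` and `beautifulsoup4` packages installed and a placeholder URL: it lists the external domains a single platform page pulls scripts, frames, images, and stylesheets from. Each domain it prints is a third party receiving at least some request data, and a reasonable question to put to the vendor.

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

# Placeholder URL -- point this at a page of the platform under review.
PAGE = "https://portal.example-school.edu/login"

html = requests.get(PAGE, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

own_host = urlparse(PAGE).netloc
external = set()
# Collect hosts referenced by scripts, iframes, images, and stylesheets.
for tag, attr in (("script", "src"), ("iframe", "src"),
                  ("img", "src"), ("link", "href")):
    for node in soup.find_all(tag):
        url = node.get(attr) or ""
        host = urlparse(url).netloc
        if host and host != own_host:
            external.add(host)

for host in sorted(external):
    print(host)  # each one is a third party receiving some request data
```

It won't catch flows triggered by JavaScript after the page loads, so treat an empty result as a starting point, not a clean bill of health.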
5. Schools and Centres Hold the Legal Duty

Legally, schools and centres are classified as data controllers, responsible for student privacy and protection. But in reality:

- EdTech contracts are often fixed and non-negotiable
- Training on digital governance is rare or outdated
- Platform complexity makes it difficult to track what's being shared or stored
- Laws already exist that can attract large fines over how photos are handled

1,200+ organisations have turned to our team for expert support and policy updates that can match the moment. Contact hello@ctrlshft.global or www.ctrlshft.global
- Why “Sexual Extortion” Must Replace “Sextortion”
We need to discuss the language we use when referring to digital harm, abuse and crimes. Because right now, in schools, in courtrooms, in headlines and hashtags, we're using a word that hides the very thing we're trying to expose. "Sextortion."

It rolls off the tongue like a cousin to "sexting." It sounds digital, flirty, fleeting: a misstep, maybe. But it doesn't sound like what it actually is: a form of abhorrent sexual violence. A crime of coercion and blackmail designed to silence and shame. And that's exactly the problem. The word "sextortion" trivialises the act as much as it dilutes its gravity. It softens the edge of what is, in reality, a brutal abuse of power, often carried out against children and teenagers who are groomed, manipulated, and then psychologically terrorised. "Sexual extortion" is what's actually happening. And to stop it, we need to start by calling it what it is.

We don't just describe the world with language; we shape it. How we name harm determines how seriously it's taken. Whether it's understood as a personal failure or a systemic threat. Whether victims are believed or dismissed. Whether perpetrators are prosecuted or excused.

Think about the shift from "revenge porn" to "image-based abuse." The former carried the weight of a lover's quarrel: petty, emotional, and mutual. The latter reframed it as what it is: a violation, a breach, a crime. We're overdue for the same shift when it comes to sexual extortion.

"Sextortion" sounds like a hybrid born of poor judgment. It echoes terms like "sexting," "snap," and "hookup." It suggests agency, flirtation, maybe a boundary crossed. It sounds, frankly, like something someone got into. "Sexual extortion" is different. It brings us back to the core of the crime: extortion. A criminal act. A deliberate manipulation. A weaponisation of fear. One word masks coercion. The other exposes it.

When we use precise, serious language, we remove ambiguity. We align with trauma-informed frameworks that recognise power imbalance, not personal failure. We shift the cultural posture from "why did you send it?" to "who demanded it, and how do we hold them accountable?" Victims are more likely to speak up when they know the system sees what happened to them as a crime. Parents are more likely to respond with support, not shame. Schools are more likely to intervene. Police are more likely to act. The way we name the crime determines who we believe.

Sexual extortion is the fastest-growing cybercrime against children worldwide. It often begins on the apps they use every day: Instagram, Snapchat, and Discord. It doesn't discriminate by postcode or personality. Victims include boys and girls. Kids who are outgoing and kids who are private. Kids who trust easily, and kids who just want to be liked. Some of them survive the shame. Some don't. Too many have died before anyone ever called it what it was.

Predators thrive in silence. Shame is a cage that keeps its victims quiet. And vague, casual language allows both to continue. If we want to break the cycle, we need to start where every conversation begins: with the words we choose.

It is not sextortion. It is sexual extortion. And the sooner we start naming it, the sooner we start changing it. Because kids are dying. Because systems are still hesitating. Because words, used with precision, can be the first step out of silence.

www.ctrlshft.global
- ***Trigger warning*** Three Dead Teenagers. One Common Thread. Apple’s iMessage Failed Them.
This is not a story about technology. This is a story about neglect. Over the weekend I read an article in The Wall Street Journal that you can read here: https://www.wsj.com/tech/personal-tech/sextortion-scam-teens-apple-imessage-app-159e82a8?st=TWuAaq&reflink=desktopwebshare_permalink

In three separate homes, in three different US states, three families are now living with the unimaginable. Their sons were targeted by sexual extortion scammers: strangers who knew exactly what they were doing. These criminals manipulated, coerced, and terrorised their victims using the most unassuming, widely trusted platform in the Western world: iMessage. Apple's default messaging app, pre-installed on every iPhone, every iPad, every Mac. The blue bubbles that feel familiar. Safe. Polished. Private.

It's the very sheen of iMessage that makes it so dangerous. Because unlike WhatsApp, Instagram, or even Telegram, platforms that are frequently criticised for harbouring criminals, iMessage offers no functional infrastructure for users, especially minors, to report crimes. There is no button to flag suspected sexual extortion. No alert to moderators. No connection to law enforcement. Just a quiet, meaningless option to "report junk," which disappears into a void with no confirmation, no tracking, no hope.

This is what that means. A child being blackmailed with explicit images, receiving threats to expose them to their family, school, or followers, has no pathway to ask for help through the very tool they're being attacked on. Instead, they're left to fend for themselves. To block account after account as the abuser cycles through endless new iCloud identities. Because Apple allows unlimited, anonymous account creation. And because there's no intervention system in place, the attacks keep coming.

Three teens. Gone. One of them, a 17-year-old from Michigan, received over a hundred messages in a single night. Demands. Threats. Warnings. When he tried to ignore them, they escalated. When he blocked them, they reappeared. He didn't tell his parents. Not because he didn't love them. But because shame is a cage, and these criminals know how to lock it tight. He died the next day.

His story is one of hundreds now being investigated across the U.S. and globally, in cases tied to online sexual extortion, an epidemic so rapid and so insidious that the FBI, the ACCCE, the AFP and other global law enforcement agencies have issued repeated public alerts, and NCMEC (the National Center for Missing and Exploited Children) has warned that the psychological trauma inflicted by these schemes is leading directly to suicide.

But here's where the story twists. NCMEC received just 250 reports of child exploitation from Apple platforms last year. Meta, the company behind Facebook and Instagram, submitted over 5 million. This is not about who has the most users. It's about who has the most denial. Apple's number isn't low because the abuse isn't happening on its platforms. It's low because Apple has systematically failed to build the reporting infrastructure that would allow it to know. That would allow it to act.

It's not a limitation of technology. Apple has some of the most powerful engineers on Earth. It's not a question of resources. Its Q1 2025 revenue exceeded $119 billion. It's not even a legal grey area. Apple has the same obligations under U.S. federal law as Meta and Snap to report known instances of child exploitation to NCMEC. The difference is will.
There is no transparency report that outlines how Apple handles abuse cases on iMessage. No moderation team made publicly accountable. No roadmap for future safety tools. Apple's public communications celebrate encryption, privacy, control. But what happens when that control is handed to predators, and children have nowhere to turn?

Technology must be held to the same standards we demand of any public infrastructure. We would never allow a school to operate without doors that lock, without fire exits, without the ability to call for help. Yet we allow these massive tech platforms to be part of a child's daily life without any of those safety mechanisms. When the most dominant platforms refuse to participate in safety design, the system breaks. And that's what this is: a complete system failure.

Because Apple doesn't just control the device. It controls the ecosystem. The operating system. The default apps. The user experience. Which means it also bears the responsibility to protect the youngest, most vulnerable users within it. The company that changed the face of communication has refused to adapt it to a world where communication is weaponised.

You won't hear Tim Cook talk about this on stage. You'll hear about AI. You'll see camera upgrades. You'll watch slow pans of anodised aluminium and phrases like "our most powerful iPhone yet." But you won't hear the names of the teenagers who died after being hunted through Apple's app. Their deaths won't be included in the shareholder brief. There will be no ticker tape for the lives lost to an interface that chooses aesthetics over accountability.

We cannot accept this as the price of connection. Not when the tools to fix it are simple. Not when other companies, with all their flaws, have shown it's possible. Report buttons. Human moderation. Escalation pathways. Crisis response teams. Mechanisms to alert, intercept, and intervene before a child makes a permanent decision in a moment of temporary despair. The lack of these systems is not a glitch. It is a choice. And until Apple makes a different one, every parent should know: the iMessage icon isn't just a blue bubble. For some, it has become the last door a child walked through before taking their life. The least we can do is knock it down.

3 Ways to Start the Conversation About Sexual Extortion with Your Tweens and Teens, and Why It Matters Now More Than Ever

We no longer use the term sextortion because it dilutes the violence of what's happening. Sexual extortion makes it clear. This is not a misstep or a teenage experiment gone wrong. It's exploitation, plain and dangerous. By naming it for what it is, we strip away the shame that stops kids from asking for help. Words matter. And so does timing.

If you or your child are navigating any of this, don't wait. Go to the Australian Centre to Counter Child Exploitation (ACCCE) for official guidance and reporting tools. You can also visit SmackTalk, a peer-informed education platform that tackles these conversations head-on with straight talk, not sugar-coating. Because the digital world isn't going to slow down. But we can get louder. Smarter. And much, much harder to manipulate.

The word sexting once seemed like the scariest thing a parent might have to explain to their child. But we're past that. The reality is starker now. Children are no longer just experimenting with risky images or impulsively sharing with people they trust. They are being targeted, manipulated, blackmailed, sometimes by strangers, sometimes by people they know.
Run through encrypted platforms, gaming chats, social media, and even school-group DMs, these schemes are often part of larger criminal networks that know how to groom a child in minutes. They collect compromising images or videos, then threaten to expose the victim unless they send more. Sometimes money is demanded. Sometimes the threats escalate into real-world harm. In all cases, the child is trapped. And deeply alone.

This is not hypothetical. In 2024, the Australian Centre to Counter Child Exploitation (ACCCE) recorded an unprecedented increase in reports of sexual extortion, especially targeting boys between 12 and 17 years old. Many cases involved international criminals posing as teens online. And while legislation scrambles to catch up, kids are being coerced into silence and shame. Some don't survive the psychological fallout.

So if you're a parent or caregiver, the most powerful thing you can do today is start the conversation, not with fear, but with clarity and consistency. Here's how.

1. "What kind of images or videos do you think are okay to share with others?"

This isn't about lecturing. It's about giving your child space to process what they already see online and what they think is normal. When you ask this question, you're not just asking about behaviour. You're helping them define personal boundaries, online consent, and digital permanence. Let them speak. No interruptions. Then share your values in plain, non-judgmental terms. The goal is to build trust, not compliance.

2. "What would you do if someone asked you to send a photo or video of yourself?"

Kids don't make decisions well when they're panicked. But if they've already imagined a scenario, they're more likely to respond with confidence. This is your chance to help them pre-load strategies and scripts for high-pressure situations. Reinforce that it is never okay for anyone, friend, crush, or stranger, to demand or guilt them into sharing images. And that they can always come to you, no matter what. Make it normal to talk about awkward or frightening scenarios before they happen. That's where the real protection begins.

3. "Do you know what can happen if someone shares your image without permission?"

Your child probably knows the images don't disappear. But they may not know that their data, including photos, can be stolen, altered, and sold. Or that once an image circulates, even among peers, it can be used for bullying, impersonation, or long-term exploitation. This is where sexual extortion often begins: one image, shared under pressure, then used as blackmail. The offender might threaten to send it to family members or friends, or post it publicly, unless more are sent. It's a trauma trap. Let them know the law is on their side. That it's never their fault, even if they sent an image of themselves naked or nearly naked. And that there are real people who can help, right now.
- How Photos on a School Facebook Page Have Become AI Training Data
This isn't an argument against AI in education. It's a warning about whose AI we're using, and what we're risking every time we hand it our most vulnerable data.

Once upon a time in a school near you, it was photo day and we knew the rules. Wear the right shirt, smile if you can. The photo went in a folder or hung on Nan's fridge. It wasn't perfect, but it was safe enough. Fast forward to 2025, and the same snapshot, just as innocent and ordinary, is uploaded into a machine many barely understand. In schools right across the globe, some teachers are quietly feeding real children into artificial intelligence systems. Not out of malice, but out of enthusiasm, often coupled with exhaustion. Out of a culture that hasn't caught up with the speed at which technology can turn something benign into something irreversible.

Over a decade ago, I began warning schools about the risks of posting children's names, faces, and uniforms on public Facebook pages. I was told it was about "celebrating and connection to their community." A lovely phrase. But it missed the point. Because it wasn't about what we meant to do. It was about what we were making possible, even back then. A trail of identifiable data, laid down without thought for the future, has now led us straight into the gaping mouth of AI.

Meta announced last year that it would use public Facebook and Instagram content dating back to 2007 to train its AI. That data, spanning more than 15 years, includes not just adult users but, crucially, data shared about and by children. Over those years, thousands of schools, sports clubs, and education departments globally have posted publicly on Meta platforms: student achievements, group photos, assemblies, awards nights, even missteps and disciplinary actions. These posts now sit inside the training data of Meta's generative AI models. While the intention may have been to celebrate or inform, the result is that a generation of children have had their digital footprints absorbed into a machine they never opted into, and cannot easily opt out of.

Children's rights to privacy, informed consent, and digital dignity are enshrined in international law, including the UNCRC General Comment No. 25 on children's rights in the digital environment. Now that we know what is going on, feeding their likenesses, names, and stories into commercial AI without their knowledge not only violates these protections, it sets a dangerous precedent. Schools must be instructed and resourced to audit and remove public-facing posts involving minors and transition to closed-loop, consent-driven digital communications. The training of AI on children's public data, especially by proxy through school, dance, karate, and sport accounts, is not just a policy failure; it can now be a profound ethical breach. Some schools are still arguing the need to upload, share, click, without addressing how their feed feeds machines. And once it goes in, it doesn't come back.

Right now, in classrooms and staff rooms, well-meaning teachers are using generative AI tools to make worksheets, slide decks, and birthday cards. They're uploading images of kids and colleagues, hoping for clever, creative outputs. Some teachers don't know that when they upload a photo, it might become part of a permanent training dataset. Others suspect, but aren't sure. And leadership? In too many places, they simply aren't aware it's happening at all.
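For schools auditing old posts, or preparing any image for upload, one small technical guardrail is to blur faces first. The sketch below is a rough illustration, assuming the `opencv-python` package is installed and using a placeholder filename; OpenCV's bundled Haar cascade is a dated detector that misses faces at angles, so treat it as a starting point that still needs manual review, not a safeguard.

```python
import cv2

# Load OpenCV's bundled frontal-face detector (dated, but ships with the library).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("class_photo.jpg")  # placeholder filename
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Heavily blur each detected face region in place.
for (x, y, w, h) in faces:
    image[y:y + h, x:x + w] = cv2.GaussianBlur(
        image[y:y + h, x:x + w], (51, 51), 0
    )

cv2.imwrite("class_photo_blurred.jpg", image)
print(f"Blurred {len(faces)} detected face(s); review manually before posting.")
```

It doesn't solve consent or retention, but it strips the most identifiable signal before an image ever reaches a platform.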
There is a fundamental difference between teaching students about artificial intelligence and feeding their personal information into the profit-hungry models built by social media Big Tech. We have blurred that line. And in that blur, some ethics have gone missing.

And then there is shadow AI. It thrives in the grey zones and slips past scrutiny under the banner of efficiency, labelled "just a tool." Teachers are not the problem. They are resourceful, under-supported professionals trying to do more with less. The problem is that they are being handed immense digital power with very few guardrails and no warning. They're using AI systems that were never designed for education. They were built by trillion-dollar companies whose business models rely on scale, surveillance, and perpetual extraction. These tools are opaque by design. You can't see what happens to the data once it's inside. You can't know how it will be used to train the next model, or where that model will show up next.

An educator uploads a student's photo to make an interactive learning card. Admin posts a photo to a Facebook page. It ends up embedded in a neural network owned by a multinational corporation. That data becomes a training point, a pattern, a pixel, a probability that no school can track or retract. This is not hypothetical; this is how machine learning works. And when that photo gets removed? The learning doesn't. That image may be gone from your screen, but it is now part of the machinery. A weight on a node, a statistical fingerprint that can echo through future outputs without your knowledge or consent.

Every school needs to pause and ask:

- Who controls the AI our staff are using?
- Who owns the platforms being accessed from classroom computers?
- What's being uploaded when a teacher is using something unregulated at home?
- Are there terms of service that guarantee non-retention?
- Do we have written, fully informed consent from the parents of every child whose face might be shown to these systems?

If you can't answer those questions, just a few of those I ask our clients when rebuilding frameworks, you're not operating safely.

Governance means having clear rules, oversight, and accountability. It's how we decide what is allowed, who is responsible, how decisions are made, and what happens when something goes wrong. In schools, governance around AI means putting in place real policies, not vague intentions: policies that control how these tools are used, what data is shared, who sees it, where it goes, and whether anyone has the right to say no. Good governance doesn't block progress. It guides it. It's not about banning tools; it's about using them wisely, ethically, and transparently.

It also means rethinking how you train staff. This isn't about getting better at writing prompts. This is about getting better at asking questions. Where did this model come from? Who trained it? On what data? What jurisdiction is it operating under? Does it respect the privacy rights of minors under Australian law? Under GDPR?

We don't let strangers film children in the playground. We don't let corporations access medical records to "personalise" learning. So why are we uploading student work, faces, names, and learning histories into systems we do not govern? Until the technology is built with education in mind, not as an afterthought but as a primary purpose, our job is to protect. To be clear-eyed. To lead with ethics, not novelty. Yes, AI has a place in the future of education. But it must earn that place. Through transparency.
Through respect. Through regulation that puts children first, not corporate growth metrics. Governance isn't a barrier to innovation. It's what keeps innovation human. We owe it to our schools to move past the breathless excitement and build systems that don't just work, but are worth trusting. Because without that trust, all we're doing is feeding a machine that was never built for us.

Yes, we can help: hello@ctrlshft.global
- The Hidden Fault Line and Why Digital Safety Will Make or Break Your Organisation
Systems full of pressure, distraction, and exhaustion. Systems threaded with constant connectivity, where the line between the digital and the real world hasn't just blurred, it's vanished. It's no longer enough to talk about staff wellbeing in terms of fruit bowls, Friday yoga, or colourful posters pinned to noticeboards. That's surface-level. And the surface is no longer where the harm is happening.

If your organisation hasn't embedded digital safety as part of its everyday culture, you're not truly looking after your people. You might be well-meaning. But good intentions aren't protection. Not in a world where harm can arrive through a screen, at night, alone, when no one else is watching.

Digital Harm Isn't an IT Problem. It's a People Problem. And That Makes It a Leadership Responsibility.

Yes, your cybersecurity team protects data. They keep the digital gates locked. But digital safety, the emotional, psychological, and cultural kind, lives elsewhere. It lives in the quiet spaces that don't get logged in the system. In the messages no one reports. In the screenshots staff save but don't share. In the feeling someone gets when they realise: I don't think anyone will believe me.

When leadership sees digital risk only through the lens of compliance or technology, we miss what really matters. We miss the teacher who left her role because of online abuse no one stepped in to name. We miss the child who goes quiet, not because they're shy, but because they're carrying something that happened in a group chat no adult knows about. We miss the mother lying awake at 2 am, overwhelmed and helpless, because her daughter is being targeted online and she's working double shifts and can't make it to the cybersafety session. We miss the young man being blackmailed through private messages. Who hasn't told a soul. Who might be sitting at the desk next to you. When you haven't built a culture of safety and trust, these stories stay in the shadows. And that's where the harm grows.

This Isn't About Technology. It's About Trust.

And trust, once broken, is hard to rebuild. Every organisation, whether it's a school, a business, or a public institution, needs to recognise that digital harm is now one of the key forces shaping culture. The emotional toll it takes isn't always loud, but it's relentless. Staff become withdrawn. Students disengage. Communities feel fractured. Not because they don't care. But because they're tired, scared, and quietly burning out. No one sets out to create unsafe systems. But if we're not talking about how online behaviour, online pressure, and online harm affect our people, we are part of the silence that lets it continue.

This Is a Leadership Blind Spot. And the Cost of Not Seeing It Is Mounting.

Policy is not protection unless it's alive in the day-to-day choices of your team. Unless it's known, lived, and trusted by the people it's meant to serve. When governance is real, when it's more than a document, more than a policy manual gathering dust, something powerful happens. People stop feeling alone. They start knowing: if something goes wrong, we will be believed, we will be supported, we will be safe.

We've seen what happens when digital safety is built into culture, not as a reaction, but as a foundation. In schools where staff are trained not just to respond to harm, but to recognise the signs before it spirals.
In small businesses that chose to go above the minimum, embedding best-practice systems because they understood that their people were their greatest risk and greatest strength. In leadership teams that said: We didn't see this early enough. But we refuse to keep looking away.

And That's When the Culture Starts to Shift.

Not because someone stood at the front of the room and gave a scary presentation. But because the system finally caught up to the lived experience of the people inside it. Because students found the words. Because staff felt less afraid. Because parents felt seen. Not because everything was fixed overnight, but because the first step was taken with courage, and with care.

We Are in a Moment That Demands That Kind of Leadership.

As AI reshapes how we work and learn, as algorithmic systems accelerate beyond human oversight, and as legislation struggles to keep pace with the velocity of online harm, we must update our safety frameworks too. If your policies were written for a different time, they won't hold under the pressures of today. And if your culture tolerates digital harm, by ignoring it, minimising it, or offloading it, you're teaching your people that safety isn't real here. That trust is conditional. That silence is safer than speaking up.

We don't want to do that. We don't want our students, our staff, our children, or our community to feel invisible in the systems meant to protect them. So now is the time to lead with both head and heart.
- The Silent Crisis of 2025: This Is What Bullying Looks Like Now
A disappearing name in a group chat. A message screen-captured and sent to twenty people before the original sender knew they'd been exiled. The weaponisation of AI against children by other children: deepfake nudes created not for public humiliation, but for control.

Bullying has changed. It is coordinated, sustained psychological harm: silent, collective, and built on the very tools we've handed them. The new bullying isn't loud. It doesn't need to be. Kids now have the tools to erase others without consequence. Fake "vibe" language gets used to silence others under the guise of wellness. Deepfake generators create nude images of classmates who are too confident, too visible, too inconvenient. These aren't isolated cases. They are patterns. And they're becoming predictable to those of us paying real attention.

Children keep folders of content on their phones like war chests. Screenshots, edits, and nudified images stored and ready to use. Not for laughs. For leverage. For punishment. For revenge, when someone steps out of line. It's surveillance culture in miniature. Learned. Replicated. Localised.

Everyone is behind. Policy. Schools. Tech platforms. Parents. Because cruelty now masquerades as empowerment. Kids say, "I'm just protecting my peace" while ghosting a peer into psychological isolation. They say, "I don't owe anyone access to me" as they eject someone from every group chat. They call it "curating energy" when it's actually planned exclusion. And because they're using language borrowed from adult influencers and wannabe therapists online, the harm is easily misread as maturity. But it isn't. It's mimicry without comprehension. It's the vocabulary of boundaries used in the service of bullying.

And we let it happen because there's no bruising. Because there's no screaming. Because there's no policy language for what a child looks like when they've been digitally erased. But those of us who sit with these kids, the counsellors, the psychologists, the educators who haven't yet gone numb, see it every day.

The damage doesn't stop at school gates. It walks into workplaces. It impacts performance. It warps how kids form adult relationships. It seeds a culture of digital cruelty that matures into corporate indifference.

Anti-bullying frameworks, policies, laws and education are dated. They were written for a different internet. The truth? Most schools and businesses are doing their best to adapt to the new normal. But "best" doesn't mean "equipped." They are dealing with an entirely new emotional ecosystem, one that changes faster than traditional PD can keep up with. Schools are often navigating bullying with policies written by people who don't understand how it has evolved, and schools are all too often the publishers of the very photos that are weaponised. Schools should not be expected to double as cyber forensics experts just to protect a child's wellbeing. But that's exactly where we're heading if schools don't bring in help that's current, credible, and grounded in the real digital lives of students.

The most dangerous myth we still tell about bullying is that it builds resilience. It doesn't. It forces adaptation. Kids don't toughen up. They shut down. They perform. They shape-shift to survive. And in a world that demands constant digital presence, those survival tactics don't stay temporary. They become identity. A student excluded online doesn't just feel left out; they question whether they matter.
A teen whose image is deepfaked doesn’t just feel exposed; they learn to hide. By the time they graduate, the silence has calcified. What begins as harm in Year 9 becomes silence in the staff meeting. That’s the long arc of unaddressed bullying. And we are watching it play out in real time.

One-off cyber safety education sessions are almost completely obsolete. They don’t work anymore. And yet some schools still cling to them: a box-ticking ritual dressed up as action. Somewhere along the line, real conversations about bullying got lost. Drowned out by the noise of digital chaos. Swallowed by the scramble to cover every emerging risk (predators, privacy, platforms) until the slow, personal cruelty playing out between students was forgotten.

We’re now seeing a divide. Some schools are done pretending. They’ve recognised that digital harm isn’t just about safety; it’s about identity, belonging, and what happens when kids are allowed to erase each other without consequence. These schools ask us for help. They’re rewriting policy. They’re not waiting for the next crisis. Others? Still running ten-year-old cyberbullying talks. Still treating the symptoms while the culture underneath rots. Still assuming parents don’t notice how thin the protection really is. We don’t work with schools because they’ve failed. We work with the ones ready to face what they’ve outgrown.

Social erasure. Deepfake abuse. Algorithmic exclusion. This is bullying now. And it doesn’t wait for staff training days or squeeze into a 45-minute assembly. The harm is constantly evolving. And if you’re not evolving with it, you’re not protecting students. You’re leaving them to figure it out alone, in silence.

And that is what is most dangerous about this moment: quietness. We have a generation learning that harm is something you should be able to handle alone. That if you can’t cope, you’re the problem. And when enough children internalise that? You don’t just get broken hearts. You get broken systems.

If your school, your organisation, your government isn’t already taking this seriously, you are already behind. Not because of ignorance, but because of pace. The question now isn’t “Should we act?” It’s “What’s the cost if we don’t?” And if that answer makes you uncomfortable, good. That’s how change begins.

If you’re ready to stop reacting and start leading, we’re here to help. We work with schools, businesses, and governments to rewrite the way bullying, digital safety, and wellbeing are understood and dealt with. Policy. Training. Strategy. Real solutions, not lip service. Because waiting until it’s too late is no longer an option. Contact us here.
- The Children of 764
Trigger warning: This post mentions multiple crimes and abuse towards children.

It started with a boy in a bedroom. Alone, online, and invisible to the adults around him. He liked Minecraft. He watched gore. Somewhere between pixels and unseen pain, a transformation occurred. His name was Bradley Cadenhead. He was a teenager living in Texas, and he became the architect of a Discord server called 764.

In 2024, the National Center for Missing and Exploited Children’s CyberTipline received over 1,300 reports linked to 764 and similar networks, a 200% increase in just one year. These aren’t numbers. They’re warnings. Not abstract risks, but coordinates of human lives breaking in real time. The children, the teenagers, the women drawn into this gravity well of psychological terror don’t make headlines. But they should.

764 was more than a server. It was a digital dungeon, a theatre of cruelty, a place where abuse wasn’t hidden but celebrated. The name itself was a nod to the ZIP code where Cadenhead lived. Local violence turned global. It began with sharing images, then escalated. Members of the group would lure vulnerable girls, and sometimes boys, into video chats. Then they would extort them. Cut yourself. Undress. Show us pain. Perform, or else. In one case, they told a girl to stab herself on a livestream. Another was pushed to provide names, personal information, even intel on her school.

What followed was not just digital exploitation. It was real-world terror. Bomb threats. Warnings of school shootings. Towns evacuated. Teachers under siege. All triggered by a group that operated from bedrooms and basements, wielding nothing but screens, VPNs, and a complete lack of empathy.

Discord, the platform where 764 was born, eventually flagged and reported the group in 2021. Cadenhead was arrested and sentenced to 80 years in prison in 2023. But the network didn’t die. It scattered. Splinter groups with names like 764 Inferno emerged, each one more sadistic than the last. They were not just sharing illegal content. They were coordinating abuse: active, live, and escalating.

In April 2025, two of the group’s new leaders were arrested: Leonidas Varagiannis, a 21-year-old U.S. citizen living in Greece, and Prasan Nepal, a 20-year-old in North Carolina. According to the Department of Justice, they ordered minors to harm themselves on camera. This wasn’t a few rogue individuals. This was a network with intent. Psychological warfare carried out in real time against children. And still, most adults know nothing about it.

We want to believe child predators live in shadows. That their depravity reveals itself in how they look or act in public. That we’d recognise it if it came near our families. But the reality is more banal, and more horrifying. The boys of 764 wore hoodies, not handcuffs. They played video games. They used the same platforms our kids use: Discord, TikTok, Twitch. They looked ordinary because they were.

What makes this worse, and far more complex, is how victimhood can twist. In Vernon, Connecticut, a local honour-roll student was manipulated into becoming an accomplice. She befriended one of the 764 members online. He convinced her to share explicit photos, then coerced her into handing over information about a teacher. That data was used to send threats of bombings and mass shootings. The digital and physical worlds collided. Fear pulsed through schools and neighbourhoods. Police initially believed she was behind the threats, and in a way, she was.
But they also saw her as a victim. Because she was. This is the psychological terrain we are now forced to navigate. Where young people are both targets and weapons, victims and enablers, abused and recruited.

And it’s not just in America. I have multiple reports of young teenagers in Australia displaying the same patterns, patterns their parents have reported to the police. Girls, mostly. Secretive chats with strangers online. Sudden disappearances. Running away. Self-harm. Refusing to believe it’s not love. Every textbook sign of coercive control, except the controller is behind a keyboard. There’s no “boyfriend”. Just an IP address. And behind it, someone who knows exactly how to make a teenager feel seen, wanted, and dependent, then destroy them piece by piece.

What these kids are going through is not melodrama. It is not a phase. It is abuse, scaled by algorithms, automated through platforms, and reinforced by silence. And if you’re a parent, an educator, or just an adult paying attention, here’s what you need to understand: the threat isn’t coming. It’s here. And we are dangerously underprepared.

What You Can Do

1. Don’t dismiss strange behaviour. Sudden secrecy, withdrawn behaviour, obsessive device use, erratic sleep, or unexplained injuries: these are not just growing pains. Don’t look away. Ask. Keep asking. With compassion, not interrogation.

2. Don’t tell them it isn’t love. Not yet. Not until you’ve listened. If you challenge the reality a teenager is clinging to, you don’t dismantle it. You drive them deeper into it. Ask questions. Let them speak. Then slowly, carefully, introduce the idea of what real care looks like and how coercion wears its mask.

3. Document everything. If a child confides in you, take notes. Screenshot messages. Save usernames. Record timelines. Do not assume the platform will preserve evidence. It won’t. Preserve the proof, then report it to both www.accce.gov.au and eSafety.gov.au.

4. Get off the moral high horse. You are not here to shame them for what they sent. You are here to keep them alive. Many victims say they stayed because they were more afraid of parental anger than of the abuser’s threats. Fix that.

5. Learn the apps. Don’t rely on headlines. Create your own Discord account. Watch TikTok lives. Ask your kids to show you how Snap Maps works. If you don’t know the terrain, you can’t help them navigate it.

6. Don’t go it alone. Reach out to experts. You don’t have to be a digital native to take action; you just have to be a present adult.

Researchers have already warned us. In 2022, a peer-reviewed study in JAMA Pediatrics found that exposure to violent or sexually exploitative content online is associated with increased risk of both victimisation and perpetration in adolescents (https://jamanetwork.com/journals/jamapediatrics/fullarticle/2789050). The study emphasised that platforms are not neutral environments. They are designed for engagement, and nothing engages faster than fear and sex.

TikTok knew. Discord knew. These companies sit on data that would make most of us weep. But their responsibility is diluted by shareholder interest. The safety of our children is not in their terms of service. It is in ours. The future will judge us not by how we innovated, but by what we tolerated. And right now, we are tolerating too much. Start talking.
- Duolingo’s CEO Just Came for Teachers... And Missed the Point Entirely.
In early May, Luis von Ahn, the founder and CEO of Duolingo, said something on the “No Priors” podcast that I have been stewing on for the past week. I have a lot to say about a lot of things, but this was so blinkered, so tech-bro confident in its disregard for human complexity, that it managed to offend me greatly on behalf of teachers, parents, and anyone who’s ever spent a day inside a real school.

“I’m not sure there’s anything computers can’t really teach you,” he declared. The only reason schools won’t disappear entirely, he argued later in the episode (at about the 23-minute mark), is that “you still need childcare.” And just like that, one of the most influential voices in educational technology flattened the role of a teacher to something between a babysitter and a UX hurdle. A nice-to-have, not a need-to-be. According to von Ahn, a future classroom looks like a room full of kids “Duolingo-ing”, supervised by adults who provide “emotional support” while the real learning happens on-screen.

If you’re a teacher, you already know what this gets wrong. If you’re a parent who’s ever watched your child blossom because of a great teacher, or wither in the absence of one, you know too.

AI can teach you to conjugate verbs. It can quiz you on the periodic table. It can tell you which maths concept you haven’t mastered based on keystroke data and error patterns. What it can’t do is smile at a child who feels invisible. It can’t detect the edge in a student’s voice that signals a brewing crisis. It can’t pause a lesson because the class is buzzing with tension after a fight at lunch and pivot the conversation to what it means to repair trust. AI can’t look at a roomful of adolescents and decide, in the moment, that the plan for today doesn’t matter as much as what’s going on in their lives. AI can’t love your kid. The best teachers, the ones who change lives, teach from love. What von Ahn is proposing isn’t education. It’s content delivery, and content delivery isn’t what school is for.

Duolingo’s rise has been meteoric. With over 116 million monthly users, the company has turned bite-sized learning into a cultural staple. But what works in a language acquisition app isn’t education. It’s gamified behavioural conditioning. Von Ahn boasts that Duolingo has run over 16,000 A/B tests to fine-tune motivation. That means the company has treated its user base as a live experiment, constantly tweaking and optimising every aspect of the experience to make it more addictive, more “efficient”, and more aligned with what keeps people coming back. It’s behavioural science applied at scale, not to deepen learning but to maximise stickiness. And that’s the difference. A/B testing optimises for what works on a screen, not what works in a life. It can tell you when you’re most likely to complete a lesson and how to keep you hooked. But none of this qualifies as a philosophy of learning. It’s a strategy for retention. What Duolingo teaches is compliance with a system. Not critical thinking. Not reasoning. Not how to sit with another person’s grief, or recognise coercion, or navigate ambiguity, or lead.

This isn’t the future. It’s a fantasy fuelled by money and the illusion of objectivity. Because no matter how much data you harvest, learning isn’t linear. Children aren’t machines. You don’t raise a good human with personalised reminders and dopamine hits. You don’t cultivate moral courage through perfect spacing algorithms.
You don’t teach young people how to think by optimising their path to an answer. The best teachers don’t just answer questions. They provoke them. They teach children to live in questions, to sit with discomfort, to grapple with contradictions. AI can’t do that. Because it wasn’t designed to. Because it doesn’t understand stakes. Because it doesn’t have skin in the game.

Schools are not mere instruments of knowledge transfer. They are where kids learn how to be. How to stand up. How to belong. How to disagree without dehumanising. How to lose and recover. How to speak with purpose. These are not extras; they are the core of democratic life. Teaching is not obsolete. It is irreplaceable. The minute we allow tech to define education as an efficiency problem, we have already lost something precious: the understanding that school is sacred not because it teaches facts, but because it teaches people.

Let von Ahn have his scalable owl that fakes its own death. I’ll take the teacher who sees my child. Who refuses to standardise their spirit. Who stands, not because it’s efficient, but because it matters.