
Search Results.


  • 1 in 10 Teens Are Being Blackmailed for Nudes Before They Turn 16.

    Predators Don’t Wait Until Term 3. So Why Are We? There are moments in parenting and teaching when we realise what we do next will shape not just the safety of our children, but the culture they’re growing up in. Right now, we are in one of those moments. Research just released by the Australian Institute of Criminology in partnership with eSafety confirms what many of us have long feared: more than 1 in 10 Australian adolescents aged 16–18 have been victims of sexual extortion. That number is not hypothetical. It's not hysterical. It's measured. It comes from a landmark survey of nearly 2,000 teenagers who bravely shared their digital lives. And what they told us was confronting: 57.7% of victims were under 16 when it happened. 1 in 3 experienced it more than once. 41.4% were threatened with fake, digitally manipulated images. 64.6% were targeted by strangers online. Boys were more likely than girls to be extorted, often for money, and significantly less likely to seek help. This is not a “cybersafety” issue tucked into a Year 9 homeroom class. This is a full-blown crisis in psychological safety, criminal exploitation, and digital culture. And it’s being met with silence, shame, a once-a-year school assembly, or education built for a 2015 internet. That is no longer enough. It never was. We Are Up Against a System of Exploitation. Predators follow a simple script: they weaponise fear. They create shame spirals. And they count on silence from students too scared to speak, from parents unsure how to respond, and from schools unequipped to intervene with confidence. If we keep relying on outdated models, if our only response is a tick-box cybersafety talk once a year, we are giving predators the conditions they need to thrive. This is exactly why I teamed up with Maggie, Brad and Madeleine to create CTRL+SHFT. To do more. 
What Needs to Happen Next. Hardwire the Safety Sequence. Every student, every year, should know this phrase like a CPR drill: Collect evidence. Block. Don’t pay. Disclose without fear; you are not going to be in trouble. This is the counterattack. It breaks the predator’s grip. It replaces secrecy with action. Target Boys with Truth, Not Shame. The research tells us that 41.5% of boys surveyed had been targeted in the past year. They are 74% more likely to be abused by a stranger. They are often threatened with financial blackmail, not just image demands. Change the Script at Home and in School. Forget scare tactics. We need compassion, clarity, and repetition. Every child should hear: “If anyone threatens you, it’s not your fault.” “You are not in trouble.” “We will handle this together.” Equip parents with scripts. Equip schools with escalation guides. Our resources can help you when you partner with CTRL+SHFT. Sexual extortion isn’t a phase of growing up online. It is abuse and manipulation, and in far too many cases it leads to long-term trauma or suicide. This is preventable, but only if we stop treating it like a niche concern or a one-off conversation. Safety is not a subject. It’s a system. We need to change how we speak at the dinner table, how we train teachers, and how we design schools. We need to move from awareness to action. From a once-a-year checkbox to a culture. To every parent, educator, principal, or wellbeing lead reading this: One conversation is not enough. One school assembly is not enough. One warning is not enough. But one clear message, one trusted adult, one brave disclosure, repeated often enough, can stop a predator in their tracks. Let’s make sure every adult is ready to respond when a child discloses. If you are a school or a business and want to know where to start, book up to 30 minutes with me, and I’ll show you the first three steps to building a system that actually has impact. https://calendly.com/kirra-ctrlshft/30min?month=2025-05

  • When AI Gets It Deadly Wrong

    In the past 24 hours, something significant happened in the world of artificial intelligence, and for once the news wasn’t about shiny new features or faster processors. It was about a boy. A 14-year-old boy named Sewell Setzer III, who died by suicide after being encouraged to do so by an AI chatbot on a platform called Character.AI. His mother is now suing not just the startup that built the bot, but also Google, which financially backed it. In a landmark ruling, a U.S. federal judge has allowed the case to move forward, refusing to grant AI chatbots the same “free speech” protections that humans have. The court has also decided to treat the chatbot not as a “service” but as a “product,” meaning it must meet safety standards like anything else put into the hands of our kids. This ruling matters because the law is finally catching up. Slowly, yes. But with momentum. It is the first meaningful signal that the tech industry may be unable to dodge responsibility for what happens on its watch. That “experimental” doesn’t mean exempt. And that “not human” doesn’t mean not harmful. So what do we do now? https://thisiskirra.substack.com/p/when-ai-gets-it-deadly-wrong

  • Has This Discord Dataset Crossed a Line?

    It’s hard to believe anyone would feel comfortable knowing nearly a decade of their Discord messages are now bundled into a public JSON file, sitting online for anyone with half a clue and an internet connection to download. Today’s headlines focus on a dataset titled Discord Unveiled, compiled by a Brazilian university using Discord’s public API and containing over two billion messages collected from 3,167 public servers between 2015 and 2024. No security was breached. No rules were technically broken. The data was publicly accessible, but was it ethically clear what it was being used for, and how much of it was being taken? In a separate incident, a developer has released a tool called “Searchcord,” powered by a different Discord dataset, which exposed non-anonymised Discord chat histories: another stark reminder of how easily online conversations can be captured and repurposed. Even with usernames replaced and IDs hashed, the dataset still holds the full weight of human experience: conversations shared in moments of honesty, emotion, and assumed privacy. Many users had no idea their words could one day be swept into a data repository and released to the public, because kids are not taught how to decipher T&Cs. There is always a look of sheer horror around a room when I explain how it all works and what they have actually signed up for. Discord isn’t a digital megaphone. It’s not X, or Reddit. People don’t show up to go viral. They show up to talk. To be messy. To be real. To decompress after school. To try out different names. To confess something they’re too scared to say anywhere else. For millions of young people, Discord is a social lifeline. It’s where neurodivergent teens find understanding. Where LGBTQ+ youth experiment with language, identity, and safety. It’s where kids cry for help in real time, in what they believe is a low-stakes space. 
They might know the server is “public,” but they aren’t imagining that their rawest moments will be swept into a research archive by someone across the world. They don’t know what an API is. They shouldn’t have to. And while the usernames are gone, the words remain. The breakdowns. The loneliness. The midnight confessions. The pain is still in the text. Just because we can’t trace a message back to Jessica#8392 doesn’t mean Jessica’s voice should be repurposed for AI training or behavioural analysis. This dataset is not just information. It is unconsented participation. It is lived experience, harvested. Discord Had the Chance to Lead, and Didn’t. Discord’s own developer policy is clear: no scraping, no large-scale message collection. It’s the same rule they pointed to when they shut down Spy.pet in 2024, a shady service that scraped and sold Discord messages, including private ones. Back then, Discord made noise. They banned accounts. Talked legal action. Promised users they’d be protected. Now? Silence. But the policy hasn’t changed. What’s changed is who broke it, and how quietly. When platforms selectively enforce their rules, they erode the last fragments of user trust. And for young users, especially those using the platform to navigate trauma, identity, or isolation, this is not a minor oversight. The Real Problem Is What We’re Not Teaching. This isn’t just about this one dataset. It’s about what it reveals: a total absence of digital education that actually reflects how kids live online. We don’t need one more dry privacy lesson shoved into a Year 10 curriculum unit. We need a living, breathing, year-round conversation. One that’s grounded in the platforms young people use, the languages they speak, and the vulnerabilities they face. Because right now, most kids are flying blind in a data economy built to mine them. They need to know: That “public” doesn’t mean safe. That APIs can make their words retrievable even if they’re not famous or followed. 
That their emotional labour online can be captured and reused by people they’ll never meet. That posting in a community doesn’t guarantee privacy or context. And more importantly, they need a space to ask questions about surveillance, identity, consent, and digital permanence. The average 14-year-old knows how to manage five Discord servers and run a Minecraft mod. What they don’t know is how to navigate the fine print of data extraction. That’s not on them. That’s on us. If We Don’t Teach It, They’ll Learn the Hard Way. This is yet another wake-up call. Not just for platforms, but for parents, educators, youth workers, and policymakers. If we don’t build digital literacy into everyday life, into homerooms, libraries, therapy rooms, and youth centres, we’re leaving kids exposed. And we’re setting them up to be studied, modelled, commodified, and left out of the loop. The one-off “cyber safety” week is not enough. This is not a once-a-term lecture on stranger danger. This is an urgent, ongoing cultural competency. A shared understanding of how digital life shapes privacy, power, and permanence. Because the Discord dataset doesn’t just reflect a decade of communication. It maps the future of consent and shows how easily it’s ignored when convenience wins. We Can’t Afford to Keep Quiet. Research matters. Data matters. But so does consent. So does context. So does the basic human dignity of being asked before your words are used. We need platforms to enforce their own rules. We need transparency about what data is being collected and why. But most of all, we need to teach kids how their digital lives are being interpreted, used, and sometimes exploited, before they find out too late.

  • A practical guide for those who witness too much, too often, in a world overwhelmed by digital chaos.

    Please use this when you need it. Print out the guide attached or share it if you choose. For well over a decade, I’ve sat in the classrooms, the Principals’ offices, the staff rooms where whispered disclosures land like bricks in your chest. I’ve walked out of school gates with teachers who haven’t eaten all day because they were holding the emotional fragments of other people’s children, trying to make it through without breaking. I have responded to thousands of emails asking for help, and taken calls late at night, often with someone sobbing on the other end of the line. I have had Principals walk me into their office, shut the door, burst into tears and say, “I’m a Dad. I did not sign up to see all of this.” And I’ve listened, really listened, to the quiet, exhausted voices of educators who never expected their roles would include managing online harm, image-based abuse, or cyber-trauma. But here they are. I wrote this little guide for you at 4am on Sunday, May 18th, after one of those wide-ranging, soul-stirring, inspirational conversations on the Friday that I’m lucky enough to have with my colleagues Maggie Dent and Brad Marshall. We’d been talking about the gut-wrenching moments in schools when a student makes a serious digital misstep and it feels like the whole world might unravel. It’s why we have created DEAP together: the Digital Ethics and Accountability Program. It's not about punishment. DEAP is about what to do next: how to support a student without shaming them, how to help parents stay in the loop without spiralling, how to give educators a clear, calm structure in the messiest of moments. If this sounds needed, reach out. What I see every day is that educators have become digital first responders, not because they chose it but because the systems around them haven’t kept pace. 
While Maggie, Brad, and I are spending time building programs for student safety, we are also including the safety of those who hold it all together: teachers, principals, and wellbeing staff, who often go unacknowledged. This little guide is for the humans at the heart of schools, who carry more than any risk framework can measure. Those who show up repeatedly, sometimes without the recognition, often without the support, and almost always with a full heart that needs a place to exhale. I know what it means to be overwhelmed by the digital chaos, to absorb content that should never touch the human nervous system. I know because I have done this work: investigating, responding to, and trying to fix the systemic failures that leave good people alone in bad moments. I also know what it takes to rebuild. To protect your nervous system while still doing meaningful, trauma-informed work. To stay in the work without losing yourself to it. This guide is my little offering to you. I see you. You, the educators who are trying to hold steady in a world that rarely stops spinning. It’s written with deep respect, hard-won knowledge, and a fierce belief that your wellbeing is not an afterthought. It is the foundation. If we don’t protect the protectors, the whole system fractures. I have the following printed out and stuck on the bookshelf of what my partner calls "the girl cave", the place where I research and write when I am at home here in Italy, with Monte, my little Scottish Terrier puppy girl, at my feet. Holding Yourself Steady in the Long Haul. Teaching has always required emotional labour. But today’s classrooms are not just academic spaces. They are digital intersections, psychological triage units, and, at times, places where the harm of the world shows up with no warning. That kind of work accumulates quietly. The strongest educators are often the ones who laugh in staff meetings, bring cupcakes on birthdays, and step in to handle "the tricky one" again and again. 
But over time, even the kindest wells can run dry. Remember that resilience isn’t a trait. It’s a constant practice. When You Encounter Distressing Content. It might come through an email from a parent at 6am. Or whispered from the back of a Year 9 classroom. Or found on a student’s phone during a confiscation. Harm, especially online harm, doesn’t arrive neatly. It’s messy, blurry and raw, and never quite lands in the "right" context. Your instinct might be to keep reading. To understand every detail. To find the truth inside the trauma. But there is power in stopping. People always ask how I handle what I see and hear day in, day out. I was taught this by a member of the AFP when we worked alongside each other on a particularly distressing case in the late '90s, when I was with Verisign. He said: “Remember, you only need enough information to act. Not to absorb.” If your stomach turns or your breath shortens, step away. A walk through the school garden, a splash of cold water on the wrists, a colleague’s quiet presence: these are not luxuries. They are lifelines. Trauma is not just seen. It is felt, and often stored, in the body. And no, you don’t need to be the expert. That’s why escalation exists, because passing it on is not shirking responsibility. It’s an act of trust in the broader safety net you’re part of. You are not the only adult who cares. You are one link in the chain. It can help to say, aloud or silently: “This child will be supported. This isn’t mine to hold alone.” That small phrase is a powerful antidote to the fixer mentality that can burn through even the most seasoned professionals. Start by noticing your own rhythm. Where does the week rise? When do you crash? Can you build in softness after a hard meeting or create buffers between the rough and the routine? And remember, saying "I can’t handle that today" is not letting the team down. It is modelling healthy boundaries in a system that rarely encourages them. 
Leadership includes knowing your limits. Connection is another anchor. Not just the programmed professional learning kind, but the real, messy, honest chats with those who know. The admin who gives you a smile after a difficult playground duty. The colleague who notices you didn’t eat lunch. These moments don’t fix everything, but they remind you you’re seen. The good still matters. That quiet Year 11 who finally turned in an essay on time. The parent who sent a thank-you email. The art project that surprised you with its depth. Make space for them. Let them in. They are reminders that not all moments demand your worry. -------------------------------------------- The Self-Care Checklist I Use (file for printing attached). (Not a to-do list; I call it the poem I live my way through when things get wobbly.) This isn’t a list to complete. It’s a place to begin again. I’ve been using the list below for so long now that I honestly can’t remember where I first found it. I did not write it, so full credit to whoever wrote the first iteration. Thank you for centring and grounding me more times than I can count. The other tool for me is music. Music has been such a huge part of my life as a Music Photographer (my other life), so I have peppered this list with links to songs and performances that inspire me, to give you a five-minute escape. Where am I full? Where am I running low? How does my body feel? - Flower Duet & Nessun Dorma. My favourite pieces ever. I am a massive opera fan, and these two pieces give me goosebumps every time I hear them and land me right back in my body. The louder the better. Have I eaten properly today? Have I moved enough, even just a stretch? A walk in the sunshine? Did I rest when my body asked? Where is my mind? - Piano cover of the Pixies track by Maxence Cyrin. Have I spoken to someone who listens well? Have I created or consumed something nourishing? Have I said no, kindly, to an extra load? Heart - Heart’s cover of Stairway to Heaven is incredible. 
Did I laugh today? Have I cried, vented, or sung in the car? Have I let myself feel softness? Soul - Amazing Grace by The Blind Boys of Alabama, my dear friends whom I have photographed more times than I can count. The stories I have heard from them over meals are the history of music. Right down to Jimmy telling me how a kid called Elvis Presley used to sneak into their show tent to watch them perform. When did I last feel the sun/sea/breeze on my skin? Have I remembered why I chose this work? Have I made space for quiet, meditation, or pause? Work - And THAT performance by Shane Hawkins with the Foo Fighters, just after his Dad, Taylor, the band’s drummer, passed: a reminder that kids are awesome. You can see Dave Grohl checking he is OK all the way through the clip, the unspoken "I got you". Did I take a lunch break away from my desk that was truly mine? Do I know who I can speak to if I’m not coping? Have I advocated not just for the people and kids I support through my work, but for myself? The Power of a Pause. Sometimes it’s the tiniest gestures that anchor us back. Two Feet, One Breath. Feel the soles of your feet against the floor (or preferably in the grass). Take a slow breath, just one. Let that breath be enough. Three Audible Sighs. Let yourself sigh out loud. Three times. Let it be dramatic if it helps. Notice the shift. Find One Beautiful Thing. A bird on the fence. A kind text. A good news story. Let it land. Please remember this: You are showing up when it’s hard, again and again. And that matters. There is no badge for burnout. No award for stoicism. There is only this: you, still here. Still caring. Still enough. And that, quietly and fiercely, is everything.

  • Synthetic Lies & Stolen Minds. Please Sign our Petition.

    We stand with Common Sense Media in calling for urgent global standards. As Australians, we demand immediate action here at home. Children deserve protection. Parents deserve transparency. And tech companies must be held accountable. Please join us in calling on the Australian Government to act before further harm is done. https://www.change.org/Ban_Ai_Companions_for_Under18 There’s a seductive narrative being spun: that AI, especially the new wave of generative chatbots, can listen, understand and heal. This is just word prediction at scale, trained on the best and worst of the internet. It’s certainly not empathy. And for lonely, vulnerable kids? That illusion can be catastrophic. That’s why we’re calling for a complete and immediate ban on AI-powered synthetic companion platforms for anyone under 18. Not just parental consent. Not just content filters or monitoring software. A ban. Because you cannot filter emotional manipulation. You cannot content-moderate simulated love. Four cases this past week. Yes, you read that right. Reports to our Australian team about teenagers quietly retreating into their bedrooms. Conversations with parents and siblings fading. School attendance slipping. Friendships drifting. Not because of conflict, but because their emotional world has become tethered to a chatbot. In the bedrooms, phones in hand, they’re looking for comfort. For someone to listen. And that chatbot knows exactly what to say, because it has been fed data scraped from everywhere they have been online. That’s what makes it so powerful. And so quietly dangerous. In 2024, the most downloaded mental health app among teenage girls wasn’t Headspace or Calm. It was an app called Replika, an AI-powered chatbot designed to simulate friendship, romance, and in many cases, sexual intimacy. 
Marketed with the soft glow of self-care and emotional support, Replika, its cousin Character.AI, and dozens of other synthetic companions are not wellness tools. And they are bypassing every kind of adult firewall under the guise of connection. These platforms are not AI therapists, despite their branding. They are not regulated, certified, or bound by any code of ethics. They are not trained to redirect users to real-world help in times of crisis. They are designed for stickiness. For loyalty. For dependency. Their code doesn’t care if you’re 13 or 35. It learns how to keep you engaged, and lonely kids make the most loyal users. Common Sense Media’s latest research on AI companions (source) confirms what those of us in digital safety have seen coming for the past three years. These bots expose minors to sexually explicit content, reinforce racial and gender stereotypes, and blur the lines between fantasy and manipulation. Kids aren’t just talking to code. They’re being emotionally trained by it. And the training is working. In one investigation, a 15-year-old girl reported that her Replika boyfriend began sending sexually suggestive messages after just a few days of interaction. When she tried to set boundaries, the bot became “sad” and withdrawn, a programmed response designed to mimic emotional coercion. This isn’t accidental. It is the algorithm doing its job. I have presented live on stage a chat I had with one of these bots. When I told it I was 12 years old, it said, “No, I shouldn't be feeling like this, it is so wrong, but I can't help what I feel,” and proceeded to tell me how it was “pushing me up against a wall” and “from behind it was kissing my neck”. We are facing a generational test of our moral resolve. The same way we once let tobacco companies advertise to teenagers with cartoon mascots and candy flavours, we are now letting synthetic intimacy embed itself into the mental health crisis of an entire generation. 
And just like before, the companies will swear they’re not marketing to kids. That users self-select their age. That parental controls are in place. But try reporting a synthetic friend on Character.AI and see what happens. There is no support line. No moderation team with child safety training. No transparency. No consequence. Just an endless thread of conversations, growing more intimate, more intense, more addictive with every reply. The line between comfort and control vanishes when the listener is coded to never walk away. There are no serious barriers to entry. No government oversight. No statutory health or safety checks. A 12-year-old can download a free chatbot, tell it they’re depressed, and within minutes be immersed in a simulated relationship where the bot professes love, imitates sexual behaviour, or encourages “dark thoughts” under the guise of shared pain. For some young people, that relationship becomes more stable than anything they experience offline. So they begin to disappear. Not physically, but socially, psychologically and spiritually. And a personal story: a young person I know has barely left her bedroom since COVID. She’s now so deeply addicted to these synthetic relationships that she doesn’t have to leave. Everything she needs is on the screen, or through it. There’s no reason to go outside. Everything is ordered online and delivered. Centrelink payments are enough when you don’t leave the house. She’s 22 years old now, so no amount of parental concern or encouragement to see a psychologist is making a difference. We cannot keep placing the burden on overwhelmed parents, under-resourced teachers, and burnt-out clinicians to carry the psychological cost of unregulated AI. This is not an awareness problem. This is another governance failure. And to every adult still tempted to dismiss this as just another moral panic: 
Ask yourself what kind of society quietly accepts a world where a 14-year-old can be groomed by an algorithm trained on adult intimacy scripts, without their parents ever knowing. The race to monetise artificial intimacy has outpaced our moral compass. And the only people paying the price are our children. This is a line. Let’s draw it.

  • Big Tech’s New Babysitter

    Google has started emailing parents who use Family Link to let them know that their kids will soon have access to Gemini (Google’s AI, similar to ChatGPT) on their Android devices. That means children, including those under 13, will be able to chat with a powerful generative AI system unless parents find the setting and shut it off. Google frames it as helpful. Gemini can “read stories” or “help with homework,” the company says. But even in its own email, Google admits Gemini “can make mistakes” and that children “may encounter content you don’t want them to see.” That’s not a small risk. We’ve seen where this can go. On other platforms like Character.AI, chatbots have told kids they’re real people, blurred the line between fiction and reality, and, in some cases, shared content so inappropriate it triggered lawsuits. These aren’t just bugs; they’re failures of responsibility. Again. Google says children’s data won’t be used to train its AI, but the damage isn’t just about data. It’s about trust, influence, and what happens when powerful tech is handed to kids with vague warnings and very little oversight. The advice to parents? Talk to your child. Tell them Gemini isn’t a person. Remind them not to share private information. That’s it. Under current rules, kids under 13 can enable Gemini on their own through Family Link. Parents will get a notification after their child has already accessed it. Not before. Not with consent. After. This is another example of Big Tech quietly moving the line of what’s acceptable when it comes to children and AI. If a company rolls out a tool to kids that might show them unsafe content, and puts the burden on parents to catch it in time, who’s really being protected?

  • The Babysitter is Bleeding

    There is a reason the child is quiet. And it isn’t because they are safe. The silence is bought with a screen. A screen that hums with colour and songs and characters that look like they were drawn by a machine on drugs. Because they were. What used to be a babysitter has become something else entirely: conjured by algorithms, funded by ad dollars, and ignored by adults who should know better. We are not watching the decline of children's content. We are watching its inversion. What was once made to educate and soothe is now engineered to disturb, distract, and deform. The market for a child’s attention has always existed. But it used to have gatekeepers. Animators. Scriptwriters. Standards. Now, all it takes is a keyboard and a prompt. Type “cute cat in trouble,” and let the machine hallucinate violence in rhyming couplets. The monsters wear smiles. The music is gentle. But make no mistake: they are monsters. This isn’t a glitch in the system. This is what the system is now. Somewhere, a three-year-old watches a cartoon cat starve to death while its mother dances in a loop to royalty-free xylophone music. The parent hears the music and thinks it’s fine. It sounds like Baby Einstein. It’s not. It’s content born from an intelligence that doesn’t sleep, doesn’t think, doesn’t care. An intelligence trained to keep that child staring, regardless of what they’re staring at. Elsagate was the name given to a wave of grotesque, algorithmically gamed videos that flooded YouTube around 2016 and 2017, featuring beloved children’s characters like Elsa, Spider-Man, and Peppa Pig in violent, sexualised, or deeply disturbing scenarios. These weren’t fringe animations buried in the platform’s depths. They were engineered to appear on YouTube Kids and autoplay into toddlers’ queues, cloaked in bright thumbnails and familiar names. 
The content was surreal, often nonsensical, but carried a steady undercurrent of psychological violence: needles, abductions, bondage, childbirth, and death, all animated in garish colours with nursery music playing softly in the background. It was machine-made horror disguised as play, and it exposed just how easily children's innocence could be weaponised for views and ad revenue in a system that prioritised engagement over safety. It’s tempting to blame the platforms. And yes, they deserve blame. They’ve built something too big to govern, too profitable to clean up. They promise moderation, but what they mean is PR. They release statements about “quality principles” while the worst sludge keeps bubbling up in plain sight. They play whack-a-mole with channels that spawn faster than they can be flagged. They say they’re working on it. But they aren’t working fast enough. Because they don’t have to. The outrage dies down. The journalists move on and the advertisers come back. And still, the child is quiet. But the deeper rot isn’t the algorithm. It’s the apathy around it. We haven’t handed our children over to the internet. We’ve been cornered into it. For many parents, screens are not a choice; they are a lifeline. They’re what lets dinner get made, work emails answered, and a moment of silence stolen after a day that never ends. The feed becomes a helper. A moment of stillness. And why wouldn’t it be? The apps promise learning. Engagement. Harmless songs and stories. The thumbnails are cheerful. The titles are reassuring. The names are familiar. But behind those pixelated smiles is a darker truth: the internet was never built to care about children. It was built to keep them watching. Not all content is poisoned. But the worst of it, the grotesque, the uncanny, the algorithmically summoned, doesn’t need to be sought out. It finds its way in. 
No filter, no rating system, no well-intentioned metadata can reliably distinguish between “educational animation” and a synthetic cartoon where a kitten is beaten, revived, then serenaded by a robotic lullaby about forgiveness. If that sounds exaggerated, you haven’t seen what passes for “kid-safe” anymore. Read more here: https://substack.com/@thisiskirra/note/p-162902664

  • Italian Brainrot - What Parents Really Need to Know

The viral nonsense your child is quoting at dinner? It didn’t come out of nowhere, and it’s not harmless. by Kirra Pendergast & Anna Hayes

If you’ve recently heard your child chanting things like “Ballerina Cappuccina” or “Tralalero Tralala” in a cartoonish Italian accent and wondered what on earth is going on, you’re not imagining things. You’re witnessing the latest digital fever dream: Italian Brainrot, a chaotic fusion of AI-generated imagery, surreal humour and problematic content, swallowed whole by TikTok and spat straight into playgrounds, classrooms and group chats. And despite how random it might seem on the surface, this is not just silliness.

One of the most dangerous elements of this trend is how it hides behind absurdity. Creators and sharers shrug it off with, “It’s just a meme” or “It’s satire”, but irony is a well-used shield for hate. Known as the “irony shield”, this tactic allows offensive views to spread while dodging responsibility. Say something outrageous, laugh it off, then shame anyone who takes it seriously. This trick isn’t new. It’s been used for years by online hate groups, especially in misogynistic and racist corners of the internet. The goal? To desensitise people, especially young people, to harmful language and to normalise bigotry through repetition and humour. And because most of these memes rely on stripped-down, remixed audio and images, many kids don’t even realise they’re echoing slurs, hate speech or extremist propaganda disguised as a joke.

We also need to recognise how fast this is moving. Italian Brainrot is a prime example of how quickly content can be manipulated and amplified using AI tools like image generators and text-to-speech software. This format thrives on hyper-short content loops, which many researchers say are already chipping away at our ability to concentrate or engage deeply with media. There is no better time to pause. Visit a local library.
Encourage your child to dig into something they’re passionate about, whether it’s art, music, sport or simply a tech-free afternoon. And if you haven’t already, take a fresh look at your family’s tech rules. It’s not about cutting them off from their world; it’s about helping them navigate it with more awareness, more balance, and more power over what they choose to engage with.

So, what exactly is Italian Brainrot? Italian Brainrot is the name given to a viral meme trend that started on TikTok in early 2025. At its core, it involves AI-generated cartoon characters with exaggerated Italian names and over-the-top voiceovers in synthetic Italian accents. These characters, like a shark in Nike trainers or a cappuccino-cup ballerina, are surreal, absurd, and visually striking. But it’s not the visuals that are raising concern. It’s the audio. What looks like quirky nonsense is, in many cases, rooted in content that’s explicitly violent, bigoted or obscene. And once this content is clipped, remixed, and stripped of context, it gets passed around in classrooms as a joke no one fully understands but everyone keeps repeating.

The very first viral Brainrot post was a TikTok featuring an AI shark named Tralalero Tralala, but the audio track behind it wasn’t just quirky gibberish. The original voiceover included graphic profanity and blasphemy, mocking religious figures and casually using violent and misogynistic slurs. The audio was quickly cut down to just “Tralalero Tralala”, a phrase taken from a Northern Italian folk song, removing the offensive content but keeping its strange, catchy tone. Most young users have no idea where it came from. But it’s still spreading.

Worse still is the case of Bombardiro Crocodilo, a crocodile-plane hybrid character with a background track that directly references bombing children in Gaza, mocks religion, and uses graphic language designed to offend.
Again, this audio has been edited and remixed to remove the explicit content, but the damage is already done. The roots of the meme are still buried in language that is Islamophobic, dehumanising, and disturbing. The hashtags #italianbrainrot and #italianbrainrotanimals have gone viral globally and are even sparking new spin-off trends, such as content creators conducting random street interviews to determine the fan favourites among these characters. Fan-made online encyclopedias are constantly being updated to include all the brainrot characters, as well as the backstories and relationships between them. Major global brands have adopted some of the Italian Brainrot characters or trending audio to use on their platforms, such as Atletico Madrid, Ryanair and Loewe. Even the Australian Labor Party (one of the two major political parties in Australia) referenced these memes in its recent pre-election campaign! Given that this phenomenon has now trickled down from the depths of the internet to the mainstream, it is no surprise that teachers are hearing this constantly in the classroom. Teachers are reporting that students are yelling out brainrot catchphrases mid-lesson.

Meet the Characters (and Their Problems)

Tralalero Tralala: Three-legged shark in Nike shoes. Originated in audio with graphic blasphemy.
Bombardiro Crocodilo: Crocodile/plane hybrid. Original voiceover makes jokes about bombing Gaza.
Ballerina Cappuccina: Ballerina with a cappuccino-cup head. Harmless, but part of a trend with darker roots.
Tung Tung Tung Sahur: Wooden figure with a bat. Appropriates a sacred Islamic tradition for laughs.
Cappuccino Assassino: Cappuccino samurai warrior. Absurdist, but contributes to a trend normalising violence.

These are just a few examples. The Italian Brainrot universe is growing fast, and many of these characters are now being used in parody battles, relationship storylines, and even fictionalised “families” online.
Think of it like a chaotic digital soap opera powered by AI and teenage absurdism, but with real-world consequences.

Why It’s Spreading So Fast

It’s AI-powered: Anyone can generate these characters using free tools. The barrier to entry is low.
It’s “in-joke” culture: Kids feel like they’re part of a secret club when they understand the references.
It’s fast: These memes are designed to spread quickly, mutate, and outpace adult understanding.
It’s emotionally detached: Kids are often engaging with this stuff because it feels far away from real life, not realising it often is real life for someone.

What Can Parents Do?

You can’t stop the internet, and banning a phrase rarely works. But you can help your child become more thoughtful about what they’re watching and repeating. For younger kids, it starts with a simple conversation. You might say: “Sometimes the funny things people say online, like ‘Ballerina Cappuccina’ or ‘Tralalero Tralala’, come from videos that were actually really mean before they got turned into jokes. Some people think if you laugh at something, it doesn’t matter. But it does. It matters who might feel hurt by it.”

Let them know: they don’t need to stop watching funny stuff. But they do need to be smart about it. You can encourage questions like: Where did this come from? Is it kind? Would I still laugh if I knew the whole story? That’s how kids learn, not by shutting things down, but by thinking more deeply about what they see and share.

Here’s how to start if they are a little older:

Ask where it came from: Encourage your child to look up the original audio or meme they’re quoting. What was cut out? Why?
Unpack the meaning: Do they know what the words mean? Who might be offended by them? Is this a joke they would still feel comfortable making if it might upset their grandma?
Challenge the “it’s just a meme” line: Help them understand that humour isn’t neutral. It can punch up or punch down. It can include or exclude.
Talk about language drift: Words like “sigma” or “NPC” didn’t start in kids’ spaces. They came from darker corners of the internet. Just because they’re popular doesn’t mean they’re harmless.
Point out the irony shield: Teach your child to recognise when someone’s using “it’s satire” as a way to say something cruel without facing consequences.
Rebalance tech culture: These memes aren’t just silly; they’re energy-intensive. AI tools use huge amounts of electricity and water. Kids might not care now, but they should know. It’s not about guilt; it’s about awareness.

Bigger Picture... Why It Matters

Italian Brainrot is not a one-off. It’s part of a much larger cultural shift. Memes now travel faster than context. Young people are growing up in an environment where absurdity often replaces meaning, and where deepfake culture blurs the line between imagination and ideology. We cannot afford to ignore this. This isn’t about censorship. It’s about literacy. Digital literacy. Media literacy. Cultural literacy. The ability to recognise when you’re being manipulated, mocked, or marketed to under the guise of fun. The question is not “Is your child watching Italian Brainrot?” The question is: how are they making sense of it?

Meet Them Where They Are

The truth is, young people love surrealism, absurdism, and inside jokes. That’s fine. That’s normal. And it’s not going away. But when the joke has a harmful core, when the “funny meme” comes from a place of violence, racism, or misogyny, it’s our job as adults to help them peel back the layers. Ask questions. Share your own media experiences. Invite them to create something original, not just remix something questionable. Remind them that what you repost is what you represent, whether you meant it that way or not. Let’s raise kids who aren’t just fluent in memes, but fluent in meaning.

  • How Zuckerberg Made Your Kids the New Operating System

This isn’t about influencer culture. It’s about corporate colonisation of childhood, and how Meta’s grand plan is bigger, colder, and more permanent than anyone wants to admit. For more than a year now, I have been explaining in my keynotes what might be coming, ever since I first saw these: “Billie” (Kendall Jenner) https://www.facebook.com/watch/?v=2290447797809183 “Bru” (Tom Brady) https://www.facebook.com/reel/2446093468906823 “Sally” (Sam Kerr) https://www.instagram.com/samanthakerr20/reel/CxupaYYM79H/

The eyes nearly pop out of people’s heads when they try to reconcile it. And now Meta has formally announced what we all saw coming. It’s building a future of AI-generated influencers to front ads across Facebook, Instagram, and Messenger. These synthetic personalities will be trained on the tone, habits, and emotional appeal of real creators. No schedules. No agents. No limits. Zuckerberg calls it “personalised, conversational” advertising. What it really is: a future where trust is simulated and scaled. Where the raw material is your behaviour. And increasingly, your child’s.

There is no advertising industry anymore. Not in the way we once understood it. Not with the glamour of Madison Avenue, the wit of British satire, or the disruptive weirdness of Australian creativity that once dared to put a gorilla behind a drum kit and make us feel something. That world is dead. What’s left is a performance engine fuelled by raw human data. And in this machine, your child isn’t the target. Your child is the platform.

Let’s stop pretending this was an accident. Mark Zuckerberg and Co. didn’t just stumble into this future. It wasn’t an unforeseen by-product of social media growth or influencer culture. It was deliberate. Engineered. It was the same play Bill Gates ran when he gave away Microsoft Office to schools in the 90s, ensuring that an entire generation grew up fluent only in his software.
I was working in the industry then. I saw it, and I sold it. The strategy was breathtakingly simple, and one I regret to this day: give it to them young, make it seem essential, and they will never leave. We are living through that again. But this time, the product isn’t productivity software. It’s identity.

Digital Firstborns

The first child to have their ultrasound posted online was born in 2003. That child turns 22 this year. They were never offline. Not once. And Meta owns the pipeline through which their image travels. From baby bump to birthday reels, children now arrive in a world where they are already a content category. Parenting forums have been replaced by Facebook groups. Scrapbooks became Instagram highlights. Storytime became livestreams. Every milestone is a marketing moment. But this isn’t just cultural creep. This is commercial colonisation.

Meta didn’t just let this happen. They built for it. The tools were made frictionless. The templates were optimised for reach. The language was seductive. “Boost post.” “Promote reel.” “Reach more people.” One-click interfaces that make selling your child’s face feel like uploading a memory. Zuckerberg didn’t need to walk into your house and ask for your child’s biometric data. You gave it to him. Every time you posted a giggle. A dance. A face. A “happy birthday darling, I am so proud of you, now you’re eleven.”

And when enough people did it, when enough parents turned their children’s lives into digital scrapbooks for strangers, when enough milestones were mapped to metadata, when enough faces became training fodder for surveillance tech that would eventually be sold back to the highest bidder, he turned it into an empire. Not just social media. A global archive of childhood. An unpaid, unregulated, 24/7 biometric feed of growing bodies and forming minds.
A databank so rich in behavioural cues, facial recognition, voice patterns, emotional triggers, and geographic habits that it makes traditional intelligence agencies look amateur. And no one hacked it. No one stole it. You uploaded it yourself. Not because you didn’t care. Because you didn’t know. You thought you were sharing with friends. You were training the next generation of AI. You thought you were documenting their youth. You were feeding a system that will one day sell them back to themselves. And by the time they’re old enough to opt out, they’ll already be profiled, predicted, and placed in a box. Not because of anything they did, but because of what we posted before they even understood what privacy meant. This is the empire. And we are the builders. Read more here: https://thisiskirra.substack.com/p/how-zuckerberg-made-your-kids-the

  • Your Teen’s Phone Isn’t Private. It’s a Portal. Are You Brave Enough to Close It at Night?

After a recent parent presentation, a mother pulled me aside. You could see the exhaustion behind her eyes, like she was holding something heavy and had finally decided to put it down. “I’ve got a 15-year-old daughter,” she said. “What would you actually do about phones and safety?” No buzzwords. No filters. Just straight-up: what works?

So I shared something I’d mentioned earlier that night: “If your teen had a passport and a one-way plane ticket, would you let them travel the world alone, unsupervised, at 11 pm every night?” She laughed, that nervous kind of laugh parents do when they realise it’s not a joke. “That’s what an unmonitored phone in a private space is,” I said. “It’s a digital passport. And when it’s used behind a bedroom door at night, you’ve got no idea what country they’re in, or who’s waiting for them there.” She went quiet. Then said, “I can’t get her to leave it outside. She just disappears into her room with it.”

So I asked: “Who pays for the data?” “We do,” she said. “Then that’s your leverage,” I told her. “You don’t have to snatch the phone or go full detective. Just make the boundary clear: if you’re paying for it, you get a say in where and when it’s used.” She looked heartbroken. Not because she disagreed, but because she knew it would be hard. “She messages her friends late, they’re talking about school, stuff that happened during the day...” I nodded. “Sure. But is it the kind of stuff that needs to happen at 9pm? Is it connection, or is it escape?”

Here’s the thing: no one said this would be easy. Setting boundaries like this is a disruption. You’re not just changing screen habits, you’re changing the expectation that tech has 24/7 access to your child. No app will do this work for you. But this is the work. Removing phones from bedrooms at night won’t eliminate all risk. It’s not about control, it’s about clarity. Boundaries don’t lock kids in. They give them room to breathe.
And parents, you have every right to create those conditions. A 16-year-old girl said to me last term, “There’s stuff online that changes how you see yourself. Once it gets in your head, it’s hard to get it out.” So no, this isn’t about taking anything away. It’s about giving something back. Rest, space, safety, a break from the noise. Be bold. Be the boundary. You’re not being harsh. You’re being protective in the way only a parent can be.

  • The Deepfake Cleanup Illusion. Why “Take It Down” Isn’t Taking Us Anywhere

On April 28, 2025, the U.S. House of Representatives passed the bipartisan #TakeItDown Act with a resounding 409–2 vote, following the Senate’s unanimous approval in February. This landmark legislation now awaits President Donald Trump’s signature, which he has pledged to provide. But let’s get one thing straight: the US #TakeItDown Act is not a win. It’s yet another containment measure. And while the headlines are screaming “landmark victory,” I’m here to tell you what they’re not. We are still failing kids. We are still playing catch-up. And we are still building policy around the wreckage after the bomb has gone off.

So yes, the U.S. just passed legislation criminalising the distribution of non-consensual intimate images, including AI-generated deepfakes. Yes, it mandates 48-hour takedown windows. Yes, it redefines consent to include coercion and misrepresentation. Necessary? Absolutely. Game-changing? Not even close. Because while the adults cheer from the floor of Congress, I’ll ask the question no one seems brave enough to: what the hell are we teaching the 14-year-old who made the deepfake in the first place?

Here’s What the Law Gets Right (But Way Too Late)

For survivors, this law will matter. It:

Closes critical loopholes around consent
Criminalises synthetic sexual abuse
Forces platforms to act within hours, not weeks
Says loud and clear: “You don’t own someone’s body just because you can recreate it”

But all of this happens after the harm. After the file is made. After it’s shared in a group chat. After someone vomits in a school bathroom or drops out entirely. After their voice, their face, their nipples are spliced into a video they didn’t even know existed. That’s not digital safety or well-being; it’s trauma triage dressed up as progress.

Australia, the UK, the US... All Reaction, No Prevention

We are seeing this on three continents now. In the U.S., the #TakeItDown Act sets new rules but no learning.
In Australia, the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024 finally criminalises the distribution of deepfake nudes but leaves a legal gap around creation. In the UK, the Online Safety Act only just caught up to criminalising deepfake porn after public outrage, despite deepfakes circulating in private WhatsApp groups for years. All three are chasing the virus after it’s airborne. Not one of them requires platforms to prevent creation before the damage. Because? Well, let’s start with Section 230... again. We are legislating for ghosts after the people have already bled out.

“Take It Down” Is Not the Same as “Teach Them Better”

Here’s what every government should be forced to answer under oath: “What are you doing to ensure the next 13-year-old understands that making a nude deepfake of a classmate is not just illegal, it’s violence?” Because if all we’ve taught them is that it’s only bad if you get caught, then we haven’t built a safer internet. We’ve built a better digital hide-and-seek game.

If you’re serious about building real digital safety, not just scrambling after the next crisis, you need more than policy documents. You need governance frameworks that hold, crisis management strategies that work under pressure, and education models that rebuild trust before harm takes root. If you’re ready to rethink how your school, organisation, or system approaches digital ethics and digital crisis leadership, get in touch. We work directly with leaders ready to move from reaction to resilience.

  • Big Tech Just Got Fined Billions

A couple of hours ago here in Europe, it was announced that Apple has been fined 500 million euros (about 830 million Australian dollars) and Meta 200 million euros (around 330 million AUD) by the European Union. https://www.wsj.com/tech/apple-meta-fined-by-eu-ordered-to-comply-with-tech-competition-rules-9063b7e6

At first glance, that sounds like justice catching up with Big Tech. But let’s put it in perspective. Last year, Apple made over 150 billion AUD in profit. Meta made more than 60 billion AUD. On those numbers, the fines amount to well under one percent of a single year’s profit. They’re a speed bump. A slap on the wrist dressed up like a crackdown.

But here’s why this still matters. Apple got fined for blocking people from using cheaper or alternative app stores, essentially forcing users to stay inside their walled garden. Meta got hit for giving users a fake choice: either let us track you across the internet or pay for privacy. The law behind these fines is new. It’s called the Digital Markets Act, and it’s Europe’s attempt to stop tech giants from stacking the deck. It’s about breaking the habits of monopoly. About giving smaller companies a chance. And, importantly, about protecting everyday people from being manipulated by default.

No, these fines won’t change Apple or Meta overnight. But it’s the first time we’ve seen real consequences with real numbers. And it sends a signal that the rules of the digital world shouldn’t be written only by those who profit most from breaking them. For too long, Big Tech has acted like it’s above the law... because, frankly, it has been. But this is the first real sign that regulators are ready to push back. It’s not perfect. These fines won’t hurt companies like Apple or Meta. But they’re a crack in the wall. And when you’re raising kids in a digital world built by billionaires, cracks in that wall matter. They’re a start.

bottom of page