- Why some student images have become a foreseeable risk — and what calm, child-safe leadership looks like now.
If you work in a school as a principal, in advancement or marketing, as a board member, wellbeing lead, ICT manager or educator, you may already know this feeling. That quiet pause before approving a photo for social media, and the moment of unease that didn’t exist five years ago. The sense that something has shifted, even if the policies haven’t yet caught up. You’re not imagining it. The risk environment has changed, and with the amount of coverage now appearing across Australian and international media, this risk is no longer abstract. It is foreseeable.

For many years, schools relied on parental consent and permission-to-publish forms as the primary legal and ethical basis for publishing student images. Prior to 2021, that approach was considered sufficient. Today, unless consent is genuinely informed, specific and auditable, it no longer reflects how images are captured, stored, transmitted and reused across a school’s social media presence and multiple EdTech systems.

Loss of control increasingly occurs before anything is posted publicly, at the point of image capture, storage and transmission. Photos taken on personal devices or unmanaged school systems can automatically synchronise to private cloud accounts, be retained beyond their original purpose, or be accessed, copied or repurposed in ways never anticipated at the time consent was given. Once an image leaves a controlled school environment, schools can no longer reliably ensure that its use remains aligned with the purpose parents agreed to, that security standards are met, or that the image will not be misused.

This is why leading child-safety bodies, privacy regulators and the eSafety Commissioner now treat image-based harm as a material governance and duty-of-care issue, not a technical one. Privacy settings do not completely prevent copying and screenshots. Consent forms do not prevent third-party misuse, and schools are often unaware of harm until long after it has occurred.

Under Australian law and the National Child Safe Principles, schools are required to take reasonable, proactive steps to prevent foreseeable harm, including online and technology-facilitated abuse. Image-based exploitation, deepfakes and AI-enabled misuse are now recognised psychosocial hazards with serious implications for student and staff wellbeing, learning, attendance and long-term mental health. When risk is foreseeable and harm could be severe, the duty of care requires action before a crisis, not after a police report or media inquiry.

This is why many Australian schools that have worked with us are quietly but confidently changing their approach. Some have gone public with their reasons for no longer publishing student images on public platforms that are easily deepfaked. Others have removed student photos from social media entirely, limiting use to closed, authenticated environments. Others have changed the type of photos they publish. Because leadership means responding to what is now known and what is clearly coming.

Schools did not create this digital ecosystem. Technology companies did. But schools are still the ones placing children into it, often without full visibility of how images and other identifiable data can be used downstream. This isn’t an argument for shutting down communication or community engagement; there are ways to keep your school’s social media presence simply by changing how you use it.
It’s an argument for modernising risk assessment and setting safer defaults that align with contemporary child-safe expectations. The question many schools are now asking is not “How do we keep posting safely?” but rather, “What is the minimum public digital footprint our students actually need?” It reframes the issue from fear to governance, from reaction to leadership.

We’ve supported hundreds of schools to make this transition calmly and confidently, without fear campaigns, without parent backlash, and without adding pressure to already stretched staff. This work is not about going backwards. It’s about moving forward with integrity, clarity and child-centred decision-making. This risk isn’t going away; in fact, it may increase: a recent IWF survey showed a 26,362% increase in CSAM in the last year. Schools that respond early, thoughtfully and systemically are already setting themselves apart as leaders in child safety, wellbeing and digital responsibility.

_________________________________________________________________

What comes next — and how schools are responding

We are now inviting our next tranche of 50 schools to participate in our CTRL+SHFT+OS Early Adopter Program. CTRL+SHFT+OS is not a policy pack or a one-off intervention. It is the operating system schools use to run and prove the duty of care across the whole school community in one connected system. It gives leadership teams a single, defensible system to capture early risk signals, run the correct response pathways for compliance, support students and staff in real time, and generate regulator- and board-ready evidence automatically, without relying on memory, inboxes or reconstruction after the fact.

If your school is ready to lead early, or if you are navigating board, leadership or community conversations about what responsible digital practice now requires, we would welcome you to get in touch to organise a live demo. Contact: kirra@ctrlshft.global
- “Who here is still able to access all the apps?”
For the past two days, I began each student session with these questions: “Who here is still able to access all the apps?” And for the older students: “Who has a younger sibling still with access?”

In the first session, Years 9/10, there was hesitation. A few brave hands. I could feel the uncertainty in the room, maybe even shame or fear from those still under 16 that they might get in trouble. I broke it open and spoke the way I have become known for on LinkedIn and in international media over many months now. I reframed it to reflect what should have been in the media before big tech’s PR spin reached it. I told them how I correct adults, governments, and leaders: The Australian Government didn’t ban under-16s from social media. It banned social media companies from accessing Australian children under 16.

The energy in the room shifted immediately. Because when you tell young people the truth without fear, without condescension, and without calling the Social Media Minimum Age Law a failure, something remarkable happens. They listen. Not because they’re scared. Because they’re smart.

I explained the why. The government banned Big Tech from using persuasive design to manipulate young minds. From harvesting their data. From nudging them through dark algorithmic loops. From monetising their moods. From twisting their bodies into shapes that the algorithm tells them are worthy. From shaping their identities to serve a machine that has never loved them, only to feed on their attention and grow profit and power.

I explained that calling it a “ban” was strategic. Probably coined by a Big Tech PR team. It normalised and stuck so fast that even the Prime Minister uses it, because that’s what people recognise. And that’s the point. If Big Tech paints the government as the villain, they get to keep being the architect of harm. Unaccountable. Hiding behind parental controls and layers of “child online safety washing” that were never built to work. And they know it.

We gently cracked open the truth. The pull of the attention economy. The way the attachment economy has been reshaped less by families and more by profit. We talked about how, in the absence of enough time, presence, or support, many young people now turn to their devices not just for entertainment, but for comfort. For connection. And increasingly, they’re met not by a friend or a safe adult, but by a chatbot trained to mimic care, while harvesting emotion as data. We talked about how none of this is accidental. It’s surveillance capitalism. We spoke about the systems behind the screen. How the simple act of their parents saying “Happy Birthday, darling, I’m so proud of you” on Facebook has been reframed as the modern ritual of love, and how parents feel guilty if they don’t perform it online.

In the Year 7 and 8 session, I asked how many had just received their first phone “for safety”. A third raised their hands. Their honesty was humbling. But not surprising. I talked about the myth that giving a child a smartphone is always an act of safety, when a phone with less functionality is often all they need. All too often, the gift of a smartphone to the young is an unexamined handover to a billion-dollar industry with no duty of care for a child’s emotional life.

In the Year 11/12 session, we went deeper. I told them that boys are not the problem. They’re the target market. Vulnerable. Curious. So often misunderstood. Swept into a digital storm that no adult fully prepared them for.
What we’re seeing now is the impact of a system carefully designed to prey on adolescent uncertainty. The online world doesn’t wait for boys to grow into themselves. It pushes. Repeats. Rewards. It draws them into content that appears bold, funny, and rebellious, and slowly becomes darker. No 14-year-old boy is born with hatred. Misogyny isn’t innate. It’s learned. Drip by drip. Hidden in jokes, laced through online personalities, offered as belonging. One click out of curiosity becomes a pattern. A moment of feeling lost becomes a digital door. And behind that door are echo chambers where empathy is mocked and bravado becomes a currency.

We must stop assuming bad intent. We must see what’s really going on. Because many boys are not choosing hate. They’re being groomed into it quietly, cleverly, by systems that care more about screen time than their wellbeing. Our job isn’t to blame them. It’s to reach them. Gently, truthfully, before the algorithms do. I taught the seniors how to identify it if it has happened to them, and how they can talk to younger brothers and cousins who may be caught up or showing the signs.

I told the girls the truth about online consent, filters, dysmorphia, hustle culture, and the new insidious failure narrative: that if you’re not a billionaire by 22, if you’re not flying private, if you’re not selling self-branded content and counting your passive income before breakfast, you’re failing. This isn’t fringe. These are the dominant narratives on their feeds.

We talked about algorithmic bias and digital echo chambers, about AI and its environmental impact, about how to build in a pause to combat the outrage that divides communities and countries, how to protect their cognitive sovereignty, and how feeds differ across continents, why that is, and why we need to understand what is happening.

I told them the truth about the Social Media Minimum Age Law: it’s not there to punish kids. It’s there to hold tech companies accountable. So, if something goes wrong and they’re under 16, they won’t get in trouble for speaking up. They should speak up, because their safety always comes first. And I told them that while change might feel slow, systemic public health shifts always are. We discussed how, when I was a kid, cars didn’t have seatbelts. It took time, pressure, and public awareness to make a safety standard. This is their seatbelt moment. Yes, on December 10th, they might lose access. Maybe not. Maybe next week. Maybe in three months. But we need to be prepared for the fact that it may happen when they least expect it, and they deserve to understand why. It was never about them.

I spoke about 1984 in Byron Bay. How, at 14, I caught little bits about the Cold War in the background of the 6 pm news as I walked through the family room, if my dad was watching. That was about as bad as it got for me. Not out of nostalgia, but to give them context. Now, they live inside it. A 24/7 livestream of wars, the Epstein files, adult content that is not love, violence, misinformation, hate, climate catastrophe, polarisation, and manufactured identity.

If you don’t get it right, they roll their eyes so hard you can hear it. I don’t scare kids. I inform them truthfully, and when you do that, they hear you. In rooms of 300+ students in each session, there was barely a whisper of distraction. When they did chatter, it was to a friend about what I had just said. Curiously unpacking it in their personal space.
It happens when someone they don’t know shows them respect and treats them like the capable, intelligent human beings they are. Young humans whose lives are already deeply entwined in digital systems that most adults barely understand. The kind of respect they get from their teachers who know them well, but rarely from a complete stranger.

I don’t usually speak directly to students anymore. I spend most of my time with leadership teams, educators, governance and risk teams and policymakers, because systems shape behaviour, and this is a system-wide problem that I can help guide leadership through so it filters through the organisation.

At St Ignatius College in Adelaide, I work with their students because the incredible leadership team and educators, under the thought leadership of Principal Lauren Brooks, understand that this isn’t a tech issue, a discipline issue, or a one-off assembly fix. They are deeply invested in shifting digital culture across the school and their wider community. St Ignatius didn’t just book a once-a-year speaker for their students; I also met with all of the leadership, heads of house, and curriculum teams. I have presented to parents and hosted all-staff PD, and they participate in our CTRL+SHFT+AAA program year-round. St Ignatius continues to positively amplify the shift they made last year and their leading Tech Smart framework.

They understand that the line between online and offline no longer exists. This is just life now, and it needs to be lived well. That means changing how we use devices in school, how we educate about the ethical and safe use of technology, and the importance of keeping the human at the centre, not once a term or once a crisis, but all year, across the whole school and parent community. This is what real change looks like. Systemic. Sustained. Embedded.

After the sessions, one student said, “I’ve seen so many cyber safety talks, but that was the best.” Another said, “That was the first time someone actually explained things I didn’t know.” One more told me, “At my old school, it was the exact same presentation every year.” Students told me they’d be deleting their accounts not out of fear, but because, for the first time, they understood why. Many came up to say thank you. Quietly. Even while I was waiting for my taxi.

We need to understand something urgently. This generation is not desensitised. They’re overwhelmed. They’re not disengaged, but some are drowning in noise and AI slop, desperately scanning for a signal that keeps them deeply, messily, and gloriously human.

My final student session at St Ignatius was a combined Year 5/6 session, and it went to a whole new level I did not see coming. We discussed safer, more effective ways to use the games they love (I never say “don’t”). We also discussed climate change and AI’s impact on the planet. The students led the conversation into environmental engineering, energy transfer, data centres, and the invisible architecture behind the digital world they were born into. I told them how cities like Helsinki don’t waste the hot water pumped out after it has been used to cool data centres; they redirect it. The waste heat expelled by data centres is captured and channelled to heat entire neighbourhoods. In winter. In sub-zero temperatures. The room erupted. “That’s genius,” they said. And before I could ask another question, they were asking theirs. Could we do that here? Could Adelaide become a city that thinks like that? This was Year 5 and 6!
This is what happens when we speak to kids like they are already part of the future, because they are. When we move past finger-wagging and fear and offer them the real, raw brilliance of human innovation, their minds ignite. When you tell them about the extraordinary innovation of Australia’s First Nations peoples, like the boomerang, a throwing stick designed to return so it could be used again, or ancient stone fish traps engineered for sustainable food, the room lights up with recognition. With the understanding that some of the most sophisticated innovation in human history was imagined, tested, and perfected right here, thousands of years before them, with no computers or AI in sight.

When we do the work. When we show them the whole picture and invite them into it, they don’t just care. They lead. Not because we told them to, but because we trusted them to. Our job isn’t to hand them watered-down online safety warnings. It’s to give them better truths to stand on, and then step back gradually, so they can build what comes next.
- Why Emojis Over Faces Will No Longer Protect Our Kids' Privacy
If you’ve ever posted a photo of your child with a smiley face or heart emoji over their face, you’re not alone. It’s become a kind of digital parenting ritual. A signal that says, I love them, but I’m protecting them too. That instinct, to protect while still sharing, is deeply human. But here’s what most of us haven’t been told: the tools we’ve been using to protect our kids online haven’t kept up with the technology, which is now learning from them. It’s time to gently, but honestly, update what we understand about online safety, because the internet doesn’t just watch anymore. It learns.

What’s Changed

Artificial intelligence no longer needs a full face to identify a child. It can spot them from the shape of an ear, the tilt of a head, the badge on a school uniform, the pattern of candles on a birthday cake, or the garden behind your house. It pieces it all together, not because it’s malicious, but because it’s designed to learn. Once it has learned, it doesn’t forget. Even one image, posted at the right time and place, can become part of a data set. That data can be used to create synthetic images of children. Sometimes for advertising. Sometimes for uses that are deeply disturbing.

While we were focusing on covering faces with emojis, the systems were watching everything else. The backgrounds, captions, hashtags, geotagged locations and more. They weren’t just learning from our photos. They were learning from us: how much engagement a post received, what we posted more of, what got attention and what didn’t. We weren’t just encouraged to share, we were trained to, and that’s what no one told us.

The platforms did not wait for us to become influencers. They turned everyday parenting into a feedback loop. Each time a photo of a child received more likes than a sunset, more comments than an adult achievement, the system took note. Each time a birthday post travelled further than a work milestone, the algorithm quietly learned what mattered most to us and then fed it back, amplified. Over time, visibility became confused with love. Not posting started to feel like hiding, or worse, like not being proud. That was not a coincidence; it is the basis of a social media platform’s business model, known as the attention economy.

The dopamine wasn’t accidental either. The surge of connection after sharing a milestone, the reassurance of being seen in our parenting, the relief of belonging to a community that seemed to value our children as much as we did. These responses were measured, refined, and reinforced. The algorithms rewarded emotional exposure, not because the system cared, but because exposure performed.

Slowly, without any announcement or consent, parenting became public-facing. Not because parents were careless, but because the architecture of these platforms was designed to exploit the most vulnerable, loving instincts we have. This is why emojis became a ritual. They offered the illusion of control in a system that had already moved past it. They allowed us to believe we were protecting our children while still feeding the machine.

Understanding this matters. Once you see how the training worked, the pressure to participate begins to ease. You realise you were never failing to protect your child. You were operating inside a system that quietly taught you that sharing was the price of belonging, and now that we know better, we are allowed to choose differently.

What We Can Do Now

This isn’t about blame. It’s about awareness. Once we know better, we do better.
So here are some grounded, gentle steps to consider as you navigate this space, especially as the school year begins and those first-day photos flood our feeds:

1. Take a pause before you post. Ask: Who is this really for? If it’s for grandparents or loved ones, consider a private message, a shared album, or a printed photo you can stick on the fridge. Sharing doesn’t always need to be public.

2. Be mindful of context. It’s not just the face. It’s the school logo, the front of your house, the birthday banner, the uniform, the personalised water bottle in the corner. These are breadcrumbs. AI doesn’t need the whole puzzle; it only needs a few pieces. (For the technically inclined, a small example of removing one hidden breadcrumb, the location data inside the photo file itself, follows this article.)

3. Think beyond the emoji. Covering a face can give the illusion of safety, but it doesn’t hide identity from machines. If you’re going to post, consider photos that show hands, backs, or silhouettes without revealing identifying context.

4. Delay before sharing. Let the moment happen offline first. Give yourself a day or two. If it still feels important to post later, you can do it with more clarity and intention.

5. Build digital consent early. Even with young children, begin asking, “Do you want me to post this photo?” Involving them in the decision, even when they’re small, models respect and gives them a sense of ownership over their own image.

6. Don’t let the algorithm shape your parenting. If it starts to feel like not posting means you’re not proud — pause. That’s not your instinct speaking. That’s the system doing what it was designed to do: reward content, not parenting.

The Bigger Picture

We have been told that sharing our children online is a way to stay connected to family and friends. That it was harmless. That a face sticker was enough to keep them safe. But none of that was true, and not because we failed, but because no one warned us what we were feeding. Our kids are growing up in a world where their likeness is data. Machines are trained on their images, their homes, their routines, long before they understand what any of it means. Tech platforms are not designed to protect them, but we can step back into our power. We can choose fewer public posts. Presence over performance. We can choose not to give their story away before they’ve had a chance to live it. Please remember that pride doesn’t need an audience; it just needs to be real. One day, our children will ask us not just what we shared, but why. Let’s make sure we can say, “Because I didn’t know at first. But when I did, I chose you over the feed.” That answer will be more than enough.
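A practical footnote to step 2 above: most phone cameras embed hidden EXIF metadata, including GPS coordinates, in every photo file. Here is a minimal sketch of stripping that metadata before a photo is shared, using the Python Pillow library. The file names are illustrative, and this is one simple approach under those assumptions, not the only way to do it:

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save a photo with pixel data only, leaving EXIF metadata
    (including any GPS geotag) behind in the original file."""
    img = Image.open(src_path)
    clean = Image.new(img.mode, img.size)   # blank image, same size and mode
    clean.putdata(list(img.getdata()))      # copy pixels only, not metadata
    clean.save(dst_path)

# Example (hypothetical file names): clean a first-day photo before sharing
strip_metadata("first_day.jpg", "first_day_clean.jpg")
```

Some platforms strip this metadata on upload, but copies shared by email, cloud link or messaging often keep it, which is why cleaning the file first is the safer default.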
- The Toys That Listen And What Parents Need to Know This Christmas
Walk into any department store this Christmas and you’ll find shelves of smiling toys powered by AI. Teddy bears that “listen with love.” Dolls that promise to “grow with your child.” Robots that say they can help with maths, literacy and loneliness. These aren’t screens, and yet they’re wired just the same, with microphones, sensors, data connections, companion apps, and, crucially, an invisible thread to corporate servers far beyond parental reach. Audio becomes text, text becomes data, and data becomes insight. Insight, in turn, becomes a product to sell.

So where does that leave families looking for gifts? In the toy aisle! You don’t need to ban tech, but you do need to know what it’s doing. If a toy contains a microphone, it can record. If it talks back with personal insight, it has memory. If it requires an app, it’s probably storing data. If it connects to the internet, it can be breached. The most powerful thing you can do is ask the question companies don’t want you to ask: Why does this toy need to know so much about my child? The answer is rarely about education.

Read more here: https://kirrapendergast.substack.com/p/the-toys-that-listen-and-what-parents
- Staff Personal Phones and Child Images in 2025/26
If any of our personal phones were ever checked because something unexpected happened at school or childcare, many of us would probably discover a handful of student photos sitting quietly alongside our own family pictures, cloud backups, messages and apps. Not because anyone set out to do the wrong thing, but because for years this has simply been the familiar way of capturing learning, connection and those small, beautiful moments in a child’s day.

But the world has shifted around us. Technology is different, expectations are different, and the risks, especially for children, are far more complex than they once were, particularly in a world of AI and deepfakes. What felt harmless even a few years ago now sits in a grey zone where privacy law, child-safety responsibilities and everyday digital habits collide. This isn’t about blame or finger-pointing. It’s about understanding how the world has changed, and why something as simple as a quick photo on a personal device now needs a second look. Because when we know better, we can do better, and we can protect the children in our care with the same gentleness and wisdom we bring to every other part of their wellbeing.

Most permission-to-publish forms give consent for a school or service to use a child’s image, not to scatter raw photos across the personal tech of every staff member. They don’t magically make personal devices secure, compliant, or safe. And with new regulations in early childhood banning personal phones outright, and school systems tightening expectations everywhere else, the gap between “what we’ve always done” and “what is legally required” has become a canyon.

I wrote the blog below in July 2024. In the past few weeks I have been rewriting policy for schools to address evolving privacy legislation. I wrote the blog not to shame or judge, but to highlight to Australian schools, with absolute clarity, the risks we can no longer ignore.

______________________

Across Australia, many educators and carers are still using their personal phones to take photos of children for learning documentation, quick parent updates, or spontaneous moments worth sharing. Often, the intention is kind and the moment is genuine. But with deep respect for the work you do, it’s time we acknowledge something important: this practice, however well-meant, puts children, staff, and services at risk.

Personal mobile phones are not designed for secure, professional use in educational settings. When photos of children are taken on a personal device, even just once, the data may:

- Automatically upload to cloud platforms like iCloud or Google Photos
- Sync across other personal devices (smartwatches, tablets, laptops)
- Be accessed by third-party apps, often without the user’s knowledge
- Remain in backups or deleted folders for weeks, months, or longer
- Create a digital trail that can’t be tracked, audited, or recalled

This isn’t about blame; it’s simply how the technology works. Even with the best of intentions, once an image of a child is captured on a personal phone, the organisation loses control over where that image might go, or how long it might be stored.

Under the Australian Privacy Act 1988, any image that identifies a child is considered personal information. That means there are legal responsibilities under the Australian Privacy Principles (APPs) about how that information is collected, stored, and shared.
- APP 3 – Collection: must be lawful, fair, and necessary
- APP 6 – Use and Disclosure: limited to the original purpose, in authorised systems
- APP 11 – Security: organisations must take reasonable steps to protect personal data

Personal phones, no matter how careful we think we’re being, simply can’t meet that legal benchmark.

“An organisation must take reasonable steps to protect the personal information it holds from misuse, interference and loss.” — Office of the Australian Information Commissioner

What About Department Guidelines?

Most state and territory education departments now explicitly state that personal devices are not to be used for taking or storing photos of children. For example, the Victorian Department of Education says:

“Schools must ensure that photographs, video or recordings of students are not taken or stored on personal devices.” — DET: Photographing Students Policy

This reflects a growing understanding of the need for consistent, secure, and professional systems when it comes to documenting children’s lives.

This Is About Child Safety, Not Just Digital Systems

The National Principles for Child Safe Organisations call on all of us, schools, centres, staff, and leadership, to do everything possible to protect children’s privacy, including how their images are captured and stored. Using personal devices, no matter how informally, can:

- Undermine institutional safeguards
- Bypass accountability processes
- Increase the risk of accidental breach or misuse

In child safety, it’s often not what goes wrong that matters most; it’s whether we had systems in place to prevent it.

“Child safe organisations need to have systems in place to protect children’s personal information, including images and recordings.” — Australian Human Rights Commission

Moving Gently Toward Best Practice

If your school or service is still using personal phones for images, know that you are not alone. This has been standard practice in many places for years. Change is not about shame; it’s about moving forward with more awareness, better systems, and stronger safeguards. You didn’t know what you didn’t know. Here’s what many centres and schools are now doing:

- Providing organisation-owned devices for documentation
- Updating internal policies to align with legal and departmental expectations
- Training staff on privacy obligations and child safety implications
- Ensuring any old or non-compliant images are reviewed, deleted securely, and reported where needed
- Seeking support from digital safety professionals to strengthen systems

It’s Okay Not to Have It All Perfect Yet

The important thing is to act now, with care and commitment. Our shared goal is to keep children safe, not just physically, but digitally and emotionally as well. If you’re unsure whether your current practices are in line with:

- The Privacy Act
- The Australian Privacy Principles
- Your state or territory’s education department guidelines
- The National Child Safety Framework

then now is a good time to pause, reassess, and reach out for support. I am here to help. No blame. No judgement. Just a shared responsibility to do better, together.
Helpful References:
- Privacy Act 1988: https://www.legislation.gov.au/Series/C2004A03712
- Australian Privacy Principles: https://www.oaic.gov.au/privacy/australian-privacy-principles
- Photos and Videos: https://www.oaic.gov.au/privacy/your-privacy-rights/social-media-and-online-privacy/photos-and-videos
- APP 11 – Securing Personal Information: https://www.oaic.gov.au/privacy/guidance-and-advice/securing-personal-information
- ACECQA Guidelines: https://www.acecqa.gov.au/sites/default/files/2024-07/Guidelines
- Google Photos Deletion Policy: https://support.google.com/photos/answer/6128858
- OAIC Children and Young People: https://www.oaic.gov.au/privacy/your-privacy-rights/more-privacy-rights/children-and-young-people
- Child Safe Principles: https://humanrights.gov.au/our-work/childrens-rights/projects/child-safe-organisations
- DET Student Photography Policy (VIC): https://www2.education.vic.gov.au/pal/photographing-students/policy
- Gmail users, please listen up.
If you don’t want your emails, chats, and digital habits feeding into Google’s AI systems (yes, even when it doesn’t say “AI” outright), there’s something buried in your settings you need to switch off. Google isn’t going to wave a flag or drop you a notification about it.

These so-called smart features? That’s just AI hiding under another name. Predictive writing. Autocomplete. Auto-summarise. “Help me write.” They’re generative AI tools in everything but name. And they’re hoovering up your data to keep learning. Your data trains their models; your habits improve their products. If that doesn’t sit right with you, here’s how to take back a little control.

Step 1: Turn off the smart features for Gmail, Chat, and Meet
- Go to Gmail on your computer
- Hit the settings gear (top right)
- Click “See all settings”
- Scroll down until you find “Smart features”
- Untick the box that allows Gmail, Chat and Meet to run on smart features

It may boot you back to the main screen so you have to hit settings again each time. They don’t make it easy.

Step 2: Kill it for Google Workspace and other services too
- Still in the General tab, find “Google Workspace smart features”
- Click through to “Manage Workspace smart feature settings”
- Turn both toggles OFF

That’s it. Not particularly hard, but also not obvious, and most users won’t know to go looking unless someone tells them. Now, depending on where you live (Switzerland, the UK, Japan or the European Economic Area), these features may be off by default, because those regions have tighter data laws. The rest of us? We’re left fending for ourselves in an invisible game of opt-out.

The bigger issue here isn’t just privacy, it’s the quiet erosion of autonomy. These AI-infused features aren’t always helpful. They’re designed to reshape how you write, respond, and work. To nudge your behaviour, subtly, constantly, through dark patterns. And the more you use them, the more they learn. Not just about language patterns, but about you.

We’ve seen it over and over in the past few years as AI becomes baked into everything: rarely named, never fully explained, no informed consent. Features are rolled out at speed, opt-outs are buried, permissions are rolled back to when you first ticked a box saying “accept terms”, and it has become your job to monitor it all. This is what happens when regulatory frameworks can’t keep up with product roadmaps, because the legislation was passed long before AI as we use it today was a thing. When companies aren’t afraid of penalties, and when user rights are treated as settings, not standards. Unless that changes, and AI enforcement becomes more than just a wishlist item in policy drafts, people will continue to be datafied by default without ever knowing what they gave up.
- Meta is moving early and texting Australian teenagers. That part is real.
To every parent, teacher and trusted adult: talk to your teens today. Not with fear, but with clarity.

Meta is texting Australian teenagers. That part is real. They’ve begun sending messages via SMS, email and in-app notifications, warning young users they have just days left on Instagram, Facebook and Threads if they’re under 16. The law banning under-16s from these platforms officially starts on December 10, but Meta is moving early — accounts will begin disappearing from December 4.

This is a big, complex shift. But it’s also the perfect opportunity for scammers. They know teens are confused. They know panic makes people click. I am fearful they may start sending fake messages that look exactly like Meta’s, urging young people to “download your data here” or “verify your age now” through links that aren’t safe.

So here’s what we need to tell our kids today, not with alarm bells, but with calm, informed authority: Never click through from a text claiming to be from an app. Not now. Not ever. If a young person receives a message about the ban, tell them to go straight to the Instagram or Facebook app, or visit meta.com directly through a browser. That’s the only place they should be downloading data or verifying age. Not through a random SMS or email link, no matter how real it looks.

If a teen is wrongly flagged as under 16, Meta will ask them to verify their age using ID or a video selfie. That process is done securely within the app, not through a shortcut or suspicious link. The technology behind it, Yoti, is trusted, but only when accessed through the official platform. You can see the results of the Age Assurance Trial here: https://ageassurance.com.au/report/

The key here is timing. Right now, we have a narrow window before confusion peaks and scams escalate. This is the moment to step in gently, confidently, and have the conversation. Reassure your teen that they haven’t done anything wrong. Remind them that real warnings from Meta are coming via text, but it is never safe to click through from an SMS. Ever. There’s no need for panic. But there is a need for precision. If we wait, the risk grows. If we act now, clearly, calmly, together, we protect not just their accounts, but their confidence and safety online.

Talk to them today. Even if they roll their eyes. Even if they say they already know. Just open the door, because digital scams don’t care how old you are. But being prepared? That starts with us.

Download Instagram Memories here: https://help.instagram.com/181231772500920/?helpref=uf_share
Other Useful Links: https://www.facebook.com/help/2199535317224012/?helpref=uf_share
Our Free Resources: https://www.safeonsocial.com/shop
eSafety Social Media Minimum Age Hub: https://www.esafety.gov.au/about-us/industry-regulation/social-media-age-restrictions-hub
- Before They Break Out
Six years ago, I was invited to speak at a school where one of the students was already a well-known influencer having a profound effect on her peers. She was 11. Her platform? Skincare routines.

Today, it’s not just worse. It’s everywhere. “Get Ready With Me” videos are now a daily ritual for primary school girls. Morning routine. After-school routine. Night routine. Cleanser, toner, serum, lip mask, ice roller. Ten steps, tapped out by tiny gel nails on 10-year-olds unboxing yet another haul from the same three beauty chains.

Some of these children are now being gifted products directly by brands. Products that cost those companies less than $20 to manufacture, but the influence? It’s worth far more, and the children will not be paid as a child model or actor would. With one unboxing, they’re broadcasting a message to thousands of little girls: you need this to be beautiful. You need this to be enough. What about those who can’t afford it? What about those watching who start believing their skin, their perfect childhood skin, is already something to fix?

This isn’t about play. This is a marketing strategy with a child’s face on it. We are not just witnessing the collapse of childhood. We’re watching it be monetised. The question for us, especially those of us in digital, education, governance, and leadership, is no longer “what’s going on?” It’s “how long will we stay silent?”

Read more here: https://kirrapendergast.substack.com/p/before-they-break-out
- Questions from Students about "The Ban" this week.
In less than a month, on December 10th, the Australian Social Media Minimum Age Law starts being enforced. From 10 December 2025, social media platforms must stop Australians under 16 from having accounts and remove or deactivate existing under-16 accounts. The legal duty is on the platforms, not on children or parents. Platforms must offer clear information, let users download their data, and provide simple review/appeal options if a mistake is made. They cannot make government ID the only way to prove age; a non-ID option must always be available. Penalties for systemic non-compliance can be very large (up to $49.5 million).

Here are some of the questions students have asked our team this week, along with our answers.

“If I’m under 16, still have social media after the start date and something goes wrong online, will I get in trouble if I tell someone?”

No. Under these laws the consequences fall on platforms, not young people or parents. eSafety’s compliance focus is on the systems and processes platforms use; it isn’t about punishing individual kids for having an account. Even if some under-16 accounts slip through, that alone doesn’t mean a platform is automatically non-compliant. Always speak up. Platforms must provide easy in-app ways to report problems (including suspected under-age accounts) and must handle those reports; if an account is deactivated, the user must be told what’s happening, how to save their content, and how to ask for a review.

What to say to students: “You won’t be fined or charged under these rules. If something goes wrong, tell a trusted adult and report it in-app. The point of the law is harm reduction and support, not blame.”

-----

“Will this actually work? How can they tell 15 years 10 months from 16?”

Platforms can use a mix of age-checking tools: for example, age estimation (like face or voice analysis), age inference (patterns in activity), and age verification (confirming a real date of birth). No single method is perfect, so the guidance encourages a layered “successive validation” approach: if one method is unsure, especially near the 16-year threshold, the platform may ask for another check before deciding. Many systems use buffer zones near the cut-off, so borderline results trigger more checks rather than a straight yes/no. (A small illustrative sketch of this layered logic appears at the end of this article.) The guidance also notes accuracy around legal thresholds is the hardest part, so platforms are expected to keep improving their settings over time and back them up with easy review options for users.

Privacy note: This is not a Digital ID scheme. Platforms cannot require government ID as the only option; they must offer a non-ID alternative (for example, an estimation method).

-----

“Is Pinterest covered? What about CapCut?”

The law applies to any service where a key purpose is social interaction, users can link/interact with each other, and users can post material. Services excluded by the Minister’s rules aren’t covered.

Pinterest: Because people post Pins, follow, and interact, Pinterest fits that definition, so it may be covered in Australia.

CapCut: If the version used here includes a social feed where users post, link and interact within CapCut itself, then it may be covered. The test is what the service actually does for Australian users.

Keep an eye on www.esafety.gov.au for updates, but be prepared for December 10th by downloading things you want to keep.
-----

“I could get around it by…?”

Students will try these ideas; here’s what the guidance expects platforms to do:

“Change my country / use a VPN.” Platforms are expected to use several location signals (IP address, GPS, device settings, phone number, app-store data) and to detect VPN/proxy use. So a VPN alone is unlikely to work for long.

“Use my parent’s photo for face ID.” Age-estimation systems include liveness checks to stop use of someone else’s photo or a deepfake. If signals conflict (e.g., activity looks clearly under-16), the platform should escalate to another check.

“Make an account in my parent’s name.” Platforms are expected to monitor for account takeovers or transfers (e.g., sudden changes in details, many accounts from one device) and act on them.

“Set ‘parent-managed’ on Instagram / tweak my age later.” Relying on self-declared ages isn’t enough, and platforms should block age changes without proper checks and prevent quick re-registration after removal.

Circumvention attempts are anticipated and should be limited by design, but if a young person slips through, the focus remains on removing the account safely, not punishing the child.

What this means in practice

For platforms (what they must do):
- Detect and deactivate/remove under-16 accounts with kindness, care and clear communication, including data-download options and review/appeal.
- Put age checks at sign-up (with a non-ID choice), use layered checks if needed, and prevent immediate re-registration.
- Monitor and limit circumvention (VPN detection, liveness, device/IP checks).

For young people:
- If something goes wrong online, tell a trusted adult and report it in-app.
- If your account is flagged by mistake, use the review process the platform must provide.

For parents/educators:
- Reassure kids that they won’t be fined under these laws, and that speaking up is the safest way to get help.
- Platforms must provide clear information and support links when taking action on accounts.

Quick script you can use in class or with your kids:
- From 10 December 2025, social media companies, not kids, are responsible for making sure under-16s don’t have accounts.
- If you’re under 16 and something goes wrong online, tell someone. You won’t be in legal trouble under these rules for speaking up.
- The company must remove under-age accounts safely, let you save your stuff, and give you a way to challenge mistakes.
- Trying VPNs or using a parent’s photo is risky and often spotted.
- If you see a mistake or need help, report it in-app and talk to a trusted adult.

For all of our free school and parent resources click here:
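As promised above, here is a minimal sketch of the layered “successive validation” idea with a buffer zone around the 16-year threshold. Everything in it is an illustrative assumption for teaching purposes (the function, the example signal values and the two-year buffer width); it is not any platform’s real system:

```python
# Illustrative sketch of layered age checks with a buffer zone.
# Signal values and the buffer width are assumptions, not real platform settings.

THRESHOLD = 16.0   # legal cut-off in years
BUFFER = 2.0       # borderline band that triggers a further, different check

def check_age(estimates: list[float]) -> str:
    """Run age estimates in order; escalate while results stay borderline."""
    for estimate in estimates:
        if estimate >= THRESHOLD + BUFFER:
            return "allow"                  # clearly over the threshold
        if estimate <= THRESHOLD - BUFFER:
            return "deactivate"             # clearly under: remove safely
        # Borderline result: fall through and try the next method
    return "offer verification"             # all methods unsure: ask the user,
                                            # with a non-ID option available

# Example: face estimation is borderline (16.5), activity inference says 13.2
print(check_age([16.5, 13.2]))  # -> "deactivate"
```

The design point is the middle band: instead of forcing a yes/no from one imperfect estimate, borderline results escalate to another method, and the final fallback is a user-facing verification step with a review path, which matches the “layered checks plus easy review” expectations described above.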
- One Month Until The Australian Age Delay and Here is What We Still Get to Keep
In a month, the age delay kicks in. For many families, that means TikTok and others go dark. For a generation of kids who’ve grown up dancing, lip-syncing, creating and sharing online, it might feel like something’s being taken away. But here’s what’s not being banned: the music. The movement. The joy of being silly, being seen, being together.

Music and movement are how kids (and adults) let things out without having to explain. They help regulate emotion, build trust, and give kids a way to feel like themselves again. Especially when everything else feels a bit shaky. None of that disappears with the social media age delay. If anything, this is a chance to bring it closer to home.

The app access will shift. That’s the nature of it. But kids still need rhythm. Still need to move their bodies, blow off steam, laugh with their friends, and feel connected. That doesn’t need a screen. It needs space and a bit of imagination.

So here’s what we can do. Let music become part of the everyday again. A song in the morning to set the tone. A family dance-off while dinner’s on. Let kids DJ their moods. Let them teach you their latest routine, no cameras, just company. Give the little ones chalk to draw a hopscotch in the driveway. Let them drag the speaker outside. Let your teens claim the garage as a dance floor or a jam-session room. If they used to film videos with friends, help them find ways to keep the creativity going: an old film camera from a market, for example. Offline doesn’t mean alone.

Teachers and youth workers: build music and movement into the day. Not as a reward. As a right. Kids need ways to move stress through their bodies. They need spaces where they can be expressive without performing. Parents, carers, grandparents: get in there too. Dance badly. Sing out of tune. Make it fun. Make it real.

The ban might close one door, but it’s also a good chance to open others. Less about rules, more about rhythm. Less about control, more about connection, because even without the apps, kids still know how to move. Still know how to feel. Still want to be part of something bigger than themselves, so let’s make sure they can. You can still film and you can still create. You can still laugh till your ribs hurt or you cry. You just don’t need to post it to prove it.

Here are some ways families can keep the energy going without needing the algorithm to clap back:

1. Family Dance-Offs (Private Edition). Pick a song. One that gets everyone moving, even the reluctant ones. Split into teams (parents vs kids is always a good one), learn your own routine, and perform it in the lounge. Film it if you want, but keep it on your phone. Turn it into a family tradition. Watch the old ones back in a year and see how far you’ve come (or how ridiculous you looked).

2. Challenge Vault. Get the kids to create a jar of challenges. Silly dance moves, weird remixes, or new steps they invent. They can film these, keep them on their device, and share them in person with cousins, grandparents, or select friends privately.

3. ‘Pass the Move’ Videos. Each person records a move, passes the phone, and the next person adds theirs. Keep passing till you’ve built a full routine. Edit it if they want to practise those skills. No one needs to see it online. It’s yours.

4. Soundtrack Saturdays. This is straight from my childhood. I know every word to Earth, Wind & Fire and Fleetwood Mac, Bryan Ferry and Grace Jones, thanks to my beautiful Mum breaking out the vinyl every Saturday!
In fact, if I want to learn something, I sing it, because I have a superpower for remembering lyrics. There, the secret is out! Each week, pick a theme: 80s throwbacks, movie musicals, songs from your childhood. Everyone dresses up, picks a song, and dances. Think kitchen disco meets karaoke with fewer rules. Film it or don’t. Just keep the music up loud.

5. Friends-Only Collabs. If your kids used to do collab videos, encourage them to keep doing it, just differently. Invite their mates over for a “dance and record” day. They can share clips through AirDrop or messages instead of posting them. Still creative, still connected.

6. Year in Dance. Set up a private folder on your phone: “2025 Dance Year.” Add clips from each week or month. At the end of the year, you’ve got your own personal highlight reel. No likes needed. Just memories that hit play when you need them.

7. School or Community Showcases. Work with teachers or youth centres to run in-person dance or music nights. The kind where no one cares if you’re good, just that you showed up. Let kids plan, choreograph, and perform for real people in real time. No comments section required.

8. Car Karaoke. My personal favourite. Car karaoke is one of the easiest ways to keep connection alive without needing a screen: just load up a shared playlist, let everyone pick their favourite songs (no judgement), add in opera, heavy metal, the whole lot, and turn even the school run into a full-blown concert. Film it if you want, but keep it for yourselves. It’s messy, loud, off-key fun that doesn’t need to be posted to matter, and those are often the moments that stick.

The point is, kids don’t stop being creative just because the platform goes, and connection doesn’t disappear just because it’s not being broadcast. If anything, this is a chance to remind them that they’re allowed to create just for fun. Not for likes. Not for views. Just because it feels good.
- Six Fake Names, One Predator, and the Digital Silence That Let Him In
A 14-year-old girl in Greater Manchester was groomed across Discord and Snapchat by a man pretending to be six different people. Not one platform raised an alert. Not one system joined the dots. Karl Davies was just sentenced to 20 years in prison. But the real story isn’t what happened to him. It’s what didn’t happen online.

Every major platform has moderation tools for content. None has a working protocol for how danger moves from app to app, erasing itself as it goes. When harm crosses platforms, the trail disappears. So does accountability. We talk endlessly about “AI safety” and “trust & safety,” yet a child can still be groomed across five platforms and there is no shared channel to raise a single, unified flag. This isn’t just a content and contact problem. It’s a coordination problem. Until big tech learns to communicate with itself, children will keep paying the price.

Read more here: https://thisiskirra.substack.com/p/six-fake-names-one-predator-and-the

_____________

Kirra Pendergast has limited availability for bookings for parents, educators or conferences onsite at the following locations in 2025/26 (please note Kirra no longer presents to students, but we have facilitators available):

- Dublin, Ireland: December - 8th, 11th, 12th
- London, England: December - 15th, 16th
- Perth, Australia: January - 21st, 22nd, 26th
- Melbourne, Australia: February - 11th, 12th, 13th
- Gold Coast & Brisbane, Australia: February - 17th, 18th, 23rd, 24th, 25th
- UK & Europe: March - May
- Sydney, Australia: May - 4th, 5th, 6th
- Gold Coast, Australia: May - 13th, 14th
- Hong Kong: May - 18th, 19th
- UK & Europe: June - August
- Australia: September - 7th - 25th

Online bookings are also available, with more availability. To book, simply reply to this email or contact hello@ctrlshft.com
- FREE VIDEO 16+ DELAY RESOURCE PACK FOR SCHOOLS
We’re heading into a new chapter in the story between young people and social media. This past week we have become deeply concerned that many Australian children don’t even know this is happening.

From 10 December 2025, social media platforms will be legally required to block or remove accounts held by Australians under the age of 16. They’ll also need to use privacy-safe age-checking systems and give young people the right to download their data or challenge mistakes. It’s a significant shift. But right now, not enough is being done to help the people it affects most understand what’s coming.

We’ve created a free, plain-language resource pack to help young people, parents, and schools make sense of the new age rules. This is not about fear. It’s about fairness. Every young person deserves to know what’s changing and why, before they find out the hard way. We support the delay. Fully. It’s a critical step in protecting children online. And it isn’t going away. But care and compassion are just as essential as regulation. We need to do more to make sure young people are informed, respected, and supported through the transition.

Inside the pack, we explain:
- What the new law means, both for under-16s and for the platforms
- How the age checks will actually work, including the requirement for a privacy-friendly, non-ID option
- What to do if a young person’s account is wrongly flagged or removed
- How families and teachers can start the right conversations now, before the deadline arrives

We made this because we believe in informed kids, not blindsided ones. We believe no child should wake up one morning to find their account gone, their connections severed, and no one able to explain why. The new rules are coming. This guide will help you prepare.

👉 Download the free pack: https://www.safeonsocial.com/product-page/social-media-minimum-age-school-video-pack











