
  • AI Tools are Finally Helping Teachers Breathe. Now Comes the Hard Part.

    Teachers are not looking for miracles. They’re looking for time. Not the philosophical kind. Not the romanticised version. They are looking for 27 minutes back from the endless documentation of student behaviour. They are looking for relief from inboxes that refill faster than a coffee cup in the staffroom. They are looking for lesson plans that don’t take four hours to create and get shredded by a single disengaged classroom. They are looking for one less form to fill, one less login, one less click. And for many of them, AI has started to feel like an answer to a question they didn’t even have time to ask. That’s not a fantasy. That’s not hype. That is happening, right now, in classrooms where the lights flicker and the heating doesn’t always work. I see it. I see them. I am fortunate enough to work with educators around the world, and I get to see something the headlines rarely capture. Not fear. Not overwhelm. Not grand declarations about the future of learning. I see something quieter, more human: a visible exhale. Let’s start by rejecting the smug cynicism that often surrounds AI adoption in education. It usually comes from people who haven’t stepped into a public school since their own graduation. They call it lazy, or worse, false innovation. They reduce it to trend-chasing, a shiny object in an already tech-sick system. But here’s the truth no one wants to say out loud: many teachers are drowning. And the people mocking their lifeboats have never been in the water. The average teacher spends 50 to 60 hours a week on their job, but only about half of that is direct teaching. The rest is prep, marking, admin, meetings, reports, supervision, crisis management, and emotionally supporting students whose needs are deeper and more complex than ever before. Add to this the collapse in respect for the profession, the surge in behavioural incidents, and the unrelenting scrutiny of social media parents, and you’ve got a system that eats its own.
So when a teacher finds a way to cut five hours off their weekly planning using generative AI, that’s not a shortcut. That’s survival. It is not “inauthentic” to use an AI tool to differentiate a reading comprehension task for three different literacy levels. It is smart and it is humane. It is the exact kind of decision-making we need in a system that has stretched teachers so thin they are breaking. There’s a kind of joy I have seen come over a teacher when they realise an AI tool can write a parent update in under a minute, generate differentiated tasks in the time it takes the kettle to boil, or summarise a dense curriculum document into something they can actually use. It’s not the joy of novelty. It’s the joy of being seen, and finally supported, by something that doesn’t ask for more in return. Educators are not out to overhaul the system, they are trying to survive it. They are showing up every day inside structures that are stretched thin by staff shortages, rising needs, outdated tools, and the bureaucratic weight of compliance culture. What they want is not transformation. They want time. Sanity. A little breathing room between the grind of expectations and the reality of what one person can actually do. We do not measure progress by how futuristic it looks. We measure it by what it frees us to do. The ability to sit longer with a struggling student because your lesson planning took half the time. The headspace to reflect on your teaching practice because your reporting load didn’t consume your weekend. These are not small wins; they are the building blocks of retention, wellbeing, and quality education. But here’s where we have to be honest: we cannot separate the relief AI brings from the responsibility it demands. Because every new tool that makes something easier also changes the terrain beneath our feet. When educators bring AI into their workflow, they are not just adopting a tool. They are taking on its risks, whether or not they’ve been trained to see them.
Generative AI systems do not exist in a vacuum. They are built on data, designed by companies, and deployed in contexts that are often poorly understood. That means they carry bias and they make mistakes. They reflect the values and assumptions of the people who build them, and the data they’ve been trained on. For educators, that matters. A lot. That’s why risk literacy can no longer be an optional add-on. It has to be built into the rollout of every AI initiative in education. If a school introduces a tool, it must also introduce a clear, living policy on how that tool should be used, what data it collects, where that data goes, and what recourse an educator has if something goes wrong. These policies must be updated often. Not biannually. Often. Because the technology is changing monthly. And if we treat policy as a static document, something to appease procurement or satisfy a governance checklist, we are not managing risk, we are manufacturing it. And just as urgently, we need to shift how we think about what educators teach. AI ethics cannot be a niche conversation reserved for Year 11 students. It must be a baseline for every student, from Grade 3 and up. They are stepping into a world shaped by algorithms, predictive systems, and invisible design decisions. They need to understand power, privacy, fairness, and agency in a machine-mediated world. That starts with educators being equipped to teach those things, not from fear, but from confidence. That doesn’t mean every teacher becomes a computer scientist. It means they understand enough to ask good questions, model healthy skepticism, and show students that technology is not magic, it is made. And what is made can be remade, if you know how to look under the hood. So yes, there is joy and there is excitement. There is a genuine sense of momentum as AI begins to ease the pressure on educators who have carried too much for too long.
But if we stop there, if we confuse utility with immunity, we miss the deeper opportunity. The opportunity in front of us isn’t just about making the system smarter. It’s about making it safer. Smarter systems can automate tasks, reduce workloads, and offer personalised support at scale. But a system that is merely efficient without being ethical will only replicate and accelerate the harms we already struggle to contain. Safety isn’t about limiting innovation. It’s about ensuring that what we build doesn’t come at the cost of the most vulnerable: the student mislabelled by an algorithm, the teacher held accountable by a system they don’t control, the community left out of the dataset altogether. Speed is seductive. AI promises faster processes, quicker turnarounds, streamlined reporting. But speed without scrutiny is a trap. What we need is not just acceleration, but equity. Fairer systems demand that we slow down long enough to ask: fair for whom? Fair by whose standards? Fair in whose language, whose context, whose version of the truth? Because a process that saves time but deepens bias isn’t innovation. It’s negligence in fast-forward. And yes, AI is giving educators breathing room. But breathing room is only useful if we use that breath well. If that extra hour means a deeper connection with a student, a sharper focus in the classroom, a little less burnout at the end of the week, then we’ve done something worth celebrating. But if we use that space to do more of the same, to double down on a system already creaking under its own contradictions, then we’ve simply automated the dysfunction. Real progress is not measured by gains alone. It is measured by what we preserve in the process. Trust. Humanity. Autonomy. These are the things we cannot afford to lose, no matter how powerful the tools become.

  • Love Means Learning How to Protect Your Grandkids (and Yourself) Online

    There were no smartphones when you were raising kids. No group chats to monitor. No YouTube wormholes. No having to explain why an app designed for dance videos is now showing ten-year-olds content about war, sex, and self-harm. You parented in a world where you could see the danger coming. Where you knew the names of the kids your child played with. Where the front door locked and that meant something. But that’s not the world your grandkids are growing up in. And here’s the hard part: you are still one of their protectors. Which means you need to understand their world, where online and off is just “life”. Yes... you’re still in it. Every school holiday, every birthday visit, every moment you reach for your phone to capture a memory, you're shaping their digital world as well. It can happen in seconds. All the cousins come over. One has a phone. Others don't. You hear laughter, maybe a silly voice filter, maybe a dog video. It seems innocent. It seems fine. What you don’t see is the next swipe. Because the algorithm doesn’t care that your grandchild is only seven. It doesn’t care that you had no idea what “For You Page” meant. The internet doesn’t wait for understanding. And by the time you’re wondering whether it’s appropriate, the content has already done its damage. This is not about being afraid of technology. This is about putting boundaries around it. When a child enters your home, and a device comes with them, you have a right and a responsibility to set rules. Ask that phones stay off during visits. Keep screens in shared spaces. Don’t assume “funny videos” are always safe. Because they're not. Privacy Is Not Paranoia. It’s Protection. Let’s talk about your social media. It seems like a beautiful thing, sharing moments with friends, showing off a proud photo of your grandchild’s dance recital, or that sweet smile in their school uniform.
But here’s what’s really happening when your account is public: You are handing strangers a scrapbook of your grandchild’s life. Their name. Their school. Their location. Their birthday. What days they’re at your house. Predators don’t need much more. Identity thieves need even less. And that harmless snap you uploaded to Facebook could be used in ways you can’t imagine. So here’s the simplest set of rules: 1. Set your accounts to private. 2. Think before you post. 3. Never share photos with other people’s children/grandchildren in them, especially at school events. Even if they’re in the background. Even if you know them. Even if it seems harmless. Other families may have serious legal, cultural, or safety reasons for keeping their kids offline. You don’t need to understand it to respect it. Your Curiosity Could Be a Lifeline. Your grandkids might show you things that seem strange. A meme you don’t get. A game that feels chaotic. A TikTok that makes your stomach turn. This is not the moment to say, “I don’t understand that stuff.” This is the moment to lean in. Ask: “Can you show me how that works?” Ask: “Who are you talking to when you play that game?” Ask: “What do you do when something scary comes up?” Just keep asking. You don’t need to be tech-savvy, you just need to be present. Kids are looking for safety and boundaries, not experts. And if you freeze up or shut down the moment something digital comes into the room, they’ll stop coming to you. Not because they don’t trust you, but because they don’t want to overwhelm you. Let them teach you. Show them you’re listening. Curiosity builds connection, and connection builds safety. Support the Parents, Even When You Don’t Fully Get It. Maybe you think the rules are too strict. Maybe you’d rather be the “fun” one. Maybe you think “just one game” or “just a little screen time” doesn’t hurt. If a parent has set limits on screen time, apps, or device use, your job is to support them, not sabotage them.
When you say things like, “It’s just for today,” or “You can use mine, just don’t tell Mum,” you’re not being generous. You’re teaching that rules are flexible when adults want to be liked. That secrets are okay if they’re fun. And that consequences only exist until someone nicer comes along. Kids don’t need that confusion; they need consistency. Five Digital House Rules Every Home Should Have. Whether it’s a weekend visit or the school holidays, here’s how to create a safe, healthy digital environment in your home: 1. No devices behind closed doors or in bedrooms, ever. 2. No screens at the dinner table or before bed. 3. One screen, one app, no jumping between tabs and chats. 4. Check in often. Ask what they’re doing, who they’re playing with. 5. If something goes wrong, stay calm. Say, “Thank you for telling me. Let's figure it out together.” And If You Get It Wrong? You post a photo you shouldn’t have. You let them watch something too mature. You don’t notice a message that seems off. It happens. Don’t spiral into shame and don’t disappear, instead say: “I got that wrong. I’m still learning.” “Thank you for telling me.” “I care enough to do better next time.” That is what digital grandparenting looks like. Not knowing everything, but refusing to opt out. Staying in the room, staying in the conversation and staying aware. Because love, now, looks different. Love today means locking down your phone before handing it to a child. It means not sharing that school photo without permission. It means asking awkward questions and learning unfamiliar things. It means protecting their future by understanding their present. The internet won’t slow down. The world won’t get simpler. But you? You can be the steady one. The safe one. The one who shows up again and again. Because that’s what love looks like now and your grandkids are counting on it.

  • AI, Aesthetics, and the Carbon Behind the Curtain

    It’s raining today in Florence. That slow, steady kind of rain that makes the city feel like it’s breathing under the weight of its own history. From my home office, a few blocks from where Michelangelo brought David out of flawed marble and into the world, I’m watching people hand their faces to machines. Not metaphorically. Literally. One after another, while I research for a new program I am writing, I scroll past videos and portraits and selfies that have been fed into AI filters to be turned into cartoons, avatars, Barbie-fied. Action dollified. Over the past month people have been turning themselves into smooth plastic action figure versions of themselves, caught somewhere between nostalgia and pathology. There are millions of these images now. Faces stripped of pores, limbs narrowed into factory perfection. Each one served with that familiar shrug: It’s just fun. I know what I am doing. That’s always the line, isn’t it? It was just fun when we handed over our faces to Snapchat’s baby filter. Just fun when we let FaceApp guess our age. Just fun when we gave TikTok permission to track every blink and pause to tune the algorithm to our moods. Just fun is how surveillance got in the door. And now it’s back again, dressed as Barbie. But here’s the part no one’s saying loud enough. The part I said to two nineteen-year-olds from Byron Bay who were staying with us this week. They’ve been raised with the right language. Offsets. Regenerative farming. Microplastics. They know how to compost. They’ve been taught that climate responsibility lives in daily choices, in recycled packaging and carbon footprints. But no one had ever spoken to them about server farms. About what it takes to make a single AI-generated image. The water it consumes. The fossil fuels burned. The energy grid behind the doll filters and Lensa portraits. The carbon debt tied to each flicker of novelty. So I told them.
And when I laid it out, the training cycles, the GPU clusters, the constant data churn, the look on their faces said everything. It wasn’t confusion. It wasn’t guilt. It was grief. That quiet, heavy recognition that they were trying to do the right thing in a system that had left them blind to its real mechanics. And the betrayal in that moment was real. Because they were trying. They were told composting and everything else they had learned was enough. That caring was enough. And no one mentioned that every time someone uploads a photo to be Barbie-fied, they were contributing to the same ecological collapse they were taught to fight. That’s the cruelty of the current moment. AI has been framed as a marvel, a tool for play, an inevitability. But what it really is, what it depends on, is energy. Unseen, unchecked, and growing fast. A study from the University of Massachusetts Amherst in 2019 estimated that training a single large AI model can emit as much carbon as five cars over their entire lifetimes [https://arxiv.org/abs/1906.02243]. And it’s getting worse. The newer models are hungrier. The demand is higher. The content, because that’s all this becomes, is being generated at a rate that dwarfs anything we've seen before. And while we feed our likenesses into systems that turn us into dolls, the real world burns. Quite literally. Water sources are being tapped to cool AI data centres [https://arxiv.org/abs/2304.03271]. Electricity grids are being overloaded to support the always-on, always-generating machine. This is not theoretical. It is happening now. Quietly. Invisibly. Back in Florence, the rain hasn’t stopped. The streets are slick with the kind of wet that makes the stone shine. The statues carved by hands that belonged to men who never saw their work as disposable are drenched. Michelangelo was twenty-six when he began David. He worked for over two years on a single piece of marble that had been deemed unusable.
Botticelli painted Venus for the Medici family in a time when myth and theology bled into each other like pigment into plaster. These artists weren’t perfect. But their work held weight. Real time. Real stakes. Real cost. What we are doing now has none of that. We are generating images that look like art, feel like art, but are as light as vapour. Not because they’re digital, but because they’re instant. Because they require nothing of us except a face and a prompt. And we are calling that creativity. We are calling it culture. But culture is built on memory, discipline and tension. Culture cannot be copy-pasted. It cannot be scaled at the speed of trend cycles. When you dress up a machine in the aesthetics of the Renaissance, or Barbie, or 90s cartoons, it doesn’t make it meaningful. It just makes it familiar. And familiar is dangerous, because it numbs us to cost. And make no mistake, there is a cost. Every time someone makes an AI version of themselves, they are participating in an industrial-scale operation of unseen environmental damage. Not because they mean to. Not because they are careless. But because we’ve allowed this technology to grow in a vacuum of accountability. We’ve taught a generation to be fluent in tools but illiterate in consequence. So no, this is not a campaign against fun. But it is a call to stop pretending fun is neutral. Fun has always been the delivery system. That’s how they got us with filters. With quizzes. With harmless little games that mapped our personalities and stored our preferences. That’s how they trained us not to ask questions. Now AI is training us to believe that creation should be frictionless. That it should feel good, instantly. That effort is a flaw and efficiency is enlightenment. But Florence stands as a reminder that anything that lasts demands more of you. It demands time. Skill. Doubt. Care. The things that make us human, not perfect. We owe it to the next generation to tell them this. Not to shame them.
To inform them. Because if they care about the planet, and I believe they do, they deserve to know what’s underneath the cute. They deserve to know what it is doing to the only world they have. Because it’s not just aesthetic fluff. It’s power-hungry infrastructure. Some estimates suggest a single detailed prompt to a large model like Stable Diffusion or Midjourney can use between 5 and 20 times more energy than a Google search [https://arxiv.org/abs/2211.02001]. That might sound small. But scale it up. Multiply that by trend cycles, like the “what ChatGPT knows about you” action-figure craze, by reposts, by every “just trying it out” user. And remember those servers don’t live in the cloud. They live in real places. Often in drought-hit regions. In Utah. In Arizona. In Western Europe. Running 24/7, cooled by water that could be sustaining crops or ecosystems, powered by grids still dependent on fossil fuels. So if we’re going to use these tools, and we will, we have to start treating them with the same seriousness we apply to everything else we’ve been taught to ration. Like water. Like fuel. Don’t just hit "generate" like you’re flicking through outfits for a doll. Approach it like turning on every light in your house. Ask yourself if it’s worth it. If you could sketch first. Think first. Plan your prompts instead of hammering the model with randomness. Map what you actually need, not just what might go viral. Respect the cycle. Because the energy to run this tech is coming from somewhere. And one day, it might be coming from your own home. That same home where you turn off the lights when you leave the room. Where you time your showers and carry your tote bag to the shops. Where you fight to live with less. This is no different. In fact, it's worse when it hides behind entertainment. We can’t just teach our kids how to use AI. We have to teach them why and when and at what cost. This is digital responsibility, not just digital literacy.
Because the planet can’t tell the difference between a viral Barbie filter and a serious research prompt. It just feels the heat. We have the chance to shape a generation that knows better. That still creates, still experiments, still plays, but mindfully, not mindlessly. Not endlessly. Not with the lights blazing and the tap running. Do what the artists of Renaissance Florence did. Know the weight of the work before you begin. Honour the process. Turn off what doesn’t need to be on. And never confuse automation with intention.
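The scale argument above can be made concrete with simple arithmetic. This is a back-of-envelope sketch only: the two energy figures are assumptions chosen for illustration (a few watt-hours per generated image, a fraction of a watt-hour per web search, roughly in line with the 5–20x range cited above), not measurements of any particular model or data centre.

```python
# Back-of-envelope scaling: what a viral image trend costs in energy.
# Both constants below are illustrative assumptions, not measurements.
WH_PER_IMAGE = 3.0   # assumed energy per AI-generated image, in watt-hours
WH_PER_SEARCH = 0.3  # commonly cited rough estimate for one web search

def trend_energy_kwh(images: int) -> float:
    """Total energy in kWh to generate `images` images."""
    return images * WH_PER_IMAGE / 1000

def search_equivalent(images: int) -> float:
    """How many web searches the same energy budget would cover."""
    return images * WH_PER_IMAGE / WH_PER_SEARCH

if __name__ == "__main__":
    viral_trend = 100_000_000  # assume a 100-million-image trend
    print(f"{trend_energy_kwh(viral_trend):,.0f} kWh")        # → 300,000 kWh
    print(f"{search_equivalent(viral_trend):,.0f} searches")  # ≈ 1,000,000,000
```

The per-click cost looks trivial, which is exactly the trap the essay describes: one image is ten searches, but a trend is a hundred million images, and the multiplication is where the grid feels it.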

  • The Face Is Fake. The Risk Is Real.

    Here we are, in 2025, where the most dangerous stranger your kid, your staff, or your leadership team will ever meet doesn’t even exist, and no one is teaching anyone how to spot them. Because the new stranger isn’t a shady dude lurking on a playground or some random profile in a chatroom, it’s a live, AI-generated, real-time mimic of someone you trust, designed to look you in the eye, speak in a warm, familiar voice, and manipulate you into believing everything they say, because they know exactly what you want to hear and how to make you feel safe. And if you think that sounds like sci-fi, if you think this is tomorrow’s concern, then you are already dangerously behind, because this isn’t a hypothetical. This isn’t a test. This is now. A $25 Million Lesson in Trust. World Economic Forum, February 2025 — https://www.weforum.org/stories/2025/02/deepfake-ai-cybercrime-arup In one of the most devastating and completely avoidable frauds of the year (so far), a multinational employee wired over $25 million USD during what appeared to be a completely normal video call with senior leadership, a group of people they had worked with for years, people whose faces and voices they knew intimately, people whose authority they trusted without hesitation. Except every single person in that room, every face, every voice, every subtle facial tic and reassuring gesture, was fake. Not a recording. Not a video edit. A live, AI-driven, real-time impersonation, orchestrated using generative deepfake software that manipulated every detail of the call, allowing criminals to impersonate multiple executives at once, in perfect sync, in real time. This wasn’t a case of clicking the wrong link. This wasn’t someone falling for a typo-ridden email from a fake prince. This was an employee doing exactly what they were trained to do, trust the system, trust the meeting, trust the face, and losing millions because the system didn’t have a single layer of defence against synthetic presence.
When Law Enforcement Plays God with Synthetic People. WIRED, April 2025 — https://www.wired.com/story/massive-blue-overwatch-ai-personas-police-suspects In a report that should have sent shockwaves through every civil liberties office in the world (and yet somehow barely made a ripple), it was revealed that U.S. police agencies have been deploying AI-generated personas, synthetic people with fake names, deepfake faces, and fully fabricated digital histories, inside online communities, protest networks, and group chats as part of covert surveillance operations. These fake people aren't just passive observers. They comment. They befriend. They provoke. They escalate. They infiltrate online spaces under the pretence of being fellow activists, community members, even children, and they do it with full legal backing and zero requirement to disclose to anyone that they aren’t real, because right now, no law says they have to. And if you’re not deeply disturbed by that, you need to re-examine who you think is protected by the word “safety.” The Tools Exist, The Barrier Is Zero, and The Clock Is Ticking. Ars Technica, August 2024 — https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns It takes nothing to become someone else online now. Thanks to free, open-source tools that are fully operational as of last year, anyone with a webcam and a bit of internet access can replace their face in real time during a live call, add voice modulation with frightening accuracy, and pass themselves off as anyone: your kid’s teacher, your school counsellor, your HR manager, your therapist, your mother. Just a few clicks, and a synthetic identity walks into a virtual room undetected. So while your organisation is still proudly doing “cyber safety awareness week” with posters and outdated phishing drills, the real threat has already arrived, and it doesn’t give an eff about your training manual. You Want a Solution?
Start By Admitting the System Is Broken. You cannot solve this with stricter email policies or by telling people to “be careful on Zoom.” You need to burn the old assumptions to the ground and start over with policies, tools, and mindsets that begin with this one simple truth: If you cannot verify identity outside of face, voice, or familiarity, you are already compromised. The safety theatre needs to end. The performative panels. The corporate checklists. The feel-good campaigns that mean nothing when a child is speaking to a synthetic predator through a school portal or when a company’s entire capital reserve disappears into the hands of a fake CFO. This is not about being scared, it is about being ready, and right now, we’re not. When the next breach happens, and it will, you won’t hear alarm bells. You’ll hear a familiar voice. You’ll see a warm smile. You’ll feel relief because the person on the screen “gets you.” And then you’ll do what millions will do this year: you’ll trust the wrong person, in the wrong moment, because your system was built on illusion, and you never built the tools to spot the lie. What you can do: Audit Every System Where AI Can Enter Undetected. Ask these five questions in every tech review: 1. Can this platform be accessed via a fake identity? 2. Do we verify users beyond login? 3. Who’s responsible for identity checks, and how often do they fail? 4. Can our staff or students report suspicious interactions without retaliation? 5. What’s our fallback plan if trust is breached? Normalise the Phrase “Let’s Confirm This Another Way”. This should be your go-to line, and it should never offend someone real. “Let’s confirm this through another channel.” Use it in professional emails, in your kid’s group chats, during video calls. Normalise verification as a form of care, not suspicion.
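For teams that want to wire the “confirm this another way” habit into their tooling, here is a minimal sketch of the out-of-band pattern. Everything named here (`OutOfBandVerifier`, `send_via_second_channel`, the request IDs) is hypothetical, and the second channel is just a stand-in callback, not a real SMS or phone integration. The point it illustrates: a high-risk request is held until a one-time code, delivered outside the channel the request arrived on, is echoed back.

```python
import hmac
import secrets


class OutOfBandVerifier:
    """Hold a high-risk request until it is confirmed on a second channel.

    `send_via_second_channel` stands in for a real delivery mechanism
    (SMS to a number on file, a phone call, an in-person check).
    """

    def __init__(self):
        self._pending = {}  # request_id -> one-time code

    def start(self, request_id: str, send_via_second_channel) -> None:
        # Six-digit one-time code from a cryptographically secure source.
        code = f"{secrets.randbelow(10**6):06d}"
        self._pending[request_id] = code
        send_via_second_channel(code)  # never sent on the original channel

    def confirm(self, request_id: str, supplied_code: str) -> bool:
        # pop() makes the code single-use: a replayed code always fails.
        code = self._pending.pop(request_id, None)
        # Constant-time comparison avoids leaking the code via timing.
        return code is not None and hmac.compare_digest(code, supplied_code)


# Usage: the "CFO on the video call" requests a transfer; the transfer
# proceeds only if the code read back matches the one sent out-of-band.
delivered = {}
verifier = OutOfBandVerifier()
verifier.start("wire-4821", lambda c: delivered.update(code=c))
assert verifier.confirm("wire-4821", delivered["code"])  # real requester
assert not verifier.confirm("wire-4821", "000000")       # replay fails
```

The design point is that a perfect deepfake of a face and a voice gains nothing here: without control of the second channel, the impostor never sees the code, no matter how convincing the call was.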

  • They Just Stole Ghibli, Called It Cute. You Just Made It Go Viral.

    “Ghibli-style,” they call it. But this isn’t Ghibli. It’s not even close. It’s a knockoff, flattened and stripped of soul, funnelled through the same AI pipeline that’s been grinding human creativity into dust. And once again, it’s gone viral. In March, OpenAI released its GPT-4o update, a new toy marketed as image generation with “a vast variety of styles.” That phrase? It’s corporate PR for “we scraped the entire internet.” They didn’t just take data. They devoured style, story, tone, and the entire lineage of visual language. They scraped palettes. Brushstrokes. Moods. Aesthetic legacies from cultures and creators who never consented. Then they boxed it. Push a button, get a miracle. Upload your face, become a Ghibli character. Millions rushed in. Instagram drowned in cartoon clones. TikTok followed. AI Ghibli influencers, presidents, porn. All candy-coloured. All fake. All part of the same game. OpenAI didn’t just get engagement. They got fuel. One million new users in an hour. A historic revenue spike. Billions of training images uploaded for free. This was never an accident. This was bait. And we were the product. Now OpenAI is building a social network around it, not to “connect” people, but to scale the system. The engine needs more. More faces. More angles. More expressions. More you. So they’re creating a feed. A platform. A dopamine drip disguised as creativity. This isn’t a feature. It’s infrastructure. And it’s built on theft. You don’t own that Ghibli avatar. You don’t own your uploads. You don’t own the data your likeness is training. OpenAI does. And while you were busy watching your cartoon self blink, something bigger happened: they scorched the planet. After the Ghibli trend exploded, the load was so brutal it crashed systems. OpenAI had to roll back features. Cute didn’t just break the internet. It broke the grid. But the environmental cost is only half the crime. Let’s talk about who gets erased when this machine runs.
Studio Ghibli didn’t authorise this trend. They didn’t partner, license, or collaborate. It doesn’t matter. Their aesthetic was taken anyway. That’s the current model of AI development: scrape first, monetise second, apologise never. Even Sam Altman, OpenAI’s CEO, got in on the joke. “Someone made me a Ghibli twink,” he posted. People laughed. But it wasn’t self-deprecating. It was a power flex. He wasn’t mocking himself. He was showing you how easily your trust could be gamified. He laughed because it worked. What’s worse is the model that generates your soft-eyed avatar didn’t train on harmless fan art. It fed on the work of real artists, many now displaced, invisible, broke. It learned from cultures it doesn’t understand, from communities that weren’t consulted, from aesthetics born of centuries of storytelling and struggle. It chewed it up. Flattened it. Decontextualised it. Sold it back to you as fun. Miyazaki, the creator behind Studio Ghibli, once said AI-generated art was “an insult to life itself.” He saw what was coming. Art made without hands, without soul, without sorrow. A machine that mimics meaning but feels nothing. Creates nothing. Cares about nothing but scale. Now his life’s work has been reduced to a style pack for a platform he never touched. We’ve been trained to love our own exploitation. Because what this generation of AI has mastered isn’t image generation. It’s emotional capture: the illusion of creativity and the simulation of connection. Are we so starved for beauty, so desperate to be seen, that we’ll take anything, even a synthetic version of ourselves, and call it art? Even when it might cost us everything? This isn’t just about fan art or filters. It’s about agency. It’s about who gets to create. Who gets paid and who gets erased. It’s about what kind of future we’re training these models to serve and who gets sacrificed to build it. OpenAI doesn’t want artists.
They don’t want your creativity; they want your compliance, your clicks, your face turned into fuel for a model that only sees you as another variable to optimise. They are not building platforms for human expression; they are constructing pipelines where you are not the artist, but the raw material. What matters to them isn’t your imagination or your story; it’s your output, your submission, your seamless integration into a system that profits every time you mistake extraction for engagement. We cannot keep mistaking viral trends for real value when the price of participation is our creative autonomy and the cost is being charged to the planet and the people least able to afford it. We are not training these machines. These machines are training us to become more predictable, pliable, and willing to hand over our identities for a fleeting hit of algorithmic affirmation. They teach us to smile while our likeness is harvested, to celebrate while our labour is replicated, to call it art even as they call it product, even as they copyright the simulations and discard the source. This isn’t just another tech trend. This is the line we need to draw, and we cannot let them cross it.

  • Corporate-Enabled Child Abuse: Snapchat Is the Crime Scene

    What’s happening on Snapchat isn’t a “tech challenge” or a “youth behaviour issue.” It’s corporate-enabled child abuse. Jon Haidt and Zach Rausch have dropped a brutal, evidence-loaded report that rips the mask off the ghost icon. If you missed it, here’s the core message: “Snapchat is knowingly and systematically harming children. At an industrial scale.”

This isn’t just about “some bad actors.” This is about a billion-dollar company building and maintaining a platform that predators rely on, that drug dealers thrive in, and that abusers use as their hunting ground, and Snapchat is doing jack shit to stop it. According to Snap’s own internal comms, they receive 10,000 reports of sextortion per month. And they still refuse to implement meaningful safety changes. So how the hell is anyone in Australia, hell, anywhere, still anti-ban?

Let’s talk about the rising chorus of critics, many of them in “cyber safety,” youth wellbeing, and academia, who argue against restricting apps like Snapchat. You know the lines: “Banning isn’t the answer.” “We need to teach them to navigate.” “It’s about resilience, not removal.” Children do not learn digital literacy from a disappearing dick pic. No young person has ever built resilience while being extorted for nudes or sold fentanyl-laced pills via Snap Map. And no teen was ever “empowered” by being ghosted, harassed, blackmailed, or groomed in a vanishing message thread.

I sit on a taskforce with police and health professionals. Kids are buying vapes on Snapchat, getting addicted, and when they can’t afford the next hit, they’re trafficked. That’s the reality. No one learns digital literacy from a drug deal in a disappearing chat. And no one builds resilience by being pimped out to pay off a debt. So if you’re still clinging to the idea that “bans don’t work,” I need you to ask yourself: who are you protecting? Because it sure as hell isn’t the kids.
If your strategy requires children to survive harm in order to learn from it, you’re not building capacity; you’re building cover for the tech companies doing the damage. Wake up. And if you still think “early exposure with guidance” is the hill to die on, ask yourself this: guidance from whom? Certainly not from Snapchat. They’ve made it untraceable by design. Snapchat is the opposite of education. It’s a black box of addictive UX, deliberately engineered opacity, and features that actively prevent adults from intervening.

“But we can’t ban it, they’ll just go elsewhere.” Stop. That’s Big Tech’s favourite excuse, and it’s pure rot. We ban harmful environments in the real world all the time. You can’t sell booze to kids because “they’ll find a way.” You can’t hand a 12-year-old the keys to a car because “they’ll drive eventually.” So why are we still letting tech companies run zero-verification child data farms under the guise of “connection”? If your digital safety strategy boils down to “they’ll do it anyway,” you’re not managing risk; you’re surrendering to it.

This is systemic, deliberate, and brutal. Snapchat’s entire business model hinges on creating a user experience that feels unmonitored, untouchable, and untraceable. That’s the appeal. That’s the point. And in doing so, they’ve created the perfect storm for sextortion, grooming, suicide baiting, drug sales, and unrelenting harassment. And yet they refuse age verification. In Australia they even tried to call themselves a messaging service when the ban was announced. They refuse accountability. They refuse to remove features that actively endanger kids. Because safety doesn’t drive growth. Addiction does.

The safety theatre must burn. We do not need more “trusted partner” programs with Snap. We do not need more resources written with Snap’s approval. And we sure as hell do not need their blood money sponsoring wellbeing summits and people calling themselves cyber safety experts.
If you’re a charity, a mental health org, or a school education provider taking Snap’s (or any other platform’s) money in any way, shape, or form while parroting “we care about kids,” you’re laundering harm. And if you block or silence the critics who call this out? You’ve chosen your side. And it’s not ours. Pick your line in the sand. Now.

Because here’s the truth: there is no safe version of Snapchat for kids. None. It’s built to evade safety. That’s the design. So either we stand up and say enough, or we keep trading child protection for corporate access. There is no neutral here. You’re either on the side of systemic child safety or you’re in service of the machine. And if you’re still trying to “find the balance”? Kids are the ones being crushed underneath it.

Read the full report from Haidt and Rausch here: https://www.afterbabel.com/p/industrial-scale-snapchat

And keep watch for this: I am proud to have worked with the incredible Olivia Carville from Bloomberg in the past on a Roblox story, and this documentary is driven by her work. Can’t Look Away follows a team of lawyers battling tech giants, fighting for families whose children suffered devastating harm linked to social media. The film serves as both a wake-up call about the dangers of social media and a call to action to protect future generations. https://www.youtube.com/watch?v=kSmyNHKMYB4

  • Love, Trust, and Passwords - When Teen Digital Intimacy Crosses the Line

    It starts out looking like young love. Your teen is on their phone a little more than usual: laughing, smiling, texting. They mention someone new. Maybe they’re “just talking,” or maybe they’re dating. You’re not panicking. This feels like part of growing up. You might even feel a bit relieved. They’re letting you see it. They’re not hiding this.

But then something shifts. They’re a little withdrawn, anxious, and the openness is replaced by subtle secrecy. Their conversations move behind locked doors and tilted screens. And then, offhandedly, almost like it’s no big deal, they mention that their partner asked for their Instagram password. “They gave me theirs first, so it’s fair.” “They said if you really trust me, you’d show it.” And just like that, a line has been crossed. Because this isn’t about trust. It’s about control.

I’m hearing about this dynamic more and more in schools, across regions, across socioeconomic backgrounds, and across all genders. It’s not limited to one type of relationship or one type of child. It’s not a phase or a trend. It’s a pattern, and it’s playing out silently in the lives of young people who are still learning what love is supposed to feel like. The request for a password is rarely the beginning. And it’s almost never the end.

It doesn’t start with threats or shouting. Sometimes, it begins with a question disguised as closeness. A subtle test wrapped in romance. When someone pressures your teen to hand over a password “as a sign of trust,” they’re not deepening the connection; they’re quietly stripping away autonomy. And the more your child gives up, their privacy, their space, their independence, the harder it becomes to recognise the relationship as unhealthy. For a teenager, especially one in their first relationship, these lines can be blurry. They’re still learning how to say no. Still figuring out what feels good and what feels off. Still trying to understand whether setting a boundary means they’re doing something wrong.
It’s not always obvious to them that what’s happening isn’t love; it’s manipulation. And because this kind of coercive control often masquerades as attention, or care, it can go unnoticed by adults too. But we have to be clear: asking for someone’s password to prove love isn’t a harmless gesture. It’s a power play.

Teen relationships are deeply digital. Passwords are not just about logging in. They unlock entire social lives. Private chats, saved photos, location settings, late-night confessions. They hold memories, identities, even reputations. And when a relationship turns sour, access becomes ammunition. I’ve heard stories across the board: messages deleted, private photos leaked, accounts hijacked during a breakup, teens locked out of their own lives by someone who once claimed to love them. We’re not overreacting by calling it out; we’re finally naming what’s actually happening.

To many young people, sharing passwords feels normal. Expected, even. A gesture of trust. A way to prove loyalty in a world that constantly tells them to stay connected or risk being replaced. But here’s what we need to help them understand: privacy isn’t secrecy. And boundaries aren’t disloyalty. Real trust doesn’t demand constant access or total transparency. A lot of teens don’t share because they want to. They share because they’re scared of what will happen if they don’t. Being accused of hiding something. Being shouted at. Being dumped. Being flooded with messages until they cave. That’s not choice. That’s pressure. And pressure is the bedrock of coercive control.

You don’t need to interrogate or snoop. You don’t need to control your child’s phone or monitor every app. That’s the same logic their controlling partner might use, and it teaches all the wrong lessons. What you do need is curiosity. Awareness. A willingness to see the signs without jumping to punishment. Maybe your child suddenly deletes an app they used daily.
Maybe they’re frantic to reply instantly to someone, even in the middle of dinner. Maybe you hear them say, “It’s easier to just let them have it” or “They get upset when I take too long.” Maybe they’ve already shared a password, and now they don’t know how to get that control back. That’s your moment. Not to scold, but to show up.

Teens need words that make sense of their experience. They need to hear things like: “You’re allowed to have privacy. That’s not suspicious, it’s healthy.” “Real love never asks you to give up who you are just to keep it.” “If someone demands access to prove your loyalty, that’s not a sign of closeness. That’s a red flag.” What they don’t need is shame. “This is why you’re too young to date.” “You’re being naïve.” “Why would you let that happen?” That kind of response doesn’t protect them; it pushes them into silence. And silence is exactly where coercive control thrives.

If they want to leave, help them leave safely. No drama. No “I told you so.” Just practical support. Help them change their passwords. Quietly. Turn on two-factor authentication. Encourage them to document any threatening messages, even if they feel “minor.” If needed, support them in blocking or muting. Help them reach out to a trusted adult at school or in your family network if it is a bit much to tell you all the details. Most importantly, believe them. Even if it seems small or childish or typical from the outside. Inside, it might feel suffocating.

This isn’t about banning phones or banning love. It’s about teaching our children, across all communities, all genders, all schools, that love isn’t ownership. That closeness doesn’t require access. That the right to be private is not a sign of guilt; it’s a sign of growth. The goal isn’t to stop our teens from falling in love. The goal is to help them recognise when something that looks like love is really a loss of self. So start the conversation early. And keep it going.
Because one simple message, repeated often, can make all the difference: you never have to hand over your identity to be worthy of love.

  • Barbie Box or Action Hero? No thanks.

    There’s a trend sweeping LinkedIn right now that is part career flex, part creative experiment, part nostalgia hit. You’ve probably seen it. Smiling faces in plastic doll packaging. Action-figure you. “Hero Mode Activated.” “Limited Edition.” “Collectible.” Or the Barbie Box Challenge. People are posting them by the hundreds. Safety Barbie. Engineer Barbie. UX Strategist Barbie. Anti-Bullying Hero.

I have to say, my mind is a bit blown that so many smart, successful women are choosing to climb in (I have seen very few men). It’s everywhere, from recruitment agencies to banking and tech execs to public sector teams. I’ve even seen teachers suggesting it as a classroom activity. That’s when the alarm bells got louder for me. Because while it looks like harmless fun, I can’t stop thinking about what’s underneath it. So no, I’m not jumping on the ChatGPT Barbie Box or Action Hero trend. And this isn’t because I am the fun police. It’s because I understand the system.

This isn’t the first time we’ve willingly turned ourselves into data points dressed as digital art. We’ve been here before. Face-swap filters trained on facial recognition datasets. Turn-yourself-old-or-young portraits that quietly collect biometric mapping data. AI avatar generators powered by scraped artwork from unpaid artists. Every time, it’s wrapped in language like “creative,” “fun,” “personal.” But scratch the surface and it’s all the same: data extraction disguised as play.

This trend is just the latest version. But this time, it seems slicker. And this time, it’s smart, progressive professionals hitting share. People who normally question tech. People who talk about AI ethics and consent and climate impact. And yet, here they are posing like toys, willingly packaging themselves into tiny algorithm-friendly boxes. So what’s so appealing? Part of it, I think, is the illusion of control.
But look a little closer and you’ll see you don’t control what gets softened, idealised, or erased. This isn’t just aesthetic. This is cultural conditioning. This is what happens when generative AI models are trained on narrow ideals of beauty, age, and professionalism. The image generator behind this trend? DALL·E, owned by OpenAI, the same company that has quietly dismantled the teams tasked with ensuring its work doesn’t harm humanity. Earlier versions were trained on massive image datasets scraped from across the internet, including art, portfolios, photography, and creative work made by actual people, without permission or pay. Yes, they’ve since added filters. No, that doesn’t erase the foundations. This trend is powered by stolen labour, and it’s being fed fresh data with every prompt.

And then there’s the energy cost that no one is thinking of while they create an action figure of themselves. These models aren’t just running in the background; they require serious computational muscle. Creating one AI-generated image can use up to ten times the energy of a Google search. That’s a lot of environmental impact for a selfie with sparkles and a job title in a plastic box. And we’re calling this fun?

But it goes even further than power-hungry data centres. Behind every generative AI system are humans doing the invisible, cheap labour that keeps it all running. Content moderation? Done by traumatised workers, often in the Global South, exposed daily to violence, abuse, and exploitation so we don’t have to see it. Training data? Labelled and categorised by click workers, paid pennies per image to teach the AI how to “see.” Prompt tuning and fine-tuning? Also people, often under NDAs with zero labour protections, working in isolation to polish the outputs. This Barbie Box trend is not just a cute use of tech that you think you’re using wisely because you have half a clue of how it works.
There is no need for Barbie or action-figure you. It is a waste of energy; save it like you would by turning off a light, and only use it when you actually need it. Don’t become part of an industrial pipeline of human and environmental cost, built to make us feel clever, creative, and visible while hiding the systems that make it possible.

I’ve been asking myself why this trend is hitting so hard, so fast, with so many people who should know better. The answer, I think, sits at the intersection of performance culture, platform pressure, and a deep hunger to be seen. We’ve spent years learning how to brand ourselves. We’ve been told to be polished but authentic, confident but relatable, visible but never too loud. This trend hits all the sweet spots: it makes us look good, sound clever, and fit neatly into the “personality + professionalism” aesthetic that platforms like LinkedIn reward. It’s validation wrapped in visual sugar. And we’re so tired, so busy, so desperate to stay in the conversation, that we don’t stop to ask what we’re trading for that hit of attention.

It’s not about individual shame. It’s about collective patterns. And this one? It should worry us. Especially when it’s starting to show up in classrooms. Because what does it teach kids, really? That their identities should be flattened into aesthetic boxes? That it’s normal to upload your image to an opaque machine in exchange for applause? That’s not empowerment. It’s user onboarding.

  • How Predatory Apps Weaponise Legal Loopholes Against Kids

    Just because something isn’t explicitly illegal doesn’t make it safe, moral, or remotely okay. The image is a screenshot straight from a “clothes off” nudify app. Yes, the app that uses machine learning to generate fake nude images of real people is marketing itself as “legit” because, technically, they’re operating in a grey zone of the law. And that is the most dangerous kind of legit there is.

What You’re Looking At Is Digital Gaslighting

It banks on the fact that users, especially teens, won’t understand the deep difference between what’s legal and what’s ethical, or what’s legal now and what could destroy their life forever. It’s a masterclass in plausible deniability, dressed up in soft fonts and legal-sounding fluff.

Let’s Talk About What These Apps Actually Do

These apps use AI to strip clothes off images, often of unsuspecting people, and generate synthetic nudes. Whether it’s your daughter’s class photo, a teenage boy’s TikTok selfie, or a group shot from a school trip, if someone’s got access to their image, they can undress them virtually. And the AI doesn’t ask for consent. Kids. Can. Use. This. And they are. This tech isn’t being used for giggles. It’s being used for bullying, blackmail, revenge, harassment, and, in the worst cases, child sexual abuse imagery (yes, that includes AI-generated deepfakes). But these apps don’t care, because they’ve crafted little clauses like this to wriggle out of responsibility.

Section 230 and the “Not Our Fault” Loophole

Section 230 of the U.S. Communications Decency Act is a law that says tech platforms aren’t liable for what users create or post. This is why you can’t sue Facebook itself if someone posts something awful (you can sue the person), and it’s why these AI apps sleep well at night while kids’ lives are getting torn apart. By their logic, they’re no more liable than the manufacturer of the paper a major newspaper is printed on.
But here’s what they don’t say: when any of the thousands of available apps lets users generate fake nudes, they aren’t just a platform anymore; they’re the factory. They’re not merely hosting content. They’re the tool a third party uses to create it: the enabler, even if not the distributor. And they hide behind weasel words like “ensures full confidentiality at all stages of use.” Confidential for whom? Not for the victims.

Imagine you’re 14. You don’t know what Section 230 is. You barely understand how to read a privacy policy, one you would never read anyway, let alone a clause like this. But you do know you’ve got a crush on someone. Or someone pissed you off at school. Or your mates dared you to do something. Enter a Nudify/Deepfake/Sexual Poses app. The barrier to entry? Basically zero. No meaningful age verification. No ethical guardrails. No consent tools. No protections for the person in the photo. Once that image is created, it can be shared, sold, or weaponised. And the damage? Permanent. No amount of “I didn’t know” undoes the trauma.

This Isn’t About “Personal Use”; It’s About Real-World Fallout

The phrase “for personal purposes” is another linguistic Trojan horse. It makes it sound like someone’s just privately undressing stock photos of celebrities in their basement. But that’s not what’s happening. We’ve already seen reports of AI-generated nudes being used:

• To coerce teens into sending real ones (sextortion)
• In cyberbullying rings that target specific students
• As image-based abuse substitutes when real images aren’t available
• As DIY porn, when boys find a girl attractive and create a deepfake of her that they store in a secret vault app or folder for their eyes only

The Legal System Is Still Playing Catch-Up

Right now, many countries don’t have laws that directly criminalise deepfake nudes unless they’re used for specific purposes like blackmail or image-based abuse. That’s changing fast, but tech always outruns regulation.
A few countries have stepped up:

• Australia: Amended its laws to include deepfake imagery as a form of image-based abuse.
• UK: Under the Online Safety Act, sharing deepfake nudes without consent is criminalised.
• South Korea: Has strict laws targeting digital sex crimes, including synthetic media.

But globally? It’s a mess. And companies like these are thriving in that legal chaos.

Teach Your Kid to Spot Legal Weasel Words

That clause? The one claiming it’s all “legal” and “for personal use”? It’s not just bad faith; it’s strategic. These apps use carefully worded nonsense to trick users into thinking there’s no harm and no one’s responsible. But your child doesn’t have to fall for it. So teach them this: “legal” doesn’t mean safe. And just because an app says it’s allowed doesn’t mean it’s right. Help your child build their BS radar. Show them real examples (like the screenshot clause), and talk through why that language exists: to protect the company, not the user. Explain how terms like “confidential” or “within legal frameworks” often translate to “we’ll deny everything if someone gets hurt.” They don’t need a law degree; they just need to know that if something feels wrong, it probably is. And if an app’s telling them, “Don’t worry, this is totally fine,” that’s exactly when they should worry. We’ve let Big Tech raise our kids with disclaimers and deniability for too long. That ends with us. Let’s raise kids who don’t just scroll: they question, they pause, and, when needed, they shut it down.

So, What Should Parents Actually Do?

Let’s skip the hand-wringing and get real. Here’s what you need to know and what you can do:

1. Talk About Consent in a Digital Age
Consent isn’t just about physical touch; it’s about image ownership, digital manipulation, and emotional fallout. Kids need to understand: just because you can doesn’t mean you should.

2. Name the Apps. Yes, Even the Gross Ones
Don’t say “bad apps” or “dangerous sites.” Say their names: Undress AI.
Clothes Off. DeepNude. OnlyFake. FaceSwapLive. You can’t protect kids from a threat they can’t name.

3. Don’t Assume They’re Not Involved
Even the most “well-behaved” kids could be curious, coerced, or caught up. This isn’t about shame; it’s about resilience. Open conversations, not accusations.

4. Push for Platform Accountability
Pressure lawmakers. Support digital rights orgs. This isn’t something we can fix in the family home alone. We need teeth in legislation that makes these companies liable, not just morally, but financially and criminally.

5. Protect the Targets, Not Just the Users
If your child is targeted, it’s not their fault. But they’ll need your help: emotionally, legally, digitally. Start by getting screenshots. All of them. Don’t let your child message or confront the person responsible, especially if it’s another kid at school. That only gives the perpetrator time to delete everything. And without evidence, there’s no case. Once you’ve got documentation, report it to the platform, report it to the police, and line up mental health support. Fast, calm, and clear.

For assistance:

Australia
1800RESPECT: 1800 737 732 (National Sexual Assault, Domestic and Family Violence Counselling Service) https://www.1800respect.org.au/
Australian Centre to Counter Child Exploitation (ACCCE): Provides resources and reporting avenues for online child exploitation.
https://www.accce.gov.au/

United States
National Sexual Assault Hotline: 1-800-656-4673 (RAINN, the Rape, Abuse & Incest National Network) https://www.rainn.org/
National Center for Missing & Exploited Children (NCMEC): 1-800-THE-LOST (1-800-843-5678) https://www.missingkids.org/
Cyber Civil Rights Initiative Crisis Helpline: 1-844-878-2274 (for victims of non-consensual pornography) https://www.cybercivilrights.org/

European Union
EU Sexual Violence Helpline: Available through national helplines; check the European Women’s Lobby for country-specific contacts. https://www.womenlobby.org/
INHOPE: A network of hotlines for reporting illegal content, including deepfake pornography. https://www.inhope.org/EN
European Cybercrime Centre (EC3): Provides resources and support for cybercrime victims. https://www.europol.europa.eu/about-europol/european-cybercrime-centre-ec3

United Kingdom
Revenge Porn Helpline: 0345 6000 459 https://revengepornhelpline.org.uk/
The National Domestic Abuse Helpline: 0808 2000 247 (24/7 helpline run by Refuge) https://www.nationaldahelpline.org.uk/
CEOP (Child Exploitation and Online Protection Command): Provides advice and resources for children and adults dealing with online exploitation. https://www.ceop.police.uk/safety-centre/

Hong Kong
RainLily 24-hour Sexual Violence Crisis Support Hotline: 2375 5322 https://rainlily.org.hk/en
The Family Planning Association of Hong Kong: Provides counselling and support services. https://www.famplan.org.hk/en
Hong Kong Police Force Cyber Security and Technology Crime Bureau: Offers resources and avenues for reporting cybercrime, including sextortion.
https://www.police.gov.hk/ppp_en/04_crime_matters/tcd/

Canada
Cybertip.ca: 1-866-658-9022 (for reporting the online sexual exploitation of children; can also provide resources for adults) https://www.cybertip.ca/
Kids Help Phone: 1-800-668-6868 (provides resources for young people and can direct to appropriate services) https://kidshelpphone.ca/
Canadian Centre for Child Protection: Offers resources and support for victims of sextortion. https://www.protectchildren.ca/

  • AI Is Now Sniffing Out Cheaters... But at What Cost?

    Right. So here we are. A new AI tool called CheatEye just dropped, and it’s already throwing petrol on the bonfire of modern relationships. For the low, low price of your dignity and someone else’s privacy, you can now upload your partner’s photo and let an algorithm trawl Tinder to “check” if they’ve got an active profile. No awkward chats. No “Hey, I feel like something’s off.” Just cold, hard, machine-driven surveillance. Welcome to love in the age of AI.

This Isn’t Tech for Trust. It’s Tech for Paranoia.

CheatEye AI uses facial recognition to scan dating apps for matches. Ostensibly, it’s for people who “just want to know.” But that’s a slippery slope greased with insecurity, fear, and a whole lot of Silicon Valley sleaze. The marketing is straight-up emotional bait: “Is he still on a dating app?” “Catch him in the act.” “Don’t be the last to know.” It’s not subtle. It’s not healthy. And it’s definitely not neutral. This kind of tech doesn’t show up in a vacuum. It feeds on a culture already marinated in mistrust and oversharing. We’ve been conditioned to think that if we can know everything, we’ll feel safe. Spoiler: we won’t. And this stuff doesn’t just affect the person being “caught.” It rewires everyone’s sense of what’s okay in a relationship. Just because tech makes it possible doesn’t mean it makes it right.

So What’s the Big Deal? It’s Just a Search, Right?

Wrong. Here’s why this deserves more than a shrug:

1. Consent Just Left the Chat
Your partner doesn’t opt into this scan. Their photo gets fed into a facial recognition engine without permission. You’re basically deputising AI to do private detective work they never agreed to. That’s a huge privacy violation. And let’s remember: this isn’t just one person scanning a partner. It’s a tool that can be abused, badly, by stalkers, exes, or literally anyone with a grudge and a photo.

2. Normalising Surveillance in Intimacy
If you have to spy to feel safe, that’s not safety.
That’s hypervigilance dressed in digital drag. And the more tools like this are marketed as “solutions,” the more we let surveillance become the new standard for communication. This is a cultural shift in real time, and it should terrify us. What happens when watching replaces trusting? When we start managing love like we manage cybersecurity?

3. False Positives, Real Damage
Let’s not pretend AI is infallible. Facial recognition has a spotty track record, especially with people of colour, gender-nonconforming folks, or anyone with a slightly outdated selfie. So now we’ve got tech that might give you a wrong result, and your entire relationship spirals from there? Great.

The Bigger Picture: Tech Is Replacing Talk

It’s tempting, isn’t it? Why confront someone when you can just feed their face into an app? Why say “Hey, something’s bothering me,” when you can quietly play detective? If trust is already so fractured that you’re running a digital sting operation, you don’t need an app; you need a conversation. Or, let’s be honest, maybe a breakup. AI is making it easier to avoid hard conversations, but that doesn’t make it better. It just delays the inevitable and erodes whatever dignity the relationship had left.

We’ve Been Here Before... Sort Of

This isn’t entirely new. Think about checking someone’s texts while they’re in the shower. Looking at browser history. Scrolling through likes and DMs to decipher meaning. We’ve been playing amateur sleuths for years, but tools like CheatEye supercharge it with the illusion of legitimacy. Now it’s not you being paranoid; it’s “data.” It’s “evidence.” This is a trust problem masquerading as a tech solution.

So What Do We Actually Do?

Let’s not pretend relationships are easy. Trust is hard-earned, easily shaken, and always a bit messy. But surveillance isn’t a shortcut; it’s a detour that leads you off a cliff. Instead of feeding the beast, we need to talk louder about:

• Mutual consent in digital spaces.
Your face, your data; no one should be scanned without knowing.
• Redefining “proof.” If you need tech to tell you something feels wrong, chances are you already know.
• Healthy conflict skills. We’re in a generation that knows how to swipe but not how to sit with discomfort. That’s not our fault, but it is our work.
• Modelling trust and repair for our kids. If we want future generations to know how to build real intimacy, we have to show them it doesn’t start with spying. It starts with respect.

Let’s Talk About It (Because Damn, We Need To)

If you’re a parent, a partner, or even just a person trying to figure out what the hell healthy love looks like anymore, these are questions worth wrestling with:

• Is it okay to use tools like CheatEye if you suspect something’s up?
• What are we teaching ourselves and our kids when we outsource trust to AI?
• Would you feel safe in a relationship where someone was scanning you behind your back?

We need spaces, real, raw, respectful ones, where we can unpack this stuff without judgement. Because if we let AI define the new normal for relationships, we’re in for a deeply disconnected future. So yeah, CheatEye might be the first. But it won’t be the last.

What Their Terms of Use Say

You’re uploading someone’s face to a company that:

• Admits it can’t fully protect your data
• Claims zero responsibility for anything that happens
• Collects and potentially shares your personal and financial info
• Can flip the terms without notice
• Can sell your data if the company gets sold

All to... check if someone still has a dating profile? It’s not just sketchy. It’s dystopian. They call this “relationship insurance.” I call it DIY digital surveillance with an EULA (End User License Agreement) that covers them, not you. If you’re already uneasy about your relationship, this isn’t your answer. And if you value privacy, trust, and basic consent? CheatEye’s fine print should have you running, not signing up.

  • Boys aren’t just watching it. They’re producing it. Yes....that.

Not with cameras or with consent, but with screenshots, casual photos, AI, and the faces of real girls they know. This isn’t about looking at the plethora of pornography sites a click away on the net. It’s about making it, manipulating it, reshaping what used to be an innocent image into something sexualised, objectifying, and completely non-consensual. And no, it’s not happening on the deep web or some sketchy 18+ Reddit forum. It’s happening on school buses, in screenshots taken in class group chats, on Snapchat threads that should have vanished in 24 hours. In the pockets of children with fully charged smartphones and zero accountability. There’s another siren going off, and too many adults are still not hearing it.

Now, before the default defence kicks in (“My son would never…” or “not all boys”), let’s pause. Because that sentence is doing more damage than you think. This isn’t about criminal intent or about "bad" children. Boys don’t need to be tech-savvy or malicious; they have the perfect storm: curiosity, easy access, shaky impulse control, peer pressure, raging hormones, and a few public photos of someone they know, someone they follow online, sit next to in maths, or see on the bus. Add powerful digital tools and a culture where adults stopped paying attention, or stopped understanding what is really going on, and what you get is the collision of underdeveloped brains with unchecked power. That’s all it takes. That's all it takes to create a private folder, generate a deepfake, or drop an image into an AI tool that does the rest.

It’s happening because we’ve allowed children to be raised in our absence, often while we are in the same room. Not because we didn’t care or because we don’t love them. But because fifteen years ago, when we first started handing smartphones to kids, the warnings were inconvenient. When those of us working in the field screamed "wait" from every rooftop, we were dismissed as alarmists.
When we begged parents to delay tech until kids had a solid sense of self-worth, resilience, and critical thinking skills, we were told "everyone else’s kid has one" and that it was “just how things are now.” And so the phones went into their hands, the boundaries went out the window, and now we’re here. What we thought we were giving them was social connection. What we actually gave them was unchecked power. What we thought would teach responsibility ended up teaching objectification, manipulation, and quiet exploitation, because that’s what’s rewarded in some online cultures.

I was talking to my partner about this, and I explained that I’ve been watching this rot grow since the beginning of the net. When I first started in the tech industry, the internet was new, flipping our green-screen terminals into digital playgrounds, and I remember thinking, “This is going to be a legal Wild West.” And I was right. As one of the very few women in the room, I’d get “accidentally” cc’d on internal email chains. One I clearly recall was called "Tasty Tuesday Treats": a collection of voyeur-style images of women photographed without consent (it would be labelled "caught lacking" now). Images of women sunbathing topless or walking the beach in their G-strings. One of those photos was of a friend of mine, taken on the beach in my hometown, Byron Bay, while I was working in Sydney. She would have had no idea she was being watched, captured, passed around digitally.

That was the moment something in me snapped. Not long after, a very loud, very straight-to-the-point 25-year-old version of me (some things don't change) called that shit out, loudly. The company scrambled and brought in a tool called Pornsweeper. It could tell with 95% accuracy whether pictures circulated via email were pornographic, enabling those pictures to be blocked. It measured skin tones and the percentage of skin shown, and it was designed to block this kind of crap on the corporate network. That was the mid-90s.
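The mechanism described above, flagging an image when too much of it looks like skin, can be sketched in a few lines. This is an illustration only: the RGB rule below is a common textbook skin-tone heuristic, not Pornsweeper's actual algorithm, and the 30% blocking threshold is an assumption chosen for the example.

```python
# Hedged sketch of a skin-tone-ratio filter in the spirit of mid-90s
# email-attachment scanners. The RGB rule and the 30% threshold are
# illustrative assumptions, not the real product's algorithm.

def is_skin(r, g, b):
    """Crude RGB skin-tone test (textbook heuristic)."""
    return r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15

def skin_ratio(pixels):
    """Fraction of pixels classified as skin; `pixels` is a list of (r, g, b)."""
    if not pixels:
        return 0.0
    return sum(is_skin(*p) for p in pixels) / len(pixels)

def should_block(pixels, threshold=0.30):
    """Block the image if the skin-pixel fraction exceeds the threshold."""
    return skin_ratio(pixels) >= threshold

# Toy "images": one mostly skin-coloured, one mostly blue sky.
mostly_skin = [(200, 150, 120)] * 80 + [(30, 30, 200)] * 20
mostly_sky = [(30, 30, 200)] * 100

print(should_block(mostly_skin))  # True
print(should_block(mostly_sky))   # False
```

Even a heuristic this crude catches obvious cases, which is the point of the anecdote: the technical bar for intervening was low thirty years ago, and it is far lower now.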
If it could be stopped then, with clunky old software and dial-up speeds, why the hell are we still pretending we’re powerless now? The only thing that’s changed is the scale and the silence. It’s more automated. More invisible. More acceptable. But the exploitation? Still thriving, normalised, swept under the digital rug. So no, parents, we don’t get to be shocked anymore. We need to be informed.

This starts early, and it escalates fast. Here is how it may happen: A boy sees a girl on the bus and snaps a quick photo without her knowing. A classmate sends a disappearing Snapchat. He screenshots her Instagram story. He zooms in on a group photo and crops the rest out, just her. At first it feels like a crush. But this isn’t admiration; it’s documentation.

And it doesn’t stop at one photo. It turns into secret albums in Google Drive. Private folders saved under innocent names. Collections labelled “Wifey,” “Caught Lacking,” “Smash or Pass,” “The Vault,” or worse. Sometimes they’re just saving images. Other times, they’re ranking girls. Rating them. Trading folders with friends like it’s a game. These aren’t strangers. These are girls they go to school with. Sometimes girls they’ve never even spoken to. And yes, there are AI-generated nudes. Yes, there are deepfakes. And no, the girls almost never know it’s happening until it leaks. Until someone talks. Until it’s already done.

Parents, you need to know this. The images aren’t always sexual to start with. They’re often just normal photos. A class picture. A TikTok dance. A selfie. But once they’re taken, saved, or shared without consent, the context doesn’t matter anymore. The intent is what matters. These images become currency. A twisted kind of social capital that feeds ego, status, and group validation. And here’s the part that hits hardest: this isn’t even rare anymore. It’s becoming normalised.
Boys don’t even recognise it as harmful, because the culture around them keeps telling them it’s funny, harmless, even flattering. And because the girls posted the photos themselves, the old excuses resurface: she was wearing a short skirt, she wanted it. That’s what happens when silence fills the gaps.

Let’s stop pretending this is a fringe issue. It’s not. This is systemic, embedded in everyday tech, and fuelled by tools they’re not equipped to handle. AI apps. Vault apps. Hidden folders. Discord groups. Cloud backups disguised as “Math Notes.” This isn’t sci-fi. It’s right now. This is about broken systems, unchecked platforms, and a generation growing up without digital guardrails. They’re learning from TikTok, Reddit, and porn subcultures because no one at home is having the conversation. No one is disrupting the cycle.

If you’re reading this and your instinct is to get defensive or to feel ashamed, you’re not alone. And you’re not a bad parent. Some of the most passionate voices in this space right now are parents who got it wrong. Parents who gave the phone too early. Who missed the signs. Who trusted that everything would be okay… until it wasn’t. They live with that pain. With the fallout. With the guilt. And now they’re speaking up, not out of judgement, but out of love. Out of urgency. Their words might sound intense, even harsh. But underneath, there’s a broken heart and a fierce hope that maybe they can help someone else avoid the same regret. Their message is about trying to protect kids before the damage is done. It’s heart-centred, even when it comes off a little full-on. Please listen, not to be shamed, but to be supported by people who wish someone had shaken them hard enough to make them hear. Now they’re trying to be that voice for you. Hear them.

We can’t go back. But we can speak up. We can interrupt this cycle. We can start the conversation, even if it’s uncomfortable. Here’s what you can do:

Start with simple chats.
At the dinner table, in the car, on a walk. Not lectures, just honest talk. Say things like: “Have you ever seen something online that made you feel weird or unsure?” “Do your friends talk about taking screenshots or saving people’s photos? How do you feel about that?”

Be Curious, Not Controlling. This isn’t about snooping through their phones at night. It’s about being present. Involved. Know what apps they’re using and how. Ask them to teach you. Sit beside them when they’re scrolling. Show interest, not interrogation. You’re not just checking up, you’re showing up.

Teach That Consent Isn’t Just Physical, It’s Digital Too. We teach our kids not to touch others without permission. The same goes for images. A photo, a story, a post: it still belongs to the person in it. Saving it, editing it, or sharing it without asking? That’s not “just being funny.” It’s a form of harm. Make it real for them: “Imagine someone did that to a photo of you. How would that feel?”

Help Boys Understand Respect Beyond Rules. This isn’t about punishing boys; it’s about giving them better tools. Many boys don’t mean harm. They’re copying what they see, chasing peer approval, or just not thinking it through. So let’s show them: Real confidence doesn’t come from mocking others. Real mates don’t trade photos; they protect a female friend’s dignity. Real power is using tech to be kind, not cruel. Talk to them about how to be a safe place, not just how to avoid getting in trouble.

Know What the Digital “Red Flags” Look Like. Today’s hidden folders aren’t under the bed; they’re in calculator apps and cloud drives. Things to look out for: Vault apps disguised as boring utilities. Folders named “Math Notes” or “Don’t Open”. Google Drives or Discord groups used like private galleries. It’s not about assuming the worst; it’s about being one step ahead. Quietly informed. Calmly ready.
Model the Values You Want Them to Learn. Kids mirror what they see, not what we say. If we gossip using screenshots, or post family photos without asking, they pick up the message that boundaries are optional. Model consent in everyday life: “Can I post this photo of you?” “That video doesn’t sit right with me. What do you think?” We can’t ask our kids to act with respect if we’re careless ourselves.

Create a Home Where Telling the Truth Feels Safe. Your child might mess up. They might be sent something awful. They might even take part in something they don’t understand. What they need is to know they can come to you. No explosions. No shame. Just love, boundaries, and guidance. Let them know: “You’ll never be in trouble for telling me the truth. Ever.” That safety net might be the one thing that stops something from going too far.

Because the truth is brutal, but it’s also necessary. This isn’t just about what they’re watching; it’s not sneaking a sealed magazine. It’s about what they’re making in secret. And the only way it stops is if we stop being scared to talk.

___________________________________________________________________________________

Here is a glossary of the things you may want to know more about:

THE FILING SYSTEM

“Wifey”
Used to label girls a boy is especially obsessed with, attracted to, or sees as “girlfriend material.” But in reality, it often includes curated photos of the girl (or girls) he’s sexualising, without her knowledge. This is about possession, not affection.
“Caught Lacking”
The girl was photographed without knowing, often in unflattering or vulnerable moments: slouched in class, eating, bending over, laughing. “Lacking” implies she wasn’t “on guard” or “looking her best.” The whole point is mockery and power: “I got this pic of you, and you don’t even know.”

“Favourites” / “Faves”
A ranking system. Photos of girls sorted by personal preference, like a playlist of people. Some folders even include scores, comments, or emoji ratings (🔥, ❤️, 😍). It gamifies attraction and turns real girls into a scrollable, ranked feed.

“Smash or Pass”
Straight from hookup culture. Often used in group chats or shared folders, this one refers to rating girls as sexually desirable or not, based solely on looks. It’s rarely private. It’s part of a collective game where boys vote, share, and judge together. The girls almost never know it’s happening.

“The Vault”
Sounds cool. But it’s where the worst stuff goes. This is usually a locked or hidden folder (sometimes in a disguised calculator app) where boys save: the most revealing screenshots, edited or AI-generated images, possibly even deepfakes or non-consensual nudes in sexual poses. It’s secret, curated, and intentionally hidden.

“Private” / “Don’t Open” / “Math Notes”
Disguises. These folders exist to fool parents, teachers, or anyone who might casually look. They’re made to seem boring or innocent, but inside is where they hide sexualised content. This is deception by design, and it works, unless you know what to look for.

“Hot List” / “Baddies Only” / “Thirst Traps”
These are blatantly sexualised folders. Usually pulled from social media: bikini photos, mirror selfies, dancing videos, or filtered pics that are totally public, but still used without consent. It’s a warped version of admiration, but it’s still objectification; there’s no humanity, just appearance.

“Zoom Screenshots”
Yes, it’s exactly what it sounds like.
Screenshots taken during online classes, especially during lockdown or remote learning. Girls caught mid-expression, turning their camera on for a moment, or just existing. It’s not just creepy. It’s predatory and calculated.

“OnlyFans (LOL)”
A joke meant to degrade. Usually filled with regular photos of girls, but labelled as if the girl is “selling herself.” It’s a way of mocking or shaming girls for how they look or dress online. This reinforces toxic thinking: “If she posts it, it’s public property.”

Initials or Nicknames (e.g., “KM Collection”, “JessSzn”)
The most subtle, and the hardest to catch. These are individual galleries named after a specific girl, often using her initials, a nickname, or a coded name only the boy or his friends would recognise. It gives him deniability if questioned, but still lets him organise his content however he wants.

  • The MANipulation Pipeline (And How It Works).

They're being groomed. Not in the traditional sense, but digitally, by algorithms designed to manipulate their identity. TikTok, YouTube, Reddit, Discord: these platforms aren't just "where they hang out." They're the loudest voices in your child's life right now. And those voices? They're smart. Slick. Often disguised as "funny," "motivational," or "alpha male truth bombs." But what they're really doing is hijacking your child's need for identity, certainty, and connection, and selling them a worldview that's cold, cruel, and addictive.

It starts with jokes. Then it becomes identity. Then it becomes worldview. Teens start echoing words like "red pill," "sigma," or "high value woman" because it feels powerful. But behind the meme is a message: "Emotions make you weak." "Women are objects." "Compassion is for losers." "Real men dominate or get dominated." Sound dramatic? It's not. This content is engineered to be addictive, empowering, and radicalising, one post at a time.

Your teen is out in a wild sea of opinions, performance pressure, and nonstop digital chaos. The influencers yelling at them to "be a Chad" or "reject the woke mob" are offering certainty, and teens crave certainty. So when you speak, you need to be the one voice that says: "You don't need to prove yourself to earn love." "You don't have to act tough to be worthy." "You're not weak for feeling things. You're human."

The Long Game Wins

Sometimes you won't get the perfect response. Sometimes you'll feel like nothing landed. But every time you stay grounded… Every time you choose understanding over control… You're building the trust they'll need later, when the real questions come. The goal is to raise a human who knows the difference between borrowed power… …and real integrity.

Teen Slang: From Harmless to Harmful

Common & Harmless (Usually)

Rizz
Charisma or flirt game. "He's got rizz" means he's smooth or charming.

Thirst Trap
A sexy or flirty photo posted to attract attention online.
Used both playfully and critically.

Delulu
Short for "delusional," often self-deprecating. "I'm delulu thinking he'll text back." Used as a way to laugh off disappointment.

Main Character Energy
A person acting like they're the star of the show. Confident, dramatic, or attention-grabbing.

Glow-Up
A positive transformation, usually in looks or confidence. "He had a massive glow-up after Year 10."

Dry Texter
Someone who texts with no enthusiasm or effort. "He just said 'k'. Such a dry texter."

Concerning or Influencer-Driven (Alarm Bells)

Sigma Male
A twisted evolution of "alpha male." Sigmas are supposedly lone wolves who reject societal rules and dominate on their own terms. Huge red flag. Promoted heavily by toxic masculinity influencers. Often used to excuse emotional coldness, antisocial behaviour, or lack of empathy.

High Body Count
Used to judge or shame women based on the number of people they've slept with. Deeply misogynistic. Reinforces double standards and purity culture.

For the Streets
A derogatory phrase suggesting a woman is promiscuous or unworthy of respect. Sexist and dehumanising. Common in Andrew Tate and Sneako's communities.

Red Pill
Originally from The Matrix, but now used to describe "waking up" to a reality where women are manipulative, society is unfair to men, and masculinity is under attack. Code for entering the misogynist pipeline. If a teen says they've "taken the red pill," serious intervention is needed.

Chad
The "ideal man" according to toxic internet culture. Good-looking, dominant, gets all the girls. Used to mock or idolise others. Can lead to unhealthy comparisons or envy.

Femcel / Stacy / Trad Wife
Slang from incel communities to categorise women as undesirable (femcel), overly desirable and shallow (Stacy), or submissive and obedient (trad wife). These are not neutral terms. They come from toxic online subcultures and should be challenged immediately.
Beta / Cuck
Insults aimed at men seen as weak, respectful to women, or not traditionally masculine. Designed to shame empathy and emotional intelligence.

Alpha Grindset / Hustle Culture
Obsession with power, dominance, and making money, often tied to crypto, AI scams, or pyramid schemes. Can lead to teens being exploited or scammed.

Woke Mob / NPCs / Sheeple
Used to dismiss anyone who holds progressive views or questions problematic behaviour. Often used to shut down conversations or excuse hate speech.

GYATT
Exaggerated slang for someone's curves or butt, usually yelled in reaction videos or comments. Often sexualises young girls. Red flag if used in school settings or toward peers.

Skibidi Rizzler / Skibidi Sigma
Silly-sounding, meme-fuelled combos that make dangerous ideas sound like a joke. Used to mask toxic beliefs under layers of irony.

Red Flag Phrases to Watch Like a Hawk

Phrase | Translation | What It Really Means
"Red Pill" | "Woke up to the truth" | Misogynist pipeline initiation
"For the streets" | "She's trash" | Sexist slut-shaming
"Sigma Male" | "Lone wolf alpha" | Emotionless domination fantasy
"High Value Woman" | "Deserves me" | Control masked as preference
"Beta/Cuck" | "Weak man" | Used to shame kindness/empathy
"Trad Wife" | "Obedient girl" | Anti-woman propaganda
"NPC / Sheeple" | "Follower" | Used to silence empathy or truth

Understanding these phrases is crucial for parents to identify when their teens might be exposed to harmful content online. These terms aren't just slang; they're entry points to potentially dangerous ideologies that can shape how young people view relationships, gender roles, and their own identity. When you hear these phrases, it's important to approach the conversation with curiosity rather than judgement. Ask where they heard it, what they think it means, and use it as an opportunity to discuss the underlying messages these terms carry. Remember that many teens repeat these phrases without fully understanding their origins or implications.
Your goal isn't to shame them but to help them develop critical thinking skills about the content they consume online.

What’s Actually Going On

Your teen’s not turning into a jerk; they’re shape-shifting. Right now, they’re under pressure to be someone. And fast. Online, there’s no room for uncertainty. No space to figure it out slowly. So they start trying on identities like outfits, seeing what gets a reaction, what makes them feel powerful, what helps them belong. That’s not evil. It’s survival. They’re not aiming to hurt anyone. They’re just navigating a world where being bold, even if it’s toxic, feels safer than being unsure.

If your son parrots something like "She's for the streets" or "I'm on my sigma grind," he's not trying to hurt anyone. He's saying: "I'm trying to figure out who I am." "I want to sound strong." "I want to belong." And if you shut it down with shame or panic? He just learned: "Okay, I can't be real with you." And next time, he'll hide it better.

What Your Teen Actually Needs

Not perfection. Not digital fluency. Just presence. You don't need to "get" every meme or decode every trend. You just need to be: the calm in the chaos; the soft landing when the world gets hard; the only voice in their life that won't make them feel stupid, gross, or beyond help. Because here's the truth: when they drop those words, they're not testing your rules. They're testing your capacity to love them through the confusion. Your job is to show them: "Yes. I see you. And I'm strong enough to hold this with you." "I'm not afraid of what you're becoming. But I will fight to make sure it's really you, not some algorithm's version of you."

How to Handle It When It Happens

Let's say your teen repeats something toxic. Here's a framework to guide you: no panic, no lectures, just grounded parenting.

Pause and Breathe. Your first reaction is often emotional. That's okay. But don't lead with it. Responding with calm is how you keep the door open. "Okay.
I'm hearing that word, and I want to understand where it's coming from."

Ask, Don't Accuse. Curiosity disarms. Judgement closes doors. "Where did you come across that?" "Do you know what that phrase actually means, or are you just repeating it from a video?" "What do you think about what that guy was saying?" You're not playing dumb; you're creating space.

Offer Context Without Condemnation. Most teens don't know the origins. They just know it "sounds cool." "That term? It was actually made up by a group that treats women like property. That doesn't sound like you." "I get that it feels powerful. That's on purpose. These creators are trying to hook you, and yeah, it's working on a lot of people." No blame. Just truth.

Stay With Them. This is the part many miss. Teens need to wrestle with this stuff, and they need someone to do it with. "You're smart. You're strong. I trust that. I just want to make sure the voice you're following is actually yours." Let them know you're not going anywhere, even if they push back.

bottom of page