
The Evolution of Online Exploitation and Teen-to-Teen Image-Based Abuse Using "Nudify Apps", and What We Can Do

Kirra Pendergast


What we are currently witnessing in schools is not new. In the last hour, I have had multiple conversations with mothers of girls affected by the deepfake Nudify app CRIME in Victoria. Fifty female students had fake, sexually explicit images created using AI shared on Instagram and Snapchat. Images of their faces were obtained from innocent social media accounts. These mothers are heartbroken because the perpetrator is reportedly remorseful and has been let off with a caution. The police have referred the platform used to the Australian eSafety Commissioner. Section 230(c) of the US Communications Decency Act likely shields the platform from liability, making the referral of a platform to eSafety a required move, but one that may not hold much weight.

 

I wonder if the investigation considered that the perpetrator may have used multiple platforms. Whilst busy being remorseful, he was certainly investing a huge amount of time or money. On most of these platforms, the first image is free, and further access must be paid for through cryptocurrency subscriptions. There are thousands of these apps on the surface and deep web. Teens can, and do, easily bypass age verification with VPNs, highlighting the need for significant legislative change so that tech companies can be held accountable, rather than the current arrangement in which they and their profits are protected while they hide behind Section 230. All this whilst our children's self-esteem is systematically dismantled. This outcome is by no means the fault of the police. Current legislation surrounding teen-to-teen image-based abuse needs immediate change. We must support the innocent victims of this ongoing and increasingly exploitative teen-to-teen image-based abuse, which is a crime. They must know they did nothing wrong and that we stand 100% behind them. This kind of trauma at this time of their young lives can have lifelong effects.


I have been asked what I would do if I had the power, and it would be this:

I would immediately implement a year-long rehabilitation program for perpetrators. The key components should include regular check-ins with a Youth or School Police Liaison, akin to parole. This would ensure accountability and support, helping the perpetrator comprehend the legal and emotional gravity of their actions. Intensive counselling and therapy should also be mandatory, involving sessions with therapists specialising in adolescent behaviour and digital ethics. Perpetrators must participate in educational workshops on online ethics, consent, and the impact of image-based abuse, educating them on the consequences of their actions and the importance of respecting others' privacy. During the program, their access to any device connected to the internet should be restricted to supervised use at school for educational purposes only. This helps mitigate the risk of further misuse and teaches responsible technology use. Incorporating restorative justice practices would require the perpetrator to acknowledge the harm caused and work towards making amends, including written apologies to the victims.

None of this behaviour is new. In 2003, Mark Zuckerberg launched Facemash, the predecessor to Facebook. The site notoriously featured images of Harvard students posted without their consent so their attractiveness could be rated, reportedly even comparing them to farm animals. This early instance of online objectification set a disturbing precedent for the misuse of personal images on the internet, reflecting the values and flaws of our society.

Enter Nudify apps. These have been on Safe on Social’s radar for some time. They are one of the many things we never speak about with kids directly, so they don’t all look them up! We have quietly taught kids how to stay safe without mentioning specifics. 

So what are they? "Nudify apps", as they are most commonly known, are software applications that use artificial intelligence to create fake nude images of individuals without their consent. These apps manipulate existing photos by removing clothing and generating hyper-realistic imagery in its place. The technology leverages machine learning algorithms and neural networks to analyse and alter the original images, and the ease with which these apps operate makes it alarmingly simple for anyone to create and distribute these invasive images. Nudify apps are built upon technology and techniques that were significantly advanced by the adult entertainment industry, which has long been a driver of innovation in digital manipulation tools, including those used for creating explicit content. Developers of Nudify apps have repurposed these advancements, enabling them to produce realistic and convincing fake images quickly and easily.

These apps often operate by training their algorithms on large datasets of explicit images, which helps the AI understand and recreate human anatomy realistically. The use of such technology raises significant ethical and legal concerns, as these apps can cause severe privacy violations and psychological harm to victims. The ease of access to open-source AI models has further facilitated the development of these apps, making them more prevalent and disturbing. 

Efforts to combat the spread and impact of Nudify apps include actions by major social media platforms to remove advertisements for these apps and block associated keywords. The legal framework in many regions struggles to keep up with these rapid technological advancements, leaving a gap in protection against such exploitation. Researchers at Graphika, a social network analysis firm, found that in September 2023 alone, over 24 million users engaged with “Nudify” apps. Search engines record over 200,000 searches per month for keywords related to undressing apps. 

In 2023, there was a notable increase in AI-generated child sexual abuse material (CSAM), alongside prosecutions of offenders and various legislative efforts to combat these issues. AI-generated deepfakes can be categorised into two main types: deepfakes of actual individuals, where real children's images are manipulated to create explicit content, and entirely virtual yet realistic depictions of children. For example, in South Korea, a defendant generated 360 images of children in sexual situations using AI, and the court ruled that these AI-generated images of virtual humans could be considered sexually exploitative. In North Carolina, child psychiatrist David Tatum used a web-based AI application to alter images of minors, including photos from a school dance and a first-day-of-school celebration.

These cases highlight the emerging use of AI in creating harmful content and the need for robust legal responses. Australia has implemented several legislative measures to address the issue of deepfakes and non-consensual intimate images. The Online Safety Act 2021 is a key component, making it a civil offence to post or threaten to post intimate images without consent. The eSafety Commissioner has the authority to request the removal of such content, with penalties for non-compliance. The Privacy Act 1988 also requires informed consent for collecting, using, or disclosing personal information, including images, and has seen increased penalties for serious breaches. Recently, the Albanese government announced further reforms to strengthen these protections. 

As these technologies become more sophisticated and accessible, the risks to personal privacy and mental health increase exponentially. Understanding the mechanics and implications of these apps is the first step in recognising the potential dangers and implementing protective measures. 

Let's start with reform of Section 230 of the US Communications Decency Act. It has been a cornerstone of internet law since the late 1990s, shielding online platforms from liability for third-party content. This protection extends to a wide range of user-generated content, including some controversial and harmful content. Specifically, Section 230(c)(1) states that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

This means that platforms hosting Nudify apps or similar AI-generated content may claim immunity from liability for their users' actions. However, this protection is not absolute. Section 230(c)(2) also allows platforms to moderate content in good faith without losing this immunity. Recent discussions and legislative proposals in the U.S. aim to amend Section 230 to address issues related to harmful content, including deepfakes and non-consensual explicit images. While there is significant momentum and numerous proposals to reform Section 230, the path to legislative change is fraught with political and practical obstacles, making it unclear when substantial reform might occur.

As these technologies evolve, so must our strategies for combating their misuse. By shedding light on the pervasive issue of online exploitation, including its teen-to-teen impact, and by having big conversations with our kids, we can work together to be the change. Reforming legal protections, fostering open communication, and promoting digital literacy are essential steps in this ongoing battle.

Now, while you catch your breath, let's add to this insidious mix the rise and rise of sextortion. Sextortion is a form of exploitation where perpetrators threaten to release explicit images or videos of an individual unless demands, often for money or more explicit content, are met. This form of blackmail has become increasingly common, particularly with the rise of social media. The emergence of AI Nudify apps exacerbates the risk of sextortion. These apps use AI to create fake nude images by manipulating existing photos, making it easy for perpetrators to fabricate compromising images that can be used for blackmail. There are thousands of Nudify apps, all with the surface web as their entry point. They usually offer free first use, then funnel users through to setting up a paid account on the deep web, not the dark web.

The internet is made up of three main layers: the surface web, the deep web, and the dark web. The surface web is what we use every day, including websites, social media, and online stores that can be found through search engines like Google. It accounts for only about 4% of the internet. The deep web is much larger and contains information not indexed by search engines, such as private databases, academic records, and subscription services. This content is not meant to be publicly searchable and requires specific permissions to access, such as a username and password, but it is not illegal or dangerous. Your online banking, for example, sits on the deep web. The dark web, a small part of the deep web, requires special software like Tor (The Onion Router) to access and is often associated with illegal activities, though it also serves as a haven for those needing privacy, like activists and whistle-blowers.

Laws face challenges in keeping up with the rapid advancements in technology. They always have. Detection and enforcement are difficult due to the sophisticated nature of the tech and the ease with which content can be distributed. While we are talking about this and sextortion: for many kids, the fear of speaking up if they are a victim of sextortion stems from a profound sense of shame and the belief that they will bring disgrace upon their families. This sense of embarrassment is deeply rooted in societal stigma, making it difficult for young people to seek help when they find themselves in vulnerable situations.

This reluctance is further compounded by how cyber safety education has traditionally been delivered in schools. For a long time, the primary voices in this field were self-trained former police officers who did not fully understand the tech and, unless they came from computer crime units, had no training in it from the police force. These individuals often approached the subject from a place of fear, using scare tactics to try to deter children from engaging in risky online behaviours. The intention was to protect them, but the strategy had unintended consequences. Instead of fostering an environment where kids felt safe to discuss their online experiences and seek help when needed, these fear-based methods have, by default, instilled a deep sense of paranoia and self-censorship.

As an example: I will never forget being asked to speak at a school in Victoria, where I asked a Year 5 cohort about a widely popular app and who was using it, knowing full well there would be a lot of them. Absolute silence. When a room full of kids falls silent, and not one puts their hand up when I am asking about apps and games they love, I know there is a "speaking up" problem. A teacher walked over to me and said quietly, "They are not raising their hands because the last presenter in here informed them that they would go to jail for using apps under the age of 13." There is no law on this earth under which that would happen. The 13+ age limit, put in place by COPPA, is a law directed at the apps, not the kids. Apps put "must be 13 years" in their terms so that when a kid ticks the box, the app is off the hook and no longer liable. All that presenter achieved was scaring a room full of little girls so badly they would barely speak.

We need to be honest with kids. But there also need to be consequences. They are not going to jail for any of this. They are being teenagers, and yes, it is illegal and might come back to bite them in years to come, but we have to balance that with making sure they know it is safe to speak up and get help when they need it most, without fear. They must also know that if they commit the crime, there will be a consequence, not just a caution.

 

Image-based abuse, or the distribution of sexual or intimate images without consent, has become a pressing social and legal issue. There are significant gaps in education and comprehension among law enforcement, especially in regional areas, posing hurdles for victims seeking justice. Recently, in regional NSW, a school told me that when they contacted police about nudes being collected and distributed across Snapchat and other platforms, they were asked, "What do you want us to do about that?" and told it was a school problem. The creation and implementation of laws are only as effective as those who enforce them. Within the police force, there is a wide range of attitudes towards online abuse between teens. Some officers believe there is too much abuse to handle and that schools should take responsibility. Others lack basic app knowledge, even asking what TikTok is. This gap in understanding and capability hampers effective responses to teen online abuse. Police need more education and support.

The internet, much like society, reflects and amplifies our existing values and biases. The psychological impact of being a victim of teen-to-teen image-based abuse, Nudify apps, or deepfakes can be profound and long-lasting. Teens targeted by these technologies often experience severe emotional distress, including anxiety, depression, feelings of helplessness, and school refusal. The violation of personal privacy and the public exposure of images, whether self-produced or deepfakes created through a Nudify app from a screenshot of a photo they thought only trusted friends could see, can lead to a diminished sense of self-worth and safety. Victims may struggle with trust issues, social withdrawal, and a pervasive fear of being watched or judged. The emotional toll can also extend to family and friends, exacerbating the distress and complicating recovery. I have had mothers burst into tears in my sessions, telling me their child has been a victim and chose not to speak up from the shame of it all, and teachers so traumatised by students "deepfaking" them that they have had to take stress leave.

Most will not speak up. 

Many victims of non-consensual deepfakes and nudified images feel ashamed and embarrassed, making them reluctant to report these incidents formally. Understanding the psychological consequences is crucial for providing appropriate emotional support and counselling to those affected. We need to normalise having big discussions about these issues to remove the associated shame, and we need to do this fast. The reputational damage inflicted by these manipulated images can be devastating. Once shared online, they can spread rapidly across social media platforms, reaching a wide audience. For students, this can affect academic opportunities and peer interactions. The stigma associated with these false images is hard to overcome, as the internet seldom forgets, and removing content from all corners of the web is nearly impossible. The enduring nature of online content means that the harm can resurface repeatedly, causing ongoing damage.

Fostering a Safe Space for Big Conversations 

Creating a safe environment where kids feel empowered to speak up is crucial. Start by initiating conversations in a non-threatening way, such as saying, "I heard about this issue of apps that take your clothes off happening... Has anything like that ever happened at your school? Remember, if anyone you know ever experiences this, you can come to me for help. They won't get in trouble; they are the victim, and it's important to get support." This way, the child feels empowered because they know they can help their friends, and they also know the same would apply if they were ever in that situation themselves. Parents should foster an environment where children feel comfortable discussing their online experiences without fear of judgment or punishment. Honest conversations about the risks of sharing personal images online and their potential misuse through deepfake and Nudify tools are essential. Educate children using up-to-date resources on the importance of privacy and the long-term consequences of their digital footprint. I hate to say it, but a lot of "evidence-based resources" rely on evidence that goes out of date very quickly after it is written. I believe this kind of content should state "evidence-based as at…" with the date of the research added; anything less builds a false sense of security in such a fast-paced sector.

Encourage kids to ask questions and express their concerns about online safety. Have big conversations and do the research with them, making sure you are getting information from organisations that work with kids face to face, and often, so you know what is actually happening. This way, adults can better understand the challenges children face online right now and provide timely guidance. Creating a supportive, open space to speak up and ask questions makes children more likely to share their concerns and experiences. This proactive approach helps build trust and equips young people with the knowledge and confidence to navigate online spaces responsibly, ultimately reducing the likelihood of falling victim to harmful technologies like Nudify apps.

Privacy Settings 

Teaching children to use strong privacy settings on social media and other online platforms is crucial for preventing the misuse of their content. Privacy settings allow users to control who can see and interact with their posts, photos, and personal information. Parents should guide children in configuring these settings to the highest level of security, limiting access to trusted friends and family members on every single app the child is using. If you don't know an app, Snapchat for example, get your child to show you and work through it together so you become an expert alongside them. Or start by downloading this ----------. Encourage children to regularly review and update their privacy settings, especially when platforms introduce new features or policies, which is almost every month, but at least check in every school holiday for the new trends on each app. Educate them on the importance of being selective about friend requests and followers, as accepting unknown individuals can increase exposure to potential threats.

Digital Footprint Awareness 

Educating children about their digital footprint is essential for long-term online safety. A digital footprint is the trail of data one leaves behind while using the internet, including social media posts, comments, and shared images. Parents and teachers should help children understand that everything they share online can be permanent and potentially accessible to a wide audience. Highlight the risks associated with oversharing and encourage thoughtful consideration before posting personal information or images. Explain that even content shared privately can be copied or misused by others. They need to know that once something is online, it is permanent; even if it is deleted, traces can remain. They are NEVER anonymous online, and everything they do leaves a trail that can be tracked. Yes, EVERYTHING. Posts and comments can resurface years later, impacting them whether they think it is fair or not. By raising awareness about their digital footprint, parents can help their kids make more informed decisions, thereby reducing the risk of their content being exploited for deepfakes or other malicious purposes.

Promoting Digital Literacy 

Critical Thinking

Encouraging critical thinking and scepticism about online content is essential. Teach children to question the authenticity of images and videos they encounter online. Explain how easily content can be manipulated and the importance of verifying information before accepting it as true. Help them understand why they need to think about the angle of the image they are posting, and never to post an image where they are looking directly into the camera, passport-style.

Staying Informed 

Staying updated on the latest developments in AI and digital manipulation technologies is crucial for understanding the evolving risks and protective measures. Parents should regularly educate themselves about new threats and the tools available to counteract them. By staying informed, adults can provide accurate and current advice to children, helping them stay safe online. 

Emotional Support 

Offering emotional support and understanding to victims is crucial. Emphasise that they are not at fault and connect them with counselling services if needed. Recognise the psychological impact of these violations and provide a safe space for victims to express their feelings and seek help. 

By understanding these challenges and how to address them, we can better protect our children from the evolving threats of online exploitation. Together, we can create a safer digital world for the next generation.

 

Resources 

If you or someone you know is affected by non-consensual deepfake nude images, there are resources available for support and legal advice. Here are some helplines and resources for each of the regions mentioned:

Australia:

United States:

European Union:

  • EU Sexual Violence Helpline: Available through national helplines; check the European Women’s Lobby for country-specific contacts https://www.womenlobby.org/

  • INHOPE: A network of hotlines for reporting illegal content, including deepfake pornography. https://www.inhope.org/EN 

United Kingdom:

Hong Kong:

Canada:

