
A Band-Aid on a Bigger Wound - Meta’s New Teen Restrictions


Meta, the parent company of Facebook and Instagram, has announced new measures to protect teenagers from harmful content. These changes automatically place all teen accounts under the most restrictive content settings, limiting their exposure to sensitive material like nudity, violence, drugs, firearms, and discussions around self-harm.

While updates like limiting sensitive content, blocking specific searches, and adding parental controls may be small positive steps, they fail to address the core mechanics of social media platforms. The algorithms that drive user engagement continue to serve teens more of whatever content they interact with, whether or not it is considered harmful. What is serious and harmful to some may not be serious and harmful to others. Without greater transparency and independent oversight, there is no guarantee that teens won’t still be exposed to dangerous material. Just take a moment to remember how well Meta’s community standards work in practice, and how often a report of bullying or of an underage user actually gets actioned. There is still a long way to go, and this adds yet another dangerous link to the “set and forget” chain.

It’s important to understand how social media algorithms work in order to see why these measures are flawed. Algorithms are designed to maximise engagement by analysing the content users interact with, whether by liking, sharing, or simply spending extra time on a post. Based on this data, they recommend similar content to keep users on the platform for as long as possible. This engagement-driven model often prioritises sensational or emotionally charged content, which can include harmful material that does not fall neatly into the categories Meta considers “dangerous and harmful”.
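To make that mechanic concrete, here is a minimal, hypothetical sketch of an engagement-driven ranker in Python. The field names, weights, and example data are illustrative assumptions, not Meta’s actual system; the point is simply that an objective built purely around predicted engagement has no notion of whether the content it promotes is harmful.

```python
# A deliberately simplified, hypothetical engagement-driven ranker.
# Weights and fields are illustrative assumptions, not Meta's real system.

from dataclasses import dataclass


@dataclass
class Post:
    topic: str
    emotional_intensity: float  # 0.0-1.0, how sensational/charged the post is


def predicted_engagement(post: Post, user_history: dict) -> float:
    """Score a post by how likely the user is to interact with it."""
    # Affinity: how often this user has engaged with the topic before.
    affinity = user_history.get(post.topic, 0.0)
    # Emotionally charged content tends to attract more clicks, so it scores higher.
    return 0.7 * affinity + 0.3 * post.emotional_intensity


def rank_feed(posts: list, user_history: dict) -> list:
    # The feed is sorted purely by predicted engagement; nothing in this
    # objective distinguishes "harmful" from "harmless" content.
    return sorted(posts, key=lambda p: predicted_engagement(p, user_history), reverse=True)


# A teen who has clicked on dieting posts (high affinity) will simply be
# shown more of the same, regardless of whether that content is healthy.
feed = rank_feed(
    [Post("dieting", 0.8), Post("sports", 0.3), Post("music", 0.2)],
    user_history={"dieting": 0.9, "sports": 0.4},
)
print([p.topic for p in feed])  # ['dieting', 'sports', 'music']
```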

For example, imagine a teenager clicks on a post about dieting. Meta’s algorithm will then suggest similar content, which can easily spiral into a rabbit hole of posts promoting body image issues, disordered eating, or mental health struggles. Meta claims that its new safety measures will block harmful terms like “bulimic” and direct users to helpful resources, but this barely scratches the surface. Without transparency or independent audits of these algorithms, it’s impossible to gauge how effective such controls will be in practice.

The issue is compounded by “algospeak”, a term that describes the clever ways users bypass content filters. Teens, for example, might use coded language such as “unalive” to reference suicide or “corn” as a euphemism for pornography. These constantly evolving tricks allow harmful content to evade detection, turning the battle against harmful material into an ongoing game of cat and mouse, as the simple example below illustrates.
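Here is a toy keyword filter that shows the weakness. The blocked terms and coded substitutes are taken from the examples above and are not Meta’s actual moderation rules; any blocklist built on exact terms will miss the next coded variant.

```python
# A toy keyword blocklist, illustrating why filters struggle with "algospeak".
# The terms are examples from this post, not Meta's actual moderation rules.

BLOCKED_TERMS = {"suicide", "bulimic"}


def is_flagged(caption: str) -> bool:
    """Flag a caption only if it contains an exact blocked term."""
    words = caption.lower().split()
    return any(term in words for term in BLOCKED_TERMS)


print(is_flagged("struggling with suicide"))   # True  - the exact term is caught
print(is_flagged("thinking about unaliving"))  # False - coded language slips through
```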

Meta has also partnered with Yoti, a company specialising in AI-driven age verification, to prevent teens from lying about their age. Yoti estimates users’ ages using facial recognition technology, which analyses images or video selfies. While this might sound like a robust solution, we need to think about how a biometric age verification system collects, stores, and processes data, especially for minors. Facial data is highly sensitive, and mishandling or unauthorised access to this information can have severe consequences. Although Yoti’s privacy policy claims that it deletes data after 30 days, even this short window leaves room for potential misuse or unauthorised access.

The broader issue here is surveillance. Using facial recognition for age verification opens the door to potential misuse by governments or corporations. The collection of minors' biometric data for social media platforms sets a dangerous precedent, as it’s unclear how this data may be used in the future. Consider that many people didn’t anticipate that the photos and videos they posted on Facebook 17 years ago would later be used to train artificial intelligence models. This raises serious concerns about how biometric data could be repurposed in unforeseen ways in the future.

Meta’s reliance on parental controls is another flaw in its safety measures. According to Meta’s own research, fewer than 10% of teens use Instagram’s parental control features. Many parents either don’t activate these controls or don’t know how to, which places the burden of safety on parents rather than on the platform itself. Even with age verification systems, this over-reliance on parental oversight has proven largely ineffective.

Another major issue often overlooked is the role of parents in contributing to online harm. Parents who manage their children’s accounts, especially those of "kidfluencers", may inadvertently expose their children to risk by oversharing content without the digital literacy needed to manage these accounts securely. Meta’s new safety features are unlikely to restrict access to these accounts to other children only, leaving them exposed to potential predators and harmful interactions.

While there are significant risks, it’s essential to acknowledge that social media platforms are not inherently harmful. For many teens, especially those in marginalised communities, platforms like Instagram provide vital support and connection. For example, LGBTQ+ teens often find communities online that they may not have access to at home. Social media can also be a space for self-expression, learning, and building friendships.

The challenge lies in balancing these benefits with the risks. Simply restricting content without addressing how teens engage with these platforms will not solve the problem. Teens are resourceful and will find ways to access the content they want, regardless of the controls in place.

Meta’s new safety measures may reduce some exposure to harmful content, but they fail to address the root cause: algorithms that prioritise engagement over safety. Until Meta opens its algorithms to public audit and supports more meaningful regulation, these measures will likely provide only temporary relief.

Without greater transparency, nuanced content moderation, and a commitment to safety by design, these changes are merely another attempt at a “set and forget” band-aid on a much larger wound.


To truly protect young users, Meta needs to go beyond surface-level solutions and take accountability for how its platforms operate.

 
