By Kirra Pendergast

Meta's new safety measures against explicit content

Meta is introducing new safety measures to protect users, especially teenagers, from the risks associated with explicit content on its platforms. The move, which affects both Facebook and Instagram, comes in the wake of criticism over the company's decision to encrypt Messenger chats by default, which some argue hampers the detection of child abuse material.

The new feature is designed to prevent the sending and receiving of nude images, with a focus on safeguarding women and teenagers from unsolicited content and the pressure to share such material. While children under the age of 13 are already prohibited from using Meta's platforms, the new measures specifically target teenagers, making it harder for them to receive explicit material in direct messages on both Facebook and Instagram. The tool will also discourage teenagers from sending such images, although Meta has not specified how this will be implemented. Adults will also be able to opt in to these safety tools for added protection.

In addition to these measures, Meta is changing the default message settings for minors on Instagram and Facebook Messenger. Under the new default, teens can only receive messages or be added to group chats by people they already follow or are connected to, protecting them from unwanted contact and giving parents and teens more confidence in their online interactions. The default applies to all users under the age of 16 (and under 18 in some countries), and teens in supervised accounts will need a parent's permission to change it.

These changes are part of Meta's broader effort to address concerns about online safety, particularly for younger users. The company has faced legal challenges and public scrutiny over its handling of user safety, with recent US lawsuit filings alleging that an estimated 100,000 teen users of Facebook and Instagram experience sexual harassment daily. Meta has responded by stating that the lawsuit mischaracterizes its efforts.

The company's move to protect Facebook Messenger chats with end-to-end encryption by default has also drawn criticism from governments, police forces, and children's charities. Critics argue that this level of encryption makes it difficult for the company to detect and report child abuse material. Some have suggested that platforms should employ 'client-side scanning' to detect child abuse in encrypted messages. This system would scan messages for matches with known child abuse images before they are encrypted and sent, automatically reporting any suspected illegal activity to the company.
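To make the mechanism concrete, here is a minimal sketch of the client-side scanning idea in Python. It is illustrative only, not a description of any system Meta has announced: the hash set and the `send_image`, `encrypt_and_send`, and `report_match` names are hypothetical, and the exact SHA-256 match is a stand-in for the perceptual hashing (such as Microsoft's PhotoDNA) that deployed systems use so that resized or re-encoded copies still match.

```python
import hashlib

# Hypothetical set of hashes of known abuse imagery, supplied to the client
# by the platform. SHA-256 keeps this sketch self-contained; real systems use
# perceptual hashes so near-duplicate images still match.
KNOWN_IMAGE_HASHES: set[str] = set()

def send_image(image_bytes: bytes, encrypt_and_send, report_match) -> bool:
    """Scan an outgoing image on the sender's device, then encrypt and send.

    The scan runs *before* encryption, which is the defining property of
    client-side scanning: the platform can check content against known
    material without ever decrypting the message in transit.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_IMAGE_HASHES:
        # A match is reported to the company automatically; whether the
        # message is also blocked is a policy choice (it is blocked here).
        report_match(digest)
        return False
    encrypt_and_send(image_bytes)
    return True
```

Here `encrypt_and_send` and `report_match` stand in for the messenger's encryption pipeline and reporting endpoint; the key point is simply that the check happens on the device, before the message is encrypted.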

Meta has announced that these new safety tools will also work in encrypted chats, with more details expected later this year. On its blog, the company has emphasized its commitment to child safety, stating that it has introduced more than 30 tools and resources to keep children safe and plans to introduce more over time.

According to Australia's eSafety Commissioner, the abuser is known to the child in 50% to 70% of cases of online child sexual abuse. The Commissioner also reports that children who have been sexually abused online are four times more likely to experience mental health problems, both immediately and throughout their lives. These alarming statistics underscore the importance of the steps Meta and other tech companies are taking to keep young users safe online.
