Annie Fardell Hartley

Why community is the only real suicide safety mechanism on social media



Trigger Warning: This article discusses suicide.

The chances are you have seen posts relating to suicidal thoughts or plans on a social media platform before. Posts such as viral live feeds of people attempting suicide show clear intent and become heavily discussed across the world for a moment in time.


We see this phenomenon pop up on Facebook, Snapchat, and TikTok periodically. These posts are hard to miss because of how confronting and raw the content is, and how frequently it is reposted for voyeuristic purposes. Thankfully such posts are in the minority, yet every day people post about their suicidal thoughts, and those posts may slip by without their true intent being noticed.


Social media platforms have become a space where people diarize, express, and seek connection with others about the challenges they are facing and thoughts of suicide. Potential risk in suicide-related posts can be much more difficult to decipher because the posts are rarely explicit. When people feel suicidal, they rarely use direct terms such as ‘feeling suicidal’. Instead, more colloquial or cultural terms are used, and these terms differ across location, age, and social group.


Social media then adds another layer of complexity to suicidal communication. Digital hieroglyphics such as memes and GIFs, cultural nuance, and veiled wording used to dodge censorship filters make suicidal language even less clear, and these posts harder to recognise. When you are scrolling through a feed, you may not pick up on the distinctions because they are not sitting on your page with a giant red flag.


Platforms have stated that they are proponents of safety when it comes to their content. The narrative is that there are features to flag content with the site, algorithms to detect certain wording, and moderators who work on what are referred to as ‘trauma floors’, sifting through reported and flagged posts. The sheer number of reports makes timely review impossible, but even if moderators were able to respond in the moment, when communication is so idiosyncratic, how can one person understand intent through another’s personal, cultural, and societal lens from across the globe, let alone thousands of individuals on a rolling basis? The answer is they can’t.


Whilst platforms will tout success stories of sending emergency services to people’s houses and preventing deaths by suicide as proof of their safety processes, these are vanishingly rare compared with the sheer volume of users heading onto social media to express their suicidality.


Despite my more than 23 years of working in suicide prevention, intervention, and postvention, the complexity of detecting risk on social media isn’t lost on me. It is both an art and a science. Whilst there are evidence-based risk factors and warning signs, everyone is an individual, and what fits for one doesn’t necessarily fit for another online. Add to this the impact that distress has on our ability to think and communicate clearly, particularly digitally. Nevertheless, there have been some common themes among those who contributed to my academic research.


Visual communication tools are often used on social media to explore feelings and thoughts that are difficult to express, whether because of sensitivity or complexity. Memes and GIFs are particularly popular. Nihilistic humour or symbolism is commonly used to express being overwhelmed, hopeless, or isolated. These visuals, in isolation, may seem humorous, but they often carry a deeper meaning, the comedic element acting as a veil to soften the sting of reality.


People may also take part in live feeds or videos of risk-taking behaviour. Drugs and alcohol, reckless driving, and violent weaponry can all feature. This isn’t necessarily directly suicidal content, but the ease of taking part in risky activities is seen to increase, particularly among teenagers. Then there are seemingly obsessive posts and reposts referencing or glorifying those who have died by suicide. These might be people with celebrity status, such as Kurt Cobain, Twitch or Robin Williams, or people known personally to the poster.


The use of a hashtag makes searching for and finding new, relevant suicide-related content a breeze. Again, to escape the filters, the hashtag may not always be a straightforward word or phrase, as we have seen time and time again, and the choice of hashtag may change quickly to avoid censorship. Spelling changes, substituting numbers for letters, slang, and code are all used to evade censorship filters; suicide spelled ‘S00icide’ is one example. This isn’t a phenomenon isolated to suicidality; underground pro-anorexia posts and hashtags have existed for as long as the option to connect with others in this way has.


Contagion has always been a concern in relation to those vulnerable to suicide. Media protocols control what companies broadcast to the public, but there are no restraints on what the general public posts on social media.


Social media can contribute to contagion by providing a flood of images, all easily accessible with a simple search, a hashtag, or a friend liking a post. It isn’t uncommon, when looking for comforting content, to instead receive videos of people discussing their suicidality or attempting suicide, and even pictures of death by suicide.


Exploring platforms to find help for suicidal thoughts, or information on supporting a friend during a difficult time, can surface content that is both helpful and destructive, because the retrieval process doesn’t discriminate between the two. One doesn’t even need to search specifically for suicide to be connected to the content. A curious mind looking up a trending death may have suicide-related material pushed into their ‘For You’ page suggestions, or generational trends, such as belonging to a particular music scene, can link a subcultural influence such as self-harm with that interest. This has been seen time and time again in the pairing of grunge or emo culture with the self-harm subculture via the inclusion of multiple hashtags to increase views and grow a platform. For example, a photo of a band on Instagram might carry #alternative #music #dark #goth #emo #cutting #sooicide, pulling anyone connected to any of those tags toward that central point.


How does an algorithm distinguish between dark humour and suicidality? Between love of a musician and connection to their despair? Between possibly age-appropriate boundary-pushing and potentially self-injurious behaviour? The simple answer is that it doesn’t, nor will it be able to.


Is social media all bad? Definitely not. It is a place where someone can go to process, seek connection, find refuge, and get information. It is a place where one can seek help from others and also be identified as needing help by people who see their content change over time. Governments across the world have discussed the merit of laws to remove online content, but the sticking point is this: would removing all such content eliminate a mechanism that isolated people use to flag that they need help, and would it take away, by proxy, the discussion and promotion of help-seeking? And if it is removed, what replaces this tool for those who don’t feel comfortable with traditional ways of flagging down help or connecting with others?


So, what is the answer? Community is the key to supporting and intervening with those in suicidal crisis online, whether that community is local or global. We know the people on our feeds, and we see a longitudinal picture of their online communication. We may not always understand the intent of a post, particularly if it is a meme, but we may perceive a change in tone or feel that something is off. We may see unusual hashtags, or hashtags that don’t make sense, and we need to ask about them or research what they might mean. It can be a rite of passage to hide parts of your life from those around you, but if you are seeing this information, it is worth investigating further, because that person may be experiencing mental health issues.


It is these gut feelings that we need to act on directly. This may seem intimidating; people may think “I don’t know what I am doing” or “I don’t want to say the wrong thing”. Simply reaching out with a personal message is a really good start. Checking in with something like “Hey, I noticed some of your posts lately and just wanted to check in. Are you ok?” can go a long way toward reducing suicidality. Why? Because people feel seen and heard, they feel a connection, and they know some people are safe and will not judge them. The opportunity to talk about issues also lets some steam out of the pot. Even if it turns out to be simply a post they thought was funny, or an off day, this action is an important one.


If you are a teenager reading this and you see a friend’s post you are concerned about, tell a trusted adult and get their support. The assumption that someone older or more experienced will take care of it may not hold: most teenagers who post suicidal content online use custom audience features to exclude adults from the conversation, so you cannot keep this to yourself. Even if you feel confident in handling the situation, both you and your friend need support in navigating it.


But what next? You’ve had the conversation and you’re still concerned? Encourage your contact to see their GP or connect with a service such as a suicide or crisis line, many of which offer both 24/7 phone and online chat access. Keep the line of communication open as well.


If you think a suicide attempt may be imminent, contact the local emergency services in that area and ask for a welfare check, providing the contact details you have. We cannot rely on platforms to keep us safe, irrespective of the narrative placed around safety options.


If you need to talk to someone, please reach out for help:

  • Australia: Lifeline: 13 11 14 or Suicide Call Back Service: 1300 659 467

  • United States: Suicide and Crisis Lifeline: 988

  • United Kingdom: National Suicide Prevention Helpline UK: 0800 689 5652

  • For additional international suicide prevention lines please visit https://blog.opencounseling.com/suicide-hotlines/


ABOUT THE AUTHOR, ANNIE FARDELL HARTLEY:

Annie is a Registered Psychologist and Suicidologist who has worked in clinical, consultancy, and management roles across government, education, not-for-profit, and private practice for over 20 years.
