Search Results


  • Consent of Children on Social Media - The Rise of Family Channels

    “Hey, buddy, is posting your birthday photos online okay?” To many parents, there is an easy answer - “Well, it's my child, and I just want to post their milestones and funny incidents online to share them with friends and family.” Although your intention might be pure, the outcome most certainly isn’t. Consent is a pivotal part of our society and keeps our community safe and comfortable. When I was young, my parents quickly taught me about ‘bad touch and good touch’. It was one of my first lessons on consent and the most impressionable. The simple “Ask before you do” was enough for my little mind to feel comfortable in my skin and have the freedom of individuality even though I was 5! As parents, do you ask your children before posting photos of them online? Parents need to understand that once something goes on the internet, even if you change your mind and delete the content, it still wanders in the loopholes of the internet. It is never fully erased. Those images of your newborn in their nappies and their little accidents will live on forever in the vast black hole of the internet. When parents and caregivers post their children's milestones and funny videos on the internet, they must be aware that these are not always viewed for the right reasons. An article by Stephanie Sokol explores the implications of child images online. She questions parents on consent and how their children would feel when their funny moments are exposed on the internet to entertain family and friends. She suggests that as children grow up and find this content from their childhood, it can prompt them to lose trust in their parents and grow up fearing being judged. Even if you, as a parent, trust that you are posting on a private account, these videos can be captured through screen sharing and screenshots and shared with many people.
Parents must understand that little children cannot consent, and given their naivety and inability to consent, parents must respect their privacy and refrain from posting content that can shame their children. A new trend for parents on YouTube and TikTok is to start family channels. As a child, I watched many family channels do their pranks, funny videos, and family activities with their children. Family channels are nothing without their children, and the children are the main attraction. But there are some instances where these videos of your children for the entertainment of others go too far. In 2017, Mike Martin, the owner of the YouTube channel DaddyOFive, was involved in a huge controversy where he used his children as ‘props’, pushing them to their limits and exhibiting abusive behaviour to make money from YouTube's monetisation schemes. The videos depicted pranks where his oldest child would make abusive and uncomfortable comments to the younger ones, to which they would respond with tears and postures of discomfort. What did these children do to lose their privacy at such a young age? Their parents wanted to make money off their children's lives, ruining their self-image and causing permanent trust issues with their caregivers. Martin later lost custody of two of his children. But this is not the only case. There has been a rise of parents on TikTok expressing concern about children’s videos that are going viral on the platform. There has been an unhealthy number of saves, likes, and downloads of babies doing the most normal things - curling their toes, yawning, sneezing. Why are hundreds of thousands of people saving these videos? How do you, as a parent, know what people on the internet use them for? A content creator on TikTok who goes by “World Shaker” shares his experience with posting children online.
Being a dad himself, he seems visibly distressed as he explains that he saw a video where a dad ‘pats the bottom of his bare child.’ He states that the video had its comments disabled and 900 saves. This is disgusting. I am distressed just thinking of the predators online and how they use such content, displayed FREELY by parents who think this is entertainment, tampering with the little child's privacy as they are exposed to the public eye. To all the parents who post their children online, my humble request is - don’t. I understand that sometimes they are too cute not to show, but even then, try to cover their bodies and faces. The internet is a vast and uncontrollable domain, even if you think what you post is innocent. Trust me; there are people online who will use it in an unimaginably sickening way. Do your child a favour and respect their privacy, as you would like yours respected. Written by Arya - 18yrs.

  • 'We're all Doomed!' Is Daniel Howell social media's most positive influence?

    Back in January, my friend and I escaped the humidity of Sydney’s Darling Harbour to experience one of the most positive examples of social media stars around - Daniel Howell’s comedy show We’re All Doomed! 31-year-old British YouTube influencer Daniel Howell has been creating comedy and gaming YouTube videos, merchandise, books, and even two previous world tours with his creative collaborator Phil Lester since 2009. He is well known for his self-deprecating and sarcastic humour, and has over 6.18 million subscribers on YouTube. Since 2017, he has been slowly shifting his approach to creating content, deciding to share components of his most raw and honest self. His video “Daniel and Depression” gained over 3 million views, and two years later, he came out as gay in a 45-minute-long essay-style video that gained over 12 million views and 1 million likes. These two videos were what began a shift in his relationship with his audience. Daniel actively confronted his personal issues, documenting his emotions and advice, creating a mirror for self-reflection in his audience. Personally, I distinctly remember processing those two videos when they first came out. I was 12 when “Daniel and Depression” was posted, and 14 when “Basically I’m Gay” came out (no pun intended). I was on the younger side of Dan and Phil’s audience, as their peak was around 2013-2015, when their fanbase was still in their young to mid teens. I remember watching both repeatedly, aiming to understand myself more, both with my mental health and my sexuality. Seeing someone I had evolved and grown alongside became comforting at the time, as I had used his videos to help deal with my own problems at school. Since his coming out video, Daniel had mostly disappeared from the limelight, not creating any content on his personal channel for two years during the pandemic.
He reappeared 8 months ago with a video called “Why I quit YouTube”, discussing his issues with creation, with the job leaving him burnt out and stressed over failed projects. Soon after, he announced his solo tour We’re All Doomed!, a comedy and performance show meant to entertain, educate, and reflect on the state of the audience and creator. On March 2, his worldwide tour, which had begun in mid-2022, finished in Dublin, taking him entirely around the world and sharing his sarcastic positivity with his global audience. Thanks to Safe on Social, I was able to meet the man behind the show for a little while to have a chat and get a photo with him. I also got the opportunity to talk to other people in the audience and compare my thoughts and experience of Daniel's content, the show, and how he makes this crazy world of social media feel just that little bit easier. From the audience: Of the people interviewed, only 20% of fans had been to previous shows, such as Interactive Introverts and The Amazing Tour Is Not On Fire. Having been to Interactive Introverts in the past, I was surprised by how many people were seeing him for the first time. Regarding his openness and authenticity on social media and his effect on his audience, people replied with similarly positive comments: “He’s made me feel a lot more confident in myself and my online presence.” “He helped my anxiety in the past, it’s why I’m here, I was finally able to get out of the house!” “He's a very inspirational talker, the way he addresses his own mental health, his own queerness, it’s extremely admirable.” “I wasn’t able to experience the shows when I was younger because of my family, but he was able to affect me so much when I was a kid.” “When Dan came out, I felt comfortable to come out myself. He came out a year or two before I came out as transgender, so it was a big deal for me.” “It’s nice to see an influencer who’s honest about themselves.
While most are, Dan is really candid about it.” Thoughts on the community: “It feels very safe in this atmosphere, there’s a lot of people I see myself in.” “I definitely noticed I was in the right place when everyone looked like me.” “The community surrounding Dan and Phil helped me discover my identity as a part of the LGBTQI community, as I think it’s been for a lot of people here.” “I’ve been able to make friends at previous Dan and Phil shows. It’s really comforting to see people who are dressed the same as you.” “Dan attracts a very specific community. So it's cool to see how everyone is similar, and how everyone has grown up together.” The show: The two-and-a-half-hour show was split into two sections. The first half was light-hearted and educational, highlighting both the positives and negatives of this world, including global warming statistics, the internet, deep fakes and AI-generated art, and of course, social media. This included several montages on the large circular screen behind him of Daniel “doom scrolling”, showing natural disasters mixed with jokey social media tropes: the beauty influencer, the right-wing podcaster, and the clickbait live dancer. His social media section in particular caught my attention and got me to reflect on myself and my own presence. He discussed a clear divide in social media, between individuals who were insecure about “having an interesting life”, who constantly posted and updated people so others didn’t think they were living something mundane, and others who were okay with that mundanity, such as Dan. My friend and I turned to each other in this moment, as we both knew where we stood in that situation: I was the former, he was the latter, with Daniel. He also acknowledged the hypocrisy in his statements, given that this was his job and how he got himself here.
Then the show shifted after his dramatic meltdown, screaming “we’re all doomed”, with Dan reflecting with the audience on his experiences coming out, keeping his social media job, and struggling with his mental health through the continuous crises that appear in our world almost instantly. He left the audience with an old quote from one of his videos: “Embrace the void, and have the courage to exist.” He ended with a slideshow of happy moments, with his back towards the audience, before the screen faded to black. A simple sentiment that stuck with the audience, a quote to take with them to keep trying, just like I had, and just like he did. ABOUT THE AUTHOR, OPHELIA: I am in my first year at College studying Journalism and I love all things cultural and sociological. What drove me to become a part of the Safe on Social team was contributing to fostering a more equal and safe online world and the opportunity to educate Australians to promote a healthy relationship with the internet. My skills in managing cyber/creative burnout and in acknowledging and responding to online criticism and hate will positively impact readers and the community.

  • Vapes and Social Media - What parents can do.

    "Vapes" are readily available and very accessible to teenagers (and in some cases tweens), as many are purchasing them through social media platforms. While vaping products were initially marketed as a safer alternative to smoking cigarettes, growing evidence suggests they are very harmful. Vaping has become increasingly popular among Australian teenagers, with some studies indicating that over one in ten high school students have tried vaping. Schools often ask the Safe on Social team to speak about the danger of ordering vapes online. A recent study by the National Drug and Alcohol Research Centre at the University of New South Wales has revealed that social media platforms are facilitating the easy purchase of vaping products by Australian teenagers. The study found that traditional retailers are no longer the main source of vaping products for teens. Instead, 28% of high school students have resorted to buying these products through platforms such as Facebook, Instagram, Snapchat, and Twitter. Disturbingly, 80% of these buyers were not asked to verify their age. This poses a great concern since vaping products contain nicotine, which is harmful and highly addictive for young people. Vaping product sellers advertise their wares on social media by using hashtags such as #vapeforsale or #vapesforsale. Buyers then get in touch with sellers to arrange the sale. Sometimes buyers will agree to meet sellers in person to exchange cash for the product, which is a hazardous situation. Additionally, online marketplaces such as Facebook Marketplace or Gumtree, which prohibit the sale of tobacco and vaping products, are being used by sellers to circumvent these rules by using coded language or offering related products such as empty vape cartridges or flavored e-liquids. Vaping can lead to lung damage, respiratory problems, and other health issues. The unregulated nature of the vaping market on social media platforms raises additional concerns. 
There is no oversight of the quality or safety of the products being sold. There is no way to ensure that the products do not contain highly toxic liquid, or that the vape itself has not been tampered with. Teenagers who purchase vaping products through social media are also at risk of being scammed or ripped off by sellers, and may meet up with strangers to collect the goods they purchased online. Sellers can post pictures, videos, and descriptions of their products and provide information about how to purchase them in ways that bypass any restrictions. The Australian government has taken some steps to address the issue of teenagers purchasing vapes through social media. In 2019, the government passed legislation that makes it illegal to sell vaping products to minors, and in 2020, it introduced new regulations that require all vaping products to be registered with the Therapeutic Goods Administration. Despite these regulations, the sale of vapes through social media platforms continues to be a problem. To combat this issue, more needs to be done to educate young people about the dangers of vaping and the risks associated with purchasing products through social media platforms. Parents, educators, and health professionals need to work together to raise awareness about the dangers of vaping and the importance of protecting young people from the harmful effects of nicotine addiction. What parents and carers can do to help combat the issue of teens buying vapes through social media:

  • Talk openly and honestly with children about the dangers of vaping and the potential consequences of getting involved in illegal activities like buying vapes through social media. Open communication is key. Encourage your child to talk and offer support without judgment. It is never too early to start this conversation.
  • Educate yourself on the risks associated with vaping and the strategies that schools are using to combat the issue. This will help you to better support your children and work collaboratively with schools to address the problem.
  • Establish clear rules and consequences around vaping. Make sure your child understands the consequences of vaping, including the potential health risks and legal consequences.
  • If you are made aware of a young person buying vapes online or through social media, report this to the app and the police. This can help to prevent further illegal activity.
  • Keep an eye on your child's behaviour, grades, and social activities. If you notice any changes, talk to your child and address any concerns you may have.
  • Be a role model by not using any tobacco or vaping products in front of your child.
  • If your child is addicted to vaping, seek professional help. Talk to your child's doctor or a mental health professional for support. It is important to be proactive about your child's health and well-being.
  • Attend school meetings and stay informed about policies and programs aimed at preventing vaping in schools.
  • Advocate for stronger regulations around the sale and distribution of vapes, including stricter enforcement of age restrictions and tougher penalties for those who sell vapes to minors, online and off.

To report information about the suspected illegal sale of these products in Australia, contact Crime Stoppers via https://www.crimestoppers.com.au. If you are outside of Australia, consult your local law enforcement.

  • TikTok 60-Minute Limit for Under 18s

    Last night TikTok announced that accounts owned by those under 18 will automatically have a one-hour daily limit by default. TikTok then goes on to say users can choose to disable the limit, but they will be prompted to set a daily screen time limit for themselves if they use TikTok for more than 100 minutes per day. Then there is more... If a teen reaches the 60-minute limit, they will see a passcode prompt which they will need to enter to continue watching - a way of forcing them to make an "active decision" to extend that time. Additionally, parents can use the "Family Pairing" feature (which has been around for quite some time) to link their account to their child's and set a custom screen-time limit that cannot be overridden. Confused as to what is actually happening? So are we! Family Pairing allows parents to restrict some content and place restrictions on who the child can message on the app. For families with this setting in place, the parent account must make any adjustments to screen time limits. Currently, accounts for users aged 13-15 won't receive notifications from the app after 9 pm, and ages 16-17 won't see notifications from 10 pm if they have activated Family Pairing. TikTok's Family Pairing requires a strong level of trust between parent and child to ensure the child declares their account or accounts to their parent and agrees to be part of the Family Pairing system. It will work in some cases but not others. The new default screen time limit of 60 minutes daily will start in the coming weeks. While these new features (as confusing as they are) are seen as beneficial for the digital well-being of children, we must never "set and forget". Please continue to have open conversations about rules, limits, and what to do when things go wrong. There has been no word on the restriction of actual content. Kids may see wildly inappropriate, violent, or disturbing content in 60 minutes.
Family Pairing was introduced in April 2020 as part of TikTok's efforts to enhance user safety and provide more parental control on the app. To activate it:

  1. Open the TikTok app and log in to your account.
  2. Go to your profile by tapping on the "Me" icon at the bottom right corner of the screen.
  3. Tap on the three dots in the top right corner to access your settings.
  4. Scroll down to the "Digital Wellbeing" section and tap on "Family Pairing."
  5. Tap "Continue" and choose whether you're a parent or a teen.
  6. If you're a parent, follow the prompts to connect your account to your child's account. If you're a teen, you'll need to enter your parent's TikTok username and password.

Once connected, you can set screen time limits, restrict certain types of content, and control who your child can message on the app. You can also set a passcode to prevent your child from changing the Family Pairing settings without your permission. To access your Family Pairing settings in the future, go back to your settings and tap "Family Pairing."

  • How social media gives the loudest microphone to the most dangerous people

    The success and influence of social media platforms lies in the framework of their algorithms, built to appeal to the specific interests and needs of their users. We have accepted the idea that the collection of our data and information is not overly concerning, as it is rather a useful resource in serving us more of the content we love. Social media algorithms are built around one core objective: to promote content that will maximise user engagement. However, it has become noticeable that social media posts further tap into our need for stimulus, our anxieties, and our negative, primal emotions, in which our disdain and hate ironically keep us clicking and engaging with the posts, profiting these platforms. You see, under the rhythms of capitalism, enforcing dominant and oppressive forces is essential to its flourishing. Social media essentially weaponises these rhythms, as it not only condones but often promotes harmful behaviours and ideas that fortify harmful hegemonic discourses, tapping into the vulnerability of young minds to maximise user engagement and profits. This becomes highly evident in the recent social dilemmas surrounding Andrew Tate’s prevalent presence on social media. Even though his content radically violates social media guidelines, platforms appear to do little to limit his spread or ban his accounts. Instead, they have propelled him into the mainstream, allowing clips of him to proliferate and actively promoting them to young users. Videos of him expressing extreme misogynistic, hyper-masculine, regressive ideas have been viewed 11.6 billion times on TikTok alone. Many of these violate community guidelines, but why aren’t they being taken down? Because views generate profit for TikTok, so they continue pushing these videos onto the feeds of young men, regardless of the real-world consequences that online misogyny can have.
We know what happens when violent internet misogyny goes unchecked; we saw it with Hunter Moore, who popularised revenge porn in 2012. We see it with incel forums, which have been churning out mass shooters at an alarming rate. These videos not only encourage the engagement of young boys and, shockingly, adult men, but they also feed off the intervention of female viewers who respond to his pernicious content, further inflating his position on social media platforms to maximise views and engagement. This algorithmic effect further becomes apparent in emphasising socially prescribed gender roles, Western beauty standards and internalised misogyny through campaigns and content that trap girls in a vicious cycle that tells them their worth lies in what others think about their bodies. Although these messages may not be explicit, they are transferred invisibly and subconsciously internalised. Such harmful narratives, which reward extreme and dangerous ideologies, are boosted and amplified through algorithms that give their content greater visibility and engagement, creating a fertile breeding ground for serving companies’ capitalist interests at the expense of reinforcing oppressive hegemonies. Ultimately, social media platforms covertly popularise controversial and problematic ideals in their content to perniciously gain further engagement and interaction from users, in turn translating to profit, regardless of the harmful influences and consequences for individuals' beliefs, attitudes and perspectives. We need to hold social media platforms accountable for giving the loudest microphones to the most dangerous people. ABOUT THE AUTHOR, GIGI: With a keen interest in Property Economics and Business Law, I am in my first year of University after completing my HSC at school in Sydney last year. I wanted to participate in the Youth Committee to contribute a contemporary perspective on the safety and effectiveness of social media engagement for young people.

  • Legal notices sent to social media companies demanding action to prevent the sharing of CSAM

    There are big confronting conversations that desperately need to happen NOW. We are grateful to see that Australia's eSafety Commissioner, Julie Inman Grant, has issued legal notices to five social media companies - Twitter, TikTok, Google, Twitch, and Discord - demanding they report on how they are tackling child sexual abuse material on their platforms. Failure to comply with the notice within 35 days may result in daily fines of up to $700,000. One of the many major concerns we share with the eSafety Commissioner is how social media giants monitor harmful content on their platforms. Over the past decade, Safe on Social has been closely watching the increased use of artificial intelligence (AI) and algorithms by social media platforms, particularly when it comes to recommending harmful content like child sexual abuse material. These technologies are designed to analyse user behaviour and make recommendations based on what the user will find engaging or interesting. However, in the case of child sexual abuse material, these algorithms can contribute to the proliferation of illegal content. Algorithms used by social media platforms work by analysing user data to create profiles and make recommendations. When a user interacts with or searches for content related to child sexual abuse material, the algorithm may use this data to recommend similar content to them. This can create a feedback loop where users are continually exposed to increasingly extreme content, which can normalise and desensitise them to harmful material. Moreover, algorithms can be manipulated by those who seek to distribute child sexual abuse material using techniques like search engine optimisation (SEO) and keyword stuffing, which can promote their illegal content to a broader audience. Keyword stuffing is the practice of inserting a large number of keywords into web content and meta tags in an attempt to artificially increase a page's ranking in search results and drive more traffic to the site.
A keyword is a significant term that is relevant to the content in question. Keyword stuffing is considered an unethical SEO technique at best and an attack technique at worst. This can make it more difficult for social media companies to detect and remove harmful material. Using AI and algorithms to recommend child sexual abuse material is a serious problem. It is crucial for social media companies to take responsibility for the content on their platforms and invest in better technologies and policies to detect and remove harmful material. At Safe on Social, we would also like to see this extended to online games such as Roblox and their "cartoon"-type content. This week we have had numerous presentations where kids as young as 8 years old have been offered free "Robux" (the in-game currency) by complete strangers to "lay down" on top of another avatar, to undress their avatar to be naked, to be someone's boyfriend or girlfriend, to play the Mum or the Dad, to be "adopted", and numerous other concerning behaviours within role-play games. All of this contributes to the desensitisation of children during what is often long-term online grooming. Parents need to completely understand what is happening on these platforms and be open to confronting and tough conversations about the horrendous amount of child sexual abuse content and what is actually happening. Ultimately, it is the responsibility of all of us to ensure that the internet remains a safe and welcoming space for all. It is crucial for social media companies to actively combat child sexual abuse material and protect vulnerable populations like children. On Monday the 20th of February, our expert presenter Madeleine West and I gave evidence at the Parliamentary Inquiry into law enforcement capabilities in relation to child exploitation. Australia is leading the way in issuing legal demands to tech companies and taking a zero-tolerance approach to predatory behaviour.
Similar legislation has been implemented in other countries, such as New Zealand's Films, Videos, and Publications Classification Act 1993 and Harmful Digital Communications Act 2015, which make it illegal to possess or distribute such material and require tech companies to remove harmful digital communications, including child sexual abuse material, upon receiving a complaint. In the United Kingdom, the Online Harms White Paper proposes a new regulatory framework to tackle illegal and harmful online content, including child sexual exploitation and abuse material. The proposed framework would require tech companies to protect their users from unlawful and harmful content, and failure to comply could result in hefty fines and other penalties. In the European Union, the Electronic Commerce Directive requires online platforms to remove illegal content when they become aware of it. The recently passed Digital Services Act introduces new rules to combat illegal content and increase transparency and accountability for online platforms. In the United States, the Child Protection and Obscenity Enforcement Act and the PROTECT Act make it illegal to produce, distribute, or possess child pornography, and online platforms are required to comply with these laws. However, Section 230(c) of the Communications Decency Act of 1996 protects online platforms from being held responsible for content posted by their users. Still, this immunity does not extend to content the platform creates or supports. Collectively we need to do more.
If your school or business would like Madeleine and me to speak to your parent community, business, or conference about our experience on this topic and how we can do more, please get in touch at wecanhelp@safeonsocial.com. If you or someone else is in immediate danger, please call Triple Zero: 000. To report criminal activity, please visit Crime Stoppers or call 1800 333 000. To report suspected grooming, live streaming or consuming of child sex abuse material, an individual having an inappropriate conversation with a child, or blackmailing of a child for sexual purposes, you can also report abuse online and see more details on the Australian Centre to Counter Child Exploitation website: www.accce.gov.au. If you need help or emotional support, please contact:
Lifeline - 13 11 14
Kids Helpline - 1800 55 1800
1800RESPECT - 1800 737 732
Bravehearts Foundation - 1800 421 468
PartnerSPEAK - 1300 590 589
Fighters Against Child Abuse Australia (FACAA)
Blue Knot Foundation and Redress Support Service - 1300 657 380
13YARN - 13 92 76 for Aboriginal & Torres Strait Islander crisis support

  • IMPORTANT NOTICE: Winnie the Pooh Slasher Film is out TODAY

    Winnie the Pooh: Blood & Honey hits cinemas today and is the latest horror film spreading across social media like wildfire after the successful poster release in May 2022. When the trailer dropped in September 2022, the TikToks and Twitter posts ramped up, with the anticipation of release day causing quite the stir amongst adults and children alike. The copyright on Winnie the Pooh and his friends expired in January 2022, placing them in the public domain, and as such, this popular children’s franchise has now been twisted into what can only be described as a very poor evolution of something beautiful into something sinister, and one that is very likely to capture the attention of children. What’s it about? The film serves as a horror retelling of A. A. Milne and E. H. Shepard's Winnie-the-Pooh books and follows Pooh and Piglet, who, after being abandoned by Christopher Robin as he headed off to college, have become feral and bloodthirsty killers after suffering a cold winter in the Hundred Acre Wood. After fending for themselves for so long, Pooh and Piglet embark on a murderous rampage, terrorising a group of young university women as well as their old friend Christopher Robin. What you need to be aware of: The movie features several bloody, violent themes that are not suitable for children and as such has been given the Australian Classification of R18+, meaning: ‘The content is high in impact. R 18+ material is restricted to adults as it contains content that is considered high in impact for viewers. This includes content that may be offensive to sections of the adult community.
    Only a person who is over the age of 18 years is allowed to enter the cinema.’ This rating, however, isn’t deterring young people, with social media platforms like TikTok, YouTube and Instagram providing them with not just the trailer, but with snippets of themes from within the movie, including seemingly popular kill scenes. A simple Google search of Winnie the Pooh features write-ups about the film as well as videos from the movie, making it incredibly accessible to any child with internet access. And as most social media platforms work on an algorithm, popular content is featured within users’ feeds, so it is very likely that it will be presented to your child for viewing if they are on popular platforms.
    Use your judgement
    All children are different in terms of resilience and tolerance, so here are a few things to consider before allowing young people to view the movie or the related content:
    Research the movie and the content themes yourself. Preview, ponder and then parent. After you have watched it, use your best judgement as to whether you think it is suitable for your child.
    Don’t fall victim to the ‘but all of my friends are watching it’ line, because it’s simply not true.
    Consider the social media platforms your kids have access to – TikTok, Instagram and Snapchat will be especially high-risk when it comes to snippets being seen.
    You know your child best: assess their fear factor but also their ability to bounce back.
    If you decide to let your kids watch Winnie the Pooh: Blood & Honey, ensure they know not to replicate anything within the movie, like many kids did during the wave of Squid Game. This was a challenge for many schools across Australia and the world, with children becoming aggressive, being violent towards others and causing lots of trouble for themselves and others. It’s also important that they know that they shouldn’t share the content they might see with others.
Sometimes young people might be scared to tell an adult if they see something inappropriate online as they fear that their devices may be taken off them or their access may be revoked. It’s really important that you continuously have open conversations with your children and/or students about how to let you know if they feel uncomfortable by something they have seen or read online and that they know you are always there to support them. And if they do accidentally view Winnie the Pooh content, sit them down and have a chat about what they saw and how it made them feel. It’s really important to stress that it’s not real life. And brace yourself, director Rhys Frake-Waterfield already has plans for a Pooh sequel and the decimation of Peter Pan, envisioning an entire universe of ‘crazy concepts’.

  • Babysitting on Social Media is Risky Business

    Trigger Warning: This post discusses child-related abuse
    One of the most convenient uses of the internet for busy parents and caregivers is the ability to quickly connect with local businesses in their area. Community Facebook groups often feature posts such as “searching for a reliable tiler/baker/landscaper etc.”, but this should NOT translate into posting an ad online for a potential babysitter. There is no doubt that social media does an excellent job of connecting individuals in different geographic areas to share reviews of local goods and services; however, when this crosses over into advertising for a babysitter, it becomes fraught with risk and potential danger. Take this example below from a community Facebook group with 18,000 members: This example would be one of thousands of similar posts on Facebook every day. Not only is this post advertising the (almost) exact age of her daughter, the approximate location of her home and her daughter’s interests, it is also letting people know that her daughter would be in bed by 8pm and that the babysitter would need to “prepare for bedtime”, i.e. changing clothes, putting her into bed and so on. Whilst this parent likely assumes that there would be minimal harm in posting this, as it is a private Facebook group, there are many key risk factors to consider. To enter private Facebook groups, you often need to answer a few security questions (e.g. What suburb do you live in?), which are by no means robust, and there is no way to fact-check the answers. Similarly, community Facebook pages are often run by a group of individuals doing the role on a volunteer basis - there is absolutely no way that they can monitor every person entering the group and verify that they are who they say they are, nor can they easily monitor all the comments within the posts. Unfortunately, these types of posts are not limited to private Facebook groups.
    Oftentimes, individuals with public followings will make a Facebook or Instagram post to their community (sometimes tens of thousands of people) to ask if anyone can recommend a babysitter for a specific area. Similarly, “babysitters” often advertise their services on Facebook pages, allowing parents to comment and ask for more details. Time and time again, uneducated parents are posting and responding to these advertisements and inadvertently opening the door for predators. One chilling example of how this online advertising of babysitting has ruined many lives is the case of Jareth Harries-Markham. Harries-Markham, 24, faced the Supreme Court on 27 September 2022, where he pleaded guilty to more than 140 charges, including 35 counts of indecent dealing with a child under 13 and 94 counts of indecently recording a child. The victims, some of them sisters, were aged between 8 months and 9 years and were being babysat by Harries-Markham after their families had responded to an advertisement he had posted online. Some of the families hired Harries-Markham on a live-in basis, and the court heard some of the offences happened while the children were sleeping. Other victims were friends of the children who were abused while they were on play dates at their home. Harries-Markham will have to serve 16 years before he can be released on parole. If you are the administrator of a community Facebook group, you should remove any posts requesting/advertising babysitting services, for the safety of your community and to remove any liability for you as the administrator. Unfortunately, tragedies like this happen not only to those who are seeking babysitting services online, but also to those who are young, inexperienced, and advertising their own services.
    Below is an example from the same Facebook group, posted by a young person seeking babysitting work: Whilst this 16-year-old is trying to do a noble thing and make some honest money babysitting, I am sure I do not need to go into detail as to why an advertisement like this is riddled with risk. With just one click, it was easy to see a large collection of photos of the young babysitter. Without proper protections in place, it would be all too easy for a predator to pose as a parent and lure this teen to a location under the guise of them providing babysitting services. In addition, a predator could hire this young person for regular babysitting jobs and use the opportunity to groom them for future abuse.
    So how can you find a babysitter safely? Whilst there is no failsafe way to find a suitable babysitter, advertising in a public space that you would pay a stranger to be alone with your children is certainly not the safest. Below are some alternatives to posting online:
    Ask a trusted family member or friend
    Ask friends or other parents for babysitter recommendations. This might include teenagers they know, or even their own teenage children
    Talk to the parents of your child’s friends about setting up a babysitting club, where you take turns to babysit for each other
    If your child attends formal childcare, ask your child’s educators if they’re interested in babysitting after hours
    Use a babysitting agency. Agencies can provide experienced babysitters who have been background-checked by the agency and come with references from previous families.
    If advertising your babysitting services (particularly young teens), you could consider:
    Giving flyers directly to parents that you or your parents know
    Using notice boards at local community centres they are familiar with (e.g. their local Scouts or dancing group)
    Using a babysitting agency.
    Teens who are offering their babysitting services should only do so with the support of a trusted adult, to ensure that there are safety measures in place to protect the teen (e.g. an adult attending an initial meeting to check that the family and the teen babysitter would be a good fit). Please share this information with those around you who may use Facebook to advertise or search for babysitting services. Whilst many parents have successfully advertised online to find a babysitter, once you begin to look at all the ways in which it can go tragically wrong, it is just too risky.
    ABOUT THE AUTHOR, ANNA HAYES: With 10+ years’ experience in education and a range of leadership positions, Anna has seen first-hand the rapid rise of technologies embedded into learning programs - often with little regard for teaching students, parents, and teachers.

  • Childcare Provider and School Apps: Do You Know Exactly What You Have Signed Up For?

    As parents, especially new parents, we go through many emotions. Handing our children over to be cared for whilst we attend to work or other life commitments can be daunting and sometimes even terrifying. We worry about whether they will get the same care and attention we give them, or what will happen if they get sick or hurt. Local community social media groups frequently feature callouts from parents asking for recommendations for the best childcare centre, or sometimes advice on which ones to avoid. Many centres are recommended based on the use of their centre App and how engaged the service is, with the App keeping you up to date on everything your child is doing during the day. The App, you will be told, makes life so much easier. You’ll be sold on how much information is put into it: general newsletters, the weekly menu, photos of your child, observations, and reflections. You’ll know just how much they slept and ate and all about their toilet habits. App and software developers have seen the benefit of creating childcare (and school) Apps. The industry is lucrative and seemingly has an infinite future supply of user accounts. According to the Australian Government Department of Education, in the most recent data published, for the September quarter of 2021, 1,398,050 children attended a childcare service. As our population grows and our economy continues to change, this number will only increase yearly. When we as adults download an App, we consent to the Terms and Conditions, whether we read them or not - and honestly, how many people really read the fine print? By simply downloading and ticking that little ‘I agree’ box, we are consenting for our own data to be used by the App, as well as consenting to permissions such as access to our photos, comments, phone contacts, and sometimes even our location. We understand that we are adding to our own digital footprint, and for the most part, that’s fine - we’re adults, and we get to make that decision.
    But what happens to the rights of our children? What data are the Apps collecting, and who can see it? How will the data these Apps collect affect them in the future? These are the questions we raise regarding the Apps that most childcare centres enforce for communication purposes when a child first attends their centre, or when the centre chooses to introduce one.
    WHAT APPS ARE CURRENTLY USED IN AUSTRALIA? The list is quite exhaustive, with new Apps popping up all the time; however, some of the most popular Apps that childcare providers are currently using include (but are not limited to):
    · Xplor
    · OWNA
    · HiMama
    · Brightwheel
    · KidKare
    · Sandbox
    · Kidsoft
    Primary and Secondary Schools are also mandating the use of Apps, including Sentral, SeeSaw, School Stream, Skool Loop, and SkoolBag, to name a few. The use of Apps in childcare and school settings is very commonplace, but what many don’t realise is that when we as parents and/or carers agree to these Apps on behalf of our children, we are aiding in the creation and building of their digital footprint - a footprint that they have no control over, and one that can be very sensitive. Arguably, your childcare service provider does not have control over this footprint either. Who does, you may ask? Well, when your childcare service provider enters into an agreement with an App provider, the contract of service is between the service provider and the App provider. The parent or guardian is merely a user, not a party to the contract. In basic legal terms, if you are not a party to a contract, you cannot enforce it or seek a remedy for any breach or damage.
    WHAT HAPPENS WHEN YOU SIGN YOURSELF AND YOUR CHILD UP TO YOUR SERVICE PROVIDER’S APP? The process is very similar across most Apps: When you submit your completed enrolment forms, the childcare service provider enters the personal details of parents/guardians into the administrators’ side of the App.
    Your children’s data is also entered into the App, including sensitive medical information, their Medicare card details, any medical providers, and medical reports. The provider then uses the App to upload daily routines, toileting data, sleep data, feeding data, photos, observations, stories, and incident reports, including behavioural notes and other tailored reports. Typically, a parent/guardian can comment on the entries made. Sometimes other children will be included in these entries. And yes, mistakes can be made: a single click on the wrong child or parent can see your child’s photo and sensitive information shared with other families. Many Apps allow you to invite ‘family’ to view your child’s journal, which also includes other children if they are featured in your child’s account (which is highly likely). There is no vetting of who can get these invitations; it’s on the user to invite others. This could be an aunt, an uncle, the grandparents, or even the man who lives next door, ‘just in case.’ This means that someone you have not consented to could be seeing your child, in addition to the child they were actually given permission to view. You do not get to approve additional users who are added or given access by other parents, only those you choose to provide access to yourself. Additionally, there is rarely two-factor authentication in these Apps to protect login details and to ensure that people are who they say they are. Let’s not forget it’s common practice to share login details with family rather than have them set up their own profiles, so comments, viewing, and communications can come from someone who is not the intended user.
    WHAT ABOUT PRIVACY, THE SECURITY OF THE DATA COLLECTED, AND YOUR CHILD’S RIGHTS? Mandating the use of Apps is becoming standard for the convenience of childcare providers. Recently, we have been made aware of more and more After School Care services mandating the use of Apps.
    In some cases, terms even state that the service provider can only be notified of absences through the App; otherwise, an ‘administration fee’ will apply. This is also applicable to many primary and secondary schools. What’s the problem with using a childcare App if, seemingly, every other industry has one? Very little research has been conducted into the childcare App market. It doesn’t even raise eyebrows, as these Apps are seen as convenient and a timesaver for the provider and a key engagement driver for parents. And the Apps aren’t silly; some are designed by developers to play on mummy/daddy guilt, so parents feel that they can’t opt out without suffering from major FOMO and a massive case of the guilts. A Privacy and Security Analysis of Mobile Child Care Apps is a study that was released in March 2022. The study looked at 42 Apps and found a direct threat to privacy posed by tracking mechanisms embedded in the applications. Another risk it noted was information leakage. We forget that children cannot consent to their data being stored. That in itself raises privacy issues. As the authors of this study state, it is the job of parents and educators to act with caution. Always remember that you are the product if something is free to use. So, if your child’s daycare centre is using a free App to keep you up to date with what your child is doing, that means that you could be paying with external access to all the data stored within it. Serious questions need to be considered about the mandatory use of these Apps when considering Australia’s anti-discrimination laws and the privacy rights of a child. There appears to be a giant black hole where the two should meet. At a cursory glance, forcing an individual or family to use an App for convenience appears to be nothing short of indirect discrimination against those exercising their rights to keep their lives offline.
    The Human Rights Commission defines indirect discrimination as occurring “when there is an unreasonable rule or policy that is the same for everyone but has an unfair effect on people who share a particular attribute.” Yet many will not satisfy the protected attributes required under our discrimination laws, such as gender, disability, or race (amongst others). Some will, however: for example, the vision impaired, who may not be able to utilise these Apps, or families who are not from an English-speaking background. Those of us who have, for want of a better expression, a ‘conscientious objection’ to putting our children’s data online have no protection. Interestingly, Article 16 of the Convention on the Rights of the Child states that: “1. No child shall be subjected to arbitrary or unlawful interference with his or her privacy, family, home or correspondence, nor to unlawful attacks on his or her honour or reputation, and 2. The child has the right to the protection of the law against such interference or attacks.” These Apps that have been designed for convenience do not seem to consider either of these aspects, and there is a gaping hole, at least in Australia, in how we can protect our children from the storage and usage of their data. The concerns are many, including that most services are not consulting with parents/guardians about using an App or moving to an App, or, in the case of families new to a service, even providing them with an option to use the App or informing them that one is being used. Some families have no choice due to vacancy rates for centres in specific geographical locations. At best, you’ll be presented with a Privacy Policy, or referred to the Centre’s Privacy Policy on their website, that concerns the Centre itself and how it handles your and your child’s personal information.
    In one instance that has been observed, the Centre’s Privacy Policy is very vague, stating “… realises the importance of privacy to families/guardians and as such does not release any information of records stored to a third party for their use without the account holders’ authority unless required by law”. Arguably, the transmission of personal information is not being ‘used’ by the third party. Still, this Privacy Policy makes no mention that the service utilises an App and that, as part of the Terms of Use of the service, personal details will be disclosed. You also need to agree to the App’s Terms and Conditions and its Privacy Policy. One service provider states in their Terms and Conditions that whilst they take care to provide services and ensure that the App and website are free of any virus or malware, they are not responsible for damage caused, and that you indemnify the developer from all liabilities, costs and expenses. In relation to privacy, it states that whilst they aim to take due care, they do not warrant and cannot ensure the security of any information provided, and that information is transmitted solely at your risk.
    CAN THEY REALLY MAKE AN APP MANDATORY? AND WHAT MIGHT THIS MEAN FOR YOUR CHILD IN THE FUTURE? The first generation of social media users are now parents, and they are the target market for these Apps. This generation had little or no guidance about the risks of what they were using when they were in school in the early 2000s, and now that they are becoming parents, many don’t know the questions that they should be asking because it all just feels like ‘the way it is’. The problem with some of these Apps, especially for those who are not tech savvy, is that you may not know what questions to ask regarding what vulnerabilities these Apps may have, what trackers are in place for analytics, or how they are used.
    We also don’t know how our data is used to ‘better the product,’ how secure the cloud storage is, or what country it is located in (many Apps simply state that data is ‘stored in an external data storage facility’). And what happens to photographs and videos uploaded of our children? Who can download them, and who is taking screenshots? Who are the other parents, grandparents, siblings, aunties, or uncles who now have access to photos of your children through the App? For parents with no choice as to providers, their children’s data, along with their own, is being held to ransom because the use of the App is mandated by using the service. On top of all this, we have no idea what this data will look like for our children in the future. Already, some life insurers are using the fact that a person has accessed mental health services, including youth mental health services, even without a formal diagnosis of a mental health condition, to decline or heavily restrict insurance coverage for an individual. This has been identified in various complaints to State Anti-Discrimination Tribunals and highlighted in a 2021 report by the Public Interest Advocacy Centre. Whilst there is clear discrimination here, some insurers are finding their way around the Anti-Discrimination laws. Often, consumers do not know or understand their rights to challenge the decisions made. In addition to these reports, Safe on Social has been informed of cases where individuals, including young adults, are being declined insurance coverage for having had episodes of anxiety. What about the possible impact of this data being released or otherwise located by our children’s potential employers down the track? How will the possible disclosure of this information impact their job-seeking? Already we know that employers are searching for prospective employees online, with many scrutinising their social media presence, and the perceived online persona of a candidate is an influencing factor for some employers.
    Adding an extra layer of sensitive information could be devastatingly adverse for some individuals. There is no suggestion that these Apps or the data collected are being misused at present, but any data collection is open to exploitation. Data such as sleep patterns, toileting, and what our children eat during the day is important for some parents, but does it need to be collected and stored, especially when we don’t know who the App developers may share the data with, or sell it to? Photographs of our children are lovely, but do they need to be stored in a third-party Application that cannot guarantee our children’s privacy? Who monitors or vets who is allowed to use the App, and which invitations can be sent to which family members or friends? Other issues are more subtle but of equal importance. For starters, forcing a parent to utilise an App may disadvantage some members of our society (the vision impaired or the non-English speaking), potentially resulting in some form of indirect discrimination. It may even go further to discriminate against parents with family responsibilities who feel they have no choice but to remove their children from services because of a desire to protect their privacy. Those parents are forced to choose between work and placing their child in the care of a service provider that mandates storing their child’s data and has no control over the disclosure of that data. We all sign Permission to Publish forms for our children, and there used to be a choice: if you opted out, you would be emailed the photo or given a printed copy. But lately, Safe on Social has been contacted more and more by parents who feel discriminated against. For example, a parent contacted us who was very upset that she had to pull her child from an early childhood after-school activity because she didn’t agree to photos of her child being published online. She had escaped domestic violence and did not want photos of her child online.
    She was told that her child could not participate if they could not be photographed and published on the business’s social media pages. Newer parents are learning from older parents and are indeed becoming wiser. New parents are making conscious choices to keep their children’s data offline; they’re not posting photos and are thankfully starting to be more cyber-street smart. But are they thinking about the Apps used to keep track of their children’s activities during the day in childcare, or just not sharing them on the major social media platforms? We teach our children about being safe online, and generally, we are now becoming more cautious about when we allow them online and what we allow them to do. Mandating the use of these Applications takes that control away from parents, who cannot make informed decisions about what or how their data and their children’s is being used. Isn’t it time we had a conversation about how best to manage the delicate balance between convenience and our children’s privacy?
    BEFORE YOU SIGN ON, UP AND OVER YOUR CHILD’S DATA, HERE ARE SOME KEY QUESTIONS YOU SHOULD ASK YOUR SERVICE PROVIDER (IN ADDITION TO THOSE RAISED ABOVE):
    1. Is the App paid for by the service provider, or are they using a free version? (Remember, your data becomes the product if something is free to use. If it is paid for, the use of the data may have further protection through the Office of the Australian Information Commissioner.)
    2. Who has access to the App and its data? Where is it stored, and can it be deleted if you or your child want it all deleted in the future?
    3. How are the people accessing your and your child’s data vetted?
    4. Can the photos be saved or screenshotted?
    5. Is there a Social Media Policy in place that advises parents that they should not share photos from within the App on their personal Facebook pages etc. if other children are in the image?
    6.
    Does the service provider have a way to email photos to the parents if they choose not to allow their child to be published on the service provider's Facebook/Instagram?
    7. If an opt-out is allowed, do they take photos and blur the child’s face out of things they publish online, or exclude them completely? (This way, a child can still feel included, and their parents can be emailed the photo; if blurred out, they cannot be identified online.)
    8. What happens to the photos and the data when a child leaves the service provider?
    9. Can a parent ask for all data to be destroyed, and if so, how does that happen and when?
    10. Is the use of the App mandatory? Is there another way you and your service provider can communicate and share information without using a third-party App?
    Co-written by Andrea Turner and Kirra Pendergast. Andrea Turner is a Lawyer and Cyber Safety Educator for Safe on Social. She is based in Cairns, Australia. Kirra Pendergast is the Founder of the Safe on Social Group of Companies and splits her time between Australia, the UK, and Italy. If you have any questions or concerns about the Application(s) that your childcare service, after-school activity provider, or school is using, or if you are a childcare service provider, school activity provider, or school needing advice on policy and training, please get in touch with Safe on Social at wecanhelp@safeonsocial.com

  • M3GAN: The TikTok-inspired horror movie has hit Australian screens

    When you picture cyber security, what springs to mind? Do you picture the inside workings of a company - people behind computers warding off data breaches and hacks? Do you picture something literal, like Schwarzenegger’s The Terminator, slow-walking down a decrepit hallway with a gun in a box of roses? What about the word doll? Are you reminded of your favourite childhood Barbie? Or maybe something along a more sinister line, such as Annabelle or Chucky? How about combining all three? After making an absolute killing (excuse the pun) in US cinemas, meme-glorified horror flick M3GAN has hit the big screen in Australia, and she’s not taking any prisoners. Literally. Taking on the appearance of a teen girl, M3GAN is a hyper-realistic doll tasked with protecting her client’s child, as well as being her companion. The narrative details the ups and downs of friendship and violence as it demonstrates an interesting balance between comedy and horror. Raking in a whopping US$45 million (AU$65 million), you would be forgiven for thinking that this is some sort of crazy success story, branching off from older horror tropes and applying a quirky, modernised idea, but M3GAN, which was produced by James Wan (Annabelle), brings a fresh perspective to entertainment, one that has been specifically attracting a younger generation. And it’s all thanks to the film’s creative use of social media. Featuring (now) incredibly notorious scenes, such as the TikTok-inspired dance moves M3GAN displays before she kills her victims, the movie became a marketing success late last year. The popular dance scenes were used continuously for advertising, as well as in publicity stunts where groups of M3GAN lookalikes would appear together on breakfast television and at premieres of the film. A similar tactic was used on Australian shores to promote the second season of The Handmaid’s Tale. Dancing M3GAN groups have been successful both on social media (it’s been meme heaven!)
    and in mainstream entertainment, with user-generated content being uploaded to platforms including YouTube and TikTok. This, along with an interesting cover of Titanium masked as a bedtime lullaby, has left audiences thoroughly satisfied with the film, which holds a current 79% audience score on Rotten Tomatoes. As expected, TikTok has jumped right on board the M3GAN gravy train, with its ad placements garnering intense interest as TikTokkers are greeted with the dancing doll and her snappy bursts of violence as soon as they open the app. This is super clever of TikTok, as its audience’s appetite for content was built on rapid-fire videos, something that companies like TikTok are renowned for and have gained huge success from. The movie’s comedic take on the horror genre has also gained public interest and attention on social media, both through film critiques and positive reviews, but also through content that was not advertised publicly until after the movie had begun screening, such as the Titanium scene, or M3GAN running on all fours through a forest. We’d be naïve to think that M3GAN was the very first movie to use modern components of popular culture and social media in its storytelling. In fact, this element was used within the wildly successful whodunit series by Rian Johnson, which includes Knives Out and the latest instalment, Glass Onion, which has experienced significant success, having only released on Netflix in late December. Glass Onion critiques the privileged, emotional, and downright stupid lives of the rich, where modern success stories of POC characters prevail - an immensely satisfying take on the classic murder mystery style of film derived from its 1930s/40s predecessors. But what do these films have in common? Both use social media, whether in their marketing or as one of the fundamental components of their storytelling.
While the use of social media and internet trends to garner a response from a younger generation has been successful for M3GAN, the virality of the doll's dance has attracted a possibly too-young audience. With low internet restrictions at home, a lack of parental controls on social platforms, and the use of trending content such as dances by these platforms, younger users could be lured into watching the horror film, thinking it's a movie with a significantly less sinister plot. And those kids who find the trailer or content of M3GAN funny or cool, what will they do? Share it. Thus, the wildfire begins. Could this movie be an issue for parents? Absolutely. The concept of targeting childhood fears and memories, such as dolls, and turning them into horror-themed entertainment is not new, but it is one that can really challenge and skew a child's perception of play. As mentioned in previous articles by Safe on Social, the popular children's video game Poppy Playtime uses a similar method and, in turn, engages a much younger audience than is recommended. Speaking with Safe on Social's Creative Director, Rikki Waller, she shared the following: "I see the potential of M3GAN having a Squid Game effect on young people. It was not too long ago that we reported on the dangerous repercussions in the playground, with groups of children as young as 8 acting out the violent nature of the Netflix series, inflicting actual physical harm on other children. M3GAN very much has the ability to create something very similar. I have an 11-year-old daughter myself, and she's now at the age where her curiosity has really peaked. She will often express a desire to watch the latest scary movie after seeing it advertised on TV – Annabelle is one I have fought off for a long time. For kids, scary movies feel fantastical, even a little bit dangerous; it's like they're peeking behind the adult viewership curtain within a safe environment. 
What they don't understand is the effects that viewing movies like M3GAN can have on their emotions and their behaviour." What are these effects?
- Transient fears, such as an increased fear of the dark and strangers, trouble sleeping, and nightmares
- A reduced ability to recognise what is fact and what is fiction in real life
- Irrational fears of dying, fears of losing control in everyday scenarios, and feelings of not being 'themselves'
- Real disorders can develop, including anxiety, sleep issues, and self-endangering behaviours
- Recreating violent scenes, becoming aggressive with themselves and others
- Developing a 'clingy' nature
- A decrease in compassion and empathy
If your child is under the entertainment age rating, is sensitive to horror movies or darker themes, or you have heard them talk about M3GAN, perhaps start a discussion with them about what they could be seeing on social media regarding the movie. It's important to distinguish fact from fiction and get real about what's ok and what's not ok, though how you approach this is all dependent on their age, of course. And remember, social media platforms work on an algorithm, feeding your child the content they feel suits them best based on their previous viewing as well as what content is currently trending. If scary or inappropriate content is turning up on their feed regularly, you may need to take a walk through their search history and have a chat about what they've been searching for and watching. As Rikki went on to say: "Social media platforms unfortunately don't necessarily have the same level of interest in preventing young people from viewing content that may be harmful or inappropriate. Views, likes and engagements are their top priorities and their biggest revenue raisers. 
In saying this, it is very likely that, even though you do your absolute best to protect your kids from seeing M3GAN or other content you don't wish them to see, snippets, trailers and memes will pop up in their feeds as it trends. If this is the case, take screenshots, take note of dates and times and the platform(s) you witnessed it on so you can lodge a report to have it removed." M3GAN is an interesting adaptation in the film industry, engaging an aware audience at a time when post-pandemic entertainment and media have been focused on the absurd and hilarious. With the movie's use of social media, it's no wonder it has been a box office hit in the new year, and it will hopefully continue an intriguing trend of shifting and warping the tropes of the entertainment industry. Before buying your kids a ticket to the cinemas, however, consider the following:
- In Australia, M3GAN is rated M and features violence, sustained threat, and coarse language
- Preview, ponder and then parent. After you have watched it, use your best judgement as to whether you think it is suitable for your child
- Don't fall victim to 'but all of my friends are watching it', because it's simply not true
- Research the movie and its content themes
- Consider the social media platforms your kids have access to – TikTok, Instagram and Snapchat will be especially high-risk when it comes to snippets being seen
- You know your child best; assess their fear factor but also their ability to bounce back
- If you decide to let your kids watch M3GAN, ensure they know not to replicate anything within the movie. As Rikki mentioned, the Squid Game effect was rife in schools across Australia and the world, seeing many children become aggressive and violent towards others and get into strife. Also tell them that sharing the content with others isn't a good idea.
Heads up: there is a similar film coming out this year that has started gaining a lot of media attention - Cocaine Bear. 
Keep your eyes peeled and your ears to the ground. If you have any questions or need assistance, get in touch at wecanhelp@safeonsocial.com ABOUT THE AUTHOR, OPHELIA: Having completed Year 12 just last year, I love studying all things cultural and sociological. What drove me to become a part of the Safe on Social team was the chance to contribute to fostering a more equal and safe online world and the opportunity to educate Australians and promote a healthy relationship with the internet. My skills in managing cyber/creative burnout and in acknowledging and responding to online criticism and hate will positively impact readers and the community.

  • I witnessed the Seaworld helicopter collision. Social media helped me heal.

‘Holy shit! That's not supposed to happen, hey? It's not part of a show or anything?' This was the panicked sentence that came out of the mouth of the 50-something-year-old man who stood next to me as we exited the Penguin Encounter enclosure at Seaworld on the Gold Coast. Our eyes and brains still not comprehending what we had just witnessed, the man and I, along with my 2-year-old daughter in her pram and about 7 other people, ran the short distance up the hill and slammed ourselves against the fence. What lay in front of us in the distance was the most horrific thing I have ever seen. My family and I hadn't been on a holiday for a really long time. When COVID hit, it's like we, along with many others, pressed pause on our lives outside of our communities and even outside of our own 4 walls. But 2022 felt good. The masks came down and more and more people peered around the corners before deciding it was safe enough to join the world again. Having two girls and being able to work from home, my husband and I hadn't really taken a breath all year, so we jumped at the kindness of a friend to spend some time at her vacant rainforest paradise in Byron Bay for a week. Here, we relaxed, refreshed, and rejuvenated, all the while connecting with our girls on a deeper level and learning to enjoy doing things together again without the rules and claustrophobia the past few years had thrust upon us. But then their small souls got restless. They wanted some manufactured adventure, to end their holiday with the bubbling of jellyfish and the roaring of rollercoaster laughter. Rates were through the roof and there was only one room left at the famed Seaworld Resort. Looking back, this should have been a sign that this little adventure should perhaps have been postponed. After some wonderful debate skills from our 11-year-old, we caved, snapping that last room up and making our way to the place where marine dreams come true, Seaworld on the Gold Coast. 
Over the next 2 days we marvelled at everything the park has to offer, ending our balmy Monday session at the park separated – my husband and eldest daughter going to fight the long lines for the rides, my youngest babe and I going to visit her favourite animal in the world. As we entered the Penguin Encounter building, two things struck me: 1) it was way too hot for my liking and 2) it was overly crowded. But my little one hadn't been able to see the large emperor penguins yet, so we forged ahead and made our way through. I often find myself wondering if what I experienced mentally following the collision would be different had my choice inside that enclosure been different. You see, there are two levels inside: the bottom level allows you to view the penguins swimming from underneath, the top level then affording you an amazing view of the penguins front-on in all their glory. We watched the penguins swim from below and then made our way up the ramp, where we were greeted with what felt like a hundred like-minded tourists, 5 rows deep in penguin love. As I mentioned earlier, the building was jam-packed and I wasn't keen to wait around for people to push their way past us so that we could inch our way forward to the glass. So we left. I can't help thinking that I should have lingered just 2 minutes longer, that I should have been more patient. But I wasn't, and that's a decision that will be forever playing on repeat in my mind. Instead, I pushed my daughter in her pram back down the ramp, straight out of the back exit and into the welcome summer breeze. It was then that, out of the corner of my eye, I caught a glimpse of the most beautiful slice of sky imaginable, the perfect wedge between buildings. What happened next went so quickly. Interrupting the serenity was a helicopter, and it appeared to have clipped something which then simply dropped out of the sky. 
What ensued next was the most awful bang I have ever heard, followed by a mess of spiralling for the helicopter, which had now come a mere 20-30m from my daughter and me. And then it disappeared. That's when I heard the man beside me speak. Looking at him, eyes wide, we ran, as if by instinct, up the small hill. Fingers laced through the fence, I slammed the brake on my daughter's pram and peered out to the distant waterway and saw it. An insanely deep hole in the sandbank, the rotors of a helicopter peeking out the top. What we didn't know at the time was that the helicopter hadn't actually landed in a hole; it had hit the sandbank, broken into pieces and fallen on its side. What we also didn't know until later was that what we saw clip the helicopter near the penguins was actually another helicopter, the one now lying in its sand grave with 4 deceased people on board. The spiralling helicopter had landed awkwardly, but this wasn't the most impressive thing I witnessed that day. No, it was the distant onslaught of human kindness. The people who rushed in from the water, off their boats, to help someone who needed someone. The Seaworld first aid staff who barrelled across the park, swamped in equipment. The sounds of sirens in what felt like only minutes from the time of impact. But amongst all this goodwill and restored faith in humanity, much of which I wouldn't learn about until much later, I also witnessed the faces of complete loss. A young man and woman, running down the path in front of the fence, announcing that their family were on that helicopter. A man, silent but rushing loudly to a Seaworld staff member, her face dropping when the man seemingly shared who he was, ushering him down the hill to where the helipad was located. And a woman, I will never forget this woman; face twisted with something that could only be described as pure pain. 
Her tears were coming so thick and fast that she had to be carried out and away, as if to shield her from her realisations. I was shocked, so much so that I just stood there, watching it all unfold. I called my husband on the other side of the park and told him what had happened. He texted my mum and told her what happened. I got a news alert on my phone from Facebook, advising that two helicopters had collided outside of Seaworld and 3 people were confirmed dead. I froze, the reality sinking in that I had just watched people fall to their deaths. That another man who was beside me at the penguins, who said he had just narrowly missed being hit in the head by something flying through the sky after the big bang, had in fact almost been struck by rotor shrapnel. This man was right next to me. Right next to my daughter. We could have been hit. The helicopter, mere metres away from the penguins, could have landed on us, could have plunged into the enclosure and killed hundreds of people, including us. It sounds silly, but I feel, in some way, that I cheated death that day, that it wasn't my time just yet, that I was being afforded another chance to do good, to be good. But in turn, this makes me feel incredibly guilty. Standing outside of the Jellyfish exhibit, I cried. And as the tears streamed down my face, my husband hugging me, I did something I didn't expect to do: I opened Instagram. Swiftly, I typed up and posted a quick note to my friends explaining what I had just seen and that I was praying for all involved. This was the start of an obsession that would see me hit rock bottom emotionally but ultimately lead me onto a path of healing. Working in cyber safety education, I see the worst of what the world can do online. I have offered words of comfort to kids who have been cyberbullied, worked alongside schools to help protect their staff, and delved into a parent's trauma and worst fears following accusations of rape against their child. 
This stuff isn't easy; this stuff hits home. But it's the Seaworld collision that brought down my walls of resilience and allowed my heart to completely break in half. While I expected to find a way to work through what I was feeling and how it was affecting me, I never expected the remedy to be social media. That very afternoon of the collision, I was glued to my phone, refreshing my search for updates on the incident. As time wore on, the details of what unfolded came out, and the long road to determining how it all happened began. I didn't sleep at all that night, visions of what I saw and the details I knew swirling in my head. On the drive home the following day, 7News Brisbane messaged me asking for an interview. I wasn't interested. My husband reminded me to check in with him, that I didn't need to deal with anything alone. My boss and colleague offered words of wisdom: to take the time to process it all, to be kind to myself, to go and see someone if I felt like I was drowning. Both my mum and husband were concerned about my sudden extensive use of social media. They thought that by keeping myself up to date with everything related to the accident, I was somehow enabling the grief to continue, allowing myself to spiral deeper. I found, however, that it was doing the exact opposite. I was able to sift through the bullshit and counter with actual facts the opinions of media outlets who weren't present. I silently laughed at the people who injected themselves into the narrative through hollow interviews, and confirmed my feelings through the pain of those whose worlds had stopped. I wanted to know how the victims in hospital were recovering, who the pilot was, even watch the video footage taken by bystanders. I was able to stay up to date in a way that made me feel connected, and it all somehow helped me process the event with clarity. 
The biggest thing it did, however, was allow me to bathe myself in the absolute humility and love of the people who helped that day. The people who don't wear a uniform, who don't have medical training, who simply have big hearts and jumped into action as if their own lives depended on it. Instagram, Facebook, YouTube, all of these platforms posted a lot of shit, but they also shared the beautiful stories from those on the ground and those who loved the lost. It's these stories that I have grabbed onto with no sight of letting go any time soon. The deep feelings of sadness and grief sat inside me for almost two weeks. I felt numb, devoid of the typical feelings of contentedness that I usually exuded. I also felt silly, silly that I had such deep empathy and emotion toward people I didn't even know. But slowly, I came back to myself. Not forgetting what had happened, but rather understanding it. Whilst I scold social media channels for allowing inappropriate, harmful content to be filtered through to our kids and for giving people with dangerous views a platform, I also have a lot to thank them for, because without them, I'm not sure where my head would be right now. ABOUT THE AUTHOR, RIKKI: With over 20 years' experience in media, advertising and sales, Rikki is the driving force behind forming strong and profitable relationships between Safe on Social and our valued partners.

  • Why community is the only real suicide safety mechanism on social media

Trigger Warning: This article discusses suicide. The chances are you have seen posts relating to suicidal thoughts or plans on a social media platform before. Posts such as viral live feeds of people attempting suicide show clear intent and become heavily discussed across the world for a moment in time. We see this phenomenon pop up on Facebook, Snapchat, and TikTok periodically. These posts are hard to miss because of how confronting and raw the content is, and how frequently they are reposted for voyeuristic purposes. Thankfully these posts are in the minority, yet every day people are posting in relation to their suicidal thoughts, and these may go under the radar, their true intent unnoticed. Social media platforms have become a space that people use to diarise, express, and seek connection with others about challenges they are facing and thoughts of suicide. The potential risk in suicide-related posts can be much more difficult to decipher because they are rarely explicit. When people feel suicidal, they rarely use such direct terms as 'feeling suicidal'. Instead, more colloquial or cultural terms are used, and these terms can differ across location, age, and social group. Social media then adds another layer of complexity when it comes to suicidal communication. Digital hieroglyphics like memes and GIFs, cultural nuance, and veiled communication to avoid censorship filters can make these posts difficult to distinguish because the suicidal language is even less clear. When you are scrolling through a feed, you may not pick up on these distinctions because they are not sitting on your page with a giant red flag. Platforms have stated that they are proponents of safety when it comes to their content. There is a narrative that there are features to flag content with the site, with algorithms to detect certain wording and moderators who work on what is referred to as 'trauma floors', sifting through reported and flagged posts. 
The sheer number of reports makes timely review impossible, but even if moderators were able to respond within the minute, when communication is so idiosyncratic, how can one person understand intent through another's personal, cultural, and societal lens from across the globe, let alone thousands of individuals on a rolling basis? The answer is they can't. Whilst platforms will tout success stories of sending emergency services to people's houses and preventing death by suicide as proof of their safety processes, these are beyond rare in comparison to the sheer volume of platform users heading onto social media to express their suicidality. Despite over 23 years of working in the area of suicide prevention, intervention, and postvention, the complexity of detecting risk on social media isn't lost on me. It is both an art and a science. Whilst there are evidence-based risk factors and warning signs, everyone is an individual, and what fits for one doesn't necessarily fit for another online. Add to this the impact that distress has on our ability to think and communicate clearly, particularly digitally. Nevertheless, there have been some common themes among those who contributed to my academic research. Visual communication tools are often used on social media to explore feelings and thoughts that are difficult to express, either because of sensitivity or complexity. Memes and GIFs are particularly popular. Nihilistic humour or symbolism is common to express being overwhelmed, feeling hopeless or isolated. These visuals, in isolation, may seem humorous, but they often have a deeper meaning attached, the comedic element acting as a veil to desensitise the sting of reality. People may also take part in live feeds or videos of risk-taking behaviours. Drugs and alcohol, reckless driving, and violent weaponry can feature. 
It isn't necessarily directly suicidal content, but the ease of taking part in risky activities is seen to increase, particularly with teenagers. Then there are seemingly obsessive posts and reposts referencing or glorifying those who have died by suicide. These might be those with celebrity status, such as Kurt Cobain, Twitch or Robin Williams, or people who are known personally to the poster. The use of a hashtag makes searching and finding new, relevant suicide-related content a breeze. Again, to escape the filters, this hashtag may not always be a straightforward word or phrase, as we have seen time and time again. The choice of hashtag may also change quickly to avoid censorship. Spelling changes, substituting numbers for letters, slang and code are all used to avoid censorship filters. Suicide being spelled S00icide is an example. This isn't a phenomenon isolated to suicidality; underground pro-anorexia posts and hashtags have been in existence for as long as the option to connect to others in this way has existed. Contagion has always been of concern in relation to those vulnerable to suicide. Media protocols are in place to control what is released to the public when broadcast by companies, but there aren't any restraints on what the general public posts on social media. Social media can contribute to contagion by providing a flood of images, all easily accessible with a simple search, a hashtag, or a friend liking a post. It isn't uncommon to receive videos of people discussing their suicidality, attempting suicide, and even pictures of death from suicide when looking for comforting content. Exploring platforms to find help for suicidal thoughts or information on supporting a friend during a difficult time can show you information that is both helpful and destructive, because the content retrieval process doesn't discriminate between the two. One doesn't even need to search specifically about suicide to be connected to the content. 
A curious mind looking up a trending death may have suicide-related material linked to their FYP suggestions, or generational trends, such as being connected to a particular music scene, can link a subcultural influence such as self-harm with the interest. This has been seen time and time again with the pairing of grunge or emo culture with the self-harm subculture via the inclusion of multiple hashtags to increase views and grow a platform. For example, a photo of a band on Instagram might have #alternative #music #dark #goth #emo #cutting #sooicide, pulling in anyone connected to any of those algorithms to that central point. How does an algorithm distinguish between dark humour and suicidality? Between a love of a musician and a connection to their despair? Between possibly age-appropriate boundary-pushing and potential self-injurious behaviour? The simple answer is that it doesn't, nor will it be able to. Is social media all bad? Definitely not. It is a place where someone can go to process, seek connection, find refuge, and get information. It is a place where one can seek help from others and also be identified as needing help by people who see your content changing over time. Governments across the world have discussed the merit of instituting laws to remove online content; however, the sticking point is whether removing all content will eliminate a mechanism that isolated people use to flag that they need help, and whether it will take away the discussion and promotion of help-seeking measures by proxy. Subsequently, if this is removed, what can replace this tool for those who don't feel comfortable with traditional mechanisms for flagging down help or connecting with others? So, what is the answer? Community is the key to supporting and intervening with those online in suicidal crisis, whether that community is at a local or global level. We know the people on our feeds, and we see a longitudinal picture of their online communication. 
We may not always understand the intent of their communication, particularly if it is a meme, but we may perceive a change in tone or feel something may be off. We may see unusual hashtags or hashtags that don't make sense. We need to ask, or do some research on what this might mean. It can be a rite of passage to hide parts of your life from those around you, but if you are seeing this information, it is something worth investigating further, because that person may be experiencing mental health issues. It is these gut feelings that we need to act on directly. This may seem intimidating. People may think "I don't know what I am doing" or "I don't want to say the wrong thing". Simply reaching out with a personal message is a really good start. Checking in and expressing a sentiment similar to "hey, I noticed some of your posts lately, I just wanted to check in and see if you are ok?" can be a huge tool in reducing suicidality. Why? Because people believe they have been seen and heard, they feel a connection, and they know some people are safe and will not judge them. The opportunity to talk about issues also allows some steam to exit the pot. Even if it is simply a post they thought was funny or an off day, this action is an important one. It is important to know that if you are a teenager reading this and you see a friend's post you are concerned about, tell a trusted adult and get their support. The assumption that someone older or more experienced will take care of it may not be true. Most teenagers who post suicidal content online will use custom audience features to hide the posts from adults, so you cannot keep this to yourself. Even if you feel confident in handling the situation, both you and your friend need to be supported in navigating this. But what next? You've had the conversation and you're still concerned? 
Encourage your contact to see their GP or connect with a service such as a suicide or crisis line, many of which have both 24/7 phone and online chat access. Keep the communication line open as well. If you think that a suicide attempt may happen rapidly, contact the local Emergency Services in that town and ask for a welfare check, providing the contact details you have. We cannot rely on platforms to keep us safe, irrespective of the narrative placed around safety options. If you need to talk to someone, please reach out for help:
Australia: Lifeline: 13 11 14 or Suicide Call Back Service: 1300 659 467
United States: Suicide and Crisis Lifeline: 988
United Kingdom: National Suicide Prevention Helpline UK: 0800 689 5652
For additional international suicide prevention lines, please visit https://blog.opencounseling.com/suicide-hotlines/
ABOUT THE AUTHOR, ANNIE FARDELL HARTLEY: Annie is a Registered Psychologist and Suicidologist who has worked in clinical, consultancy and management roles across Government, Education, Not-For-Profit and the private practice space for over 20 years.

  • Why my new high schooler won’t be having a phone

I've reached that point that I have been dreading. My husband and I don't dislike technology, we know it's important, but we do dislike the effect personal devices have had on society in general. Society is now quicker to judge, jumps to conclusions, thinks less critically, offers opinions on topics it probably shouldn't, demands instant responses, and fills our lives with an excess of 'white noise' that has impacted upon all of us. In a world where seemingly every high school child has a phone, it seemed inevitable that we were going to have to supply our soon-to-be Year 7 child with a phone after being told that "everyone" has one. Next year, he will most likely be riding to school, which is around a kilometre from our home, and he'll only have to cross one minor road to get there. So his time away from available contact methods will be limited, but of course, he may elect to go to a friend's house before coming home. Not wanting our child to be 'that' child, or for him to be ostracised and possibly even bullied as he heads into a new and exciting chapter in his life, I started to look for "kid smartphones" that allow us to have some level of control, or something that will keep him "safe" whilst we are not with him. The options available, whilst better than a hand-me-down phone, were still not sitting right with me, and my procrastination has given me the time and confidence to decide that, whether he's the only child in Year 7 in this situation or not, he won't have a phone. As happens at this time of year, I have attended Christmas functions and end-of-year gatherings, spoken with a lot of people and reconnected with friends we generally don't have time to see during the year. In these discussions, the usual commentary of "wow, where has the time gone?", or "I can't believe he's in high school next year" is abundant. This leads to discussions on what we are going to do and how we manage our child's rapidly evolving independence. 
Surprisingly, when I've mentioned that I want him to be able to contact us but don't want him to have a phone to do so, because of all the other issues that come with a phone, everyone has been supportive. Friends who have had children go through this stage already and given them a phone wish they hadn't. A friend who is a high school teacher with younger children than mine has said there is no way her boys will have a phone in high school. Her school banned phones completely after trialling a ten-minute phone allowance during break times, as it caused more problems than a total phone ban. She was even impassioned enough about the topic to say that, as parents who are about to navigate this issue, we need to learn from the parents before us and take a stand in saying no to just blindly giving our children phones, and she's completely correct. Change starts with us and those brave enough to buck the trends and the status quo. I also recently made a post in a professional group I am part of, asking for possible solutions to the contact problem and, if I stuck to my guns about no phone, what impact that might have on him. The responses varied; some said that they know of children with no phones who miss out on social gatherings that are usually arranged through group chats. Ok, if that's the worst that's going to happen, then that's actually not too bad compared with the worst that could happen if he is part of these group chats, or has unfettered and unfiltered access to the web and its insidious apps before he is mature enough to understand and handle the content. To be fair to him, he does still have a device that he can use at home, so he's not going to be totally in the dark, but his access to electronic communication is going to be minimised and in a controlled environment where we are available to him if things go awry, or to provide the constant education that he's going to need to become a good digital citizen. So, what does my son think about this? 
He’s always known a phone is not a guarantee, so we haven’t pulled the rug from under him, and he knows he’s not going to be totally isolated. He’s had the benefit of listening to me prepare my talks for Safe on Social, and knows what we present to students in schools, so he has a heads-up on how dangerous some of these apps can be, and how intrusive and addictive a device can be. Luckily, he’s pretty sensible, but he also understands we’re looking out for his safety, and for now at least, he’s ok with that. So, before you wrap that device to place under the Christmas tree for your soon-to-be high schooler, or any other child this year, please consider what you are setting your child up for and whether there are other, more practical options that benefit both your family and your child. Think about whether you can be part of that change and a trend back to our kids being kids. ABOUT THE AUTHOR, ANDREA TURNER SPECIALIST FACILITATOR CYBER SAFETY WITH SAFE ON SOCIAL Andrea is a lawyer with a background and interest in personal injury and employment law. After observing a significant increase in workplace psychological injuries over the years and correlating this increase with the rise in popularity of social media and electronic communications, Andrea has spent time educating both employers and employees on remaining safe both in and out of the workplace, and advocating for a change to kinder workplace cultures. Knowing what types of environments our children are likely to be exposed to, Andrea seeks to empower our young people and educate them on the impact that social media and technology can have on them, not just now but also into the future, and to ensure that students, parents and educators know how to use technology safely and are not afraid to call out poor behaviours. Being a mum to two young boys has also strengthened Andrea’s resolve to make the world a kinder place to be. 
Want to learn more about how to keep your kids, students, school, business or sporting club safer online? Subscribe to Safe on Social's ESAFETYHQ Online Learning Program today. Learn more about ESAFETYHQ.

  • How do you slow down in a world that is speeding up?

The question of the ages. A question that, through the thorough research of author Johann Hari in his book Stolen Focus, has only now been answered. Do you remember when you bought your first phone? Or when you first downloaded Twitter or Instagram? Since the first mobile phone, life has never really been quiet. It's hard to think back to a time before phones. Personally, I don’t really know what life was like before technology, and in many ways I wish I did! Hari says that on average ‘we touch our phones 2,617 times a day’. It is scientifically proven that people can only focus on one thing at a time. If you try to do more than one thing at a time, everything is done to a much lower standard and you absorb less information. The average time it takes to refocus on a task after being distracted is around 23 minutes! In an interview with ABC radio, Hari says this is called the ‘switch cost effect’. Through the constant, daily interruption we face through social media, our society experiences something called cognitive degradation, which is essentially frequent memory loss. Because we are always so busy, we are constantly trying to do what we think is “multitasking” when really it is “juggling”, and it isn’t getting us anywhere. One very important thing that we don’t give ourselves enough time to do is mind-wander. When you sleep, or when you feel as though you are in a bit of a trance, your mind is actually wandering through all of its past thoughts, sorting through them, and when you get distracted by the constant noises of social media, you allow your mind that little bit less time to sort through its thoughts. Over time, this is what causes cognitive degradation. So much time is spent taking the right photos, posting the right photos and receiving likes and views. We never really take a moment to just breathe, never really allowing our bodies and our minds to regather. Our phones and all of our technology today were specifically designed to interrupt us. 
Hari mentions a man named Tristan who worked at Google whilst it was developing Gmail. The team he was on was trying to figure out how they could manipulate people into picking up their phones more frequently and checking Gmail more often. This is when they came up with notifications: when your phone gives a buzz, you are much more likely to pick it up and see what the notification is. This is something that all social media platforms use to make money: Step 1: Create a platform. Step 2: Create an algorithm that will generate videos, ads and other content that will keep people interested. Step 3: The more they scroll, the more money we make. On Hari’s journey he found out how apps show you exactly what you want to see. It was actually very interesting and quite alarming to read. All apps, including Google, Snapchat, Facebook, Instagram and Twitter, store your every search. They create a profile of your patterns and the things that you are most interested in or easily distracted by, and they sell this information to advertising companies – companies that then set up advertisements to entice you. Everyone has experienced that weird, not-so-coincidental moment when your social media platform pops up with an advertisement for something you were just talking about or searching, and you think ‘wow, my phone must be listening to me’. Well, you’re not wrong, because your phone and your apps are listening to you and following every button you press. If it’s free, you are the product. Social media has become a distraction, an interruption into our every thought. Now that I am aware of what social media aims to do, I have found that I stop myself from picking it up more often. I now understand how important it is that every person, no matter what age, understands how social media functions so that we can become more aware of the needs of our own minds for the future. Technology has so many positive aspects, and one of those is that our world is continually advancing. 
In spite of this, we need to stop, and I mean really stop, especially as the next generation, and ask ourselves: how do you slow down in a world that is speeding up? ABOUT THE AUTHOR, DRUE: I’m 16, and I joined the Youth Committee as I believe it will be a significant learning experience that will go towards the security of the online world. I believe social media is such a large part of today’s world, positively and negatively impacting individuals and society. Hopefully, through the Youth Committee, I can help create a safer online space for everyone.

  • The Allure of Call-Out Culture

The allure of call-out culture continues to enchant thousands of people worldwide each day, with more and more people choosing to voice their opinions about others from behind the safety of their screens on social media. Call-out culture is a form of public criticism that uses social media to clap back at the behaviour and actions of people who may have done something that goes against public opinion. However, the increasing prevalence and severity of call-out culture leads to two questions: Is call-out culture just an excuse for trolls to criticise people online? And has it gone too far, or is it still beneficial to call out behaviours that are considered wrong? We have all experienced some form of call-out culture. It may have been at a family dinner, it may have been online, you may even have done it yourself, but one thing remains the same: it has been around for years. The issues with call-out culture lie in its conflict with basic human rights and social justice movements. We should each be allowed to voice our thoughts and opinions about topics that may be controversial without the fear of being judged and harassed. I have been argued with for voicing my own opinion at a supposedly friendly party, and whilst I valued hearing the opinions of others and listening to a different perspective and point of view, there comes a point at which a simple disagreement turns into a dispute. I disagree with call-out culture online and in person because of the confrontation. Despite the argument that is sure to follow, it can be entertaining for witnesses. So, I suppose there are a few benefits. Call-out culture should be used when someone says or does something genuinely wrong, something that goes against human rights or morals. As a result of being called out, people are then cancelled – driven out of social circles, ostracised. Call-out culture can often be used as a petty way to start drama or an argument. It can be used to stir up trouble and make a mess of things. 
It can be used online as a way for people to sit behind their screens and feel good about themselves for pointing out the inferiority of others, but this isn’t what call-out culture is about. It’s about making the world a better place all round by showing that nasty actions and words won’t be tolerated, and by forcing people to acknowledge that something is wrong. Call-out culture is powerful. If used correctly, it has the power to make or break a person. It has the power to ruin a person’s career or reputation, to ruin relationships. With call-out culture, it is easy to get carried away and turn a small issue into something huge. But given the ease of gaining the moral high ground and singling out a person, how do we correctly call somebody out? Some say that a softer approach is ideal. When you call somebody out publicly, you are unwittingly giving others permission to contribute, and as a result it turns into multiple people against one. A softer approach would be to message them privately or pull them aside to save them from public embarrassment. When we have said or done something similar ourselves, we often use reason to justify what we have done. You may have been going through a bad break-up, so you feel you can do and say as you wish to anyone – but why can you do this and not somebody else? A second approach that is considered better than calling somebody out publicly is considering the reasoning behind the other person's behaviour. Take a step back and ask yourself, “Why might they be doing whatever they may be doing?” This could lead to a less heated debate or maybe even a choice to ignore the person altogether. In the end, there is no right or wrong way to call somebody out. Sometimes it is not up to us but rather the person who has done something wrong. It is up to them to use such incidents as a way to grow as a person, as a way to step back and realise they have done something wrong, and as an opportunity to apologise. 
If you have been called out, it should not be used as an opportunity to make yourself seem better in the public eye by making a scene, whether in person or online. It is not an opportunity to insult and hurt people. If you have been called out, simply work out what is wrong and acknowledge it. Apologise, try to fix what you have done, and avoid making the same mistake twice. Try to walk away from the situation, if only for a brief period of time, to let everybody involved cool off. Call-out culture is still beneficial to society if it is done right, but isolating a person socially is taking it a step too far. No person should be left to fend for themselves against an onslaught of disagreement. In the end, you can tell somebody they have done or said something wrong, but if they don’t wish to change their opinion or behaviours to fit in, they will remain the same. Sometimes it is best to just ignore the issue. Sometimes a person doesn’t know that what they have done is wrong, so pointing it out can result in change – but is it truly worth it? In my opinion, no. ABOUT THE AUTHOR, TEALIA: I am 16 years old and I joined this committee to make a difference and be the support that my peers deserve, the voice for those who cannot speak up, and the guide to help navigate the rocky world of social media. I want to help my peers be heard and make their online experiences more positive. I bring to this committee my strength, dedication, and support. I am excited to help.

  • What YouTuber Dream's face reveal tells the internet about online privacy and beauty expectations

Popular Minecraft YouTuber Dream, who rose to stardom creating speedrun content and challenge videos with his friends in 2020, reappeared in the public eye a number of months ago with an elaborate publicity campaign to reveal a significant part of himself to the public: his face. In October, a video called “Hi, I’m Dream” was uploaded to his main YouTube channel, followed by a vlog his friend and co-worker Georgenotfound created called “I Met Dream In Real Life”, drawing a significant amount of attention back to the popular creators; the two videos combined gained roughly 39 million views, with Dream’s video holding number two on the YouTube Trending board as of 6/10/2022. His publicity stunt involved many of his YouTube friends filming themselves calling Dream and reacting to his face, with the phone turned away from the camera; these videos were then posted to Twitter and YouTube. This, however, was not the first time Dream had led his fans into believing he was revealing his face. He first hinted at a possible reveal at the end of 2020 in his friend Mr Beast’s video “YouTube Rewind 2020, Thank God It’s Over”, where he posted photos of himself with his iconic doodle smile in front of his face, only for fans to be met with a hyper-realistic mask of his smile icon underneath, created by artist Undauntedhaunted. “My goal was to just start doing things,” Dream states in his video. “Get out, meet creators…be an actual creator, be a person.” As an older fan of Dream and his co-workers, seeing the resurgence of older fans talking about the latest news regarding the “Dream Team” brings back an old sense of nostalgia and renewed interest in his content. At this point, all I can see on my own social media feed are updates from fellow fans about the creator’s new-found freedom in what he posts – most significantly, his lifestyle and his home in Florida. 
However, while I have been interested in the positive reactions to Dream’s face reveal and the different kinds of content he and his friends have been publishing over the past couple of months, what is more intriguing is the hateful and distasteful response that the face reveal provoked, and the factors behind that response. Beauty expectations of internet creators and influencers: In 2019, Insider documented a survey of 3,000 kids across three countries – China, the US, and the UK – on what children wanted to be when they grew up. Over 30% of children in the UK wanted to be vloggers/YouTubers, followed by 29% of US children wanting the same career. With popularity on social media becoming a more viable and accessible career for younger generations, it’s no surprise that popular culture’s beauty standards and expectations would soon follow. The Twitter hashtag #PutTheMaskBackOn began trending soon after his face reveal went public. If you take a look through some of Dream’s friends’ Instagrams, fan edits, or Twitter hashtags, you are bound to find fans calling any one of them attractive, using social media to express a parasocial attraction to social media stars. When Dream did not meet his fans’ and social media users’ expectations of beauty, hate began to appear online. This factor, along with the anonymity social media provides, slowly creates a tidal wave of similar content and hatred directed at a single person, purely based on appearance. The lack of content creator privacy: One crucial component of Dream’s face reveal video was him stating, “I’ve been bunkered up…the people trying to leak my face, trying to find out what I look like.” He adds, “There’s too many…just a tiny, tiny bit too much.” Doxxing and the leaking of private information are nothing new in online spaces; however, creators who purposefully choose to hide their identity present a new challenge for fans and haters alike. 
For example, when Dream would find himself amidst an online controversy, users would try to leak private information, such as his home address, his family’s faces and contact details, and, naturally, his own identity. Many of Dream’s co-workers, such as Ranboo and Corpse Husband, who have also decided to remain faceless on the internet, could possibly take this backlash, and marketing success, as a sign to reveal themselves or to hide further within their masked identities, as the sub-genre of “faceless YouTubers” somewhat relies on an eventual face reveal once the creator is popular enough. That Dream’s identity was something his friends stressed “they hadn’t even seen” adds further interest. Because content creation is a fairly new and often freelance career, privacy is a component of a creator’s life that is entirely up to them. However, with this career growing in popularity and notoriety, many creators now have a paid team or PR manager behind them to stop these types of leaks and scandals from tarnishing the creator’s reputation. Dream’s face reveal has reiterated influencers’ impact on media and popular culture as a component of the new age of technology and careers. This investigation leads me to believe that such an experience could shape or diminish a creator’s identity and self-esteem, and that it provides a lesson for future creators about exactly how much of themselves they should share on the internet. ABOUT THE AUTHOR, SCARLETT: I have just completed Year 12 and love studying all things cultural and sociological. What drove me to become a part of the Safe on Social team was contributing to fostering a more equal and safe online world and the opportunity to educate Australians to promote a healthy relationship with the internet. 
My skills regarding managing cyber/creative burnout and acknowledging and responding to online criticism and hate will positively impact readers and the community.

  • Wizz – Make New Friends

What is Wizz? Wizz – Make New Friends (2019) is a friend-finding chat app in the social media category, focused on connecting strangers with common interests from all over the world. It allows users to find and meet new friends by scrolling through a live feed of other users to talk and exchange pictures and videos; there is no option for video calls. Once two people agree to be friends, users can either freely direct-message each other or create a group chat that anyone can join. User profiles, posts and bios are available for anyone to see. Who is it suitable for? The App Store provides an age rating of 12+, whilst Wizz requires a minimum age of 16 for use. Wizz gathers age data to put users of a similar age in the same feed. However, this is unreliable and misleading, as the app does not authenticate or verify its users’ age or sex. As a result, younger users can create an account. Minors and adults frequently appear in the same space and can be connected if they accept each other’s friend requests, making it an unsafe virtual space for anyone under 18 years of age. What are the potential dangers? Whilst the app’s guidelines claim to prohibit sexual and violent content, it is evident that group chat conversations often directly violate its terms of service. The use of foul language is frequent, and some users’ bio images are also sexually suggestive. The lack of censoring and the inability to use video calls can foster a perverse arena for catfishing and grooming between minors and adults or predators. Users under the age of 18 have noticed these dangers and have commented that “The app is a hellhole for the spawns of maggots to slide into your DMs and attempt to swindle you for inappropriate things. Half the people I talk to on the app are catfishes and are not who they claim to be”. Are there any parental controls available? There are no parental controls available on Wizz. 
This, and the very nature of uncensored digital communication, is extremely dangerous for teens, as it creates opportunities for catfishing and grooming between minors and adults. Final word: Parents must talk to their kids about the potential risks and dangers of posting personal information on Wizz and other social media apps in a way that empowers them. They need to feel safe speaking up and telling you what happened, so make sure your child knows they always have a safe space to speak up, no matter the circumstances. Even if your kids do not use the app, it remains important to teach and reinforce appropriate, respectful communication and how to navigate such platforms safely. Parents can discuss alternative social media apps that are more appropriate and safer for minors, and explain the dangers of kids interacting with adults online and the risks of being friends with someone you’ve never met in person. ABOUT THE AUTHOR, GIGI: I have just completed my HSC at school in Sydney. I hope to attend university, where I aspire to study a double degree in Property Economics and Business Law. I wanted to participate in the committee to contribute a contemporary perspective on safe and effective social media engagement for young people.

  • High School: The Good, the Bad & The Hierarchy

High school. The best times of your life, right? Wrong. It is the meeting place of hormonal, angry teenagers. A society based entirely on imagery and deception. The place where we are all determining who we are. The place where everlasting friendships are forged and pathways leading to successful futures become available. Ranking within the social hierarchy is the determining factor of who you are believed to be. People are judged solely on popularity: both online and offline. Popularity: the thing that dictates whether these are the best or worst times of your life. Firstly, what is a social hierarchy? A social hierarchy is the specific ranking of individuals based on various factors. The social hierarchy is constantly evolving and yet seems to stay unfortunately stagnant. The stereotypical groups are as follows: the popular kids who are rich and attractive, the jocks who are associated with a particular sports team, the floaters who blend between the groups seamlessly, the good-ats who are friends with everyone, the artistic people, the brains or the ‘nerds’, the normal students who are ‘invisible’, the stoners, the goths, the anime/manga lovers, and the loners who keep to themselves. While these groups have been evident for years, social media and pop culture are showing that they are not set in stone. Regardless, some things haven’t changed. Ruling with an iron fist are the populars and the jocks – the groups which are portrayed as manipulative and charming, and which use social media status to improve their supposed self-worth. Despite the societal push for diversity, statistically, these groups are predominantly white. “People of colour could not be completely in another group because they were in [a racial-ethnic] community by default [because] that’s just who they are,” stated Rachel Gordon, lead researcher at the University of Illinois at Chicago. 
This just goes to show that no matter how much racial inclusivity social media portrays, racial division is still present within the social hierarchy. You’ve probably heard of the terms clicks and cliques. You might even be wondering what the difference is. When people ‘click’, they bond with other like-minded people. They share interests and values, and provide each other with something of a home base. In other words, a click is a healthy friendship. Conversely, ‘cliques’ are organised around power, popularity, and influence. Members embrace the exclusivity of the group as, to them, it defines who they are as a person. Outward appearance is the sole important factor in remaining in, or being outcast from, these groups. Fortunately, for both society and individuals, social media has a vast international reach, allowing friendships to be forged internationally. Clicks are being made with a click of a button. No longer are we confined to the people within our town. But unfortunately, social media is driving external self-validation through outward appearances, thus allowing cliques to similarly thrive. Psychologically, human brains are still wired to follow the ‘herd’ in the name of survival. Dr Mitch Prinstein, a leading psychologist in popularity, said, “Within forty minutes of being rejected, changes can be seen in our blood; it is our body reacting to the recent threat towards our survival.” Our body literally goes into survival mode when rejected. This response meant that Homo sapiens survived when others went extinct. He also said, “When we experience social rejection a part of the brain is activated. It is the same part that activates when we feel physical pain.” Our brains are telling our body to prepare for the inevitable pain associated with rejection. Homo sapiens have evolved to be at the top of the food chain, and yet we are still subject to natural selection. A reaction which once increased our survival now, ironically, does the exact opposite. 
This reaction reinforces the need to fit into a crowd, to be popular or at the very least be accepted. The social hierarchy is ingrained into our very DNA as animals. Just look at chimpanzees. They too have a hierarchy, with an alpha male and female. Factors such as physical strength and age, factors of survival, dictate their hierarchy. To them, it is necessary. But for us? The hierarchy is dictated by ‘desirable’ traits: extraversion, looks, wealth, athletic abilities, and popularity. Instead of the physical confrontation associated with rising in the ranks, social media has become the platform for these challenges. Bullying, manipulation and the simple omission of life’s imperfections are becoming means of boosting rankings within the hierarchy. Just think of all the photoshopped and idealised images prevalent on social media. High school is being controlled by these edited portrayals of life online. This is present even within my own high school. The popular people are constantly posting their ‘perfect lives’ on Instagram. But what about the imperfect moments? We never see them online; it is simply pretended that they don’t exist. After all, the groups never mix, so how are we supposed to know any better? What we see is all we know. Social media has literally become the world’s high school. Whether people are willing to admit it or not, we have all imagined at one point or another having a lot of followers, being admired and envied. Gratifying, isn’t it? Quite literally. Gratification is triggered because the pleasure centres of the brain are activated. This section of the brain is developed at just 11 years old! Whenever we consider ourselves popular, these areas light up like a Christmas tree (cliché, but you get the gist). Social media is a mainline to an instant gratification high. An experiment put the conformity caused by social media to the test. 
A random group of teenagers were shown several photos from Instagram with a limited number of likes. Immediately, the inhibition centre of the brain was activated, meaning that they were turned off by the photos. When they were shown the exact same photos with lots of likes, the inhibition centre shut down, showing an overwhelmingly positive response. Startling, isn’t it? Something as simple as the number of clicks of a button controls opinion. This just goes to show that opinion isn’t so personal; it is subject to the opinions of others. If other people think something is beautiful, we are subconsciously more likely to agree with them. Social hierarchies are ingrained within all societies. They will never completely disappear. Nothing can completely eradicate our animal nature (no matter how hard people try). Nevertheless, conscious decisions can mitigate the control that this hierarchy has on our lives. Rather than being consumed by the race for popularity, invest more time in being happy. How that happiness is achieved doesn’t matter. Meet people that share the same interests; make friendships that will last through all the drama and gossip. While being popular may be ‘cool’, ignore the people that use you to improve their own status. Don’t let anyone change who you are. No matter the situation, be true to yourself and strive for the future you dream of. Give everyone a chance; no matter how awkward it may be, a simple conversation can make someone’s day. Social media shouldn’t be the centre of the universe. Yes, this is me recommending that people reduce their social media usage. Before posting, ask these questions: Why are you posting? What are you posting? Who are you portraying yourself as? It doesn’t matter whether people think you are weird because all you do is post pictures of your cats; if that makes you happy, do it. Let’s just face it, high school is horrible. Teenagers are complete hormonal, judgmental idiots. 
But we are all just looking for a place to belong, an identity. Everybody is different, and we will form groups. We don’t all get along, BUT we can include people. Our status doesn’t change much, no matter what you post or don’t post. We can’t change our brain’s reactions. Live life your way. Be who you want to be, love yourself. Make weird and bizarre friends; they are more entertaining. Who wants boring friends anyway? What do you want to be remembered for? If we all asked ourselves this question, the world would be a much more bearable place. ABOUT THE AUTHOR, JAENELLE: I am currently 17 years old and completing my final year at school in Rotorua, New Zealand. I wanted to be a member of the Youth Advisory Committee to share my experiences with social media with others. I want to communicate to everyone that social media is only a tool – it is up to you to decide how you want to use it.

  • The War Before Our Eyes: Global Conflict in the Digital Age

Wars have been a periodic, yet seemingly inevitable fact of life for humans as long as we have lived on this planet. In the modern era we have seen staggering military powers develop to fight devastating wars, fracturing millions of lives around the globe. We, who live in relatively safe, developed countries, have rarely had to reckon with the reality of warfare in any serious or prolonged way. In fact, we are usually the ones waging the wars. Generally, the closest we come to seeing the gritty realities of combat are daily news reports, which are often sanitised in order to preserve the sensitivities of coddled first-world audiences. We have never had the ability to watch wars unfold hour by hour, to see what the individuals who constitute the cogs of conflict get up to. Until now, that is. The development of social media platforms has, as you all well know, allowed for the transfer of information to occur incredibly quickly. Events happening on the other side of the world take barely 10 minutes to reach our Reddit or Twitter feeds, and this now includes the new swathe of combat footage that has made its way onto these platforms. There are now entire subreddits, Twitter accounts, and 4chan threads dedicated to the publishing of actual combat videos making their way out of places like Syria, Yemen, Afghanistan, Kashmir, and now notably, Ukraine. Recently I’ve spent more and more time watching these videos, fascinated by the intensity and harsh realities that they convey. Videos of Russian soldiers doing strafing runs in helicopters, Ukrainian soldiers shelling Russian positions, ISIS militants in combat with Taliban soldiers, French foreign legionnaires in Syria, a Palestinian gunman killed by an Israeli sniper, and more; the stream is seemingly endless. None of them are censored, and some are fairly violent. 
It feels taboo to even come into contact with these videos, as if they should instead be consigned to the realm of dark web content and sordid thumb drives traded between military officials. But no, it’s all right there, ready to be seen by any curious browser in their bedroom. The footage can sometimes feel incredibly amateurish or chaotic, so far removed from the pictures of highly calculated encounters that war movies have fed to us. It could just be a man standing behind a wall, firing his AK-47 indiscriminately towards some faceless opponent. It could be a soldier scrambling into a ditch to escape the shrapnel of a grenade lobbed towards them, or drone footage of an IED being dropped on a cluster of unsuspecting combatants. Perhaps you can spot one of them crawling away from the blast, legs shattered. It’s dark, and yet it also feels so unreal, so inconceivable to somebody in my position of privilege and comfort. While shock value and morbid curiosity were what initially drew me in, I’ve now been forced to consider the real ramifications of this footage, and what it means for the future of our digital age as we emerge into an era that portends further global warfare. What does it mean for our society that such violence is now so readily viewable? I think the reality is that we already live in an anaesthetised environment, within which we are so bombarded with information that it’s hard to even feel strongly about important issues anymore. Add to this mix more images of organised violence and warfare, and we may find that this too becomes just another thing that happens in our world, unquestioned because it is so pervasive as to seem almost normal. If social media is made to reflect the types of lives that we live, to shed light on the vast array of human activities, it only seems natural that warfare sits within that mix.
When we get the unfiltered picture, it’s clear that war isn’t as glorious as it's made out to be in our popular culture, and perhaps that new perspective isn’t necessarily a bad thing. It’s hard to know what this means for young people on social media, some of whom will inevitably stumble across this type of content. I find some of the footage incredibly interesting, and the rest of it fairly harrowing. In terms of what it might do to younger people, I think the effects may be multifaceted. Already young kids are exposed to fairly violent sentiment through television and movies, whose objective as pieces of media is to convince them that it is real so that they will be compelled to watch further. Swapping out the fake for the real might just be the next evolution of this content cycle. Then again, this combat footage has no plot except its geo-political context, no human intrigue except the innate empathy one feels while watching a fellow human being struggle in a trench. It all contributes to this slightly worrying, matter-of-fact type of viewing, where the violence is taken for granted, and the dead bodies are just nameless units. Paradoxically, I think the footage makes these wars feel much more real in one way, but much more one-dimensional in another. While we’re seeing the consequences of war that we normally do not, it makes clear how normal we can find such extreme violence, and almost espouses it by cementing it in the consciousness as the ‘done thing,’ which of course it is, whether it should be or not. It may grant young people more perspective than the insultingly simplified news reports they are used to, but they will need the ability to critically analyse what they see in order to make any informed judgement about how to interpret it. If that skill doesn’t exist, then the terrain of combat footage is ripe for those with wartime or nationalistic agendas to enter the scene and communicate any narrative that they please. 
It could devolve into a semi-Orwellian scenario, where conflict is explained as a necessity to keep people in constant fear, and the footage will then be there to simultaneously vilify any foreign ‘aggressors’ and to venerate our ‘heroes.’ For young people navigating this landscape in the online realm, it will certainly be confusing and difficult terrain to assess. This leads us into the next question of propaganda. This combat footage is often presented quite neutrally (at least on the r/combatfootage subreddit I browse), but it has the staggering potential to be manipulated and to then manipulate others. It might sound funny, but ISIS has a very effective PR strategy designed to convey their narrative, and it includes a type of image warfare propagated through visual media. ISIS has managed to recruit over 30,000 people from 85 different countries, no doubt in part as a result of their relentless media campaign, which includes combat footage displaying their victories over other groups like the Taliban. Through these videos they can project their religious arguments while emphasising their power and violence, all in a slickly edited montage of jihadists gunning each other down to the sound of haunting Arabic chanting and music. It’s certainly unsettling, but it’s also easy to forget that these things are happening continents away while we sleep soundly. The realm of combat footage is by no means reassuring, but it is certainly revealing. I think that we can learn a lot about the modern zeitgeist and the climate that social media has created by understanding that this footage exists. There is no shortage of interest in warfare and violence online, and while it may give us new perspectives, it is also an extremely powerful tool of propaganda. In a world where this is all readily available to us, it begs the question of whether to watch or not.
Is it better to keep our eyes and ears shut to the gunfire that rings out across countless communities, or shall we look to see what we can learn about the true state of humanity and the online realm? Either way, I think that this combat footage is likely to grow in relevance and continue to play a large role in the way our perceptions of global conflict are shaped and how we assess the actions of nations and armies, as we now see more than we have ever had the chance to before. ABOUT THE AUTHOR, LENNY: I'm eighteen, and I just finished my HSC at Cape Byron Steiner School in Byron Bay. I joined the Safe on Social Youth Advisory because I am deeply committed to learning about topics that interest me and sharing that knowledge with as many people as possible. By telling stories that often go untold, especially from the perspective of a young man, I hope to bring a diversity of voice that is incredibly important, especially in the online space. These new technologies affect us all, and young people are at the forefront of these changes and the challenges they present.

  • HSC Students Warned Against Trolling After Their Exams

School is a marathon, and the HSC is the final sprint. The sprint that some students (or runners, if you will) will trip and fall through, thus failing miserably. They will then take their anger out on the studied poets, artists and authors rather than their own shortcomings. This is most evident through HSC students’ collective distaste for the English exams, both Paper 1 and Paper 2. TWO exams for one subject, totalling 4 hours and 10 minutes. It’s a bit excessive in my opinion. Let’s just say, English is not my favourite subject, especially considering it’s the only compulsory subject and every other two-unit subject has a single HSC exam lasting up to 3 hours and 5 minutes. I’m not saying English isn’t important; it is, and I really believe that. However, it’s not suited to all students, and it doesn’t always play to their strengths or interests. Within the different English courses, the ATAR pathway gives you a choice of either English Standard or English Advanced, as well as the option of Extension units, alongside the non-ATAR pathway of English Studies. The problem with the ATAR pathway is that, for many students, it does not feel applicable to their areas of interest and is just not enjoyable in comparison to their other subjects. My Preliminary year 11 English was much more enjoyable; I felt it was more engaging than year 12, and my marks and ranking show that. My favourite ever assignment in year 11 English involved creating a Google Site exploring the investigative-journalism podcast ‘Serial’, season 1 to be specific. This interactive digital assignment piqued my interest, I put care and effort into it, and my results on the task reflected that. The idea of the assignment was for students to develop critical thinking skills and be able to question information from a logical standpoint, which is valuable in everyday life. In this digital age, I feel more students should be given assignments that encourage them to be digitally fluent.
In comparison, my year 12 English assignments were influenced by what we would encounter in the final HSC exam: essays, essays, and even more essays, which I enjoyed much less. Section 1 of Paper 1 is the main issue; it is basically comprehension, deconstructing and understanding unseen texts and relating them to the human experience. We revisited literary techniques and the analysis of texts/stimulus, but only very briefly, as it was presumed that students knew how to deconstruct texts, having done it all throughout high school. There are five sections all up, each worth 20 marks. In every other section students had a whole term to study in detail, whereas section 1 got a few lessons sprinkled in here and there. In previous years, students came out of the exam centres and immediately went on social media to harass and troll the authors, poets and artists referenced in the English exams. It’s so common that my English Teacher, who I have so much respect for, said to my entire class in the last week of school, with the passion that exists only in English Teachers, “If any of you go on social media and harass or DM any of the authors in the unseen texts after your exam, I will hunt you down.” I was waiting for an ‘and’, but it never came. She warned us, and she should have warned the entire state, I guess, because no one wants a repeat of the previous HSC cohorts who have been stained by what a select few have said or done on social media. In 2021, Ocean Vuong, a Vietnamese American novelist and poet, fell victim to angsty Australian teenagers, though he dealt with the harassment calmly, using humour. He even posted on his Instagram story a picture of a dog along with “Yo, what the hell is an HSC exam and why are all of y’all failing it?”. The unseen text in question was an excerpt from Ocean Vuong’s 2019 debut, award-winning, coming-of-age novel ‘On Earth We’re Briefly Gorgeous’.
The protagonist is depicted plucking the grey hairs from their elderly grandmother’s head as she tells them stories. Unfortunately, the graduating class of 2017 were not at all kind to Indigenous poet Ellen Van Neerven. Van Neerven, along with every English Teacher in the state, was not informed that her poem was to be used in the exam. She endured racist and sexist abuse from some HSC students, primarily on the HSC Discussion Group. Students who defended Van Neerven on social media also became victims of abuse, including sexual harassment. This behaviour was in response to her poem ‘Mango’, which was featured as an unseen text and, without context, perplexed many HSC students at the time. The question, “Explain how the poet conveys the delight of discovery”, was only worth two marks. To put this into perspective, that is just 2/100 marks across the English exams, meaning that if a student bombed out on this question, they could still get a maximum of 98/100, which is a band 6, the highest band. No matter how much the question was worth, the abuse was unnecessary and extremely hurtful, especially as it was directed towards a person with no input into the exam whatsoever. NESA (NSW Education Standards Authority) expressed their disappointment, stating that the students involved should issue apologies for their actions, and reiterated that the NSW curriculum promotes tolerance and diversity. This year, I completed my HSC. I am also a part of the HSC Discussion Group, which had a few trolls that would do their trolling every now and then. But after English Paper 1, the Group lit up with trolls poking fun at both NESA and the unseen texts. English Paper 2 came along, and NESA decided to throw us a curve ball. After we spent a year learning to write a polished and concise reflection piece for Module C, NESA decided against it and instead decided more unseen texts were ideal for this year’s HSC.
This baffled both students and English Teachers alike and led to more uproar on the Facebook Group, directed at NESA. It is imperative that HSC students understand that the HSC isn’t the be-all and end-all. If you fumble on a question, it is not the fault of an artist, author, or poet; they don’t deserve the abuse and trolling, as they are people with feelings too. Students who troll these human beings need to ask themselves: is it worth spending my fleeting time harassing strangers over an exam that is already over? Students need to get it out of their heads that the HSC will determine the future trajectory of their lives, and avoid basing their self-worth on the ATAR. ABOUT THE AUTHOR, JADE: My name is Jade. I am 17, from regional NSW. I am passionate about bringing awareness to the safety of social media users because it is ever so present in my own life and the lives of the people around me. I believe social media is a hugely influential aspect of our lives, as it shapes culture and social expectations and has significant impacts on our mental health. Ongoing education regarding social media is necessary for the future.
