Search Results


  • How to talk to kids about 'Get paid to play' games

    'Get Paid to Play' games can be a minefield for both adults and children. These platforms, with their promises of easy money for playing games, can be particularly seductive. As a parent, you must equip your children with the knowledge and critical thinking skills they need to discern the legitimacy of these opportunities. Here's how you can approach the conversation: Start with Curiosity, Not Judgment Begin by expressing genuine interest in the games your children play or are interested in. Ask them what they enjoy about gaming and if they've encountered any 'Get Paid to Play' opportunities. Starting with an open and curious approach can lead to more productive conversations than immediately expressing concern or scepticism. Share the Appeal and the Realities Acknowledge that earning money through gaming is exciting and could seem like a dream come true. However, it's crucial to discuss the reality that not all opportunities are as lucrative or straightforward as they appear. While some platforms may offer legitimate rewards, many are designed to exploit players' time and money. Discuss the Importance of Research Emphasise the importance of thorough research before engaging with any 'Get Paid to Play' platform. Guide them on how to check reviews, read terms and conditions carefully, and verify the security of payment methods. Encourage them to look for information beyond the game's website or advertisements, such as user experiences shared on forums or social media. Teach Them to Spot Red Flags Discuss common red flags, such as the requirement to pay money upfront, promises of unrealistically high rewards, or a lack of clear information about how earnings are calculated. Teach them the value of scepticism in the online world and the importance of questioning offers that seem too good to be true. Teach the Value of Privacy Stress the importance of protecting personal and financial information online. 
    Explain that legitimate games and platforms should never require invasive amounts of personal information or payment details without offering clear, secure processing methods. Create an Atmosphere of Open Communication Let your children know they can come to you with questions or concerns about online gaming opportunities. Assure them that you aim to support and protect them, not to restrict their fun. Creating an environment where they feel comfortable discussing their online activities will help keep them safe. Lead by Example Show them how you evaluate online offers and decide what to trust online. The way you assess the legitimacy of online content can serve as a powerful model for your children.

  • Deepfake Bullying Protection for Educators

    Artificial intelligence is getting smarter, and it is creating some big problems in schools. Schools have faced a troubling rise in deepfake bullying, with instances of teachers’ images being manipulated into provocative or embarrassing videos. Photoshop and image-manipulation apps already do a lot of damage, but readily available deepfake technology for photos and videos has taken it to the next level. This misuse often starts with photos taken from screenshots of school websites, school social media accounts, and personal social media accounts. When a photo is deepfaked to make someone look like they're doing or saying something they're not, it quickly becomes serious and can end up in a courtroom. It’s becoming more common for students to use deepfakes to pick on other students and teachers, and it's causing a lot of stress. In some cases, teachers have told us they feel unsupported by their workplace and did not know that this misuse of AI technology could be reported. In some cases, legal action for defamation could be considered. Robust strategies for educating people on how to protect our digital identities, especially on social media, are needed. What is deepfake technology? When does it become defamation? When is it considered image-based abuse? So many questions. We need to get better at teaching everyone – especially in schools – how to keep their online images safe and be smart about what they share on social media, as a proactive rather than reactive measure. Mitigating Deepfake Risks Avoid Direct Gaze When it comes to photographs, the way you pose can make a difference. Direct eye contact with the camera can provide a clear, frontal view of your face, which is ideal for creating deepfakes. To reduce this risk, try adopting different poses where you’re not looking straight at the camera. 
For instance, a candid shot where you're glancing to the side or an artistic pose where your gaze is directed downwards can disrupt the alignment of facial recognition algorithms used in deepfake software. This doesn't make it impossible to create deepfakes, but it can lower the quality and believability of a potential fake image. Opt Out If your school or workplace regularly posts photos online, you have the right to request that your image not be included. It's worth having a chat with whoever is in charge of media or communications and expressing any concerns you have. You could suggest alternatives, like group shots where you're not directly in the front or providing a written contribution instead. Privacy is your right, and opting out is a valid choice if you're worried about your image being misused. Use Avatars Avatars are a creative and safe way to represent yourself online. They can be customised to reflect your personality without giving away your real appearance. You can design one using various apps and platforms that offer a range of personalisation options. An avatar can serve as a consistent visual identity for your online activities, from your profile picture on social media to your by-line in an online newsletter. Mindful Sharing It's tempting to share our lives on social media, but every photo shared increases the risk of it being used in ways we didn't intend. Think carefully before posting a photo – consider who might see it and how it could potentially be used. Apply privacy settings to control who can view your images and clean up your digital footprint regularly by deleting old photos that you no longer need to have public. Photo Use Agreement Whenever you’re at an event or participating in something where photos will be taken, it’s smart to ask about how those photos will be used. You might want to request that your photos are only used for certain purposes, like internal newsletters instead of public marketing materials. 
    If there's a photo release form, read it carefully before signing, and don't hesitate to ask for modifications that make you feel more comfortable about where your image might appear. Robust Privacy Settings Make full use of the privacy controls offered by social media platforms and other websites. These settings can restrict who has access to your posts, photos, and personal information. Regularly review and update these settings to ensure they reflect the latest privacy options and your current comfort levels with online visibility. Content Watermarking Incorporating digital watermarks into your photos and videos adds a layer of protection by embedding a mark or logo that identifies you as the rightful owner. This can deter potential abusers by making the content less attractive for manipulation and easier to trace if misused. Stay Informed The landscape of AI and deepfake technology is rapidly evolving. By staying informed about the latest trends, tools, and potential threats, you can better prepare yourself to recognise and respond to suspicious content. Phishing Vigilance Be cautious with emails, messages, and links from unknown sources, especially those that urge immediate action or request personal information. Phishing attempts often precede identity theft, deepfake creation, and sextortion. Verify the authenticity of any requests by contacting the sender directly through a known and trusted method, such as phoning them. Reporting Deepfakes and Seeking Legal Advice If you encounter deepfake image-based abuse content involving yourself or someone you know, report it immediately to the platform where it's hosted. Most social media sites have policies against such content and mechanisms for reporting it. Should you become the victim of deepfake bullying that affects your reputation or well-being, consulting with legal professionals who specialise in cyber law can provide clarity on your rights and options. 
    They can advise on actions to remove the content, seek damages, and navigate the complexities of digital law. In Australia, defamation law is primarily governed by the uniform Defamation Acts of 2005, which are state and territory laws made largely consistent under an agreement between the states. These laws aim to balance the protection of individual reputation with freedom of expression. Here's an overview of how defamation law works in Australia, especially regarding minors and cases where a teacher might sue a child for defamation. Key Points of Australian Defamation Law Defamation in Australia involves the publication of material that could damage a person's reputation by making others think less of them. It must be communicated to someone other than the person defamed. The plaintiff must prove that the statement was published, the material was about them, and it damaged their reputation. Several defences are available, including truth (justification), absolute privilege, public interest, and fair comment on matters of public interest. Minors and Defamation Australian law does not exempt minors from being sued for defamation. However, suing a child for defamation is rare and presents practical and ethical challenges. The courts would consider the child's understanding and intent and whether they comprehended the defamatory nature of their actions. If a court awards damages against a minor, enforcing the judgment presents challenges. Minors typically do not have significant assets, and court orders for payment might not be practical. Parents' responsibility for their children's actions is limited in this context, though in some circumstances, there might be legal arguments for parental responsibility. Liability and Teachers Suing Students There are few precedents of teachers suing students for defamation in Australia. 
    Such actions raise significant concerns about the impact on the student, the educational environment, and the teacher's professional reputation. Schools and educational authorities generally prefer handling such matters internally through disciplinary procedures, mediation, or conflict resolution strategies, rather than through litigation. Legal Reforms and Considerations Australian defamation law has been under review, with reforms proposed and implemented to better balance freedom of speech with protection against harm to reputation. This includes considerations about digital platforms and potentially reducing the number of trivial claims. Seeking Legal Advice Specific advice from a legal professional is crucial in these matters, as the law's application can vary significantly based on the details of the case. Legal professionals can provide guidance on potential legal action, defences, and alternative resolution methods. Other Jurisdictions In every jurisdiction, the approach to defamation by minors, especially in contexts like a teacher suing a child, involves careful consideration of the child's age, intent, and understanding, as well as the broader implications for freedom of expression and the protection of reputation. Legal advice from a professional familiar with local laws is essential for navigating these complex issues. PROFESSIONAL LEARNING OPPORTUNITY We have a few spaces left throughout the year via Zoom for comprehensive training on cyber safety and AI tech use in schools. Email wecanhelp@safeonsocial.com for more info.
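    As a simple illustration of the content-watermarking idea discussed above, the sketch below embeds a short ownership mark into the least-significant bits of an image's raw pixel bytes. This is a toy example under stated assumptions, not a production watermarking tool: the function names are invented for illustration, and real-world watermarking schemes are designed to be robust against cropping, re-compression, and deliberate removal, which this simple approach is not.

    ```python
    def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
        """Hide `mark` in the least-significant bit of each pixel byte."""
        out = bytearray(pixels)
        # Flatten the mark into individual bits, most-significant bit first.
        bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
        if len(bits) > len(out):
            raise ValueError("image too small to hold the watermark")
        for i, bit in enumerate(bits):
            # Clear the lowest bit of the pixel byte, then set it to the mark bit.
            out[i] = (out[i] & 0xFE) | bit
        return out

    def extract_watermark(pixels: bytes, length: int) -> bytes:
        """Recover a `length`-byte mark from the pixel bytes' lowest bits."""
        bits = [b & 1 for b in pixels[: length * 8]]
        return bytes(
            sum(bit << (7 - i) for i, bit in enumerate(bits[j : j + 8]))
            for j in range(0, len(bits), 8)
        )
    ```

    Because only the lowest bit of each byte changes, the marked image is visually indistinguishable from the original, which is the general trade-off watermarking tools make between invisibility and robustness.
    
    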

  • Meta's new safety measures against explicit content

    Meta is introducing new safety measures to protect users, especially teenagers, from the risks associated with explicit content on its platforms. This move, affecting both Facebook and Instagram, comes in the wake of criticism the company faced for encrypting Messenger chats, a decision some have argued hampers the detection of child abuse. The new feature is designed to prevent the sending and receiving of nude images, with a focus on safeguarding women and teenagers from unsolicited content and the pressures of sharing such material. While children under the age of 13 are already prohibited from using Meta’s platforms, the new measures specifically target teenagers, making it harder for them to receive explicit material via Messenger on both Facebook and Instagram. The tool will also discourage teenagers from sending such images, although Meta has not specified how this will be implemented. Adults will also have the option to integrate these safety tools into their accounts for enhanced online protection. In addition to these measures, Meta is changing the default settings for minors on Instagram and Facebook Messenger. Under the new default, teens will only be able to receive messages or be added to group chats by people they already follow or are connected to. This aims to protect them from unwanted contact and give parents and teens more confidence in their online interactions. Teens in supervised accounts will need their parent’s permission to change this setting, which applies to all users under the age of 16, and in some countries, under the age of 18. These changes are part of Meta's broader effort to address concerns about online safety, particularly for younger users. The company has faced legal challenges and public scrutiny over its handling of user safety, with recent US lawsuit filings alleging that an estimated 100,000 teen users of Facebook and Instagram experience sexual harassment daily. 
Meta has responded by stating that the lawsuit mischaracterizes their efforts. The company's move to protect Facebook Messenger chats with end-to-end encryption by default has also drawn criticism from governments, police forces, and children's charities. Critics argue that this level of encryption makes it difficult for the company to detect and report child abuse material. Some have suggested that platforms should employ 'client-side scanning' to detect child abuse in encrypted messages. This system would scan messages for matches with known child abuse images before they are encrypted and sent, automatically reporting any suspected illegal activity to the company. Meta has announced that these new safety tools will also work in encrypted chat messages, with more details expected to be released later this year. On their blog, the company has emphasized its commitment to child safety, stating that it has introduced over 30 tools and resources to keep children safe and plans to introduce more measures over time. According to Australia’s eSafety Commissioner, in 50% to 70% of cases of online child sexual abuse, the abuser is known to the child. They also state that children who have been sexually abused online are four times more likely to experience mental health problems both immediately and throughout their lives. These alarming statistics underscore the importance of the steps being taken by Meta and other tech companies to ensure the safety of young users online.
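    To make the 'client-side scanning' idea above concrete, here is a minimal sketch of hash-based matching against a database of known images, checked before a message is encrypted and sent. It is a simplified illustration: the digest database and function names are invented, and production systems use perceptual hashes (which still match after resizing or re-encoding) rather than the exact cryptographic hash used here.

    ```python
    import hashlib

    # Hypothetical database of SHA-256 digests of known illegal images
    # (illustrative placeholder content only).
    KNOWN_DIGESTS = {
        hashlib.sha256(b"known-flagged-image-bytes").hexdigest(),
    }

    def should_report(attachment: bytes, known_digests: set) -> bool:
        """Check an outgoing attachment against known digests before encryption.

        An exact hash only matches byte-identical files; real scanning systems
        use perceptual hashing so that near-duplicates are also caught.
        """
        return hashlib.sha256(attachment).hexdigest() in known_digests
    ```

    The design point critics raise is visible in the sketch: the check runs on the sender's device, on plaintext, which is precisely why client-side scanning can coexist with end-to-end encryption but also raises its own privacy questions.
    
    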

  • Smart Tech and Online Safety - A Balancing Act for Families

    The landscape of home technology is evolving at a breakneck pace. From AI-driven applications to online games and the conveniences of smart home technology, we are witnessing a tech revolution that's making waves in our daily lives. As these innovations bring comfort and excitement, they also usher in new concerns, particularly when it comes to online safety. It's crucial to acknowledge that with every smart device and AI-powered app, the cybersecurity risks increase. These technologies, while sophisticated and helpful, may also expose us to various online threats. This reality calls for heightened awareness and proactive measures. So, what can we do to maintain a balance? It's about being informed, vigilant, and engaged. Staying updated with the latest in tech safety, understanding the potential risks, and having open conversations with our families about digital security are key steps. AI and Smart Devices Imagine a typical household in today's era: children are playing online games, parents are controlling home appliances through smart devices, and AI-driven apps are a common sight. This scenario, though convenient, brings with it an array of cybersecurity concerns. The widespread use of AI tools, which often lack robust cybersecurity measures, exposes us to various online threats. This tech revolution demands that we pay attention to the safety of our families. From Apps to Online Games Consider AI-powered applications that allow us to upload photos and receive modified versions of ourselves. Innocuous as they seem, these interactions can lead us to inadvertently share personal data, potentially ending up in unsecured databases. It's a digital Pandora's box - once opened, hard to control. Then, there are chatbots, often providing age-inappropriate content, and online games, where the risk of encountering malicious actors is alarmingly high. 
    These games often become platforms for cybercriminals to build trust with young gamers, only to exploit this relationship for personal data theft or fraud. The Smart Home Paradox The evolution doesn't end with AI apps and online games. Smart home technology, a cornerstone of modern living, also presents its own set of challenges. While these devices offer convenience and control, they can become gateways for cybercriminals to access personal information. Children, in their interactions with these devices, might unknowingly expose sensitive data like names, addresses, and even parental credit card information. Here lies the paradox of smart technology - the smarter it gets, the more vigilant we must become. Fintech and Children Fintech, with its tailored products and services for children, opens another frontier of risk. Banking cards designed for kids as young as twelve bring convenience but also expose them to financial fraud. Cybercriminals often lure children with promises of free gadgets or games, only to lead them into phishing traps. Awareness and Action So, what can we do to navigate this landscape safely? First and foremost, educating children about cybersecurity is paramount. They need to understand the risks inherent in gaming, the importance of protecting personal data, and how to recognise and avoid cyber traps. This knowledge, once the preserve of adults, is now equally crucial for our children. Communication and Monitoring Open communication about potential risks, combined with clear guidelines, is crucial. Discussing the reasons for using an app, establishing clear boundaries, and respecting personal space are key steps in fostering a safe digital environment. Balancing Tech and Safety As we embrace the wonders of AI, smart homes, and the ever-expanding world of gaming and fintech, let's also embrace the responsibility of ensuring a safe online environment for ourselves and our children. 
It's about striking a balance - leveraging the benefits of technology while safeguarding our digital well-being. Remember, in the digital world, safety and awareness go hand in hand with innovation and convenience.

  • Taylor Swift, TFGBV, and when US freedom of speech protections become viral abusive images

    Taylor Swift has been subjected to a particularly disturbing form of online abuse involving the creation and viral spread of deepfake pornographic images. This incident highlights the darker aspects of technology-facilitated gender-based violence (TFGBV). These AI-generated images, which depict individuals in explicit situations without their consent, represent a new and deeply troubling strain of nonconsensual pornography. This was starting to happen in schools at the end of the last school year, with students doing this to other students. The images of Swift originated from a Telegram chat dedicated to creating such content using generative AI tools, despite violating platform policies. There are no laws in the US to stop this kind of abuse; in Australia, there are, under image-based abuse laws. Her fans have rallied in her defence, launching a counter-campaign to bury the AI-generated content and support Swift. Their actions underscore the broader implications of such abuse, emphasising that it's not just a threat to celebrities but can happen to anyone. This form of violence is a stark reminder of the potential for digital technologies to be misused in ways that cause significant harm, especially to women and girls. This is part of a larger pattern of gender-based violence online, where women in the public eye are often targeted with misogynistic abuse and harassment. This incident is just one example of how digital spaces can be weaponised against women, reducing them to mere objects of sexualisation and ridicule. It highlights the need for stronger legislation against nonconsensual deepfakes, better enforcement of platform policies, and a broader cultural shift to challenge and condemn such acts of gender-based digital violence. I saw the images; even though they have now been blocked by platforms, no doubt they will come back again. You can see the numbers in the image below, which I heavily edited. Once it is out there, it is hard to pull back. 
All of our courses for 2024 teach about ethics in AI use. Education is key. Law changes in the US should be a priority. When "Freedom of speech" becomes an abusive image...things need urgent change. #tfgbv #deepfake #taylorswift

  • The Rise of Synthetic Relationships

    The increasing prevalence of AI-generated 'people' on platforms such as Instagram has led to the rise of “synthetic relationships”. A synthetic relationship refers to an interaction or connection between a human user and an artificially intelligent entity, often portrayed as a human-like character or, in the case of Facebook, celebrities with a side hustle of being a chatbot. These AI-generated personas, created using advanced technologies, can mimic human behaviours and communication styles, making them seem lifelike. The relationship is termed 'synthetic' because, unlike traditional human-to-human relationships, one side of the interaction is completely artificial and programmed. These synthetic entities, like the ones captured in the video clips, can appear on social media platforms, chat applications, or in virtual environments and are designed to engage in convincing conversations, respond to emotional cues, and sometimes even form emotional bonds with users. While these relationships can offer companionship or entertainment, they also raise concerns about the emotional impact on individuals, who may find it difficult to distinguish between genuine human interaction and AI-generated responses designed to look and act like those of real humans. This is one of the things we discuss in our eReady Kids course, because synthetic relationships pose a unique challenge, especially for young users, who need to learn from a young age how to tell the difference. They can create body-image issues and set unrealistic expectations when a child compares their life or looks to something that does not exist. There's a significant concern that these interactions, which are becoming more sexualised, might be a cover for criminal activities like sextortion by criminal gangs. This involves tricking young users into sharing personal or sensitive information, which can then be exploited for blackmail. 
    The primary issue here is the difficulty in differentiating real human users from AI-generated profiles, which could lead to dangerous situations if not approached with caution and awareness. During the school holidays, it's a common mistake to think that online risks, especially those related to schools, take a break. Many years of working in this sector have shown me that issues, particularly in group chats, actually increase during this time. Kids have more time to explore online... and they do. With the emergence of synthetic relationships, deepfake technology abuse, and bullying, the online landscape is changing at pace, and parents are struggling to keep up with the challenges. Deepfake tech can create realistic but fake images and videos, which can be especially harmful to students. We have had numerous reports of students deepfaking each other without realising that what they are doing is highly illegal and reportable under image-based abuse and bullying laws. For instance, a simple photo or video shared by a student (or their parent on a public Instagram account, for example) could be captured and manipulated into misleading content, leading to bullying and damage to their reputation, not to mention long-term mental trauma. The trouble with deepfakes is their ability to blur the lines between what's real and what's not, making it challenging for kids to discern the truth and handle the fallout. To combat this, adjusting privacy settings on social media is essential. Parents should ensure their holiday photos and posts, particularly those involving their children, are shared within a close-knit circle only. This reduces the risk of these images being screenshotted and falling into the wrong hands, whether used by another student to create deepfakes to bully them or by someone else who may create CSAM (Child Sexual Abuse Material) from the image. Now is also an opportune time for parents to talk to their children about online safety. 
    These conversations should be non-judgemental and non-confrontational, highlighting the need to be careful about what they share, the risks to their digital footprint, and the realities of technologies like deepfakes and synthetic relationships. It's crucial for children and teenagers to understand that the risks of the online world persist, even outside of school, and they need to be equipped to navigate these challenges. Parents also need to be prepared for when things go wrong. This includes recognising signs of cyberbullying, understanding how to report and remove harmful content, and maintaining open lines of communication with their children to discuss any online issues that arise. In this constantly changing environment, parental awareness and proactive engagement are key to safeguarding children during the holidays and beyond.

  • Brain-computer interface technology and Snap's NextMind Acquisition

    In 2022, Snapchat's parent company, Snap Inc., acquired NextMind, a neurotech firm based in Paris. This is a development in the tech world that should be gaining more attention. While this move is an exciting step into augmented reality (AR), it's also something that parents and teachers should be aware of, given how fast technology is evolving and becoming a part of our children's lives. Brain-computer interfaces (BCIs) herald a new era in technological advancement, offering a range of positive applications that extend far beyond conventional computing. These interfaces, by translating neural signals into commands, enable individuals, particularly those with mobility or speech impairments, to interact with and control computers, prosthetic limbs, or other devices simply through their thoughts. This not only opens doors to greater independence and improved quality of life for differently-abled individuals but also enhances learning and gaming experiences, offering a more intuitive and immersive interaction with online environments. What's NextMind and Why Does It Matter? NextMind specialises in brain-computer interface technology. Simply put, they've developed a way for people to control computers and AR/VR headsets using only their thoughts. Snap's plan is to incorporate this technology into their AR projects, like the Spectacles AR glasses. Imagine a world where you can make something happen online just by focusing on a virtual button – that's what we're looking at. Why Should We Care? This technology is not just a futuristic fantasy; it's a reality, and it's likely that our kids will be interacting with it sooner rather than later. As parents and educators, it's essential to stay informed about these advancements. Understanding how AR and brain-computer interfaces work will help us guide our children and students in navigating these technologies safely and responsibly. 
    Safety and Privacy Concerns With such intimate technology being used by organisations such as Snap, questions about privacy and data security naturally arise. It's crucial for us to discuss these aspects with our kids, teaching them about the importance of personal data privacy and the potential risks associated with new tech. As we venture into this new era of interactive digital experiences, let's make sure we're equipped with the right knowledge to help our children use these technologies wisely. It's not just about keeping up with the latest gadgets; it's about understanding the impact they can have on our lives and ensuring our kids are prepared for this online future. Some Tips As we've seen with the rapid integration of GenAI into our lives, technology is evolving at an unprecedented rate, and it's vital to stay informed and prepared. Here are some tips to ensure safety and privacy while using AR and VR technology: Educate About the Technology: It's essential to understand how AR and VR work, especially technologies like brain-computer interfaces. Knowledge about these technologies can help in recognising their capabilities and limitations, making it easier to discuss their safe use. Discuss Online Privacy and Data Security: Engage in conversations about the importance of privacy. Explain how personal data can be collected and used, and the significance of consent in sharing information online. Set Boundaries and Usage Limits: As with any screen time, it's crucial to set limits on the use of AR and VR devices. This helps prevent overuse and ensures that kids have a balanced approach to technology. Monitor Content and Apps: Keep an eye on the apps and content accessible through AR and VR devices. Ensure they are age-appropriate and don't expose children to harmful content. Stay Updated on Safety Features: Manufacturers often update software to include new safety features. 
    Regularly check for updates and be familiar with the safety settings available on the devices your children are using. Encourage Open Communication: Create an environment where children feel comfortable discussing their experiences and concerns about the technology they use. This open line of communication can be crucial in identifying and addressing potential issues early on. Be Aware of Physical Safety: Using VR headsets can sometimes lead to physical disorientation or accidents. Ensure there's a safe, clear space for using these devices, and educate kids about taking regular breaks to reduce strain. Use Parental Controls: Many AR and VR platforms offer parental controls. Use these features to manage and monitor your children's activity and to restrict access to inappropriate content. For more insight, see Snap's official Newsroom announcement, TechCrunch's coverage of the acquisition, and this overview of brain-computer interface research: https://www.frontiersin.org/articles/10.3389/fnsys.2021.578875/full

  • Youth Voice - Monthly Trending Topics

    Meta’s Underage Users Have Finally Caught Up with the Company Meta, the parent company of Facebook and Instagram, has been screwing young people like me over for a long time now. Finally, in the face of overwhelming evidence, the company is facing legal challenges from 33 U.S. states, which allege that it actively “coveted and pursued” underage users while intentionally ignoring the vast majority of reports the company received about underage accounts. Though Instagram ostensibly refuses to allow those under 13 onto its platform, a trove of documents including employee chat logs, analytics data, and concealed internal studies points to the contrary. The legal filing also revealed that Meta even created internal company charts displaying the percentage of 11 and 12-year-olds who used Instagram daily. I checked the date my Instagram account was created, and I was surprised. My account was created in May of 2014, meaning I was 10 years old. Instagram doesn’t seem to know my date of birth, though I probably had to lie about it to create my account at the time. From then on, it seems Instagram had no interest in whether I was old enough to have the app. That being said, the amount of money Instagram was making off its underage users in 2014 was certainly nowhere near today’s levels, and perhaps Facebook hadn’t even realised the economic potential of the minors on its platform when I signed up as an underage user. The presence, pursuit, and sanctioning of economic and informational exploitation of underage users has been described as somewhat of an open secret at Meta. While Meta defends itself, accusing the states of utilising “cherry-picked” documents to mischaracterise Meta’s actions, the company simultaneously undermines the initiatives of investigators who attempt to uncover the level of harm that minors face on platforms such as Instagram; when the U.S. 
state of New Mexico filed its lawsuit against Meta, Attorney General Raul Torrez expressed concern about Meta’s suspension of Instagram accounts that the state was actively using to prosecute its investigation into child predation on the platform. The lawsuit also reiterates the persistent claims that Meta knowingly designed and implemented addictive mechanisms in the creation of its social media platforms, a practice exposed by the whistleblower Frances Haugen, who also claims that the company intentionally catered to and exploited children under 18. Monopolistic businesses have a long history of exploiting and harming kids with dangerous products, while denying it, even while caught in the act of papering over the negative findings of their own studies. Big Tobacco, anyone? As social media platforms face further scrutiny, the legal system is beginning to catch up. British coroner Andrew Walker concluded in late 2022 that the death of 14-year-old Molly Russell was brought about, in part, by platforms like Instagram and Pinterest, which played a “more than minimal” role in the acts of self-harm that led to her death. To me, the prosecution of large social media companies, and specifically the individuals within them who encouraged, if not entirely constructed, this toxic culture, could be the best investment in childhood health that any government has ever made. The consequences of social media use are quickly becoming apparent, meaning the next steps that governments take will be of paramount importance. They could save a generation.

Federal Government’s Vape Ban Implemented Jan. 1st

As the government’s new vaping importation ban comes into effect in the new year, medical professionals are concerned about the strain that the medical system may experience due to nicotine dependency. 
The ban includes new plain packaging laws for vapes, and a set of conditions imposed by the Therapeutic Goods Administration (TGA) for those wishing to acquire a licence, granted by the federal government, to import vapes. Recent data is starting to reflect the real proportion of young people who regularly vape; 20 per cent of 18 to 24-year-olds, as well as 14 per cent of 14 to 17-year-olds, are current vapers. For a long time, I’ve maintained a healthy scepticism regarding vape use statistics; I always felt they were far too low. These new numbers are beginning to fall into line with my observations and experience. As the true scale of vape use is revealed, the associated bills rack up for the medical industry. More addiction means more patients and more treatment, especially if young people face nicotine addiction without the ease of buying a vape from a convenience store. Unfortunately, it could also mean more profits for traditional tobacco companies, as people make the switch from vapes to cigarettes. In a seemingly pragmatic acceptance of this fact, all GPs and nurses will have the ability to prescribe vapes as a means of nicotine addiction treatment under the new scheme. In the past, only GPs who had elected to undergo additional training could be certified to prescribe vapes, and just 5 per cent of practitioners signed up. Likewise, the RACGP estimated that only 7 per cent of users acquired vapes via prescription under the old system. In the meantime, the thriving black market of vaping products undermined any legitimate prescription scheme, as vapes were readily available at tobacconists and corner stores across the country. In an effort to change this, the Australian Border Force has also been allocated an additional $25 million to enforce the ban. Whether or not the sale of illegal vapes can continue remains to be seen, though the government’s strategy has been embraced by state health organisations across the country. 
While it hasn’t been embraced by many of the young people I know; they tell me they’ll probably just switch to cigarettes if they can’t get their hands on a vape. Sadly, Big Tobacco has been playing this game for a long time, and it seems that it may end up on top once again.

EU Drafts the World’s First Comprehensive ‘AI Act’

After years of hard-pressed negotiation, the European Union (EU) has ensured that its groundbreaking AI Act will finally be enshrined in law. It’s a pivotal piece of legislation aimed at curbing potential harm in domains where AI poses the gravest threats to fundamental rights, including law enforcement, healthcare, border surveillance, and education. It also enables governments to ban applications of AI tech that present an "unacceptable risk." Under this act, AI systems categorised as "high risk" will be subject to stringent regulations, necessitating the implementation of risk-mitigation mechanisms, including the use of high-quality datasets, full transparency during a technology’s development and deployment, and, vitally, human oversight. The AI Act is a monumental achievement, bringing much-needed regulations and enforcement mechanisms to a profoundly influential sector, though it took legislators a long time to reach a unanimous position, with dissent at times from countries like Germany, France, and Italy.

The Importance of Binding AI Ethics

Silicon Valley, and especially its pack of AI evangelists, loves to lecture the public about its approach to ethical design and development. However, as we’ve seen with the latest OpenAI saga, Sam Altman’s Lazarus-esque return, and the respect that Microsoft’s bottom line commanded during these negotiations, it becomes clear that in the Valley, profit often smothers ethics in its sleep. Hence, the EU’s introduction of legally binding, enforceable rules surrounding the ethical design and deployment of AI technology may come to represent a cornerstone of user protection in the space. 
The implications of AI for law enforcement, biometrics data, copyright, and privacy necessitate that companies and governments shoulder a burden of responsibility to ensure the protection of fundamental human rights. AI technologies deemed to pose unacceptable risks will be prohibited. These include systems engaging in cognitive behavioural manipulation, social scoring, and remote biometric identification, with limited exceptions for law enforcement purposes. High-risk AI systems that impact safety or fundamental human rights will be subject to more stringent scrutiny. They encompass AI systems used in products falling under EU product safety legislation and AI systems in specific critical areas, all of which must be registered in an EU database. High-risk AI systems will undergo comprehensive assessments before entering the market and throughout their lifecycle. For AI systems with limited risk, minimal transparency requirements will be enforced, allowing users to make informed decisions. Users interacting with AI applications must be made aware of the AI's involvement, particularly for systems generating or manipulating image, audio, or video content, such as deepfakes. The key here is that the government, and by extension its citizens, are afforded full transparency and a guarantee against any abridgement of their rights. This tenet is, and must be, the precursor for all serious AI legislation that intends to protect users.

A Barrier to Dystopia

What is most impressive to me about the EU’s new AI regulation is how comprehensively it has reacted against the common conception of our worst dystopian nightmares; that is, it regulates the creation of AI technology that could precipitate an overbearing police state, or the rise of oppressive techno-capitalist overlords. 
Certain applications have been completely banned, like the creation of facial recognition databases via the generalised scraping of data from CCTV and the internet, or emotion recognition software in schools or the workplace. Likewise, AI systems are not allowed to engage in behavioural manipulation, social scoring, or biometric identification and classification of people. Some EU countries have resisted the strenuous regulations surrounding the use of biometrics; France has continued to adopt new AI surveillance technologies, including legislation authorising police use of AI-powered, algorithmic video surveillance ahead of the 2024 Paris Olympics. While individuals’ fundamental human rights are now broadly protected, none of these regulations apply to technologies developed exclusively for military or defence purposes. In sum, the EU now represents the gold standard for legislative reform surrounding AI, a move that in the coming decades will, I’m sure, prove itself to have been both highly prescient and deeply necessary. I find the prospect of US regulation highly unlikely; the level of compromise and dilution involved in that lawmaking process would yield a product resembling barely a shadow of the EU’s legislation. This will prove to be one of the greatest A/B tests of all time, and while Australia watches from the backbenches, the government of the day ought to be taking note; given a few decades, the contrast between life in the EU and life in the rest of the world may grow increasingly stark.

  • The "Tradwives" trend on social media

Over the past few months, I've had several close friends contact me, sharing photos of 'trad wives' that have been appearing on various social media platforms, especially Instagram and TikTok. They've been curious about my thoughts on this trend. My concern, as always, lies in how specific polarising views might influence young people who stumble down rabbit holes of this content. In a few cases, there is what seems like an internalised patriarchy rearing its head. For instance, women might undervalue their own abilities or accept gender-based discrimination as normal. Similarly, men might feel compelled to conform to traditional masculine roles or views, even if these do not align with their personal beliefs or values. It's essential for us to instil critical thinking in our youth so they understand that they have the right to choose their path in life without succumbing to online influences. What some of these 'trad wives' are advocating can, in a way, be seen as a form of reverse bullying. They post messages implying that all women should conform to their lifestyle, suggesting that this is the only way to be a 'real' woman. Such assertions can become problematic. To explore this, I asked our Youth Advisory Council for their input, and we have two pieces: one from 19-year-old Lenny and another from 17-year-old Madison. For the accompanying images in this story, I asked ChatGPT to generate an image of a 'tradwife' and then one of a working mum. These AI-created images serve to highlight the diverse interpretations of these roles.

Perspectives by Lenny, 19, followed by Madison, 17, below.

The social media algorithm loves an argument, and nothing drives engagement like a heated debate about the role of conservative values in our socially progressive society. There’s really nothing better than reading these part-time sociologists argue in the comments of a TikTok of a woman packing her husband’s lunch. 
That brings us to this new topic: the emergence of the ‘tradwife’. This is the new-ish term, short for ‘traditional wife,’ for a woman who espouses and practices a much more traditional role in a marriage, preferring to stay at home to carry out domestic duties, take care of the children, and be a homemaker. At least, that’s how it appears on the surface. From what I’ve seen of the response to women on TikTok and elsewhere who endorse this lifestyle, the feedback seems remarkably harsh. One headline from style and culture magazine ‘The Cut’ leads with the phrase ‘Is Tradwife Content Dangerous, or Just Stupid?’ So why is there such open hostility to what appears to be an informed choice on the part of a small minority of women? Firstly, it seems the tradwives have cultivated a very neat aesthetic that almost appears like a parody of peak 1950s domesticity. Women stayed at home and looked after the kids, men went out and made the money, everybody lived happily ever after, etc. This seems to me an incredibly one-dimensional assessment of a time which, contrary to what tradwife proponents seem to think, was neither simple nor easy for women. The idea that this family structure represented the natural state of things is, truly, idiotic. The marketing and capitalist propaganda of the ‘50s was so good that it still has men and women longing for the vintage charm of indoor smoking and rampant amphetamine use among women who struggled to conform to this ridiculous ideal. There seem to be a lot of issues attached to this movement, which has spurred on justifiably strong reactions from many. Without going too in-depth (you can read about these phenomena elsewhere), this movement could be characterised as a dog whistle for the far right, and a damaging precedent to set for boys and young men making their way out into the world. Then we arrive at the economic factors of today. How many people can really afford to raise kids in a stable home on a single income? 
The answer is very few. Recent statistics show that more than 80% of households have more than one breadwinner; this displays the parochial nature of the tradwife trend, and its stunning inability to see beyond the realities of the modern day. It requires a degree of economic privilege to intentionally leave the workforce to mother children full time, as unfortunate as that is. This brings me to the other tradwife issue, which is that it seems to eliminate the role of the man in child rearing. In all the videos I’ve watched, the men are seldom seen. Are we just to assume that they’re always at work, bringing in the money? The whole concept just seems so soulless, and that’s reflected in how performative all of the TikTok and Instagram videos are. It all just looks like women LARPing in a warped conception of feminine excellence, which for some reason, despite its ‘traditional’ skew, includes these smutty, tightly cinched aprons that I’m sure they would say are essential to the practice. And they do think it’s excellent. In my mind, there’s nothing wrong with a woman who makes a conscious choice to spend her time at home with her children, beautifying her home, supporting a husband, and whatever else they feel like doing. The problem is the level of condescension involved, as if this is peak feminine existence. The whole idea makes me bored and tired. Why are we rehashing this experiment again? Yet I feel it’s also important to understand where this trend might stumble upon some credible ground, even if unintentionally. From my perspective as a 19-year-old straddling boyhood and manhood, it seems like the world is still predicated upon some very masculine principles. Especially in professional and cultural spheres, it seems both men and women are exhorted to work harder, to prioritise productivity and to maintain a constant energy around this. 
People on the more ideological end of the tradwife spectrum might use this argument to place women back within a very cramped and domestic box, which is in itself a deeply misguided impulse. What I’m getting at though, is that maybe it’s time to reconsider the ways in which we keep pushing women into the masculine sphere in order to survive and live anything approximating a prosperous life. Positions within the 9-5 workday structure have been historically occupied by men, and hence designed to prioritise the hormonal and emotional cycles of men. The male hormonal cycle is 24 hours, with testosterone peaking in the morning, meaning men are perfectly in sync with our current conception of the workday. This completely ignores the 25-35 day hormonal cycle of women, who are encouraged, even expected, to function consistently and unerringly at similar levels of productivity throughout the month, regardless of the menstrual cycle. If the tradwives really wanted to be radically traditional, they wouldn’t attach themselves to these misguided principles that still insist on elevating men and confining women to these extremely narrow roles. To truly be trad, maybe we need to start evaluating the roles of women and men within our society based upon our natural hormonal cycles and energetic patterns. If we considered these principles, I think there is amazing potential to advance feminism further, by injecting an element of equity into the very design principles by which we build the world together. In any case, the tradwife trend is probably just a phase. I think most people see right through the act, so its staying power is limited. The tradwife content producers will probably lose interest quite quickly, and themselves move on to more interesting things. In the meantime, I have no interest in going anywhere near it again. - by Lenny Dowling

------------------

The 1950s. 
The post-World War II boom, when TVs became more common in households and women were homemakers and stay-at-home mums while men worked. An era where women were known for their vintage dresses, curled hair, bright lipstick, aprons, and their subservience and devotion to their partners. They had limited rights, were prone to abuse and had little to no life outside of their homes. Sounds bad, right? Well, this era and its attitudes are quickly resurfacing in the Tradwife Trend. The Tradwife Trend (or movement) is where women act as ‘traditional wives’, mirroring the behaviours and attitudes of a 1950s woman. One account, @tradwiferoriginal, states “Tradwife stands for educated women who prefer a role of feminine and respectful submission in a loving relationship.” These women have traditional and conservative values and believe a woman’s sole goal is to serve and cater for their husband’s needs and create a family. Their lives revolve around cooking, cleaning, staying at home and not working. They encourage traditional, feminine dressing and have a large focus on being presentable and beautiful. They also believe in a ‘simpler lifestyle’, the ‘nuclear’ family and that motherhood and wifehood should be women’s top priorities. They are often against women going to college or university, and many prefer to homeschool their children. They believe in traditional gender roles, with the man being the masculine breadwinner and the woman being the submissive homemaker. They also place high value on returning to the ‘natural order’ of life and turning back time to the conservative ways of decades past, criticizing the changes in society due to ‘woke culture’. Users such as @esteecwilliams are fully turning back the clocks, adopting the whole look and lifestyle, whereas women like @jasminedinis put a more modern spin on this trend. Many are Christian, and the majority adopting this lifestyle are Millennials or Gen Z. 
On the surface this movement may seem harmless and fun, but there are also many issues. Although varying in extremism, many groups harbor racist, misogynistic, homophobic, and transphobic ideals, similar to those of the ’50s, and accounts and groups become the basis of many conspiracy theories regarding the government, the COVID-19 pandemic and more. They ‘slut shame’ young women, criticize ‘nagging wives’, belittle the work of feminists both present and past, and degrade and look down on other women for their autonomy and rights. Their ideals take away the rights of the women participating in this movement and suggest to them that they have little value to society other than in the household. It turns back the clock and undermines the efforts of other women who have fought so hard for the rights and freedoms we have today. Some sources have suggested it may create a generation of young women who believe that they have no autonomy or rights. Although this trend seems cute and fun on the surface, it is easily a breeding ground for unsafe behaviours and can heavily influence the people participating. It takes away from the rights and freedoms of women and conveys to them that they have little value other than being a mother and wife. - by Madison Jones

  • Meet Sally or is that Sam? Synthetic Stars and How AI Celebrities Are Redefining Human Connections

The emergence of AI celebrities marks a pivotal shift in our social media use, blurring the lines between reality and virtual interaction. As these synthetic stars gain prominence, it becomes increasingly crucial to educate ourselves and those around us about the nuances of engaging with such advanced technology. The lifelike nature of these AI entities, while fascinating, also calls for a heightened awareness and caution, especially regarding the sharing of personal information. One of the primary concerns is the potential misuse of data. Engaging with AI entities like "Sally," modeled after Sam Kerr, or others in Meta's roster, could lead to scenarios where users are prompted to share personal details, or information that may become sensitive in the future. It's essential to remember that while these AI assistants provide entertainment and interaction, they are also data-driven entities operating under the control of large corporations. The data shared with them could be used for various purposes, including targeted advertising, market research, or even sold to third parties. We really don't know. Another aspect to consider is the long-term implications of data usage. As technology evolves, the ways in which collected data can be utilized will also expand. Information shared today might be used in ways we cannot currently anticipate. This uncertainty underscores the importance of being judicious with the information we share with AI entities. It's vital to foster an environment of digital literacy and caution. This includes educating individuals, especially the younger, more impressionable users, about the importance of maintaining privacy online and being skeptical about the amount and type of information shared with AI personalities. Encouraging critical thinking about the motives behind these AI interactions and the potential long-term use of shared data is essential. 
Not to mention how much more money companies are making with a whole new level of personal data being fed to them. While to some, AI celebrities like "Sally" represent an exciting development in digital entertainment, they also necessitate a proactive approach to educating ourselves, our families, and our communities. Understanding the difference between real and synthetic interactions, and being cautious, is paramount in navigating this new era of human-AI relationships.

  • Youth Voice - Monthly Trending Topics

I am Kirra Pendergast, and it is with great excitement that I introduce our new monthly newsletter, a resource designed to keep parents, educators, and guardians informed about the dynamic and often challenging digital landscape our children are navigating. Each month, we'll delve into the most pressing issues and trends that are capturing the attention of young people. Our newsletter will feature contributions from Lenny Dowling, the head of our Youth Research Division, and Madison Jones from our Youth Advisory Team, whose deep understanding of digital trends and their impact on youth is invaluable. We will tackle a range of topics, from the latest social media crazes and emerging online risks to practical advice on online safety. Our goal is to empower you with the knowledge and tools necessary to guide and support the young individuals in your care as they explore the vast digital world. We believe that through awareness, education, and open dialogue, we can create a safer and more positive online experience for our children.

A note from Madison from our Youth Advisory on current trends

There's a notable trend of misinformation circulating, especially concerning topics like the complex Israel-Palestine situation. The volume and intensity of these discussions are both overwhelming and concerning, as they demonstrate how quickly unverified information can spread. Remember the 'stop don't talk to me' dance from the days of Musical.ly? It's making a surprising comeback. While it may seem harmless, it's important to acknowledge the underlying tones of bullying within its lyrics. It serves as a reminder of how online trends can sometimes inadvertently promote negative behaviours. The speculation surrounding Matthew Perry's death is another point of concern. Theories linking his death to suicide, based on superficial analysis of his writings and social media posts, are not just baseless but also show a lack of respect for his legacy. 
This trend highlights a broader issue of sensationalism overshadowing empathy and respect in online discourse. On a lighter note, Taylor Swift's re-release of 1989 has sparked a flurry of activity among her fans. The level of engagement and enthusiasm in dissecting her lyrics for hidden messages is a testament to her influence and the power of fan communities. A disturbing trend I've noticed is some young males boasting online about losing interest in their relationships as an excuse for infidelity. This trend is problematic as it normalises disrespect and dishonesty in relationships, an issue that deserves more serious attention and discussion. In terms of my personal online activities, I've been educating myself on the Israel-Palestine conflict, a complex and significant issue that demands more understanding and empathy. Being unwell recently, I found solace in watching and creating reels, a pleasant distraction during recovery. Halloween brought out my creative side, leading me to share costume ideas through reels. I've also taken steps to curate a more positive social media environment, particularly by unfollowing individuals whose views, especially on racial issues, I found objectionable. It's a small but important step in promoting a more inclusive online space. Regarding my peers, there's been a noticeable trend of 'rite of passage' posts related to partying, seemingly a way to affirm social status. The 'stop don't talk to me' dance is also popular, despite its problematic aspects. Instagram story stickers have become a new tool for expression, often used to highlight personal moments or favourites.

________________________________________________________________________________________

Weapons banned in UK apparently found on shopping app Temu - by Lenny Dowling: Consumer protection agency ‘Which?’ says it bought age-restricted knives and axes without checks from sellers on Temu. 
Temu markets itself as the Chinese equivalent of Amazon, though its user authentication requirements are far less stringent than most other online retailers. Temu will not ask for date of birth, or any form of age verification, despite selling knives and other potentially banned, or at least dangerous, implements. Temu has been aggressively advertising through TikTok, which has led it to record nearly 39 million downloads worldwide in August of this year. Temu has now removed all related weapons listings such as knives and axes after receiving “a complaint of a person under 18 purchasing a bladed article from our platform.”

Cyber Incident at DP World Australia Shut Down Port Operations, Backed Up 30,000 Shipping Containers - by Lenny Dowling: The latest large-scale criminal attack on critical infrastructure shut down port operations across Australia over the weekend, prompting a backup of some 30,000 shipping containers that were unable to unload for several days. The attack, being characterised as a “cyber incident” by victim DP World Australia and still unattributed, appeared to have involved ransomware but without an accompanying ransom demand. This DP World attack is yet another example of the pressing need for companies to begin taking malware and ransomware seriously, as we enter an age where techno-terrorism will become increasingly common, perpetrated by private groups in order to make money, and by state actors wishing to disrupt supply chains and undermine national security.

Bin Laden Manifesto Trending on TikTok - by Lenny Dowling: TikTok says it has been “aggressively removing” posts featuring Osama Bin Laden’s ‘Letter to America,’ written one year after the September 11 terrorist attacks. Interestingly, the letter had initially garnered only about 2 million views until influencer Yashar Ali posted a compilation of existing TikTok reaction videos on X, formerly known as Twitter (Really, are we still doing this? It’s not like Elon Musk is anywhere near as cool as Prince, and don’t we all know that it’s changed by now anyway?), which sent the views of the hashtag #lettertoamerica to 13 million. TikTok then removed the hashtag #lettertoamerica from search results, while suppressing videos with the hashtag, and even videos of those criticizing the sudden and widespread endorsement of Bin Laden’s letter. The Guardian newspaper was forced to remove its translation of Bin Laden’s letter from its website after outrage began bubbling up from all corners. The masthead, on whose website the letter had become the most viewed news story, said that it was taken down because it had been “widely shared on social media without context.” The implications are manifold. Nobody knows when the original post was made, or why the content has resurfaced now, of all times. In a moment where US lawmakers are directing increased scrutiny towards the already outsized and frankly monopolistic behaviours of social media companies like Meta, TikTok is only going to come under more pressure from governments. Even in a time where, in a mutual hot flush of geopolitical proportions, there seems to be a thawing of relations between the elder statesmen of China and the US, US government officials are not allowed to have TikTok installed on their devices for security reasons. It is evident that distrust remains, and the re-emergence of Bin Laden’s manifesto on TikTok, where there is a non-zero probability of CCP interference, will only engender greater scepticism from global policy makers. The swift endorsement of Bin Laden’s message from TikTok users must, after all, be the exception rather than the rule, right? 
Phrases in one of the early paragraphs include “the creation and continuation of Israel is one of the greatest crimes, and [America] the leaders of its criminals,” and “Each and every person whose hands have become polluted in the contribution towards this crime must pay its price, and pay for it heavily.” Hence, we can immediately understand the swift popularisation of this text in what is now the context of renewed war between Israel and Gaza.

Piers Morgan Conducts Latest Andrew Tate Interview - by Lenny Dowling: Piers Morgan has released a second hour-long interview with Andrew Tate, almost a year after their previous discussion in London. Morgan flew out to Bucharest to meet and interview the Tates, challenging Andrew on his more recent controversial statements, as well as extracting Tate’s thoughts on his seemingly ongoing legal troubles, which include charges of rape and human trafficking, in allegations being pursued through the Romanian criminal courts. The interview had garnered 2 million views in 12 hours and will likely see millions more by the end of the week. Questions were, of course, asked of Tate about his stance on the ongoing war between Israel and Gaza, while the two also clashed over the definition of misogyny. While Tate had disappeared from the spotlight during his 3-month jail stint, he, as well as major independent media outlets, seems intent on rehashing the debates and talking points of the past. It doubtless makes for captivating content that drives audience engagement, while dividing many, which likewise furthers the popularity of such interviews.

______________________________________________________________

  • Risks of Apple's NameDrop Feature

Apple's latest iOS 17 update brings with it a nifty new feature called NameDrop, and it's pretty much what it sounds like. Think of it as the digital equivalent of swapping business cards, but instead of cards, you're using your iPhones or Apple Watches. Just a quick wave of your phone near someone else's, and voila – you've shared your contact details. Whilst it's magic for the networking crowd, it also makes the whole 'let me give you my number' dance a little weird. As with anything that seems too good to be true, there's a bit of a catch. Not everyone is sold on this whole NameDrop thing. In fact, it's stirred up quite the buzz, especially around privacy and safety concerns. Imagine being at an event and someone a little too eager wants your details – it's not always a situation you want to be in. These worries aren't just random paranoia; they're being talked about all over social media, with many pointing out how this could be a bit awkward, or worse, unsafe, particularly for women. So, while NameDrop is a cool step into the future of how we connect with people, it's also a reminder that with great tech comes great responsibility (or something like that). It's about balancing that awesome 'techy' feel with being mindful of our digital footprints and who we're leaving them with.

The Risks
Privacy Concerns - Users may feel pressured into sharing their personal contact details with others, leading to potential privacy breaches.
Safety Issues - Particularly for women, the feature could be misused by individuals who insist on contact sharing, creating safety concerns.
Consent Complications - The act of sharing contact information through a simple button press might not always reflect true consent, especially in situations where users feel uncomfortable refusing.

The Benefits
Convenience - NameDrop offers a quick and easy way to exchange contact information without manually entering details. 
Security Measures - Apple has implemented safeguards such as requiring both devices to be unlocked and in close proximity, and allowing users to decline transfers from non-contacts. Innovation in Networking - The feature enhances networking experiences by simplifying the process of connecting with new acquaintances. Step-by-Step Guide to Disable NameDrop To ensure your comfort and privacy, you may choose to disable the NameDrop feature. 1. Tap on the 'Settings' app on your iPhone. 2. Scroll down and select the 'General' option. 3. In the General settings, find and tap on 'AirDrop'. 4. Within the AirDrop settings, look for a section or option labeled 'Bringing Devices Together' or similar. 5. Toggle off the option for NameDrop. This will prevent your device from participating in the NameDrop feature. 6. Ensure that the changes have been saved and exit the settings. 7. While you're in the settings, it might be a good time to review other privacy and security settings. Check what you're sharing and with whom and adjust settings according to your comfort and safety needs. Additional Recommendations Use Hide My Email Utilize Apple's Hide My Email feature for additional privacy, especially when signing up for online services. Stay Informed and Vigilant Keep yourself updated on the features and settings of your device. Being proactive about your digital privacy and security is essential. In summary, Apple's NameDrop feature in iOS 17 is a double-edged sword. It stands as a testament to the remarkable strides in technical innovation, making the sharing of contact information as simple as bringing two devices close together. However, this convenience also brings to the fore important considerations regarding privacy and personal safety. While the feature offers practical benefits for networking and social interactions, it's imperative for users to be aware of the potential risks and exercise control over their digital interactions. 
As we embrace these technological advancements, it's crucial to balance the allure of convenience with a conscientious approach to privacy and security.

  • US Government Takes Action for Responsible and Trustworthy AI Development

    As the UK prepares for its first AI Safety Summit, the US government has issued a significant directive aimed at fostering the secure and reliable advancement of Artificial Intelligence (AI). Here’s a simplified breakdown: The US government is encouraging a united approach across all its departments to ensure that AI is regulated properly throughout the US. AI has immense power to transform our world, enhance prosperity, and drive innovation. However, it also brings potential risks such as bias, fraud, discrimination, and misinformation. For AI to truly be beneficial, it must be used responsibly and transparently, with proper legal guidelines in place to manage its potential risks and unlock its capabilities. Collaboration among the government, businesses, universities, and communities is essential to achieve these objectives. The Executive Order signifies a monumental step towards establishing a robust policy framework that aligns AI technologies with democratic values and civil liberties. It underscores the necessity of continuous efforts and collaboration among various stakeholders, including tech giants, academics, and civil society, to navigate the complexities of AI governance effectively. For a more detailed understanding of the Executive Order, you can access the fact sheet here.

Key Highlights

A move towards more open and ethical frameworks is being embraced, necessitating GenAI foundational models to disclose findings from rigorous safety evaluations and mitigation strategies.

Recognition of the essential need for providing resources to educators, facilitating the responsible integration and utilisation of GenAI tools such as AI tutors in their teaching methodologies.

Privacy takes a central role, with a distinct emphasis placed on protecting the data of young individuals.

A concentrated effort is being made to reduce algorithmic bias, which is inherently present in all existing GenAI foundational models in the current market. This focus aims to promote fairness and objectivity in the outcomes produced by these models.

A significant emphasis is placed on watermarking and identifying content generated by AI or synthetic means. This approach is crucial to move beyond the current discourse that simplistically categorises AI as either deceptive or easily identifiable, promoting a more nuanced understanding and handling of AI-generated content.

To promote responsible AI use in education, the US Secretary of Education must develop resources, policies, and guidance within a year. These should focus on the safe and nondiscriminatory use of AI, considering its impact on vulnerable communities. The development should involve relevant stakeholders and include an "AI toolkit" based on recommendations from the US Department of Education’s report. This toolkit should guide education leaders on human review of AI decisions, designing trustworthy and safe AI systems in compliance with privacy laws, and establishing specific guidelines for educational contexts.

What is lacking and needs to be addressed

There's a lack of strong calls for better AI literacy training. Such training is essential as it teaches people how to use AI technologies ethically and responsibly. Without it, there's a risk of people misusing AI.

The framework should focus more on providing equal access to technological tools for everyone, everywhere. This promotes fairness and inclusivity, allowing people from all backgrounds to benefit from AI technologies.

There's a need for clearer guidance on how current foundational models will adapt to new safety and transparency guidelines. Clear instructions are crucial for updating existing models to meet new standards, ensuring the safe and transparent use of AI technologies, which is vital for user trust and reliability.

There should be a call to action to take into account the impact of GenAI chatbots and synthetic relationships, such as Snapchat's "My AI", Facebook's "Billie", and others. These tools should be designed considering the unique needs of young people, ensuring the technology used is suitable and safe, and enhances their development and learning.

The roadmap has been laid out, but the real challenge lies in its execution. There is optimism that we are progressing towards a future where the development, deployment, and adoption of these revolutionary tools are conducted responsibly and ethically, ensuring that they bring about positive impacts on society and individuals. It's great to have general guidance, but the real test will be its implementation. Here at Safe on Social, we will continue to focus on how we are assisting educators in teaching AI literacy, including ethics and safe use, through tools that enhance classroom and learning outcomes whilst attempting to close the ever-widening digital divide.

-------------

For information on how we can assist your organisation, including keynote bookings, click here
For more information on our School and business AI Programs click here
To purchase the first of our AI Lesson Packs for just $89+GST for a whole school license click here

  • The Escalation of Online Aggression Among Teens and Tweens Post-Covid

    A Guide for Parents and Guardians (schools, please feel free to publish part or all of this if you choose)

The post-Covid era seems to have delivered a significant escalation in online aggression and cyberbullying among young people. In fact, it seems to be affecting all people - not just the kids. For this post, I am only talking about reports and cries for help submitted to Safe on Social by schools and parents. These private messages, emails, and Zoom meetings indicate a disturbing trend of intensified hostility, marked in some cases by violent threats, the use of horrendous language by children as young as 9-10 years towards each other (calling each other c-bombs etc.), and the proliferation of hate speech in online interactions involving teens and tweens. I could compose an extensive and alarming essay on the effect global occurrences online are having on our kids, along with my perspective on why addressing many of these issues proves to be challenging and why they often elude regulatory measures. A close friend poignantly summed up the online overwhelm yesterday in one simple sentence: "Our humanity is bleeding out." I have been thinking about that all night. How do we do better? What else can we do? Can we collectively say stop and choose to share kindness, so our kids are flooded with posts about kindness, empathy, and helping one another as much as we can during such troubled times? We are glued to our devices, watching all sorts of atrocities unfold globally, daily. It is almost unavoidable. I think we may have forgotten that the young people in our lives have experienced trauma after trauma, both online and offline, over the past few years, and it is quite clearly having an impact. So, this is meant to give you an understanding of the current landscape as reported almost daily to me by people who are frustrated, scared, and not getting the support they need anywhere else. 
I have also coupled this with practical guidance to start navigating and addressing these challenges.

Trends and Observations in reports to me:

Increased Violence in Online Interactions - There has been a noticeable increase in the severity of threats and violent language used in online conflicts among tweens/teens. Reports include instances of explicit threats of physical harm, contributing to heightened anxiety and stress among victims.

Late-Night Group Chats - A significant number of aggressive interactions, including the exchange of violent threats and derogatory language, are occurring in late-night group chats. This trend is particularly concerning due to its impact on adolescents' mental well-being and sleep patterns.

Impact on Sleep and Mental Health - The presence of mobile devices in bedrooms and participation in late-night online interactions have been associated with disrupted sleep and increased stress levels among teens and tweens, affecting their overall mental health and well-being. They are literally lying awake all night stressing over what is being said and refreshing over and over to see if more is being added to the chat. Every notification beep interrupts their sleep.

Use of Derogatory and Hate Speech - The use of racist slurs, homophobic comments, and other forms of hate speech has been prevalent in reported instances of online aggression, reflecting a broader societal issue that needs urgent attention and action.

Concerning Level of Inaction - The increasing prevalence of online aggression and cyberbullying has unfortunately been met with a concerning level of inaction by various online platforms and regulators. Users, particularly those who have faced harassment or have reported inappropriate conduct, find their pleas for help and intervention often overlooked or inadequately addressed. This lack of responsive action has led to a significant erosion of trust, particularly among teen users, leaving many feeling disillusioned and unprotected in their online lives. The seeming indifference exhibited by these platforms not only perpetuates the cycle of online aggression but also discourages victims from reporting, as faith in effective resolution diminishes.

Guidance for Parents and Guardians:

Validation and Support - It is crucial to validate the experiences and emotions of tweens/teens who encounter online aggression. Offering a listening ear and expressing understanding and support can make a significant difference in their coping process.

Promoting Open Communication - Encourage open and honest communication, allowing your tweens/teens to express their feelings, concerns, and experiences related to online interactions and aggression.

Exploring Reporting Mechanisms - Be informed about the various reporting mechanisms available on different online platforms and guide tweens/teens in accessing and utilising these resources when necessary. If there are threats of harm, report them to law enforcement immediately.

Advocating for Professional Support - Consider the option of professional mental health support, such as counselling or therapy, to provide your child with additional coping strategies and emotional support.

Device-Free Bedrooms - Promote healthy sleep hygiene and mental well-being by keeping mobile devices out of bedrooms during the night to minimise exposure to disruptive online interactions.

Educational Initiatives - Support the educational initiatives provided by the school that promote online safety, digital citizenship, and the development of coping strategies to navigate online aggression and cyberbullying. Attend all the talks (they are all different). Schools and families should work together to educate students about responsible and respectful online behaviour. Please do not just expect the school to do it for you. 
Where does a school's duty of care end and parenting begin?

A school's duty of care is to keep the child safe while they are at school. If the aggression/bullying comes from another student at the school, let the school know so they can keep the child safe on school grounds. The school’s duty of care pertains to actions that impact the school environment, student safety, and well-being. Parental responsibility involves supervising and guiding their child’s online activities and behaviours at home and outside of school, instilling values and setting boundaries. Please be across every aspect of your child's online life. I recently wrote a post about how you may even be responsible for what your child says and does online. You can read that here: https://www.safeonsocial.com/post/who-is-liable-for-what-educating-young-minds-on-internet-law-and-regulation

And finally - don't forget to share the good stuff.....and lots of it. Humanity needs it more than ever right now. Please start right now, below this post, by sharing something beautiful, kind, funny, or happy.

  • This is Safe on Social - Jacinta Saxton

    I grew up in NSW but have called Victoria home for over 15 years. I live in a small country town called Yarragon, a well-known tourist stop for those venturing east from Melbourne. It is a beautiful part of the world, and while I still can't get my head around Victorian weather, Autumn in Yarragon (and Melbourne) is particularly wonderful. I am married with three young children. We lead the typical family life with our pet golden retriever. My kids experience the absolute best of town life and farm life, as in addition to our own farm, their grandparents have a farm as well. We currently live in town near their school, so they also get to enjoy the beauty of riding bikes and playing in the cul-de-sac on their own. I have never been a person to have strong hobbies or interests – I am more a "dabbler". So currently, I am back into gardening and trying to keep my plants alive for longer than a few weeks. I have also just started doing puzzles again. I found that dinner time was the bane of my existence, so I have set up a puzzle at the end of the dinner table so I can be present with my children while they eat but also not want to pull my hair out as I tell my 4- and 8-year-olds to just sit and eat for the millionth time! Other interests are my regular group exercise classes (totally hopeless on my own!) and socialising with whoever I possibly can. A real downside for me of not having "standard" work is that I crave interaction with others – possibly a reason my local café staff know me and my dog very well. My interest in working with Safe on Social was piqued by the challenges I knew I was entering into with raising my own children. In addition to my own children, I also have five nieces and ten nephews. My extended family, who live near and very far, are a very important part of my life. Some of these kids are gamers, others love TikTok, and the younger ones love YouTube, like my own children. 
None of them have known life without social media, and they all have parents who are regularly dumbstruck and struggling to keep up with the challenges and pitfalls. From chats with my own siblings and other friends, I know parents are having the same issues and often don't have the time to wade through the vast amount of information out there. Or, as I have found....they don't even know where to start. I have studied economics and psychology, and completed my Master of Business Administration. Nothing specifically related to technology at all. However, these studies have led me to jobs in business consulting, change management, communications, and training across a range of public and private organisations. I have always found it interesting and important work to educate and help others with their interpersonal skills – which are the same key skills for good social media etiquette. When I hear about the challenges people are facing on social media, it regularly makes me think of the basics we teach on good communication skills. For me, it feels like adults often act very differently behind their screens than they would if they were eye to eye with the other humans they were engaging with. With children, this behaviour can often be even more challenging, as they are still growing and developing as individuals, and their brains are yet to understand risks, consequences, and right and wrong. Over the last 10 years, I have also gone through quite a challenging mental health battle. This is ongoing and something that I will continually manage. The links between mental health and social media are immense. I have experienced the absolute benefits of social media and our technology-connected world, but I also know its drawbacks and the impact that constant exposure to curated lives and limited face-to-face contact with other humans can have. 
Our whole societal structure has changed, and we are only beginning to see the consequences of less "real" human interaction and reduction of incidental connection. Social media has opened Pandora's box when it comes to mental health challenges, and I want to be part of early intervention on self-management and wellbeing for our children. To book Jacinta to speak at your event, business or school email us here: wecanhelp@safeonsocial.com

  • Urgent update on Roblox and Online Safety

    I want to share some crucial insights and feedback from our discussions with primary school-age children over the past few weeks, focusing on the online gaming platform Roblox. Roblox, as you would all be aware, has captured our children's imagination, especially in years 3-6, becoming a significant part of their online world. However, some aspects of this virtual playground require immediate attention and action from parents and guardians. During our sessions, my team and I have seen a noticeable increase in the number of young students interacting with strangers within the game, engaging in chats and in-game activities that sometimes cross safety boundaries. Children have reported to us instances where they were asked to role-play as a girlfriend/boyfriend, mum/dad, nurse, or doctor, and were offered free in-game currency (Robux) in exchange for what were, more often than not, inappropriate interactions. In some cases, we have had to report these to the police. Roblox has privacy settings and features that allow us to safeguard our children's online experiences. These tools enable us to restrict who can communicate with our young ones and what content they can access, but do not rely on them to keep your kids safe: they do not prevent children from seeing things that are wildly inappropriate. Some of the content has become a sex fetish - if you search "Roblox Sex" online, you will get millions of links to adult entertainment websites where Roblox videos have been uploaded. I am very concerned that some of these "Roblox Sex" videos may be children's avatars engaged in these acts - there is no way to tell. Children have reported what is nothing short of in-game sexual assault to me on more than one occasion, which has been reported to the police.

As far as we know, only one major adult entertainment company has made the word Roblox, and all of the ways it may be written (R.O.B.L.O.X, for example, to circumvent search filters), completely unsearchable on its sites. Our active involvement, guidance, and open communication are pivotal. Let's foster an environment where our children feel comfortable discussing their online experiences, assuring them they can always turn to us or a trusted adult without fear of being banned from games or devices. If they can speak up without fear, we can help them stay safe. There are concerning behaviours being normalised on Roblox, such as children's avatars lying next to strangers' avatars or encountering avatars lying around together in underwear. At Safe on Social, we educate children to perceive their interactions on Roblox as visiting a place rather than merely playing a game. We encourage them to envision Roblox as a vast shopping centre they are visiting alone. Imagine if your child was wandering alone in a real-life shopping mall, and a stranger offered them money for inappropriate activities. The immediate reaction would be to flee, seek a trusted adult, and report the incident to authorities. We emphasise that the same safety principles should apply online within platforms like Roblox. We aim to cultivate a sense of online vigilance and safety among children, encouraging them to respond to uncomfortable online situations with the same urgency and seriousness as they would in real-life unsettling encounters. Please learn to navigate the online playgrounds your kids are in, ensuring that your children's online adventures are safe, respectful, and joyous. Making sure they know it is safe to speak up is a great place to start. You can download our free guide to Roblox here.

Warm regards,
Kirra Pendergast - CEO Safe on Social Group
Sydney - Brisbane - London - New York - Florence

  • Bold Insights into AI and Misinformation

    Frances Haugen, who worked at Facebook and is known for revealing some big secrets about the company, recently spoke at the National Press Club in Australia. She was in the country over the past few days for the South by Southwest (SXSW) conference and its discussions about technology. Haugen shared some important thoughts about artificial intelligence (AI). She said that this technology is becoming a big part of our lives and could change society in huge ways. One of her biggest worries is about misinformation spreading very quickly online because of AI. Haugen used her experience at Facebook to explain that only a few people really understand how AI technology works, and these few people can have a lot of power and control. This can affect what kind of news and information everyone sees and shares online. She explained that the rules and laws are not strong enough right now to manage this technology properly. If things don’t change, there might be more problems, like the spread of bad information and less truth online. In her talk, Haugen encouraged making stronger rules and laws to manage AI technology better, making sure that it is used in a good and safe way. She also said that people should be given the right information and tools to understand and use this technology wisely. This includes being able to tell whether the information shared by AI technology is true or not. The recent 'Voice to Parliament' referendum in Australia was not merely a testament to democracy in action but may also have borne witness to the subtle yet pervasive influence of misinformation. The referendum seemed clouded by a torrent of misleading narratives and unverified claims, primarily propagated through various online platforms. This dissemination of misinformation, potentially accelerated by AI algorithms, may have played a role in swaying public opinions. 
The instance should serve as a forewarning of a rising global tide in political realms where misinformation could increasingly be a disruptive force. Such a trajectory, where manipulative information gains prominence, heralds concerns for the integrity and transparency of future political and democratic processes globally. It underscores an urgent need for resilient measures and informed public awareness to navigate the complexities of digital misinformation in preserving the sanctity of global political landscapes. We need to pay attention, learn more about AI, and support good rules and actions that will help make sure this technology is used in a way that is best for everyone and our society.

Spotting misinformation generated by AI can be a bit tricky because AI-generated content can often seem quite convincing. Here are some strategies to help you identify AI-generated misinformation:

Look for Inconsistencies in the Content - AI-generated text may contain inconsistencies or contradictions within the content. There might be sentences or paragraphs that don’t quite align with each other or seem out of place.

Evaluate Language and Coherence - AI-generated content might show awkward phrasing, odd word choices, or sentences that don’t flow naturally. It might lack a coherent narrative or logical flow.

Check Factual Accuracy - Verify the facts presented in the content against reliable and established sources. AI-generated content might include false or misleading information.

Examine Imagery - If the content includes images, check for signs of manipulation or inconsistency. AI-generated images might have irregularities or flaws that don’t align with natural appearances.

Look for Author or Source Information - AI-generated content might lack credible author information or come from a source that isn’t well-known or established. Consider the reputation and reliability of the source.

Use Technology - There are tools and technologies available, like browser extensions and websites, that can help identify AI-generated content or images.

Trust Your Instincts - If something feels off or too good to be true, it might be. Trust your instincts and cross-verify the information, especially before sharing it.

Review the Context - Consider the broader context in which the content is presented. If it seems unusually biased, exaggerated, or aimed at eliciting strong emotions, it might be misleading.

  • Navigating the World of AI Celebrity Alter-Egos on Social Media

    Social media is evolving, and a new trend has emerged, capturing a lot of attention. Companies like Meta have created AI (Artificial Intelligence) alter-egos of celebrities, such as Kendall Jenner. These AI personalities interact, post content, and engage with fans on platforms like Instagram and Facebook, offering a mix of real and computer-generated content. This new trend has made it quite challenging to tell whether the content is coming from the actual celebrity or their digital twin. In the race to develop a deeper relationship with users, these AI alter-egos are designed to mimic the celebrities' looks, movements, and even their way of speaking. Meta has worked closely with celebrities to make these virtual personalities seem as real as possible. However, not everything posted by these AI accounts is computer-generated. Sometimes, the actual celebrities contribute to the content, which adds to the confusion.

Recognising AI-Generated Content

Meta has provided some tools to help users identify whether a post is from the AI or a real celebrity. For instance, AI-generated content will have a special sign and the hashtag #ImaginedWithAI. But it's not always clear, and there's still a lot of mystery around which posts are actual and which are AI-generated.

Tips for Navigating AI Content

Here are a few ways to make sense of this new trend and understand whether the content is real or AI-generated:

Look for Identifiers - Check if the post includes the hashtag #ImaginedWithAI or a special sign. This could mean it was created by the computer.

Examine the Content - Sometimes, content that looks too perfect or slightly unusual may be computer-generated. Trust your instincts.

Stay Updated - Try to keep up with news and updates from social media platforms. They often share information about new features and how to use them.

Ask Questions - If you're unsure about a post, it might help to look at comments or discussions online. Others may have insights or information that can help clarify things.

Educate Yourself - Learning more about AI and how it's used on social media can be very helpful. There are many online resources and articles that explain these concepts in simple terms. (Safe on Social has courses for businesses and schools available on Critical AI Literacy and the intersection of AI and Cyber Security/Safety/Privacy - hit reply to this email if you would like some information.)

AI alter-egos are reshaping the way we interact with celebrities on social media. It's a blend of real and virtual, making the online world even more complex but also fascinating. Understanding and adapting to this trend can help us become smarter and more thoughtful users of social media, ensuring that we can enjoy the exciting possibilities it brings without getting lost in the confusion.

  • Who is liable for what? Educating Young Minds on Internet Law and Regulation

    Understanding the nuances of social media law and regulation is both beneficial and essential, especially for young people, who are not only its primary users but also future policymakers. As someone who has monitored internet law and regulation since its inception, I believe we need to teach this to students and keep them updated with changes and progress. While they operate worldwide, many social media companies like Meta (Facebook, Instagram, WhatsApp) and Snap Inc. (Snapchat) are headquartered in the U.S., making them primarily subject to U.S. laws. This often results in a mismatch between the platform's regulations and the values or needs of users from other parts of the world. Freedom of speech on platforms based in the U.S. is protected under laws like Section 230(c), something I speak about in every presentation I give. Because of social media misinformation and a lack of education, many young Australians believe they have a right to free speech here...but that is US law. Not ours. Australia does not have a national Bill or Charter of Rights. Australia is the only Western democracy not to have one. Section 230(c) is a provision within the Communications Decency Act of 1996 in the U.S. that shields online platforms from being held liable for content posted by their users. It means platforms like Facebook or Snapchat cannot be sued over what their users post. During his presidency, Donald Trump criticised Section 230, claiming it gave tech companies too much power to silence conservative voices. In May 2020, he signed an executive order to limit the protections offered by Section 230, arguing that platforms should not receive immunity if they engage in editorial decisions about user content. The executive order itself did not have the power to change the law. Instead, it directed federal agencies to consider new regulations and encouraged Congress to revisit the law. The executive order faced significant criticism and legal challenges. 
Many legal experts argued that the order was more symbolic than substantive, as only Congress can make changes to Section 230. Various proposals to amend or repeal Section 230 have been put before Congress, but no significant changes have been made. In Australia, the law may hold Facebook page administrators responsible for defamatory comments on their pages. Instead of the Facebook app being targeted, the person managing the page may be sued. This highlights the importance of page administrators being vigilant about the content posted on their pages. We need to teach these laws to young people, as many do not realise that they may be sued for defamation. In Australia, a child can be sued for defamation. The practicality and likelihood of such a lawsuit succeeding depend on various factors, including the child's age and understanding. The general principle is that anyone, regardless of age, can be held liable for their defamatory statements if they had the capacity to understand the nature and consequences of their actions. Australian law does not automatically make parents responsible for tortious acts (like defamation) committed by their children. However, parents might be held liable if it can be proven that they were negligent in some way that contributed to the defamatory act, such as if they were aware of their child's repeated defamatory behaviour online and did nothing to prevent it. Our youth need to be kept up to date about the law and policy that affect them. They must be equipped with the knowledge and skills to safely and responsibly navigate the online world, especially as we move towards a metaverse. This way, we not only protect them but also empower them to be agents of positive change in the future. To book us to speak at your school, email wecanhelp@safeonsocial.com

Sources:
Defamation Act 2005 (which varies slightly between states and territories but takes a uniform approach to many issues).
Bleyer v Google Inc [2014] NSWSC 897, where the court acknowledged that even a minor can be a publisher for the purposes of defamation.
General principles of tort law in Australia, which cover issues of capacity and liability.
https://johnstonwithers.com.au/news/defamation-digital-age

For specific advice or more detailed information, consulting a legal expert or referring to specific case law and statutes would be necessary.

  • Roblox accused of allowing gambling sites to target minors

    Roblox is also accused of profiting from the scheme through a 30% fee on the conversion of Robux back into real currency. The claim is that the company makes millions annually from these fees and has not taken action despite being aware of the existence of these gambling platforms. The plaintiffs claim that the gambling sites incentivise minors to promote their platforms, offering rewards such as free Robux for promoting the sites on platforms like TikTok. Roblox has responded to the accusations, stating that these third-party sites operate without any legal affiliation with the company and are infringing on Roblox's intellectual property and branding. The company has vowed to remain vigilant in ensuring its platform's safety and adherence to its policies.

    Brands such as AIA, H&M, Givenchy and Samsung have launched activations in Roblox. The 'AIA Arena' is all about health, energy and community, aiming to engage a younger audience in a way that is core to the brand's purpose, not commerce. Meanwhile, Samsung staged a metaverse concert with Charli XCX in Roblox, and Givenchy built a Beauty House that lets visitors immerse themselves in a magic kingdom filled with cityscapes, dance floors and even a castle inspired by the home of the brand's late founder, Hubert de Givenchy.

    Kirra Pendergast, founder and chief executive officer of Safe on Social Group, tells Campaign that while education is a powerful tool, expecting it to reach every child and parent is unrealistic. She says the allegations against Roblox highlight the urgent need for stricter regulation. "It is crucial to understand that these are not just games, but virtual environments where real-world consequences can occur," explains Pendergast, who is also a youth safety advisor to gaming platform TotallyAwesome. "Brands venturing into using Roblox for marketing to young people should also be working with and seeking advice from partners and providers deeply versed in youth online trust and safety." 
Pendergast urges platforms like Roblox, and the many others that cater to children, to prioritise child safety. She notes that the term "playing" in the context of online games like Roblox trivialises the potential risks associated with online gaming and social media, and that gambling, sexualised content and violence are on the increase. The term "play" holds profound significance because it is a fundamental and innate behaviour observed across human development. According to Pendergast, play is a way for children to explore their environment, understand social dynamics, experiment with roles, and process emotions. "Through play, we develop cognitive, physical, and social skills and learn to navigate the complexities of the world around us as children. When we discuss metaverse games like Roblox, it is extremely important that we reframe the conversation and emphasise that online activities are like 'visiting a place', so we can instil a sense of caution and awareness in children and their parents," explains Pendergast. "The analogy of 'visiting a place' is apt because, like any other place, there are safe and unsafe areas, and children are taught to be cautious and aware of their surroundings. If a child were encouraged to gamble or lie down next to someone they did not know in real life, they would know that it is not OK. Online, it is very, very blurry. Children see their online life as an extension of their physical life. To them, it is just life." Pendergast concludes: "While online platforms offer numerous benefits and opportunities for learning and social interaction, they also come with risks. It is the collective responsibility of platform providers, governments and parents to keep children safe."

Read the full article at: https://www.campaignasia.com/article/roblox-accused-of-allowing-gambling-sites-to-target-minors/488023
