By Lenny

Youth Voice - Monthly Trending Topics



Meta’s Underage Users Have Finally Caught Up with the Company

 

Meta, the parent company of Facebook and Instagram, has been screwing young people like me over for a long time now. Finally, in the face of overwhelming evidence, the company is facing legal challenges from 33 U.S. states, which allege that it actively “coveted and pursued” underage users while intentionally ignoring the vast majority of reports it received about underage accounts.

 

Though Instagram ostensibly refuses to allow those under 13 onto its platform, a trove of documents, including employee chat logs, analytics data, and concealed internal studies, points to the contrary. The legal filing also revealed that Meta even created internal company charts displaying the percentage of 11- and 12-year-olds who used Instagram daily.

 

I checked the date my Instagram account was created, and I was surprised: it was created in May of 2014, when I was 10 years old. Instagram doesn’t seem to know my date of birth, though I probably had to lie about it to create my account at the time. From then on, it seems Instagram had no interest in whether I was old enough to have the app. That being said, the amount of money Instagram was making off its underage users in 2014 was certainly nowhere near today’s levels, and perhaps Facebook hadn’t even realised the economic potential of the minors on its platform when I signed up as an underage user.

 

The economic and informational exploitation of underage users, and the company’s pursuit and sanctioning of it, has been described as something of an open secret at Meta. While Meta defends itself, accusing the states of using “cherry-picked” documents to mischaracterise its actions, the company has simultaneously undermined investigators attempting to uncover the level of harm that minors face on platforms such as Instagram. When the U.S. state of New Mexico filed its lawsuit against Meta, Attorney General Raul Torrez expressed concern about Meta’s suspension of Instagram accounts that the state was actively using in its investigation into child predation on the platform.

 

The lawsuit also reiterates persistent claims that Meta knowingly designed and implemented addictive mechanisms in the creation of its social media platforms, a practice exposed by the whistleblower Frances Haugen, who also claims that the company intentionally catered to and exploited children under 18.

 

Monopolistic businesses have a long history of exploiting and harming kids with dangerous products while denying it, even when caught in the act of papering over the negative findings of their own studies. Big Tobacco, anyone?

 

As social media platforms face further scrutiny, the legal system is beginning to catch up. British coroner Andrew Walker concluded in late 2022 that the death of 14-year-old Molly Russell was brought about, in part, by platforms like Instagram and Pinterest, which played a “more than minimal” role in the acts of self-harm that led to her death.

 

To me, the prosecution of large social media companies, and specifically of the individuals within them who encouraged, if not entirely constructed, this toxic culture, could be the best investment in childhood health that any government has ever made. The consequences of social media use are quickly becoming apparent, meaning the next steps governments take will be of paramount importance. They could save a generation.


 

Federal Government’s Vape Ban Implemented Jan. 1st 

 

As the government’s new vaping importation ban comes into effect in the new year, medical professionals are concerned about the strain that nicotine dependency may place on the medical system.

 

The ban includes new plain-packaging laws for vapes and a set of conditions, imposed by the Therapeutic Goods Administration (TGA), for those wishing to acquire a federal licence to import vapes.

 

Recent data is starting to reflect the real proportion of young people who regularly vape: 20 per cent of 18- to 24-year-olds and 14 per cent of 14- to 17-year-olds are current vapers. For a long time, I’ve maintained a healthy scepticism regarding vape-use statistics; I always felt they were far too low. These new numbers are beginning to fall into line with my own observations and experience.

 

As the true scale of vape use is revealed, the associated bills rack up for the medical industry. More addiction means more patients and more treatment, especially if young people face nicotine addiction without the ease of buying a vape from a convenience store. Unfortunately, it could also mean more profits for traditional tobacco companies as people make the switch from vapes to cigarettes.

 

In what seems a pragmatic acceptance of this fact, all GPs and nurses will be able to prescribe vapes as a treatment for nicotine addiction under the new scheme. Previously, only GPs who had elected to undergo additional training could be certified to prescribe vapes, and just 5 per cent of practitioners signed up. Likewise, the RACGP estimated that only 7 per cent of users acquired vapes via prescription under the old system.

 

In the meantime, a thriving black market in vaping products has undermined any legitimate prescription scheme, with vapes readily available at tobacconists and corner stores across the country. In an effort to change this, the Australian Border Force has been allocated an additional $25 million to enforce the ban.

 

Whether the ban can actually stop the sale of illegal vapes remains to be seen, though the government’s strategy has been embraced by state health organisations across the country. It hasn’t been embraced by many of the young people I know, who tell me they’ll probably just switch to cigarettes if they can’t get their hands on a vape. Sadly, Big Tobacco has been playing this game for a long time, and it seems it may end up on top once again.

 

EU Drafts the World’s First Comprehensive ‘AI Act’

 

After years of hard-fought negotiation, the European Union (EU) has ensured that its groundbreaking AI Act will finally be enshrined in law. It’s a pivotal piece of legislation aimed at curbing potential harm in the domains where AI poses the gravest threats to fundamental rights, including law enforcement, healthcare, border surveillance, and education. It also enables governments to ban applications of AI technology that present an “unacceptable risk.”


Under this act, AI systems categorised as “high risk” will be subject to stringent regulations, necessitating risk-mitigation mechanisms that include the use of high-quality datasets, full transparency during a technology’s development and deployment, and, vitally, human oversight.

The AI Act is a monumental achievement, bringing much-needed regulations and enforcement mechanisms to a profoundly influential sector, though it took legislators a long time to reach a unanimous position, with dissent at times from countries like Germany, France, and Italy.


The Importance of Binding AI Ethics

Silicon Valley, and especially its pack of AI evangelists, loves to lecture the public about its approach to ethical design and development. However, the latest OpenAI saga, Sam Altman’s Lazarus-esque return, and the respect that Microsoft’s bottom line commanded during those negotiations make it clear that in the Valley, profit often smothers ethics in its sleep.

Hence, the EU’s introduction of legally binding, enforceable rules surrounding the ethical design and deployment of AI technology may come to represent a cornerstone of user protection in the space. The implications of AI for law enforcement, biometric data, copyright, and privacy necessitate that companies and governments shoulder a burden of responsibility to ensure the protection of fundamental human rights.

AI technologies deemed to pose unacceptable risks will be prohibited. These include systems engaging in cognitive behavioural manipulation, social scoring, and remote biometric identification, with limited exceptions for law enforcement purposes.

High-risk AI systems that impact safety or fundamental human rights will be subject to more stringent scrutiny. They encompass AI systems used in products falling under EU product safety legislation and AI systems in specific critical areas, all of which must be registered in an EU database. High-risk AI systems will undergo comprehensive assessments before entering the market and throughout their lifecycle.

For AI systems with limited risk, minimal transparency requirements will be enforced, allowing users to make informed decisions. Users interacting with AI applications must be made aware of the AI's involvement, particularly for systems generating or manipulating image, audio, or video content, such as deepfakes.

The key here is that governments, and by extension their citizens, are afforded full transparency and a guarantee against any abridgement of their rights. This tenet is, and must be, the precursor for all serious AI legislation that intends to protect users.

 A Barrier to Dystopia

What is most impressive to me about the EU’s new AI regulation is how comprehensively it reacts against the common conception of our worst dystopian nightmares: it regulates the creation of AI technology that could precipitate an overbearing police state, or the rise of oppressive techno-capitalist overlords. Certain applications have been banned outright, like the creation of facial recognition databases via the generalised scraping of data from CCTV and the internet, or emotion recognition software in schools and workplaces. Likewise, AI systems are not allowed to engage in behavioural manipulation, social scoring, or the biometric identification and classification of people.

Some EU countries have resisted the stringent regulations surrounding the use of biometrics; France has continued to adopt new AI surveillance technologies, including legislation authorising police use of AI-powered, algorithmic video surveillance ahead of the 2024 Paris Olympics.

While individuals’ fundamental human rights are now broadly protected, none of these regulations apply to technologies developed exclusively for military or defence purposes.

In sum, the EU now represents the gold standard for legislative reform surrounding AI, a move that in the coming decades will, I’m sure, prove to have been both highly prescient and deeply necessary. I find the prospect of US regulation highly unlikely; the level of compromise and dilution involved in that lawmaking process would produce barely a shadow of the EU’s legislation. This will prove to be one of the greatest A/B tests of all time, and while Australia watches from the backbenches, the government of the day ought to be taking note; given a few decades, the contrast between life in the EU and life in the rest of the world may grow increasingly stark.

 

 

 
