The Urgent Call for Facial Recognition Laws in Australia - Balancing Security and Privacy
As facial recognition technology (FRT) becomes more prevalent, concerns about its impact on fundamental human rights, such as privacy and freedom of expression, have grown. With no dedicated laws to govern this technology, we must ask ourselves how to balance facial recognition technology's benefits and risks.
FRT is encountered in various contexts daily, such as unlocking smartphones, organising photos, home security, passport control, and surveillance by employers and law enforcement. While primarily used for identification and verification purposes, it is increasingly employed to evaluate attributes like age, gender, and emotions.
Bias in facial recognition technology is a major concern, as it can disproportionately affect women and people of colour. This bias stems from the fact that the algorithms used in these technologies are often trained on datasets that lack diversity, leading to inaccuracies and misidentifications. As a result, women and people of colour are more likely to experience negative consequences, such as false matches, misinterpretation of emotions, or unjust targeting by law enforcement. Addressing this issue is crucial to ensure that facial recognition technology is fair and equitable for all users.
Governments and corporations are using facial recognition technology in an unregulated landscape. While the technology offers potential advantages, such as enhanced security and streamlined identification processes, it also presents significant risks due to the lack of specific safeguards and oversight mechanisms.
The lack of effective regulation for facial recognition technology poses significant challenges to upholding human rights and fostering positive innovation. Properly implemented, FRT can offer convenience and efficiency, with benefits such as aiding the visually impaired, locating missing persons, and identifying victims of crimes. However, FRT also threatens privacy and human rights, particularly as it relies on sensitive personal information, and its widespread deployment raises the risk of mass surveillance.
Clearview AI exemplifies the invasive nature of FRT. This New York-based company has developed a controversial database with 30 billion photos scraped from Facebook and other social media platforms without user consent. Critics argue that this database is essentially a "perpetual police line-up," identifying individuals who have not committed any crime. Major social media companies have sent cease-and-desist letters to Clearview AI for violating user privacy, yet the database remains a concern for privacy advocates.
In September 2022, the Human Technology Institute (HTI) at the University of Technology Sydney (UTS) released a groundbreaking report proposing a Model Law for facial recognition. This report addresses the growing demand for reform from various sectors, including civil society, businesses, government, and academia. The goal is to safeguard against harmful facial recognition applications while promoting innovation for the public good.
The HTI report advocates for legal reforms to address the risks to Australians' privacy and human rights. It adopts a risk-based legislative strategy rooted in international human rights law, making the reform principles relevant to similar jurisdictions.
Australia needs a specific facial recognition law, and this HTI report encourages the Federal Attorney-General to spearhead this critical reform initiative. In February 2023, the Federal Attorney-General's Privacy Act Review report positively acknowledged the proposed Facial Recognition Model Law, endorsing a risk assessment approach to regulate facial recognition and other biometric technologies in principle.
In response to the Attorney-General's consultation on the Privacy Act Review report, the University of Technology Sydney reiterated its call for immediate action on specialised regulation for facial recognition technology.
The challenge posed by facial recognition technology is achieving a delicate balance between reaping its benefits and mitigating its risks. The legal framework proposed in Australia offers a starting point for a dedicated regulatory approach to protect citizens' privacy rights and ensure that human rights obligations are met.
You can read the HTI report here: https://www.uts.edu.au/sites/default/files/2022-09/Facial%20recognition%20model%20law%20report.pdf