“A man’s face is his autobiography. A woman’s face is her work of fiction.” – Oscar Wilde
For years, laws and regulations have existed to protect paintings, literature, music and all other types of art. But what about a human face? A hot topic for ethical debates, biometric data, and more specifically facial recognition software, has been used by private companies and governmental agencies across the world for some time now. While the debate regarding the benefits and risks of facial recognition is ongoing, most critics advocate that the perceived benefits should not be taken at face value. And while the use of facial recognition originally appeared to be an issue confined to authoritarian governments where "big brother" was always watching, it has now been confirmed that the technology has crossed our borders and is knocking on our doors.
On February 27, 2020, the Royal Canadian Mounted Police1 confirmed their use of Clearview AI, a facial recognition software, to investigate online child sexual exploitation cases. Even more recently, on March 1, 2020, the Ontario Provincial Police2 confirmed their use of Clearview AI as well.
Clearview AI has been the subject of much criticism, particularly after the company sustained a data breach when an individual gained unauthorized access to its client list. Most concerning, the company is known for collecting billions of public images from the internet, including public social media accounts, to build a proprietary image search tool that has been sold to law enforcement across North America.
Interestingly, the insurance industry was one of the first adopters of facial recognition software3. Lapetus Solutions developed a platform, Chronos, through which consumers can buy life insurance without taking a life insurance medical exam. The platform uses facial recognition to allow companies to assess a person's habits by analyzing the attributes of the face. This can assist in determining whether someone is a smoker, which would in turn impact their life insurance premiums.
While there is arguably some good that can come from the use of biometric data, particularly when used in connection with criminal investigations, the impact of its use may be more damaging than the potential benefits, which may be why several U.S. states have moved to ban it. As of October 2019, California, New Hampshire and Oregon had passed legislation prohibiting their respective law enforcement agencies from using facial recognition and other biometric tracking technology in body cameras4. The dangers and vulnerabilities that facial recognition software poses must be clearly understood and addressed to prevent, or at the very least limit, misuse.
Current privacy legislation, both federal and provincial, does address the use of personally identifiable information without consent, and there are measures in place to deter such use; however, the various intricacies associated with the use of facial recognition software have yet to be addressed. While the United States is progressively addressing the various concerns associated with the use of facial recognition software, Canada continues to lag behind. Policy development is becoming increasingly important given the growing trend among law enforcement of using facial recognition without any guidelines. Aside from the obvious concerns of "surveillance capitalism", there are issues relating to accuracy. Facial recognition software has largely been calibrated to the features of white men; accordingly, there is evidence that certain groups, particularly people of colour, can be misidentified by facial recognition software and possibly erroneously associated with crimes5. Further, without any mandatory guidelines, it is unclear how organizations will use and share facial recognition information. When accessed by the wrong individuals, facial recognition software can be used for identity theft, fraud and predatory marketing.
In an effort to address the risks and vulnerabilities associated with facial recognition software, the Federal Privacy Commissioner and the Alberta Privacy Commissioner have conducted investigations into its use across Canada6. Yet there are concerns which current privacy laws do not address. For instance, current privacy laws prohibit any unauthorized access and/or dissemination of personal information that is not publicly available, but what if the information is publicly available? When social media accounts and photographs of individuals are available to the public, they are arguably no longer protected by privacy regulations.
While a strict ban on the use of facial recognition software may be drastic, regulations need to be enacted that address how such information is used, retained, disseminated and by whom. Given the potential benefits associated with the use of facial recognition software, a quick and efficient legislative response can likely maximize these benefits while curtailing potential threats.