The Technical Debt of Facial Recognition
In June of 2020, the US Technology Policy Committee of the Association for Computing Machinery published a letter calling for the suspension of "current and future private and governmental use of [facial recognition] technologies in all circumstances known or reasonably foreseeable to be prejudicial to established human and legal rights."
The ACM argues that facial recognition is not mature enough to be used well, that its potential has driven presumptive adoption of the technology, and that its use has compromised privacy and other human rights. They also believe its use should be paused until legal standards for accuracy, transparency, governance, risk management, and accountability can be established.
This letter follows actions by large enterprises that have restricted or halted access to facial recognition. In June, IBM announced that it would stop selling "general purpose" facial recognition software, and Amazon and Microsoft soon announced that they would not sell facial recognition technology to law enforcement until legislation is passed to govern its use. Recent headlines have demonstrated how facial recognition systems perpetuate bias in law enforcement, hiring, and school surveillance. The industry is right to pause the development of this technology while it considers the potential side effects and develops an ethical approach to facial recognition.
Lather, Rinse, Repeat: With a Twist
Technology and ethics are often opposing forces. This call for careful deliberation echoes previous ethical discussions of machine learning models. The letter cites the ACM's earlier statement on algorithmic transparency and accountability as a foundation for this latest round of ethical exploration. Concepts such as transparency and accountability are common in ethical frameworks, but they haven't historically led to a call to pause access to a technology.
Many technologies are difficult to understand, or their impacts are hard to gauge. With facial recognition, the opposite is true. News coverage in the past few years has led the public to understand how facial recognition works and to see how these systems perpetuate cultural bias and discrimination (see Joy Buolamwini's TED Talk and the Algorithmic Justice League for more detail). People are quick to recognize the dangers of ubiquitous surveillance, even if they're not the targets of active discrimination (thanks, George Orwell!). This understanding of the technology and its risks means that facial recognition is having a singular moment: a caesura in the rush to innovate, a pause for moral introspection.
Admirable, But Questions Remain
This pause is needed. All too often, ethics lags technology. With apologies to Jeff Goldblum, there's no need to be hunted by intelligent dinosaurs to realize that we often do things because we can, rather than asking whether we should. The ACM's call for restraint is appropriate, although a few issues remain.
What about the facial data that already exists from currently deployed systems? This problem is not unique to facial recognition; it is well known from GDPR compliance and other use cases.
The suspension is aimed at private and governmental entities, but personal cameras, and with them an opening for facial recognition, are rapidly becoming ubiquitous. Log in to your neighborhood watch program for a close-to-home example. (What street doesn't have a doorbell camera?) Public life is being monitored, and passive data on our habits and lives is continually collected; anywhere there is a camera, facial recognition technology is in play.
The call by the ACM could be stronger. They urge the immediate suspension of the use of facial recognition technology anywhere that is "known or reasonably foreseeable to be prejudicial to established human and legal rights." What counts as reasonable here? Is good intent enough to absolve misuse of these systems from blame, for instance? The potential harm of these systems, and of the repurposing of their data, is often not readily apparent. By the time the bias is observed, the damage has been done. Given the risks and the uncertainty involved, it would be better to remove the call's dependency on expected harm. The use of facial recognition should be suspended until its ethical impact can be documented and governed properly.
Government Response
Governments have taken notice of public concern, of course, and have responded with proposed legislation. Several US cities, including Boston, Portland, and San Francisco, have banned the use of the technology. (See: US map of use and bans of facial recognition.)
There is also action at the national level. Legislation currently proposed in the US seeks to regulate facial recognition technology or declare a moratorium on its use. In Europe, a five-year hiatus on the use of facial recognition in public spaces was proposed last year but was dropped this past January. These efforts are welcome, but if ethics lags technology, legislation is slower still.
Adversarial Technology
Another approach may be useful as well. Recently, researchers have developed "adversarial technology": tools that equip people to defeat location tracking, artificial intelligence, and other components of surveillance systems. These have run the gamut from using fashion to defeat license plate camera systems to the full-on fabrication of fake identities and personas to throw off location and online tracking.
This adversarial approach has now been developed for facial recognition as well, most notably in Fawkes, an open source tool released by researchers from the University of Chicago. Rather than making physical changes to a person's face, it masks photographs with slight alterations. Though these changes are imperceptible to the human eye, they trick facial recognition systems into misidentifying the person, cloaking the individual's true identity. Over time, an increasing set of altered photos is incorporated into the collections of images that facial recognition systems use to catalogue and identify people, polluting their knowledge base and protecting the true identity of the individual.
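To make the idea concrete, here is a minimal sketch of feature-space cloaking. It is not the Fawkes implementation: it assumes a stand-in feature extractor (torchvision's ResNet-18 rather than a real face-embedding model) and uses a simple gradient loop to push a photo's embedding away from the original while keeping the pixel changes within a small, hard-to-notice budget.

# Illustrative sketch only, not Fawkes. Requires torch, torchvision, and Pillow.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

def cloak(image_path, epsilon=0.03, steps=50, lr=0.005):
    # Stand-in "face embedding" model: ResNet-18 with its classifier removed.
    extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    extractor.fc = torch.nn.Identity()
    extractor.eval()
    for p in extractor.parameters():
        p.requires_grad_(False)

    preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    original = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)

    with torch.no_grad():
        original_embedding = extractor(original)

    # The cloak is a small additive perturbation, optimized directly.
    delta = torch.zeros_like(original, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        cloaked = (original + delta).clamp(0, 1)
        embedding = extractor(cloaked)
        # Minimizing cosine similarity pushes the cloaked photo's features
        # away from the original identity's features.
        loss = F.cosine_similarity(embedding, original_embedding).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the perturbation within an imperceptible pixel budget.
        delta.data.clamp_(-epsilon, epsilon)

    return (original + delta).clamp(0, 1).detach()

Fawkes itself is more sophisticated: roughly speaking, it optimizes against face-recognition feature extractors and shifts an image's features toward those of a different identity, but the underlying mechanism is the same kind of imperceptible, feature-shifting perturbation shown above.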
A Pause for Reflection
The ACM is right to call for a suspension of the use of facial recognition to address bias and abuse, but our path toward ethical use of this kind of technology is unlikely to be a straight, clear line. A combination of approaches is necessary to make responsible progress: consistent reporting on surveillance technology, governmental regulation, a sense of corporate responsibility, and adversarial technology all have a role to play. These approaches take time to mature, and the ACM is correct to call for a break that gives them room to develop.
It's time to address our ethical debt.