As concerns about racial bias in facial recognition software grow, the Department of Homeland Security plans to use it in airports and to monitor Black Lives Matter protests. Cooley LLP partner Travis LeBlanc, a board member of the U.S. Privacy and Civil Liberties Oversight Board, says now is the time for Congress to require government agencies to identify and address racial inequities in the design of new products and technologies.

A Black man was wrongfully arrested by Detroit police after facial recognition software pinned him for a crime he never committed. Robert Williams spent the night in jail after his young daughters witnessed him being taken away in handcuffs.

The charges against Williams have now been dropped, but his plight is as old as our criminal justice system, albeit with one key difference: He was misidentified by a machine algorithm.

While some of the largest tech players have suspended sales of their facial recognition tools over concerns about racial bias, the Department of Homeland Security is moving forward with using the technology at airports and, apparently, with monitoring Black Lives Matter protests.

As a member of the U.S. Privacy and Civil Liberties Oversight Board, I am conducting an oversight investigation into the DHS's use of biometric technologies like facial recognition that pose substantial privacy and civil liberties risks, and likely will also disproportionately target people of color.

In addition, as these algorithmic technologies are being developed, now is the perfect time for Congress to require the government agencies developing them to publish "equality impact assessments" that assess their likely effect on key demographics such as race, gender, or disability.

Racial Bias in Facial Recognition Tools

Concerns about racial bias in facial recognition algorithms are not new. Countless researchers have observed that these algorithms misidentify people of color. In 2018, the MIT Media Lab documented that facial recognition algorithms were markedly less accurate on dark-skinned faces.

In December 2019, the National Institute of Standards and Technology published a study finding evidence of racial bias in nearly 200 facial recognition algorithms.

IBM, Amazon, and Microsoft recently suspended sales of their facial recognition tools to domestic law enforcement over concerns about racial inequity and irresponsible use. For similar reasons, municipalities such as San Francisco and Boston have banned the technology's use by local government authorities altogether.

Along with many Americans, I have deep concerns about its use, whether to counter terrorism in airports or to monitor protected First Amendment gatherings. Not only does the government's adoption of algorithmic surveillance tools pose substantial privacy and civil liberties risks; these tools likely will also disproportionately target people of color.

Privacy concerns have long been a fundamental challenge to the adoption of facial recognition systems. We must broaden this scrutiny to include civil rights concerns. We must ensure that facial recognition technologies are consistent with our social expectations of equality.

To proceed otherwise risks encoding into the machines of tomorrow the very racism and inequality that are endemic to our current system.

Baking Equality Into Design

We need equality by design. Privacy professionals often speak of privacy by design, a concept that involves proactively integrating privacy protections into the development and operation of new devices and technologies so that privacy is "baked" into the lifecycle of the project.

Racial inequities are equally worthy of identification early in the design of new products or technologies. We need not await a race "breach" such as the false positive identification of Robert Williams to assess vulnerabilities in facial recognition algorithms. We already know these algorithmic biases exist and that they often mirror human cognitive biases.

Algorithms are designed by humans and trained to emulate human decision making, which is itself inherently biased. Knowing these risks and the likely results, we have a responsibility to proactively identify, assess, and remediate inequalities that are consciously or unconsciously built into new systems. This is equality by design, a principle that equality must be built into technology and system design proactively and be fully present on day one, before the government makes any use of the technology.

Call for Equality Impact Assessments

The DHS is already required to publish a privacy impact assessment (PIA) when developing a new program or technology that collects or uses personally identifiable information. These assessments identify the privacy risks of the data collection, as well as measures to mitigate those privacy risks. While PIAs should be required of any government agency seeking to deploy invasive technologies like facial recognition, they are not sufficient to assess the impact of these new programs on equality and civil rights.

Government agencies should also be required to publish an equality impact assessment (EIA), which would examine the likely effect a new technology or program would have on key demographics such as race, gender, or disability.

While the EIA should seek to identify and mitigate negative impacts, particularly those that rise to the level of unlawful discrimination, an EIA also offers the opportunity to identify new strategies to promote equality that were previously overlooked or unsuccessfully implemented.

EIAs have been used in the U.K. for these purposes, including an evaluation of the London Metropolitan Police's proposed facial recognition program. While EIAs will not themselves root out all inequities, they can be a critically important, evidence-based accountability tool for ensuring that equality considerations are identified and addressed by decision makers.

Now Is the Time to Correct the Biometric Path

Congress should pass a law mandating EIAs and PIAs in the adoption of decision-making technologies like facial recognition. We all inherited a criminal justice and law enforcement system built upon centuries of inequality, with the resulting inequities baked into both its structure and its operations. Facial recognition systems, however, are being created as we speak. We have the opportunity to ensure that these historic inequities and cultural biases do not produce systems of algorithmic injustice. We owe it to future generations to get this right.

We are at a crossroads for biometric technologies like facial recognition; course corrections can still be made. If we get this moment wrong, we risk perpetuating a reality where some demographic groups enjoy a contactless and frictionless experience with the government while others, like Robert Williams, must show their IDs, submit to secondary interrogations, overcome a presumption of guilt, and face detention in a system that is unable or unwilling to distinguish between people of color.

As Mr. Williams noted, "I wouldn't be surprised if others like me became suspects but didn't know that a flawed technology made them guilty in the eyes of the law."

It would be a shame if we squander this moment by hastily embracing new technologies that have not been cleared of the same racial biases that have long contaminated our social structures.

We can and must take the time to ensure that these systems are designed with equality in mind. Until equality protections are required by law, government use of facial recognition technology should be minimized or banned.

Originally published by Bloomberg Law