It has been a week of public reckoning for police use of facial recognition software. On Monday, IBM disavowed the technology entirely and CEO Arvind Krishna condemned its use “by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms.”
Amazon responded Wednesday by putting a one-year moratorium on police use of its Rekognition software, closely followed by Microsoft’s promise not to sell the technology to US police.
The announcements came in the wake of weeks of intense protests against racism and police brutality sparked by the death of George Floyd.
But they also followed years of academic research and activism highlighting the biases embedded in commercial facial recognition software and its grave potential for misuse.
To be clear, IBM, Amazon, and Microsoft make up a relatively small share of the market for police facial recognition software.
While Amazon did sell Rekognition to a number of law enforcement agencies, most police departments contract with smaller, lesser-known firms such as Clearview AI. None of those firms has stopped making deals with law enforcement.
But the high-profile exits of tech companies are still likely to have a major impact on facial recognition in the US. All three firms called on members of Congress to pass legislation regulating the industry, ratcheting up pressure to create a uniform, national policy governing the technology’s use.
By inviting regulation, those firms are accepting its inevitability and welcoming the chance to have a hand in shaping the rules. For example, in Washington—the first US state to regulate facial recognition—the laws were literally written by a Microsoft employee.
The announcements may also have a chilling effect on fundamental research. IBM, Microsoft, and Amazon play an outsized role in producing basic AI research and in driving academic interest in the field.
Their condemnations, along with souring public opinion, could leave a stain on facial recognition that discourages researchers from advancing the available tools.
Among academics, there’s a growing conversation about whether research on facial recognition can ever be ethical.
After all, as UT Austin AI ethicist Maria De-Arteaga pointed out when we interviewed her this week, fixing all the biases that make facial recognition less accurate for women and people of color than for white men would simply yield a more powerful surveillance tool, one that could be more readily used to target marginalized communities.
(By arrangement with Quartz)