Facial Recognition Beyond the Lab

Photo by Chris Yang on Unsplash

Writing in BuzzFeed News, Ryan Mac reports on an investigation into the use of facial recognition software from NYC start-up Clearview AI, drawing on data from thousands of public agencies across the country whose employees have used the software, often without institutional approval, in their investigative work. The unregulated and largely unmonitored use of this tool raises serious questions about privacy and about the rights of individuals whose photos were scraped from online sources without their consent.

The article details how Clearview AI distributed free trials of the software to individual employees of a variety of organizations, including the military, police departments, and several public schools. Although Clearview AI has made unverified claims about the accuracy and reliability of its tool, in at least some cases the software appears to be less reliable at recognizing the faces of people of color. All of this raises serious concerns about privacy and civil rights violations and highlights the need to regulate tools of this sort. Mac reports that both Microsoft and Amazon have announced temporary moratoriums on the use of their facial recognition tools until government regulators can address some of the serious issues surrounding their use.
