IBM CEO Arvind Krishna announced today that the company has decided to withdraw from selling facial recognition tech. He also voiced support for a new bill aiming to increase police accountability and reduce violence.

In a letter reported by CNBC, written in support of the newly introduced Justice in Policing Act, Krishna details IBM's exit from the controversial business of selling facial identification:

"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency."

Facial recognition has drawn criticism from lawmakers and privacy advocates, with several cities already banning the municipal use of the technology. In 2019, Democratic lawmakers in the U.S. pushed to stop the use of the technology in public housing units. However, facial recognition software is still being used in business establishments, such as shopping centers and airports. Police departments are using the tech as well.

As for IBM's move, it's worth noting that facial recognition isn't exactly a moneymaker for the company. To be fair, the technology is still in its infancy, and there aren't many applications where an enterprise vendor like IBM makes sense.

Amazon's Rekognition software has received similar backlash. IBM profiting from the same type of technology wouldn't be a good business move, especially given what's happening in America these days.

The capabilities of facial recognition technology could themselves be seen as an excuse to add surveillance to an event or location where it wasn't previously used. This sparks the usual "Big Brother" criticisms of a watched society.

And while facial recognition algorithms may be neutral themselves, the databases they are tied to are anything but. Krishna's letter also called out vendors of such systems, saying that it is their responsibility to make sure that AI is tested for bias, especially if it is to be used by law enforcement, seemingly throwing shade on Amazon.

Critics of facial recognition have cited studies showing that the technology has lower accuracy rates when identifying minorities and women. Beyond accuracy, it can also be exploited as a surveillance tool, as in the case of Clearview AI, which raised concerns over the power of facial recognition by allowing users to identify people through comparing their faces to photos scraped from the internet.