IBM to no longer develop or research facial recognition
IBM will no longer develop or offer general-purpose facial recognition or analysis software, said Arvind Krishna, the company's CEO.
Krishna expressed deep concern about the racial bias often found in artificial intelligence systems today. He also voiced support for a new bill aiming to reduce police violence and increase police accountability.
He further emphasized the need to audit artificial intelligence tools, particularly when they are used in law enforcement, and called for national policies that bring transparency and accountability to policing through equipment such as body cameras and modern data analytics techniques.
The decision follows nationwide protests in the US over the death of George Floyd. Krishna explained the company's exit from the controversial business of facial identification as a service in his letter, saying, "IBM firmly opposes and will not condone uses of any facial recognition technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency." Krishna added, "We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."
According to Krishna, any new bill should provide grants for such hardware only if it is used under publicly developed protocols.
Krishna also said, "We need to create more open and equitable pathways for all Americans to acquire marketable skills and training."
Krishna's letter also says that vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.
Facial recognition software has improved immeasurably over the last decade thanks to advances in artificial intelligence. At the same time, the technology has been shown to suffer from bias along lines of age, race, and ethnicity, making the tools unreliable for security and law enforcement.
IBM had initially tried to address the problem of bias in facial recognition, releasing a public data set in 2018. In January 2019, however, the company was found to be sharing a separate training data set of nearly one million photos taken from Flickr without the subjects' consent.
IBM defended the move at the time, saying the data could only be accessed by verified researchers and consisted solely of publicly available images.
Facial recognition does not appear to be a significant revenue generator for the company. It remains unclear how IBM will continue its AI research in this area.