Microsoft is limiting access to its facial recognition tools, citing the societal risks that artificial intelligence systems may pose.

On Tuesday, the tech company published a 27-page "Responsible AI Standard" that outlines its goals for equitable and trustworthy AI. To comply with the standard, Microsoft is restricting access to the facial recognition tools in Azure Face API, Computer Vision, and Video Indexer.

In practice, this means that Microsoft will restrict access to some functions of its facial recognition services (known as Azure Face) while eliminating others entirely. Users must apply for access to Azure Face's facial recognition features, telling Microsoft exactly how and where they will deploy their systems. Lower-risk use cases, such as automatically blurring faces in images and videos, will remain openly available.

Microsoft is retiring Azure Face's ability to identify "attributes such as gender, age, smile, facial hair, hair, and makeup" in addition to removing public access to its emotion recognition tool.
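To make the change concrete, here is a minimal sketch of the kind of Face API "detect" request whose attribute analysis is being retired. The resource endpoint is a placeholder, and the snippet only constructs the request URL rather than calling the service; the attribute list mirrors the one named above.

```python
from urllib.parse import urlencode

# Hypothetical Azure resource endpoint; real callers substitute their own.
ENDPOINT = "https://example-resource.cognitiveservices.azure.com"

def build_detect_url(attributes):
    """Build a Face API v1.0 detect URL requesting facial attributes."""
    query = urlencode({
        "returnFaceId": "false",
        "returnFaceAttributes": ",".join(attributes),
    })
    return f"{ENDPOINT}/face/v1.0/detect?{query}"

# The attribute analyses the article says Microsoft is retiring.
retired = ["age", "gender", "smile", "facialHair", "hair", "makeup"]
url = build_detect_url(retired)
print(url)
```

Under the new policy, requests like this one would no longer return attribute predictions, while detection-only uses (for example, locating faces to blur them) remain available.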

Facial recognition technology has raised growing civil liberties and privacy concerns. Studies have shown that the technology misidentifies female subjects and subjects with darker skin tones at disproportionately high rates.

This has serious implications when used to identify criminal suspects or in surveillance situations. Other companies, such as Amazon and Facebook, have reduced or eliminated their facial recognition tools.

Microsoft's guidelines for balanced and responsible AI go beyond facial recognition. They also apply to Azure AI's speech technologies, including the synthetic voices produced by Custom Neural Voice. After a March 2020 study revealed high error rates in speech-to-text systems when used by African American and Black speakers, Microsoft said it had taken steps to improve the software.

Microsoft announced on Tuesday that new customers will need to apply to use Azure's Face API, and returning customers will have a year to reapply to continue using the facial recognition software.

The move comes as Microsoft makes its Responsible AI Standard framework public for the first time. This is also the first major update to the standard since its introduction in late 2019, and it promises greater fairness in speech-to-text technology, stricter controls over neural voice, and "fit for purpose" requirements that rule out the emotion-detection system.

Microsoft isn't the first company to reconsider facial recognition. IBM ceased work in the field over concerns that its projects could be used for human rights violations. Even so, this is a significant change of heart for one of the largest cloud and computing companies in the world.