A core priority for the Cognitive Services team is to ensure that its AI technology, including facial recognition, is developed and used responsibly. While we have adopted six essential principles to broadly guide our work in AI, we recognized early on that the unique risks and opportunities of facial recognition technology call for their own set of guiding principles.
To strengthen our commitment to these principles and create a stronger foundation for the future, Microsoft is announcing significant updates to its Responsible AI Standard, the internal playbook that guides our AI product development and delivery. As part of aligning our products with this new standard, we’ve updated our approach to facial recognition, including adding a new Limited Access policy, removing AI classifiers of sensitive attributes, and strengthening our investments in fairness and transparency.
Safeguards for responsible use
We continue to provide consistent and clear guidance on the responsible use of facial recognition technology and to advocate for legislation that regulates it, but we recognize that we must do more.
Starting today, new customers must request access to use facial recognition operations in Azure Face API, Computer Vision, and Video Indexer. Existing customers have one year to request and obtain approval for continued access to the facial recognition services based on their deployed use cases. By introducing Limited Access, we are adding an additional layer of verification to the use and delivery of facial recognition to ensure that use of these services is consistent with Microsoft’s Responsible AI Standard and contributes to end-user and societal benefit. This includes introducing use-case and customer eligibility requirements to gain access to these services. Examples of supported use cases and use cases to avoid can be found here. Effective June 30, 2023, existing customers will no longer be able to access facial recognition features if their facial recognition request has not been approved. Submit an application form for face and celebrity recognition operations in Face API, Computer Vision, and Azure Video Indexer here, and our team will get back to you via email.
Face detection features (including detection of blur, exposure, glasses, head pose, landmarks, noise, occlusion and face bounding box) remain generally available and do not require an application.
In another change, we will retire facial analysis capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup. We worked with internal and external researchers to understand the limitations and potential benefits of this technology and to weigh the trade-offs. In the case of emotion classification in particular, these efforts raised important questions about privacy, the lack of consensus on a definition of “emotions,” and the inability to generalize the link between facial expression and emotional state across use cases, regions, and demographics. API access to capabilities that predict sensitive attributes also opens up a variety of ways they can be misused, including stereotyping people, discriminating against them, or unfairly denying them services.
To mitigate these risks, we have chosen not to support a general-purpose system in the Face API that purports to infer emotional states, gender, age, smile, facial hair, hair, and makeup. Detection of these attributes will no longer be available to new customers beginning June 21, 2022, and existing customers have until June 30, 2023 to stop using these attributes before they are retired.
While API access to these attributes will no longer be available to customers for general-purpose use, Microsoft recognizes that these capabilities can be valuable when used in a set of controlled accessibility scenarios. Microsoft remains committed to supporting technology for people with disabilities and will continue to leverage these capabilities in support of that goal by including them in applications such as Seeing AI.
Responsible Development: Improving performance for inclusive AI
In line with Microsoft’s AI Principle of Fairness and the supporting goals and requirements outlined in the Responsible AI Standard, we are increasing our investments in fairness and transparency. We conduct responsible data collection to identify and mitigate differences in technology performance across demographic groups and are exploring ways to present this information in a way that would be insightful and actionable for our customers.
Given the potential socio-technical risks posed by facial recognition technology, we seek to bring the expertise of statisticians, AI/ML fairness experts, and human-computer interaction experts to this effort, both inside and outside Microsoft. We also consulted with anthropologists to deepen our understanding of human facial morphology and ensure our data collection reflects the diversity our customers encounter in their applications.
While this work is underway, and in addition to the safeguards described above, we are providing guidance and tools to enable our customers to use this technology responsibly. Microsoft is making new tools and resources available to customers to evaluate the performance of the models against their own data and to understand the technology’s limitations in their own deployments. Azure Cognitive Services customers can now take advantage of Microsoft’s open-source Fairlearn package and Fairness Dashboard to measure the fairness of Microsoft’s face verification algorithms against their own data, allowing them to identify and address potential fairness issues that may affect different demographic groups before deploying their technology. We encourage you to contact us with questions about carrying out a fairness assessment with your own data.
We’ve also updated the transparency documentation with guidance to help our customers improve the accuracy and fairness of their systems: by incorporating meaningful human review to detect and correct instances of misidentification or other errors, by supporting individuals who believe their results were incorrect, and by identifying and addressing variations in accuracy due to differing operating conditions.
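One simple way to operationalize the human-review guidance is a confidence-band routing rule: high-confidence matches are auto-accepted, low-confidence ones auto-rejected, and the uncertain middle band is escalated to a person. The thresholds and function names below are illustrative assumptions, not values from the transparency documentation.

```python
# Hypothetical human-in-the-loop routing for face verification results.
# Verification confidence is assumed to be a score in [0, 1]; both
# thresholds are illustrative and should be tuned per deployment.
ACCEPT_THRESHOLD = 0.85  # auto-accept at or above this confidence (assumed)
REJECT_THRESHOLD = 0.40  # auto-reject below this confidence (assumed)

def route_verification(confidence: float) -> str:
    """Decide whether a verification result needs human review."""
    if confidence >= ACCEPT_THRESHOLD:
        return "auto-accept"
    if confidence < REJECT_THRESHOLD:
        return "auto-reject"
    # Uncertain band: a person reviews the match, and corrections can be
    # logged to monitor for recurring misidentification patterns.
    return "human-review"

print(route_verification(0.92))  # auto-accept
print(route_verification(0.60))  # human-review
print(route_verification(0.10))  # auto-reject
```

Keeping the uncertain band explicit also gives individuals who contest a result a defined path: their case can be re-routed through the same human-review step.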
When working with customers using our Face service, we also found that some errors originally attributed to fairness issues were in fact caused by poor image quality. If the image someone submits is too dark or blurry, the model may not be able to match it correctly. We recognize that poor image quality can be unfairly concentrated among particular demographic groups.
That’s why Microsoft is offering its customers a new Recognition Quality API that flags problems with lighting, blurring, occlusions, or head angles in images submitted for face verification. Microsoft also offers a reference app that provides real-time suggestions to help users capture higher-quality images that are more likely to produce accurate results.
To use the image quality attribute, users need to call the Face detection API. See the Face QuickStart to test the API.
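The sketch below shows how a caller might gate images on the `qualityForRecognition` attribute returned by the Face detect operation. The endpoint call itself is omitted; the `sample_response` payload is a trimmed, illustrative example of the response shape, and the thresholds follow Microsoft's published guidance of using "high"-quality images for enrollment and "medium" or better for identification.

```python
# Sketch of gating images on the qualityForRecognition attribute that the
# Face detect operation can return. The sample payload is illustrative;
# the gating helper runs locally without calling the service.
import json

# Quality ratings are ordered low < medium < high.
_QUALITY_RANK = {"low": 0, "medium": 1, "high": 2}

def usable_for(purpose: str, quality: str) -> bool:
    """Return True if an image's quality rating suffices for the purpose."""
    required = {"enrollment": "high", "identification": "medium"}[purpose]
    return _QUALITY_RANK[quality.lower()] >= _QUALITY_RANK[required]

# Trimmed example of the JSON shape returned when the caller requests
# returnFaceAttributes=qualityForRecognition (face ID is a placeholder).
sample_response = json.loads("""
[{"faceId": "00000000-0000-0000-0000-000000000000",
  "faceAttributes": {"qualityForRecognition": "medium"}}]
""")

quality = sample_response[0]["faceAttributes"]["qualityForRecognition"]
print(usable_for("identification", quality))  # True
print(usable_for("enrollment", quality))      # False
```

In practice, images that fail the check would be rejected at capture time, with the reference app's real-time suggestions prompting the user to retake them.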
Looking to the future
We are excited about the future of Azure AI and what responsibly developed technologies can do for the world. We thank our customers and partners for embracing responsible AI practices and for walking with us as we adapt our approach to new responsible AI standards and practices. As we roll out the new Limited Access policy for our facial recognition services alongside the new computer vision capabilities, your feedback will continue to advance our understanding, practices, and technology for responsible AI.
Learn more in the Limited Access frequently asked questions.