AWS has introduced new software aimed at making biometrics easier and more practical for users. With the new software, facial changes are recognized more reliably and fewer mistakes are made.

Removing the need for annotation makes testing for biases much more practical.
In recent years, bias in this technology's algorithms has become a central research topic across various AI disciplines. Interest in the topic has grown following a 2018 study of biases in facial recognition software, which found that error rates varied across demographic groups.

At this year’s European Conference on Computer Vision (ECCV), a new method for assessing bias in facial recognition systems was presented that does not require identity-annotated data. Although the method only estimates the performance of a model on data from different demographic groups, our experiments show that these estimates are accurate enough to detect performance differences that are indicative of bias.

This result – the ability to predict the relative performance of a face recognition model without test data annotated with face identities – was surprising, and it suggests an evaluation paradigm that should make it much more practical for face recognition software developers to test their models for bias.

Based on hierarchical clustering of the test samples, we can compute error bounds for our accuracy estimates, and our experiments show that even after accounting for that error, our approach still gives a clear signal of performance disparities. This methodology will help artificial intelligence practitioners working on face recognition and similar biometric tasks to verify the fairness of their models and make these systems more reliable.
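The general idea can be illustrated with a small sketch. The article does not specify the authors' actual algorithm, so the following is only a hypothetical illustration of the clustering-based approach it describes: hierarchically cluster face embeddings so that each cluster acts as a pseudo-identity, then estimate verification accuracy from same-cluster ("genuine") and cross-cluster ("impostor") pairs, with no identity annotations required. The function name, the cosine metric, and the cutoff/threshold values are all assumptions, not details from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def estimate_group_accuracy(embeddings, threshold, cluster_cutoff=0.5):
    """Estimate verification accuracy for one demographic group's samples
    without identity labels (illustrative sketch, not the paper's method).

    Hierarchically cluster the embeddings; treat each cluster as a
    pseudo-identity. A pair is scored correct when the model's decision
    (distance below `threshold` = same person) matches the pseudo-labels.
    """
    # Pairwise cosine distances between all embeddings in this group.
    dists = pdist(embeddings, metric="cosine")
    # Average-linkage hierarchical clustering; cut the tree at
    # `cluster_cutoff` to obtain pseudo-identity labels.
    labels = fcluster(linkage(dists, method="average"),
                      t=cluster_cutoff, criterion="distance")
    dmat = squareform(dists)
    n = len(labels)
    correct = total = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_pseudo_id = labels[i] == labels[j]
            accepted = dmat[i, j] < threshold
            correct += int(same_pseudo_id == accepted)
            total += 1
    return correct / total
```

Running this separately on samples from different demographic groups and comparing the resulting estimates would then surface the kind of performance disparity the article discusses; per the article, error bounds on such estimates can also be derived from the clustering itself.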

Editor @ DevStyleR