IBM Releases Diverse Dataset For Fairer Facial Recognition


Posted on Sep 20, 2019

Computers have advanced significantly since Charles Babbage's day. But even the most progressive machines are only as good as the humans who program them.

And by "good" I mean "moral."

IBM on Tuesday released a large, diverse dataset aimed at advancing the study of fairness and accuracy in facial recognition technology.

"The AI systems learn what they're taught, and if they are not taught with robust and diverse datasets, accuracy and fairness could be at risk," IBM Fellow John Smith wrote in an announcement.

"For the facial recognition systems to perform as desired—and the outcomes to become increasingly accurate—training data must be diverse and offer a breadth of coverage," he continued. "The images must reflect the distribution of features in faces we see in the world."

Available now to the global research community, Diversity in Faces (DiF) provides annotations of 1 million human facial images. The IBM team used 10 coding schemes, covering both objective measures (craniofacial features) and subjective assessments (human-labeled predictions of age and gender).

"Our initial analysis has shown that the DiF dataset provides a more balanced distribution and broader coverage of facial images compared to previous datasets," Smith said.

Interested analysts can request access to the Diversity in Faces dataset online.
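The "balanced distribution" Smith describes can be checked with a simple tally: for each annotation category, compute what share of the dataset each label holds. The sketch below uses hypothetical records and field names (`age_bracket`, `gender`); the real DiF schema and its 10 coding schemes differ.

```python
from collections import Counter

# Hypothetical annotation records; real DiF annotations use different fields.
annotations = [
    {"age_bracket": "18-29", "gender": "female"},
    {"age_bracket": "18-29", "gender": "male"},
    {"age_bracket": "30-44", "gender": "female"},
    {"age_bracket": "45-64", "gender": "male"},
    {"age_bracket": "30-44", "gender": "male"},
    {"age_bracket": "18-29", "gender": "female"},
]

def coverage(records, key):
    """Return each label's share of the dataset for one annotation key."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

print(coverage(annotations, "gender"))       # even split in this toy sample
print(coverage(annotations, "age_bracket"))  # skewed toward the 18-29 bracket
```

A heavily lopsided share for any label is a first warning sign that a model trained on the data may underperform on the underrepresented group.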
IBM urges others to contribute to the growing body of research "and advance this important scientific agenda."

The MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) is doing just that: its researchers created a new algorithm that can allegedly "de-bias" facial recognition data automatically by resampling it into a more balanced form.

The algorithm can learn a specific task, like face detection, as well as the underlying structure of the training data, allowing it to identify and minimize hidden prejudices.

In tests, the system reduced "categorical bias" by more than 60 percent compared to "state-of-the-art" facial detection models.



Written by admin

