Machine learning and artificial intelligence promise to be the future of tech. Facial detection and recognition are among the machine learning advancements that have been widely introduced into everyday applications. Many companies have developed facial recognition software to benefit their consumers in a variety of ways.

Most people know facial detection and recognition best from consumer apps and devices like Snapchat, Instagram, Facebook, and Apple’s iPhone. Snapchat, with the help of software developed at Looksery, launched “filters” that could recognize a person’s face and superimpose an interactive facial modification, such as adding makeup, slimming the face, or giving users dog ears and flower crowns. Facebook and Instagram followed suit, providing the same types of filters but also using facial recognition to identify their users in photographs. Apple went so far as to create Face ID, facial recognition software that allows users to unlock their phones using only camera detection of their faces. Clearly, facial recognition is becoming ubiquitous and provides an enhanced experience. But does it enhance the experience for all users?


In a research study titled Gender Shades, researchers Joy Buolamwini and Timnit Gebru set out to analyze the accuracy of various commercial facial analysis programs with respect to racial and gender biases. The study found that gender classification systems from major vendors like Microsoft and IBM fell short in accuracy when it came to people of color, especially women of color. The programs sometimes misgendered women of color or were unable to even detect faces in certain images. These findings are troubling, especially considering that these are commercial products designed to cater to their consumers – people of all genders and skin tones.
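
To make the kind of audit performed in Gender Shades more concrete, here is a minimal Python sketch, not the study’s actual code, that computes gender-classification error rates per intersectional subgroup; the prediction records and group labels below are purely illustrative.

```python
# Minimal sketch of a Gender Shades-style disparity audit: given a
# classifier's gender predictions plus (hypothetical) skin-tone and gender
# labels for each test image, compute the error rate for every subgroup.
from collections import defaultdict

# (true_gender, skin_tone, predicted_gender) per test image -- illustrative only
results = [
    ("female", "darker", "male"),      # misgendered
    ("female", "darker", "female"),
    ("female", "lighter", "female"),
    ("male",   "darker", "male"),
    ("male",   "lighter", "male"),
    # ... a real audit would have thousands of rows
]

totals = defaultdict(int)
errors = defaultdict(int)
for true_gender, skin_tone, predicted in results:
    group = (true_gender, skin_tone)
    totals[group] += 1
    if predicted != true_gender:
        errors[group] += 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group[0]:>6} / {group[1]:<7} error rate: {rate:.1%}")
```

Comparing these per-subgroup rates side by side, rather than reporting a single overall accuracy, is what exposes the disparities the study describes.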


The bias in machine learning seems to be a reflection of deeper social biases present in our society. Facial recognition software is developed through machine learning on a data set of images. Understandably, such programs only work as well as they are trained: if they aren’t trained to recognize a wide variety of skin tones, they may fall short. The data set needs to be representative and diverse if we wish to resolve this racial bias in facial recognition.
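
As a rough illustration of what checking representativeness can look like in practice, the sketch below tallies the share of each skin-tone bucket in a training set. It assumes hypothetical per-image annotations (for example, Fitzpatrick-scale buckets), which most face data sets do not actually ship with.

```python
# Minimal sketch of a training-set composition audit, assuming each image
# carries a (hypothetical) skin-tone annotation.
from collections import Counter

def audit_composition(annotations):
    """Print the share of each skin-tone bucket in the training data."""
    counts = Counter(annotations)
    total = sum(counts.values())
    for bucket, n in counts.most_common():
        print(f"{bucket:<10} {n:>6} images  ({n / total:.1%})")

# Illustrative numbers only: a heavily skewed data set.
audit_composition(["lighter"] * 830 + ["darker"] * 170)
```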


One method of training these systems has been to use databases of celebrity images, such as the University of Massachusetts Amherst’s “Labeled Faces in the Wild”. A quick glance through this database reveals predominantly lighter skin tones. The disclaimer on the database acknowledges this shortcoming, stating that “many ethnicities have very minor representation or none at all”. It also warns programmers not to use this data set to conclude that a product is “suitable for any commercial purpose”. However, this approach requires each developer to actively seek out images of the people of color who are underrepresented in the data set. If programmers are not themselves aware of this racial bias, that active step may never take place, and the final product may continue to perpetuate the racial bias in facial recognition.
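
For context, LFW is often pulled straight into experiments with off-the-shelf loaders such as scikit-learn’s fetch_lfw_people, shown below. Nothing in that code path surfaces the data set’s demographic skew, which is part of why the disclaimer is easy to overlook; any composition check has to be a deliberate extra step.

```python
# Loading Labeled Faces in the Wild with scikit-learn's built-in helper.
from sklearn.datasets import fetch_lfw_people

# Downloads (and caches) the LFW images on first use.
lfw = fetch_lfw_people(min_faces_per_person=20, resize=0.5)
print(f"{lfw.images.shape[0]} face images of {len(lfw.target_names)} people")

# LFW ships no skin-tone or ethnicity labels, so auditing representation
# would require separate (hypothetical) annotations, as sketched earlier.
```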


Clearly, this is a serious issue that tech companies must fix. When facial recognition software misgenders or misidentifies people of color, it creates a demeaning experience for the consumer. The stakes become even higher when law enforcement is involved: if police rely on facial recognition that misidentifies a person of color as the perpetrator of a crime, it further exacerbates the institutional racism present in today’s world. It is a serious problem that requires an immediate solution.

A major step forward came in California, where the Body Camera Accountability Act (AB 1215) was signed into law, banning police from running facial recognition software on their body cameras for the next three years. It is a positive step toward mitigating the racial injustices present in the U.S. and globally.


In June 2020, many tech giants also took accountability for this issue. IBM was the first to announce that it would no longer offer general-purpose facial recognition software, writing to the U.S. Congress to condemn any use of facial recognition that interfered with “basic human rights and freedoms” or contributed to “racial profiling”. Amazon followed, announcing a one-year moratorium on police use of its Rekognition facial recognition software. These moves come at an important moment of racial reform and provide hope for eradicating racial bias in facial recognition software. Everyone in tech must take a step back, recognize how their software might be contributing to racial injustice, and do something to make a change.



Sources:


[1] Buolamwini, J. and Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 81:1–15. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

[2] Gender Shades project website: http://gendershades.org

[3] Labeled Faces in the Wild, University of Massachusetts Amherst: http://vis-www.cs.umass.edu/lfw/

[4] IBM Policy Blog: https://www.ibm.com/blogs/policy/facial-recognition-sunset-racial-justice-reforms/

[5] The Verge: https://www.theverge.com/2020/6/10/21287101/amazon-rekognition-facial-recognition-police-ban-one-year-ai-racial-bias

[6] Electronic Frontier Foundation: https://www.eff.org/deeplinks/2019/10/victory-california-governor-signs-ab-1215