Face AI filters are an increasingly popular way to enhance your selfies. You can change your skin tone, smooth out lines and wrinkles, add blush or a touch of lipstick, lengthen your eyelashes, and more. Some of these filters aim to improve your facial appearance, while others are simply for fun.
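To make the idea concrete, here is a minimal sketch of how a basic "skin smoothing" filter can work: detect a face, then blur skin texture inside the detected region while keeping edges sharp. The cascade file, function name, and smoothing parameters are illustrative choices for this sketch, not any particular app's actual pipeline.

```python
# Minimal skin-smoothing sketch using OpenCV (illustrative only).
import cv2

def smooth_faces(image_path: str, output_path: str) -> None:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Haar cascade face detector bundled with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        roi = img[y:y + h, x:x + w]
        # Bilateral filtering softens skin texture while preserving strong
        # edges (eyes, lips) -- the core trick behind most smoothing filters.
        img[y:y + h, x:x + w] = cv2.bilateralFilter(roi, 9, 75, 75)

    cv2.imwrite(output_path, img)

if __name__ == "__main__":
    smooth_faces("selfie.jpg", "selfie_smoothed.jpg")
```

Real beautification filters go much further, using facial landmarks and learned models, but the principle of selectively reshaping or re-texturing detected facial regions is the same.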
But there is one thing that many people don't know about the technology behind these face-transforming effects: the underlying algorithms can be deeply biased against women, because the data they learn from, such as which faces score well in beauty ratings, encodes narrow standards of beauty. This is a problem, because it means women are effectively being told that their faces are not beautiful enough, with no real basis for that judgment.
What's more, these AI filters can also produce "faces" that belong to no real person at all. This is especially concerning for young children, whose facial features are still developing and who are being pushed to conform to unrealistic standards of beauty.
Luckily, there are ways to stop face AI from contributing to racial inequality and to make the technology more inclusive. For starters, lawmakers can regulate the use of these technologies. They can also require that developers and companies demonstrate that their algorithms meet standards for accuracy, privacy, and racial equity.
A growing number of advocacy groups have criticized these algorithms for their lack of racial equity, and several Congressional hearings have explored the problem. These efforts have helped shape an ongoing conversation and have led to a few significant steps forward in the face recognition industry.
The Safe Face Pledge calls on industry players to make their algorithms more racially equitable and to evaluate the impact of their applications on racial minorities. In addition, the Algorithmic Accountability Act, introduced in Congress in 2019, would require companies to assess their algorithms for accuracy, bias, and privacy risks, and to report on the results of those assessments.
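The kind of assessment these measures point toward can be quite simple in principle: measure how often a system errs for each demographic group and compare. The sketch below assumes a hypothetical `model.predict` interface and a dataset of labeled records, purely for illustration.

```python
# Hedged sketch of a per-group accuracy audit (interfaces are assumptions).
from collections import defaultdict

def error_rates_by_group(records, model):
    """records: iterable of (image, true_identity, group_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for image, true_identity, group in records:
        totals[group] += 1
        # Count a misidentification whenever the prediction misses.
        if model.predict(image) != true_identity:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

def max_disparity(rates):
    """Gap between the worst-served and best-served group."""
    return max(rates.values()) - min(rates.values())
```

In practice, audits of this sort also need representative test cohorts and careful choice of error metrics, but even a basic per-group comparison makes disparities visible instead of hiding them in an overall accuracy number.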
But the complexities of the face-matching problem have made it difficult to develop effective policy solutions. Meanwhile, reports have shown that some algorithms, such as Amazon's Rekognition, can discriminate against people of color.
These results have prompted immediate responses from companies such as Microsoft and IBM, who pledged to make their systems less prone to bias. They also promised to modify testing cohorts and improve data collection on specific demographics, such as Black women.
However, these changes may not be sufficient to address the discrimination rooted in these algorithms. In fact, Georgetown Law's recent report found that law enforcement agencies in the U.S. continue to rely on them. These algorithms are far from perfectly accurate, have a history of disproportionately targeting people of color, and can be used to perpetuate racial bias.
The most effective way to curb racial bias in these AI face-matching algorithms is strict regulation and oversight. That can be accomplished through legislation that makes it a crime to use them for surveillance, by educating the public, and by demanding accountability from those who build these tools.