The artificial intelligence field is too white and too male, researchers say

The artificial intelligence industry is facing a "diversity crisis," researchers at the AI Now Institute said in a report released today, raising key questions about the direction of the field.

Women and people of color are deeply underrepresented, the report found, citing studies showing that roughly 80 percent of AI professors are men, while women make up only 15 percent of the AI research staff at Facebook and 10 percent at Google. People of color are also marginalized, making up only a fraction of the staff at leading technology companies. The result is a workforce often driven by white, male perspectives, building tools that frequently affect very different groups of people. "This does not reflect the diversity of the people who are being affected by these systems," says AI Now Institute co-director Meredith Whittaker.

Worse yet, efforts to address the problem by fixing the "pipeline" of potential job candidates have largely failed. "Despite many decades of 'pipeline studies' that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry," the researchers write.

The researchers offer several suggestions for improving the situation. Companies, they say, could increase transparency by publishing more compensation data, broken down by race and gender, and by releasing transparency reports on harassment and discrimination.

The lack of diversity is a problem throughout the technology industry, but it presents specific dangers in AI, where potentially biased technology, such as facial recognition, can disproportionately harm historically marginalized groups. The researchers cite tools like a program introduced in 2017 that scans faces to determine sexuality as echoes of the injustices of the past. Rigorous testing is needed. But more than that, the makers of AI tools must be willing not to build the riskiest projects at all. "We need to know that these systems are safe and fair," says AI Now Institute co-director Kate Crawford.

Technology industry employees have taken a stand on some of AI's biggest problems, pressuring their companies to abandon or review the use of sensitive tools that could harm vulnerable groups. Amazon workers have questioned executives about the company's facial recognition product. More recently, Google workers pushed back against an AI review board that included the president of the Heritage Foundation, highlighting the lobbying group's history of opposing LGBTQ rights. The company soon dissolved the board entirely.

"The diversity crisis in AI is well documented and is powerful," the researchers conclude. "It can be seen in unequal workplaces across the industry and in the academic world, in disparities in recruitment and promotion, in AI technologies that reflect and amplify biased stereotypes, and in the resurgence of biological determinism in the automated systems ".