
Diversity in hiring a key to eradicating AI bias

HR is turning to AI systems to help rank and sort job candidates. But such systems are mostly designed by men, who may unknowingly encode their own biases.

The people writing AI code or building AI models are overwhelmingly male, according to a new study. That lack of diversity is correlated with the risk of AI bias in systems such as those HR uses to rank job candidates.

In a study released this week, the AI Now Institute at New York University pointed out that 80% of AI professors are men, and that AI research staffs at some of the most prominent technology companies in the United States are overwhelmingly male -- with women making up just 15% of the staff at Facebook and 10% at Google. The representation of some minorities is even lower.

The paper is titled "Discriminating Systems" and was written by Sarah Myers West, a postdoctoral researcher at the institute; Meredith Whittaker, the co-founder and executive director at AI Now; and Kate Crawford, the institute's co-founder and director of research. It argued that the lack of diversity in AI development can't be separated from the AI bias problem present in algorithms and models. Both are "deeply intertwined."

"Tackling the challenges of bias within technical systems requires addressing workforce diversity, and vice versa," the report said.

Tech's efforts to address diversity are failing

Tech's efforts to address diversity by improving the talent pipeline have not addressed "workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether," the paper said.

AI systems can function "as systems of discrimination," as "classification technologies that differentiate, rank and categorize," the report noted.

Kimberly Houser, an attorney and professor of legal studies at the Spears School of Business at Oklahoma State University, said the AI Now report is a warning that "the future development of AI could create even more inequity because of this lack of diversity." AI can do this either by replicating bias contained in the data itself or because model creators fail to consider data from underrepresented groups, she said.
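
To make that first mechanism concrete, here is a minimal sketch in Python (the data, group labels, and "proxy" feature are invented for illustration, not drawn from the report): a screening model trained on skewed historical hiring decisions reproduces the skew through a correlated proxy feature, even though the protected attribute is never an input.

```python
# Illustrative sketch only: synthetic data showing how a screening model
# trained on biased historical decisions reproduces that bias through a
# proxy feature, even though the protected attribute is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute, withheld from the model: 0 = group A, 1 = group B.
group = rng.integers(0, 2, n)

# Skill is identically distributed across groups; the proxy feature
# (say, membership in a certain professional network) is not.
skill = rng.normal(0.0, 1.0, n)
proxy = (rng.random(n) < np.where(group == 0, 0.7, 0.2)).astype(float)

# Historical hiring decisions rewarded the proxy as much as skill.
hired = (skill + 2.0 * proxy + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

# The model sees only skill and proxy -- yet it inherits the group skew.
X = np.column_stack([skill, proxy])
picks = LogisticRegression().fit(X, hired).predict(X)

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"selection rate, {name}: {picks[group == g].mean():.2f}")
# Despite equal skill distributions, the predicted selection rates diverge.
```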

Houser examined this problem in a forthcoming paper in the Stanford Technology Law Review, titled, "Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making."

AI can mitigate human bias

Houser also hopes that AI can do just the opposite. Her paper suggested "that the way to increase diversity in the tech industry is to reduce the impact of noise and bias present in human decision-making." Rather than injecting bias into a decision-making process, AI might be able to help mitigate the risk of human bias. Diversifying those who build AI models is necessary to make that happen, she said.

"The AI Now report is correct in noting that diversification is not enough and that relying on technical solutions is not enough," she said. "However, diversifying those creating AI is an important first step."

In terms of systems development, "responsible AI requires that attention is paid to data sources and outcomes are monitored for bias," Houser said. She noted that IBM, Accenture, Google, and Microsoft are working on tools to detect bias in automated decision-making.
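
The article doesn't detail what those tools compute, but one common check in this space is the disparate impact ratio behind the EEOC's four-fifths rule: the selection rate of the least-favored group divided by that of the most-favored group, with values below 0.8 treated as a red flag. A minimal sketch follows; the function name and sample data are hypothetical, not any vendor's API.

```python
# Minimal sketch of a disparate impact check on screening outcomes.
# The function and sample data are illustrative, not a vendor API.

def disparate_impact(selected: list[int], group: list[int]) -> float:
    """Selection rate of the least-favored group over the most-favored."""
    rates = []
    for g in set(group):
        members = [s for s, grp in zip(selected, group) if grp == g]
        rates.append(sum(members) / len(members))
    return min(rates) / max(rates)

# Hypothetical outcomes: 1 = advanced to interview, 0 = screened out.
selected = [1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0]
group    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

ratio = disparate_impact(selected, group)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 here; below 0.8 warrants review
```

Toolkits such as IBM's AI Fairness 360 compute this and a range of other group fairness metrics, but the arithmetic above is the core idea.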

"We need to know there is risk, we also need to start working to solve it," she said.

AI Now recommended some actions for improving diversity in hiring, including pay transparency and hiring practices that maximize diversity, such as directing recruitment efforts beyond elite universities.

For users of these systems, AI Now noted that examining systems for AI bias "is almost impossible when these systems are opaque." The report called for transparency, rigorous testing, independent auditing, and ongoing monitoring to test for AI bias, among other recommendations.
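
One way to operationalize the "ongoing monitoring" recommendation is a rolling audit that recomputes a fairness metric over recent decisions and raises an alert when it drifts past a tolerance. The sketch below assumes invented names and thresholds (WINDOW, MIN_RATIO, record()); none come from the report.

```python
# Sketch of an ongoing-monitoring hook for a deployed screening system.
# WINDOW, MIN_RATIO, and the record() interface are illustrative assumptions.
from collections import deque

WINDOW = 500        # number of most recent decisions to audit
MIN_RATIO = 0.8     # four-fifths threshold used as the alert line

recent: deque[tuple[int, int]] = deque(maxlen=WINDOW)  # (selected, group)

def record(selected: int, group: int) -> None:
    """Log one decision, then re-audit the rolling window."""
    recent.append((selected, group))
    ratio = impact_ratio()
    if ratio is not None and ratio < MIN_RATIO:
        print(f"ALERT: disparate impact ratio {ratio:.2f} below {MIN_RATIO}")

def impact_ratio() -> float | None:
    """Selection-rate ratio across the window, or None if data is too thin."""
    rates = []
    for g in (0, 1):
        decisions = [s for s, grp in recent if grp == g]
        if not decisions:
            return None
        rates.append(sum(decisions) / len(decisions))
    return None if max(rates) == 0 else min(rates) / max(rates)
```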

The AI Now Institute's financial supporters include Microsoft, Google, the Ford Foundation, and the MacArthur Foundation. The institute's mission is to "provide independent research on the social implications of artificial intelligence."
