WHEN GOOD ALGORITHMS GO SEXIST: WHY AND HOW TO ADVANCE AI GENDER EQUITY
By Genevieve Smith, Associate Director | Center for Equity, Gender & Leadership (UC Berkeley Haas School of Business), and Ishita Rustagi, Analyst | Center for Equity, Gender & Leadership (UC Berkeley Haas School of Business). Genevieve Smith and Ishita Rustagi are Members of the Women4AI Daring Circle.
Many institutions make decisions based on artificial intelligence (AI) systems that use machine learning (ML), in which algorithms learn from massive amounts of data to find patterns and make predictions. Yet gender bias in these systems is pervasive. Our analysis finds that gender-biased AI has profound impacts on women’s psychological, economic, and health security. It can also reinforce and amplify existing harmful gender stereotypes and prejudices.
Social change leaders and ML systems developers alike must ask: How can we build gender-smart AI that advances gender equity, rather than embedding and scaling gender bias? Both groups have distinct roles to play.
Read about the seven actions social change leaders and ML developers can take to make AI gender-smart, along with new analysis of the impacts of gender-biased AI systems, in our article published with Stanford Social Innovation Review.