Research Reveals AI Bias Against Women in Leadership
New research from the Tasmanian School of Business and Economics at the University of Tasmania has revealed that AI-generated content can perpetuate harmful gender biases.
By analyzing AI-generated content about what makes a ‘good’ and a ‘bad’ leader, Dr. Toby Newstead and Dr. Bronwyn Eager found that men were consistently depicted as strong, courageous, and competent, while women were often portrayed as emotional and ineffective. The findings are published in the journal Organizational Dynamics.
“By exploring how AI tools generate responses to questions around good and bad leadership, we were able to show that AI reflects and perpetuates known gender biases,” Dr. Newstead said.
“Any mention of women leaders was completely omitted in the initial data generated about leadership, with the AI tool providing zero examples of women leaders until it was specifically asked to generate content about women in leadership.”
“Concerningly, when it did provide examples of women leaders, they were proportionally far more likely than male leaders to be offered as examples of bad leaders, falsely suggesting that women are more likely than men to be bad leaders.”
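The comparison described above hinges on proportions rather than raw counts: even if an AI tool names fewer women overall, what matters is the share of each gender's examples that are labelled bad. A minimal sketch of that kind of tally is below; the sample data is invented for illustration and is not the study's dataset.

```python
# Hypothetical audit sketch: tally the gender of AI-generated leader
# examples by label ("good" vs "bad") and compare the proportion of
# each gender's examples that were offered as bad leaders.
from collections import Counter

def bias_proportions(examples):
    """examples: list of (gender, label) pairs parsed from AI output."""
    totals = Counter(g for g, _ in examples)               # examples per gender
    bad = Counter(g for g, lbl in examples if lbl == "bad")
    return {g: bad[g] / totals[g] for g in totals}         # share labelled bad

# Invented sample: men appear more often overall, but a larger share
# of the women offered are labelled bad.
sample = ([("man", "good")] * 8 + [("man", "bad")] * 2
          + [("woman", "good")] * 2 + [("woman", "bad")] * 2)
print(bias_proportions(sample))  # men: 0.2 bad, women: 0.5 bad
```

A gap like the one in this invented sample (50% vs. 20%) is what "proportionally far more likely" refers to: the disparity persists even though women supply fewer examples in total.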
Generative AI technologies, which use machine learning to understand and create content, are trained on vast amounts of data from the Internet, supplemented by human feedback intended to reduce the generation of harmful or biased content.
Dr. Eager said the findings highlight the need for further oversight of, and investigation into, AI tools as they become part of daily life.
“Biases in AI models have far-reaching implications beyond just shaping the future of leadership. With the rapid adoption of AI across all sectors, we must ensure that potentially harmful biases relating to gender, race, ethnicity, age, disability, and sexuality aren’t preserved,” she said.
“Our research highlights the need for ongoing monitoring and evaluation of AI-generated content to ensure that it is not perpetuating harmful biases. We hope that our research will contribute to a broader conversation about the responsible use of AI in the workplace.”