Human error?
Humans bring gender biases to their interactions with Artificial Intelligence (AI), according to a study – “AI’s assigned gender affects human-AI cooperation” by Sepideh Bazazi, Jurgis Karpus and Taha Yasseri – that was published in iScience.
The study, involving 402 participants, found that people exploited female-labelled AI and distrusted male-labelled AI to a similar extent as they do human partners bearing the same gender labels.
Notably, exploitation of female-labelled AI was even more prevalent than exploitation of human partners carrying the same gender label.
According to the researchers, the findings show that gendered expectations from human-human settings extend to human-AI cooperation, and “this has significant implications for how organizations design, deploy, and regulate interactive AI systems.”
Key findings:
- Patterns of exploitation and distrust toward AI agents mirrored those seen with human partners carrying the same gender labels.
- Participants were more likely to exploit AI agents labelled female and more likely to distrust AI agents labelled male.
- Assigning gender to AI agents can shape cooperation, trust, and misuse, with implications for product design, workplace deployment, and governance.
Bazazi, first author of the study, said: “As AI becomes part of everyday life, our findings that gendered expectations spill into human-AI cooperation underscore the importance of carefully considering gender representation in AI design, for example, to maximize people’s engagement and build trust in their interactions with automated systems.
“Designers of interactive AI agents should recognise and mitigate biases in human interactions to prevent reinforcing harmful gender discrimination and to create trustworthy, fair, and socially responsible AI systems.”
Yasseri, co-author of the study, said: “Our results show that simply assigning a gender label to an AI can change how people treat it. If organizations give AI agents human-like cues, including gender, they should anticipate downstream effects on trust and cooperation.”
Karpus, co-author of the study, added: “This study raises an important dilemma. Giving AI agents human-like features can foster cooperation between people and AI, but it also risks transferring and reinforcing unwelcome existing gender biases from people’s interactions with fellow humans.”