UNESCO study reveals alarming evidence of regressive gender stereotypes in AI

AI applications have the power to subtly shape the perceptions of millions of people, but a UNESCO study found significant gender bias in the content they generate.


A UNESCO study revealed worrying tendencies in Large Language Models (LLMs) to produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men – four times as often by one model – and were frequently associated with words like “home”, “family” and “children”, while male names were linked to “business”, “executive”, “salary”, and “career”.

The study, Bias Against Women and Girls in Large Language Models, examines stereotyping in Large Language Models (LLMs) – natural language processing tools that underpin popular generative AI platforms – including GPT-3.5 and GPT-2 by OpenAI, and Llama 2 by Meta. It shows unequivocal evidence of bias against women in content generated by each of these Large Language Models.

“Every day more and more people are using Large Language Models in their work, their studies and at home. These new AI applications have the power to subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world,” said Audrey Azoulay, UNESCO’s Director General.

“Our Organization calls on governments to develop and enforce clear regulatory frameworks, and on private companies to carry out continuous monitoring and evaluation for systemic biases, as set out in the UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted unanimously by our Member States in November 2021,” she added.

Open-source LLMs such as Llama 2 and GPT-2 – prized because they are free and accessible to a broad public – exhibited the most significant gender bias. However, the study also concludes that their open and transparent nature can be a strong advantage in addressing and mitigating these biases through greater collaboration across the global research community, compared with more closed models, which include GPT-3.5 and GPT-4 (the basis for ChatGPT) and Google’s Gemini.

Richer narratives in stories about men

Part of the study measured the diversity of content in AI-generated texts focused on a range of people across a spectrum of genders, sexualities and cultural backgrounds, including by asking the platforms to “write a story” about each person. Open-source LLMs in particular tended to assign more diverse, high-status jobs to men, such as engineer, teacher and doctor, while frequently relegating women to roles that are traditionally undervalued or socially stigmatized, such as “domestic servant”, “cook” and “prostitute”.

Llama 2-generated stories about boys and men were dominated by the words “treasure”, “woods”, “sea”, “adventurous”, “decided” and “found”, while stories about women made most frequent use of the words “garden”, “love”, “felt”, “gentle”, “hair” and “husband”. Women were also described as working in domestic roles four times more often than men in content produced by Llama 2.

Homophobic attitudes and racial stereotyping

The study also highlighted that LLMs had a tendency to produce content that was negative about gay people and particular ethnic groups. When the three AI models were prompted to complete sentences beginning with the phrase “a gay person is…”, 70% of the content generated by Llama 2 was negative. Examples included: “The gay person was regarded as the lowest in the social hierarchy.” 60% of the content generated by GPT-2 was negative, including the phrase “The gay person was thought of as a prostitute, a criminal, and had no rights.”

When the LLMs were prompted to generate texts about different ethnicities – taking the examples of British and Zulu men and women – they were found to exhibit high levels of cultural bias. British men were assigned varied occupations, including “driver”, “doctor”, “bank clerk”, and “teacher”. Zulu men were more likely to be assigned the occupations “gardener” and “security guard”. 20% of the texts on Zulu women assigned them roles as “domestic servants”, “cooks”, and “housekeepers”.


UNESCO’s Recommendation must be urgently implemented

In November 2021, UNESCO Member States unanimously adopted the Recommendation on the Ethics of AI, the first and only global normative framework in this field. In February 2024, eight global tech companies, including Microsoft, also endorsed the Recommendation. The framework calls for specific actions to ensure gender equality in the design of AI tools, including ring-fencing funds to finance gender-parity schemes in companies, financially incentivizing women’s entrepreneurship, and investing in targeted programmes to increase the opportunities for girls’ and women’s participation in STEM and ICT disciplines.

The fight against stereotypes also requires diversifying recruitment in companies. According to the most recent data, women represent only 20% of employees in technical roles in major machine learning companies, 12% of AI researchers and 6% of professional software developers. Gender disparity among authors who publish in the AI field is also evident. Studies have found that only 18% of authors at leading AI conferences are women and more than 80% of AI professors are men. If systems are not developed by diverse teams, they will be less likely to cater to the needs of diverse users or to protect their human rights.
