arXiv:2409.13869v2 Announce Type: replace
Abstract: In this study, I investigate how generative artificial intelligence (AI) systems reproduce and reinforce societal biases, with a specific focus on the representation of women, Black individuals, age groups, and people with visible disabilities in AI-generated occupational images. I analyzed 444 images generated by Microsoft Designer, Meta AI, and Ideogram across 37 occupations and found significant disparities in representation. Women are underrepresented in senior and technology roles, Black individuals are nearly absent, and people with visible disabilities are completely absent across all categories. I also observed clear age bias, with younger individuals predominantly depicted. These patterns suggest that generative AI tools replicate, and in some cases amplify, existing workplace inequalities and stereotypes, undermining democratic values of equity and inclusion. My findings highlight the urgent need for algorithmic diversity exposure, and I recommend that AI developers and corporate users audit their tools for equity, diversity, and inclusion (EDI) risks. I argue for the critical inclusion of diverse groups in AI development and governance to foster more democratic and socially responsible technologies.
Disclosure in the era of generative artificial intelligence
Generative artificial intelligence (AI) has rapidly become embedded in academic writing, assisting with tasks ranging from language editing to drafting text and producing evidence. Despite