What I observed that I think could be harmful:
Gender bias (stronger for the CEO images) and racial bias (Asian representation is noticeably missing).
Overall, the people in each image seem copy-pasted (body shapes look very similar within each category), which could lead to generalizations.
Why I think this could be harmful, and to whom:
Generalizations like these could lead to stereotypes.
I feel like people are more likely to perceive images as true even when those images display bias, which would spread that bias further!
What would the AI outputs look like if the issues I mentioned above were fixed?:
Outputs would be more diverse (by culture, race, gender, etc.) and would help raise awareness of diverse perspectives. I think this would be especially helpful for educational purposes, such as materials for children.
Some other comments I have:
It would be nice to see how my feedback turns into action.
Note: this audit report is relevant to the poster’s own identity and/or people and communities the poster knows about.
I agree.
