What I observed that I think could be harmful:
N/A; the issue is not dependent on any specific prompt
Why I think this could be harmful, and to whom:
I think it could further amplify bias in the data-driven representation of a given identity, whether professional or personal, e.g. CEO/worker - man; doctor - man; nurse - woman.
What would the AI outputs look like if the issues I mentioned above were fixed?:
Of the x visuals shown, each would represent a distinct identity, so that the set as a whole reflects an appropriate stratification of identities for the given prompt. Pinterest claims to do this well (providing diversity on a per-prompt basis); a rough sketch of one way such stratification could work follows below.
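As a minimal, hypothetical sketch of stratified selection, assuming candidates arrive as scored items carrying an identity attribute (the "gender" label, the "score" field, and the candidate structure are all illustrative assumptions, not any platform's actual implementation):

    from collections import defaultdict

    def stratified_sample(candidates, attribute_key, k):
        # Pick k visuals so identity attributes are spread as evenly
        # as possible. candidates: list of dicts with a relevance
        # "score" and an identity attribute (hypothetical fields).
        buckets = defaultdict(list)
        for c in sorted(candidates, key=lambda c: c["score"], reverse=True):
            buckets[c[attribute_key]].append(c)

        # Round-robin across attribute groups, best-scored first,
        # so no single group dominates the result set.
        selected, groups, i = [], list(buckets.values()), 0
        while len(selected) < k and any(groups):
            group = groups[i % len(groups)]
            if group:
                selected.append(group.pop(0))
            i += 1
        return selected

    results = stratified_sample(
        [{"score": 0.9, "gender": "man"},
         {"score": 0.8, "gender": "woman"},
         {"score": 0.7, "gender": "nonbinary"},
         {"score": 0.6, "gender": "man"}],
        attribute_key="gender",
        k=3,
    )

The round-robin is one simple way to enforce the stratification; an actual system would likely trade off relevance against diversity more carefully.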
Some other comments I have:
The best case is balancing what is AI generated against what is an actual visual. Based on a Google search, it seems the algorithm already embeds diversity in its real-image results (pictures that show real people). So classifying what is real versus AI generated as a first step, and then overlaying diversity when mixing the two, is important; a continuation of the sketch above follows below.
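As a continuation of the sketch above, one hypothetical way to mix real and generated visuals while keeping the diversity overlay (the "source" field, the upstream real-vs-generated classification, and the real_fraction target are all illustrative assumptions):

    def mix_real_and_generated(pool, k, real_fraction=0.5):
        # Assemble k visuals with a target share of real photographs,
        # then diversify each partition with stratified_sample above.
        # "source" is a hypothetical label produced by an upstream
        # real-vs-generated classifier (itself out of scope here).
        real = [v for v in pool if v["source"] == "real"]
        generated = [v for v in pool if v["source"] == "generated"]

        n_real = min(len(real), round(k * real_fraction))
        # Diversify within each partition, then concatenate.
        return (stratified_sample(real, "gender", n_real)
                + stratified_sample(generated, "gender", k - n_real))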
Note: this audit report is relevant to the poster's own identity and/or to people and communities the poster knows about.