Auditing DALL-E images for bias

Below are some design issues for us to figure out, along with my initial thoughts.

For some failures, a single image is enough to say, “yup, that’s pretty bad.” For example, Google Photos and Yahoo’s Flickr auto-labeled photos of Black people as “gorillas.”

For other failures, you need to compare images within the same result set. For example, for “unprofessional hairstyle”, Craiyon currently generates what look like Latino men only. This is also a case where searching the opposite prompt is useful, i.e. “professional hairstyle”.
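One way to make this within-set comparison concrete is to measure how the demographic composition of one prompt’s results differs from its opposite’s. A minimal sketch, assuming human annotators tag each generated image with a perceived demographic group (the group names and counts below are made up for illustration):

```python
from collections import Counter

def group_shares(annotations):
    """Fraction of images annotated with each perceived group."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def disparity(set_a, set_b):
    """Largest absolute gap in any group's share between two prompt sets."""
    a, b = group_shares(set_a), group_shares(set_b)
    groups = set(a) | set(b)
    return max(abs(a.get(g, 0.0) - b.get(g, 0.0)) for g in groups)

# Hypothetical annotations for the images each prompt returned
professional = ["white_man"] * 7 + ["asian_woman"] * 2 + ["latino_man"]
unprofessional = ["latino_man"] * 9 + ["white_man"]

print(disparity(professional, unprofessional))
```

A disparity near 0 means the two prompts draw from similar-looking groups; a value near 1 means one group dominates one prompt but not its opposite, which is the pattern worth flagging for review.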

In the short term, having our bias audit forms just show one image at a time seems sufficient. In the long term, we should consider ways of letting people search for related images.

(Note: previously, when we were also auditing labels generated by computer vision, another example was a Black man holding a temperature gun whose image was mislabeled as “danger” and “gun.” If the model mislabels all similar images as “danger” and “gun,” it’s wrong but not harmfully biased; if it mislabels only Black men this way, then it’s likely biased.)
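The “wrong everywhere vs. wrong only for one group” distinction above amounts to comparing mislabel rates across groups for the same scene. A minimal sketch, assuming audit records that pair a perceived-group annotation with whether the harmful label appeared (the records here are invented for illustration):

```python
def mislabel_rate(records, group):
    """Share of a group's images that received the harmful label."""
    matching = [r for r in records if r["group"] == group]
    return sum(r["mislabeled"] for r in matching) / len(matching)

# Hypothetical audit records: the same scene (a person holding a
# temperature gun), varying only the perceived group of the subject
records = (
    [{"group": "black_man", "mislabeled": True}] * 8
    + [{"group": "black_man", "mislabeled": False}] * 2
    + [{"group": "white_man", "mislabeled": True}] * 1
    + [{"group": "white_man", "mislabeled": False}] * 9
)

gap = mislabel_rate(records, "black_man") - mislabel_rate(records, "white_man")
print(gap)
```

If both rates are high, the model is just bad at the scene; a large gap like this one is the signal that the error concentrates on one group.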