What I observed that I think could be harmful:
I notice that whenever I type the words ‘racism’ or ‘racist’, the results mainly show images of black people, who are not necessarily engaged in any racist actions.
Why I think this could be harmful, and to whom:
I think this could be harmful because it suggests that racism is an issue that only applies to black people, even though it is prevalent across many cultures and among other races as well.
How I think this issue could potentially be fixed:
I think the harms could be mitigated by diversifying the images returned for these terms, so that the results show a broader range of examples and contexts rather than repeatedly depicting one group.
It is indeed very problematic that only racism against black people is represented in these images. It is also quite unexpected that “racist people” are portrayed as black people as well. This might be because the actual perpetrators are rarely pictured when this issue is discussed; the imagery associated with these terms probably consists mostly of photos of victims.
I am very surprised to see how poor the results are here. Though the more general term “racism” could be interpreted in a variety of ways, “racist person” is quite clear about what should be represented, and Stable Diffusion completely messed this one up. Most likely the training set contains a problematic association between black people and racism, and that association carried over to “racist person”; either way, this is particularly bad performance on the part of the model.
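For anyone who wants to check whether this behavior reproduces, here is a minimal audit sketch. It assumes the Hugging Face `diffusers` library and a public Stable Diffusion checkpoint (`runwayml/stable-diffusion-v1-5`); neither the checkpoint nor the exact prompts are specified in the original report, so these are illustrative choices only. The script simply generates a small batch of images per prompt so that recurring demographic patterns can be inspected by hand.

```python
# Sketch: generate a few images per prompt to inspect for the bias
# described above. Model ID and prompts are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical checkpoint choice; any Stable Diffusion checkpoint works here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompts = ["racism", "racist person"]  # terms discussed in the report
for prompt in prompts:
    # Generate a small batch per prompt so recurring patterns are visible.
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, image in enumerate(images):
        image.save(f"{prompt.replace(' ', '_')}_{i}.png")
```

A manual review of the saved images (or a larger batch, if resources allow) would make it possible to quantify how often each prompt produces images of black people, rather than relying on a handful of anecdotal results.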