What is Algorithmic Bias?
Computer systems are increasingly being used in all aspects of our everyday lives. However, these computer systems sometimes exhibit behaviors that can be considered biased and harmful, especially for minorities. Some examples include:
- Black people being labeled as “gorillas” by online photo or video sharing sites
- Automated resume analysis systems that were heavily biased against hiring women
- Image searches for “unprofessional hairstyle” showing only Black women, while searches for “professional hairstyle” showed only Caucasian women
- A grading algorithm used in the UK that favored schools with smaller student numbers (typically private schools) when predicting students’ final grades
- YouTube’s recommendation algorithm appearing to demonetize and penalize queer content, disadvantaging LGBTQ content creators
Generally, algorithmic bias can be particularly harmful when (a) resources are allocated or automated decisions are made in an unfair manner, or (b) when computer systems reinforce negative stereotypes.
What is WeAudit?
WeAudit is a community-oriented site aimed at identifying algorithmic bias. Using WeAudit, people can submit possible examples of algorithmic bias, audit individual examples, discuss examples of bias with others, and analyze the overall data to see if there are repeated biases against specific demographic groups.
How does WeAudit help fight algorithmic bias (longer explanation)?
WeAudit is a suite of tools that helps crowds of people identify instances and trends of algorithmic bias in computer systems. The WeAudit workflow can be thought of as a set of independent steps, with different people making different kinds of contributions at each step.
For example, at the data collection step, people can use our WeAudit browser plug-in or data submission form to contribute new instances of possible algorithmic bias.
At the auditing step, people can use the WeAudit auditing tools to view individual instances or groups of instances and share their thoughts as to whether they feel there is harmful bias or not.
At the discussion step, people can use the WeAudit discussion forums to view individual data, and see results of other people’s audits as well as discussion on that data.
Every project also has a single project spreadsheet that shows all of the collected data as well as all of the audits. We are using these project spreadsheets in a manner similar to a database, since people’s familiarity with spreadsheets would make it easier for them to analyze the data. We chose to use Google Sheets for project spreadsheets, because of its reliability and multi-user functionality.
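Because the project spreadsheet acts as a simple database, analyzing it can be as straightforward as exporting the sheet to CSV and tallying audit results. The sketch below is a minimal, hypothetical example in Python; the column names (`demographic_group`, `auditor_verdict`) and sample rows are illustrative assumptions, not WeAudit's actual schema.

```python
import csv
import io
from collections import Counter

# Hypothetical sample of a project spreadsheet exported as CSV.
# The columns and values are illustrative, not WeAudit's real schema.
SAMPLE_CSV = """instance_id,demographic_group,auditor_verdict
1,group_a,biased
2,group_a,biased
3,group_b,not_biased
4,group_a,not_biased
5,group_b,biased
"""

def biased_counts_by_group(csv_text):
    """Count how many instances were flagged as biased, per demographic group."""
    reader = csv.DictReader(io.StringIO(csv_text))
    counts = Counter()
    for row in reader:
        if row["auditor_verdict"] == "biased":
            counts[row["demographic_group"]] += 1
    return counts

print(biased_counts_by_group(SAMPLE_CSV))
# Counter({'group_a': 2, 'group_b': 1})
```

This kind of aggregation is what lets the community spot repeated biases against specific demographic groups, rather than judging each instance in isolation.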
Who is WeAudit for?
WeAudit is primarily intended for everyday people, to educate them about algorithmic bias and make it possible for them to help fight it. Computer systems are being deployed in all kinds of places, and everyone should have a say in making sure that these systems are fair.
WeAudit is also useful for software developers, especially if a system you are working on is being audited. WeAudit can help you find potential problems so that you can make sure your systems do not exhibit harmful biases.
Lastly, WeAudit is useful for policy makers. WeAudit presents a new way of auditing systems for potential biases that empowers everyday people, and it also helps inform the general public about algorithmic bias.
I’d like to contribute to WeAudit. What can I do?
There are many ways you can contribute! You can:
- Add new data to a project using our browser plugin or our data submission form
- Audit individual instances that others have submitted
- Examine a project spreadsheet to see what data people have already collected and audited
- Read and contribute to our discussion board
- Add suggestions for other things that we can audit on WeAudit
Where can I learn more about algorithmic bias?
Here are some books about algorithmic bias:
- Algorithms of Oppression: How Search Engines Reinforce Racism by Safiya Umoja Noble
- Artificial Unintelligence: How Computers Misunderstand the World by Meredith Broussard
- The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns and Aaron Roth
- Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks
- Fairness and Machine Learning: Limitations and Opportunities by Solon Barocas, Moritz Hardt, and Arvind Narayanan
Here is a video of an online tutorial:
- NeurIPS 2017 Tutorial on Fairness in Machine Learning
You can also discuss algorithmic bias on our discussion boards.
How can I start a new audit case on WeAudit?
Currently, we are focused on auditing DALL-E 2. We felt that its widespread availability, the ease with which everyday people can understand its potential for bias, and its relative ease of auditing made it a good candidate for our initial audit case.
For now, if you are interested in having us open up a new audit case, add a new post to Proposed Projects and we’ll take a look.
Who is the team behind WeAudit?
We are researchers at Carnegie Mellon University in the School of Computer Science. The faculty include Motahhare Eslami, Ken Holstein, Jason Hong, Adam Perer, Nihar Shah, and Hong Shen. The PhD students include Alex Cabrera, Wesley Deng, Alicia DeVrio, and Charvi Rastogi. There are also a large number of undergraduate and master’s students who have contributed to WeAudit, including Favour Adesina, Aubrey Bao, Sally Chen, Neha Chopade, Aditi Dhabalia, Yiying Ding, Anupriya Gupta, Niharkia Jayanthi, Kyungmin Kim, Claire Lee, Rachel Lee, Tim Lee, Lena Li, Alicia Ng, Ram Potham, Rituparna Roy, Ruhan Prasad, Yi Sun, Khushi Wadhwa, Shang-Yuan Wang, Wei-Chieh Wang, Sami Wurm, Xiaofeng Yan, Taeyoung Yun, Luke Zhang, Qianru Zhang, Tianyou Zhang, Charley Zhao, Yufeng Zhao, Chenjun Zhou.
Who funded WeAudit?
WeAudit is funded by the National Science Foundation (FAI Award number 2040942), as well as generous gifts from Cisco and Amazon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funders.
Are there any research papers about WeAudit?
Here are some papers our research team has written:
- Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors (CSCW 2021)
- Toward User-Driven Algorithm Auditing: Investigating users’ strategies for uncovering harmful algorithmic behavior (CHI 2022)
- Understanding Practices, Challenges, and Opportunities for User-Driven Algorithm Auditing in Industry Practice (arXiv)
- “Public(s)-in-the-Loop”: Facilitating Deliberation of Algorithmic Decisions in Contentious Public Policy Domains (arXiv)
This is a Civilized Place for Public Discussion
WeAudit deals with topics that may be contentious, including for example issues of race, gender, sexual orientation, culture, and more. As such, it is important that everyone follow community guidelines, so that we can have productive conversations. We may also delete or move threads on the WeAudit platform that violate our community guidelines.
These are not hard and fast rules. They are guidelines to aid the human judgment of our community and keep this a kind, friendly place for civilized public discourse.
Improve the Discussion
Help us make this a great place for discussion by always adding something positive to the discussion, however small. If you are not sure your post adds to the conversation, think over what you want to say and try again later.
One way to improve the discussion is by discovering discussions that are already happening. Spend time browsing the topics here before replying or starting your own, and you’ll have a better chance of meeting others who share your interests.
The topics discussed here matter to us, and we want you to act as if they matter to you, too. Be respectful of the topics and the people discussing them, even if you disagree with some of what is being said.
Be Agreeable, Even When You Disagree
You may wish to respond by disagreeing. That’s fine. But remember to criticize ideas, not people. Please avoid:
- Name-calling
- Ad hominem attacks
- Responding to a post’s tone instead of its actual content
- Knee-jerk contradiction
Instead, provide thoughtful insights that improve the conversation.
Your Participation Counts
The conversations we have here set the tone for every new arrival. Help us influence the future of this community by choosing to engage in discussions that make this forum an interesting place to be — and avoiding those that do not.
Discourse provides tools that enable the community to collectively identify the best (and worst) contributions: bookmarks, likes, flags, replies, edits, watching, muting and so forth. Use these tools to improve your own experience, and everyone else’s, too.
Let’s leave our community better than we found it.
If You See a Problem, Flag It
Moderators have special authority; they are responsible for this forum. But so are you. With your help, moderators can be community facilitators, not just janitors or police.
When you see bad behavior, don’t reply. Replying encourages bad behavior by acknowledging it, consumes your energy, and wastes everyone’s time. Just flag it. If enough flags accrue, action will be taken, either automatically or by moderator intervention.
In order to maintain our community, moderators reserve the right to remove any content and any user account for any reason at any time. Moderators do not preview new posts; the moderators and site operators take no responsibility for any content posted by the community.
Always Be Civil
Nothing sabotages a healthy conversation like rudeness:
- Be civil. Don’t post anything that a reasonable person would consider offensive, abusive, or hate speech.
- Keep it clean. Don’t post anything obscene or sexually explicit.
- Respect each other. Don’t harass or grief anyone, impersonate people, or expose their private information.
- Respect our forum. Don’t post spam or otherwise vandalize the forum.
These are not concrete terms with precise definitions — avoid even the appearance of any of these things. If you’re unsure, ask yourself how you would feel if your post was featured on the front page of a major news site.
This is a public forum, and search engines index these discussions. Keep the language, links, and images safe for family and friends.
Keep It Tidy
Make the effort to put things in the right place, so that we can spend more time discussing and less cleaning up. So:
- Don’t start a topic in the wrong category; please read the category definitions.
- Don’t cross-post the same thing in multiple topics.
- Don’t post no-content replies.
- Don’t divert a topic by changing it midstream.
- Don’t sign your posts — every post has your profile information attached to it.
Rather than posting “+1” or “Agreed”, use the Like button. Rather than taking an existing topic in a radically different direction, use Reply as a Linked Topic.
Post Only Your Own Stuff
You may not post anything digital that belongs to someone else without permission. You may not post descriptions of, links to, or methods for stealing someone’s intellectual property (software, video, audio, images), or for breaking any other law.
Powered by You
This site is operated by your friendly local staff and you, the community. If you have any further questions about how things should work here, open a new topic in the site feedback category and let’s discuss! If there’s a critical or urgent issue that can’t be handled by a meta topic or flag, contact us via the staff page.
Terms of Service
Yes, legalese is boring, but we must protect ourselves – and by extension, you and your data – against unfriendly folks. We have a Terms of Service describing your (and our) behavior and rights related to content, privacy, and laws. To use this service, you must agree to abide by our TOS.