WeAudit

Fighting Harmful Biases in AI 

Context
This project is designing a tool for researchers to identify harmful algorithmic biases in AI.

Because AI learns from the accumulated data of human history, it also inherits the stereotypes and biases embedded in that data. In addition, the myth that AI is an objective machine obscures these potential flaws.

Therefore, it is important to audit AI to detect and discover harmful biases to improve the algorithm and its results.
Problem
Researchers need a tool to assist in auditing generative AI with efficiency and evidence.
- Generating a large set of AI results as a dataset to audit is inefficient.
- Reviewing a large dataset to identify a bias exceeds an individual's cognitive capacity.
Solution
A platform that generates extensive AI results to audit from a single prompt.
A tool powered by computer vision to visualize results and support the user's decision-making.
Team
2 Designers
Carol Auh, Phyllis Feng

Director
Professor Jason Hong
Contribution
Project Management,
Research, Ideation,
Wireframing, Prototyping,
User Testing
Timeline
Phase 1 Design
May - Aug 2023

Phase 2 Development
Aug 2023 - ongoing
Tools
Figma, Figjam
Research Methods
Expert Interview, Technology Research,
Usability Test
Phase 1 Design
We are currently testing a lo-fi prototype.