Building immunity to digital manipulation through transparency, collective action, and algorithmic accountability.
Algorithmic systems increasingly shape our choices, opinions, and opportunities. From what content we see to whether we get a loan, AI models are making consequential decisions about our lives—often without transparency, accountability, or recourse.
AI Ethics Oversight exists to shine a light on these black-box systems. We are a public ombudsman: collecting evidence, aggregating reports, and creating visualizations that make algorithmic harms visible to all.
"Transparency is the first step toward accountability. You cannot fix what you cannot see."
We believe users have the right to know how algorithms affect them. No more hidden manipulation.
We provide tools and knowledge so individuals can identify and resist manipulative design.
We stand with communities disproportionately harmed by biased AI systems.
We accept reports from anyone who has experienced algorithmic harm, dark patterns, or discriminatory AI behavior. Our volunteer moderators verify and categorize each submission.
Individual reports become powerful evidence when combined. Our Bias Heatmaps and Dark Pattern Wall visualize systemic issues that would otherwise remain hidden.
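To illustrate the aggregation idea, here is a minimal sketch of how verified, categorized reports might be grouped into the counts behind a Bias Heatmap. The Report fields and the heatmap_counts helper are hypothetical examples, not the platform's actual schema or code.

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative report record; the real submission schema may differ.
@dataclass
class Report:
    category: str   # e.g. "dark_pattern", "biased_decision"
    company: str    # the platform or service being reported
    verified: bool  # set by a volunteer moderator after review

def heatmap_counts(reports: list[Report]) -> Counter:
    """Count verified reports per (company, category) cell.

    A single report carries little weight on its own; grouped counts
    make recurring patterns at a given company visible.
    """
    return Counter((r.company, r.category) for r in reports if r.verified)

reports = [
    Report("dark_pattern", "ExampleCorp", True),
    Report("dark_pattern", "ExampleCorp", True),
    Report("biased_decision", "OtherCo", False),  # unverified: excluded
]
print(heatmap_counts(reports))
# Counter({('ExampleCorp', 'dark_pattern'): 2})
```

Each (company, category) count would then feed one cell of a heatmap, so clusters of similar reports stand out even when no single report is conclusive.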
We share our findings with regulators, journalists, and the public. We believe informed citizens can demand better from the companies that build AI systems.
Whether you've experienced algorithmic harm, witnessed a dark pattern, or want to volunteer as a moderator, your contribution matters. This is a collective effort—and we need your voice.