Tutorial: Best practices for prioritizing fairness in AI systems
- Amit Deshpande, Amit Sharma | Microsoft Research India
- Microsoft Research Summit 2021 | Responsible AI
As artificial intelligence (AI) continues to transform people’s lives, it brings new opportunities as well as new challenges. Most notably, when we assess the societal impact of AI systems, it’s important to be aware of their benefits, which we should strive to amplify, and their harms, which we should work to reduce. Developing and deploying AI systems in a responsible manner means prioritizing fairness. This is especially important for AI systems that will be used in high-stakes domains like education, employment, finance, and healthcare. This tutorial will guide you through a variety of fairness-related harms caused by AI systems and their most common causes. We will then dive into the precautions we need to take to mitigate fairness-related harms when developing and deploying AI systems. Together, we’ll explore examples of fairness-related harms and their causes; fairness dashboards for quantitatively assessing allocation harms and quality-of-service harms; and algorithms for mitigating fairness-related harms. We’ll discuss when these mitigation algorithms should and shouldn’t be used, along with their advantages and disadvantages.
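The quantitative assessment of allocation harms mentioned above typically comes down to computing a metric separately for each demographic group and comparing the results. As a minimal, toolkit-independent sketch of that idea (the data and function names below are illustrative, not taken from any Microsoft tool), here is a group-wise selection rate and the demographic parity difference, a common disparity metric:

```python
def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def group_selection_rates(predictions, groups):
    """Selection rate computed separately for each demographic group."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    return {g: selection_rate(p) for g, p in by_group.items()}

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group selection rates.
    0.0 means every group is selected at the same rate."""
    rates = group_selection_rates(predictions, groups).values()
    return max(rates) - min(rates)

# Synthetic loan-approval predictions for two hypothetical groups, A and B.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(group_selection_rates(preds, groups))
print(demographic_parity_difference(preds, groups))
```

In this synthetic example, group A is approved 75% of the time and group B only 25%, giving a demographic parity difference of 0.5 — the kind of disparity a fairness dashboard is designed to surface before deployment.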
Resources:
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
- Amit Deshpande, Researcher
- Amit Sharma, Principal Researcher
Responsible AI
Opening remarks: Responsible AI
- Hanna Wallach
Demo: RAI Toolbox: An open-source framework for building responsible AI
- Besmira Nushi
- Mehrnoosh Sameki
- Amit Sharma
Tutorial: Best practices for prioritizing fairness in AI systems
- Amit Deshpande
- Amit Sharma
Panel discussion: Content moderation beyond the ban: Reducing borderline, toxic, misleading, and low-quality content
- Tarleton Gillespie
- Zoe Darmé
- Ryan Calo
Lightning talks: Advances in fairness in AI: From research to practice
- Amit Sharma
- Michael Amoako
- Kristen Laird
Lightning talks: Advances in fairness in AI: New directions
- Amit Sharma
- Kinjal Basu
- Michael Madaio
Panel: Maximizing benefits and minimizing harms with language technologies
- Hal Daumé III
- Steven Bird
- Su Lin Blodgett
Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit
- Saleema Amershi
- Mihaela Vorvoreanu
Panel: The future of human-AI collaboration
- Aaron Halfaker
- Charles Isbell
- Jaime Teevan
Closing remarks: Responsible AI
- Ece Kamar