Lightning talks: Advances in fairness in AI: From research to practice
- Amit Sharma, Michael Amoako, Kristen Laird, Solon Barocas, Chad Atalla | Microsoft Research India, Microsoft, Microsoft, Microsoft Research NYC, MSAI
- Microsoft Research Summit 2021 | Responsible AI
Over the past few years, we’ve seen that artificial intelligence (AI) and machine learning (ML) provide us with new opportunities, but they also raise new challenges. Most notably, these challenges have highlighted the various ways in which AI systems can promote unfairness or reinforce existing societal stereotypes. While we can often spot fairness-related harms in AI systems when we see them, there’s no one-size-fits-all definition of fairness that applies to all AI systems in all contexts. Additionally, there are many reasons why AI systems can behave unfairly. In this session, we explain the diversity of work taking place on fairness in AI systems at Microsoft and in the broader community. We highlight how we’re applying fairness principles in real-world AI systems by measuring and mitigating different kinds of fairness-related harms in vision, speech-to-text, and natural language systems.
Introduction
Speaker: Amit Sharma, Senior Researcher, Microsoft Research India
Fairness in speech-to-text
Speakers:
Michael Amoako, RAIL Program Manager – Quality of Service Fairness Lead, Microsoft
Kristen Laird, Program Manager, Microsoft Cognitive Services Responsible AI
Representational harms in image tagging
Speaker: Solon Barocas, Principal Researcher, Microsoft Research NYC
SAVII: Measuring fairness-related harms in NL services
Speaker: Chad Atalla, Applied Scientist & Tech Lead, MSAI Responsible AI VTeam
Related sessions | Responsible AI track

Opening remarks: Responsible AI
- Hanna Wallach

Demo: RAI Toolbox: An open-source framework for building responsible AI
- Besmira Nushi, Mehrnoosh Sameki, Amit Sharma

Tutorial: Best practices for prioritizing fairness in AI systems
- Amit Deshpande, Amit Sharma

Panel discussion: Content moderation beyond the ban: Reducing borderline, toxic, misleading, and low-quality content
- Tarleton Gillespie, Zoe Darmé, Ryan Calo

Lightning talks: Advances in fairness in AI: New directions
- Amit Sharma, Kinjal Basu, Michael Madaio

Panel: Maximizing benefits and minimizing harms with language technologies
- Hal Daumé III, Steven Bird, Su Lin Blodgett

Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit
- Saleema Amershi, Mihaela Vorvoreanu

Panel: The future of human-AI collaboration
- Aaron Halfaker, Charles Isbell, Jaime Teevan

Closing remarks: Responsible AI
- Ece Kamar