Demo: RAI Toolbox: An open-source framework for building responsible AI
- Besmira Nushi, Mehrnoosh Sameki, Amit Sharma | Microsoft Research, Microsoft Azure Machine Learning, Microsoft Research India
- Microsoft Research Summit 2021 | Responsible AI
Assessing and investigating machine learning (ML) models prior to deployment remains at the core of developing trustworthy and responsible artificial intelligence (AI). While different open-source tools have been proposed for assessing the fairness, explainability, or errors of an ML model, these properties are not independent, and ML practitioners may need several of these functionalities together to fully identify, diagnose, and mitigate issues, and to take action in the real world. In this session, we will demonstrate the Responsible AI Toolbox. The toolbox was built with two intentions: to accelerate the ML development lifecycle in a way that implements and applies Responsible AI principles, and to serve as a collaboration framework for research in the Responsible AI field. We will introduce the overall workflow, from configuring the interoperable dashboards through to the intended user experience. We will showcase how the toolbox can be used to assess models through a responsible AI lens and to analyze data for causal decision-making, with the goal of identifying actions that can influence desirable outcomes in the real world. Attendees will be able to access the different parts of the demo through online interactive deployments of the toolbox on illustrative datasets and models.
Resources: https://github.com/microsoft/responsible-ai-widgets/
Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit
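For reference, the dashboards described in the abstract can also be configured programmatically from Python. The sketch below assumes the `responsibleai` and `raiwidgets` packages from the repository linked above, the `RAIInsights` and `ResponsibleAIDashboard` classes, and a small synthetic scikit-learn dataset standing in for the illustrative data used in the demo; exact class and parameter names may differ between toolbox releases.

```python
# Minimal sketch: configuring the Responsible AI dashboard for a trained model.
# Assumes the `responsibleai` and `raiwidgets` packages; names may vary by release.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Small synthetic dataset standing in for the illustrative datasets in the demo.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=feature_names)
df["label"] = y
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(
    train_df[feature_names], train_df["label"]
)

# Collect the analyses that populate the interoperable dashboard.
rai_insights = RAIInsights(model, train_df, test_df, "label", task_type="classification")
rai_insights.explainer.add()           # model explanations
rai_insights.error_analysis.add()      # error analysis
rai_insights.counterfactual.add(total_CFs=10, desired_class="opposite")
rai_insights.causal.add(treatment_features=["feature_0"])  # causal what-if analysis

rai_insights.compute()
ResponsibleAIDashboard(rai_insights)   # launch the interactive dashboard locally
```

Each `add()` call registers one of the dashboard components (explanations, error analysis, counterfactuals, causal analysis), which is what makes the views interoperable over the same model and data.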
Speakers:
- Besmira Nushi, Senior Principal Research Manager
- Amit Sharma, Principal Researcher
Responsible AI track sessions:
- Opening remarks: Responsible AI | Hanna Wallach
- Demo: RAI Toolbox: An open-source framework for building responsible AI | Besmira Nushi, Mehrnoosh Sameki, Amit Sharma
- Tutorial: Best practices for prioritizing fairness in AI systems | Amit Deshpande, Amit Sharma
- Panel discussion: Content moderation beyond the ban: Reducing borderline, toxic, misleading, and low-quality content | Tarleton Gillespie, Zoe Darmé, Ryan Calo
- Lightning talks: Advances in fairness in AI: From research to practice | Amit Sharma, Michael Amoako, Kristen Laird
- Lightning talks: Advances in fairness in AI: New directions | Amit Sharma, Kinjal Basu, Michael Madaio
- Panel: Maximizing benefits and minimizing harms with language technologies | Hal Daumé III, Steven Bird, Su Lin Blodgett
- Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit | Saleema Amershi, Mihaela Vorvoreanu
- Panel: The future of human-AI collaboration | Aaron Halfaker, Charles Isbell, Jaime Teevan
- Closing remarks: Responsible AI | Ece Kamar