Our second annual Responsible AI Transparency Report covers the progress we’ve made since the publication of our inaugural report in 2024. It highlights our continued commitment to responsible innovation, covering how we develop and deploy AI models and systems responsibly; how we support our customers; and how we learn, evolve, and grow.
In 2024, we made key investments in our responsible AI tools, policies, and practices to move at the speed of AI innovation.
We improved our responsible AI tooling to expand coverage of risk evaluation and mitigation across modalities and for agentic systems.
We took a proactive, layered approach to compliance with new regulatory requirements.
We launched an internal workflow tool to centralize responsible AI requirements and simplify documentation for pre-deployment reviews.
We continued to provide hands-on counseling for high-impact and higher-risk uses of AI, particularly in areas related to healthcare and the sciences.
We established the AI Frontiers lab to push the frontier of AI capabilities, efficiency, and safety.
We collaborated with stakeholders around the world to make progress towards building coherent governance frameworks.
Responsible AI transparency | Build
When we embark on the development and deployment of a new AI system, we apply the AI Risk Management Framework created by the National Institute of Standards and Technology (NIST), which is organized around four key functions: govern, map, measure, and manage.
Govern:
Our responsible AI governance architecture helps us uphold our principles consistently across the company. It involves establishing clear policies, processes, roles, and responsibilities.
Map:
Mapping and prioritizing risks enables us to make informed decisions about mitigations and the appropriateness of an AI application for a given context.
Measure:
AI risk measurement helps inform the prioritization and design of mitigations—a practice that grew in importance in 2024 as AI capabilities became more complex.
Manage:
Once we’ve mapped and measured risks, we manage them across the AI technology stack through a “defense in depth” approach. After deployment, we continue to manage risks through ongoing monitoring.
Case study
In 2024, more people voted in elections across the world than at any other time in history. Microsoft took proactive measures in partnership with governments, nonprofit organizations, and private sector companies to prevent the creation and dissemination of deceptive AI-generated election content.
Responsible AI transparency | Decide
Throughout 2024, we continued to refine our pre-deployment oversight processes, which include our deployment safety process for generative AI systems and models as well as the Sensitive Uses and Emerging Technologies program. We also launched an internal workflow tool to further support responsible AI documentation and review processes.
Before deploying our generative AI applications and models, teams review their risk management approach with experts across the Responsible AI Community. These experts provide recommendations and requirements grounded in our responsible AI policies.
Our Sensitive Uses and Emerging Technologies program provides pre-deployment review and oversight of high-impact and higher-risk uses of AI. Reviews often culminate in requirements that go beyond our Responsible AI Standard.
In 2024, 77% of cases that received consultations from the Sensitive Uses and Emerging Technologies team were related to generative AI.
Case study
The Phi model team released three collections of Phi models in 2024 and early 2025, each unlocking new capabilities. The team used a “break-fix” framework to inform deployment safety for each release.
Case study
Smart Impression is an AI-powered productivity tool for radiologists. Through the Sensitive Uses review process, the product team identified and mitigated key risks related to using AI in a healthcare setting.
Responsible AI transparency | Support
As a developer and deployer of AI technology, we have a responsibility to support our customers on their own responsible AI journeys. We regularly share our tools and practices with our customers and eagerly engage in dialogue to learn how we can better support them in innovating responsibly.
We continue to expand and build on the AI Customer Commitments we first announced in 2023. In 2024, we extended our Customer Copyright Commitments to include our reseller partners.
Responsible AI tooling is critical to achieving consistent alignment with our internal AI policies. We’ve released 30 responsible AI tools that include more than 155 features to support our customers’ responsible AI development.
We’re committed to equipping our customers with the information they need to innovate responsibly. Since 2019, we’ve published 40 Transparency Notes containing key information about our platform services.
Case study
Microsoft-owned platform LinkedIn became the first professional networking platform to display C2PA Content Credentials on all AI-generated images and videos uploaded to its feed.
Responsible AI transparency | Learn
From the beginning, Microsoft has committed to scaling our responsible AI program to meet the growing demand for this technology. For us, this means investing in research, working across sectors to advance effective global governance of AI, and tuning into a wide range of perspectives.
Throughout 2024, Microsoft researchers collaborated closely with our policy and engineering teams to push the frontiers of how we map, measure, and manage AI risks.
We are working with governments around the world to build globally coherent governance frameworks that enable organizations of all kinds to innovate with AI.
Harnessing the expertise of a wide range of stakeholders is essential to effective AI risk management. We actively seek out underrepresented voices for input on how our AI systems can be safer and more reliable.
The Accelerating Foundation Models Research (AFMR) community, founded by Microsoft Research, has published over 300 papers co-authored by computer scientists and researchers from other disciplines, and supports 123 institutions in 19 countries.
Case study
Advancing AI governance requires standardizing methods for evaluating AI risks. To support this effort, MLCommons developed a new AI safety benchmark called AILuminate, which offers a scientific, independent analysis of large language model risk.