
2025 Responsible AI Transparency Report


How we build, support our customers, and grow

Our second annual Responsible AI Transparency Report covers the progress we’ve made since the publication of our inaugural report in 2024. It highlights our continued commitment to responsible innovation, covering how we develop and deploy AI models and systems responsibly; how we support our customers; and how we learn, evolve, and grow.

Key takeaways

In 2024, we made key investments in our responsible AI tools, policies, and practices to move at the speed of AI innovation.

Responsible AI tooling

We improved our responsible AI tooling to expand coverage for risk evaluation and mitigations across modalities as well as for agentic systems.

Approach to compliance

We took a proactive, layered approach to compliance with new regulatory requirements.

Pre-deployment reviews

We launched an internal workflow tool to centralize responsible AI requirements and simplify documentation for pre-deployment reviews.

Sensitive uses of AI

We continued to provide hands-on counseling for high-impact and higher-risk uses of AI, particularly in areas related to healthcare and the sciences.

Investments in research

We established the AI Frontiers lab to push the frontier of AI capabilities, efficiency, and safety.

Coherent governance frameworks

We collaborated with stakeholders around the world to make progress towards building coherent governance frameworks.


Responsible AI transparency | Build

How we build AI responsibly

How we build generative AI systems and models responsibly

When we embark on the development and deployment of a new AI system, we draw on the AI Risk Management Framework created by the National Institute of Standards and Technology (NIST), which comprises four key functions: govern, map, measure, and manage.

Govern:

Our responsible AI governance architecture helps us uphold our principles consistently across the company. It involves establishing clear policies, processes, roles, and responsibilities.

Map:

Mapping and prioritizing risks enables us to make informed decisions about mitigations and the appropriateness of an AI application for a given context.

Measure:

AI risk measurement helps inform the prioritization and design of mitigations—a practice that grew in importance in 2024 as AI capabilities became more complex.

Manage:

Once we’ve mapped and measured risks, we manage them across the AI technology stack through a “defense in depth” approach. After deployment, we continue to manage risks through ongoing monitoring.

A triangle diagram labeled Map, Manage, and Measure at each point and Govern in the center.

Case study


Managing AI-related risks in 2024 elections

In 2024, more people voted in elections across the world than at any other time in history. Microsoft took proactive measures in partnership with governments, nonprofit organizations, and private sector companies to prevent the creation and dissemination of deceptive AI-generated election content.

Responsible AI transparency | Decide

How we make decisions

How we make decisions about releasing generative AI systems and models

Throughout 2024, we continued to refine our pre-deployment oversight processes, which include our deployment safety process for generative AI systems and models as well as the Sensitive Uses and Emerging Technologies program. We also launched an internal workflow tool to further support responsible AI documentation and review processes.

Deployment safety for generative AI systems and models

Before deploying our generative AI applications and models, teams review their risk management approach with experts across the Responsible AI Community. These experts provide recommendations and requirements grounded in our responsible AI policies.

 




Sensitive Uses and Emerging Technologies program

Our Sensitive Uses and Emerging Technologies program provides pre-deployment review and oversight of high-impact and higher-risk uses of AI. Reviews often culminate in requirements that go beyond our Responsible AI Standard.

 




77% generative AI


In 2024, 77% of cases that received consultations from the Sensitive Uses and Emerging Technologies team were related to generative AI.


Case study


Safely deploying Phi small language models

The Phi model team released three collections of Phi models in 2024 and early 2025, each unlocking new capabilities. The team used a “break-fix” framework to inform deployment safety for each release.


Case study


Safely deploying Smart Impression

Smart Impression is an AI-powered productivity tool for radiologists. Through the Sensitive Uses review process, the product team identified and mitigated key risks related to using AI in a healthcare setting.

Responsible AI transparency | Support

How we support our customers

How we support our customers in building AI responsibly

As developers and deployers of AI technology, it’s our responsibility to support our customers in their own responsible AI journeys. We regularly share our tools and practices with our customers and eagerly engage in dialogue to learn how we can better support them in innovating responsibly.

AI Customer Commitments

We continue to expand and build on the AI Customer Commitments we first announced in 2023. In 2024, we extended our Customer Copyright Commitments to include our reseller partners.

 




Tooling to support customers

Responsible AI tooling is critical to achieving consistent alignment with our internal AI policies. We’ve released 30 responsible AI tools that include more than 155 features to support our customers’ responsible AI development.

 




Transparency to support customers

We’re committed to equipping our customers with the information they need to innovate responsibly. Since 2019, we’ve published 40 Transparency Notes containing key information about our platform services.

 





Case study


Content credentials on LinkedIn

Microsoft-owned platform LinkedIn became the first professional networking platform to display C2PA Content Credentials on all AI-generated images and videos uploaded to LinkedIn's feed.

Responsible AI transparency | Learn

How we learn, evolve, and grow

How we learn, evolve, and grow in our responsible AI work

From the beginning, Microsoft has committed to scaling our responsible AI program to meet the growing demand for this technology. For us, this means investing in research, working across sectors to advance effective global governance of AI, and tuning into a wide range of perspectives.

Investments in research

Throughout 2024, Microsoft researchers collaborated closely with our policy and engineering teams to push the frontiers of how we map, measure, and manage AI risks.

 




Advancing AI adoption through good governance

We are working with governments around the world to build globally coherent governance frameworks that enable organizations of all kinds to innovate with AI.

 




Tuning in to multistakeholder input

Harnessing the expertise of a wide range of stakeholders is essential to effective AI risk management. We actively seek out underrepresented voices for input on how our AI systems can be safer and more reliable.

 




Over 300 papers


The Accelerating Foundation Models Research (AFMR) community, founded by Microsoft Research, has published over 300 papers co-authored by computer scientists and researchers outside computer science, and supports researchers at more than 123 institutions in 19 countries.


Case study


AILuminate from MLCommons

Advancing AI governance requires standardizing methods for evaluating AI risks. To support this effort, MLCommons developed a new AI safety benchmark called AILuminate, which offers a scientific, independent analysis of large language model risk.