Microsoft AI Red Team building future of safer AI

Date: 08-Aug-2023



An essential part of shipping software securely is red teaming: the practice of emulating real-world adversaries and their tools, tactics, and procedures to identify risks, uncover blind spots, validate assumptions, and improve the overall security posture of systems. Microsoft has a rich history of red teaming emerging technology with the goal of proactively identifying failures. As AI systems became more prevalent, Microsoft established the AI Red Team in 2018: a group of interdisciplinary experts dedicated to thinking like attackers and probing AI systems for failures. We're sharing best practices from our team so others can benefit from Microsoft's learnings.