Microsoft, the technology giant, has recently released its inaugural Responsible AI Transparency Report for the year 2024. The report highlights Microsoft's efforts to build and deploy AI responsibly and transparently. According to the report, Microsoft grew its responsible AI team from 350 to over 400 people in the second half of last year.
One of the significant developments in this area was the release of Python Risk Identification Tool (PyRIT) for generative AI in February. This tool allows security professionals and machine learning engineers to identify risks in their generative AI products. Microsoft also expanded its set of generative AI evaluation tools, which enable customers to evaluate their models for basic quality metrics and safety risks.
However, despite these efforts, Microsoft's responsible AI team has faced several challenges. In March 2023, the Copilot AI chatbot generated inappropriate responses when manipulated by a user. The Bing image generator also allowed users to generate images of popular characters flying planes into the Twin Towers last October.
Microsoft's Natasha Crampton, who leads responsible AI efforts at Microsoft, expressed concern about the impact of chatbots on the open web. She emphasized that search engines citing and linking to websites is part of the core bargain of search.
The report covers various aspects of Microsoft's responsible AI practices, including safely deploying AI products, measuring and mapping risks throughout development cycles, and providing tools for Azure AI customers to evaluate their models. It also discusses Microsoft's red-teaming efforts to identify vulnerabilities in its generative AI applications.
Microsoft is committed to sharing its learnings around responsible AI practices with the public and engaging in a robust dialogue around these issues. The company will continue working on improving its responsible AI systems and building on the progress made so far.