Microsoft Toolkits Support Differential Privacy, Algorithmic Fairness, and Transparency

Microsoft is releasing three new toolkits to support data privacy and ethical AI as part of its initiative for more explainable, secure, and fair artificial intelligence systems. The WhiteNoise toolkit supports differential privacy. The Fairlearn toolkit assesses the fairness of AI systems and helps mitigate observed unfairness in algorithms. The InterpretML toolkit aims to help explain the reasoning behind models visually. The toolkits are available through Microsoft’s Azure Machine Learning and as open source on GitHub.
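As one illustration of what these toolkits offer, the minimal sketch below uses Fairlearn's MetricFrame to compare a model's accuracy across groups defined by a sensitive feature. The data, group labels, and choice of metric are hypothetical and not drawn from Microsoft's documentation; the sketch only shows the general pattern of a per-group fairness assessment.

```python
# A minimal sketch of a per-group fairness check with Fairlearn.
# The labels, predictions, and sensitive feature below are hypothetical.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

# Hypothetical ground truth, model predictions, and a sensitive feature
# (e.g., a demographic group) for eight individuals.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

# MetricFrame computes the metric overall and broken out by group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(mf.overall)       # accuracy over the whole dataset
print(mf.by_group)      # accuracy for group A vs. group B
print(mf.difference())  # largest gap between groups, a simple unfairness signal
```

A large value from difference() flags a disparity worth investigating; Fairlearn also provides mitigation algorithms for reducing such gaps, which are beyond the scope of this sketch.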

As we continue to work on making our use of data more respectful of individual privacy rights and compliant with emerging standards for the ethical use of data, tools like these address the ongoing challenges we face. Incorporating these and similar approaches will strengthen our efforts to gain support both for gathering data and for building acceptance of AI solutions.
