Open-Source Tools for Responsible AI
Tuesday, June 20, 2023
While there is much talk about the need to train AI models that are safe, robust, unbiased, and equitable, few tools have been available to data scientists for meeting these goals. This session describes new open-source libraries and tools that address three aspects of Responsible AI. The first is automatically measuring a model's bias toward a specific gender, age group, or ethnicity. The second is detecting labeling errors: mistakes, noise, or intentional errors in the training data. The third is measuring how fragile a model is to minor changes in the data or questions fed to it. Best practices and tools for automatically correcting some of these issues will be presented as well, along with real-world examples of projects that have put these tools to use, focused on the medical domain, where the human cost of unsafe models can be unacceptably high.
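To give a flavor of the first aspect, measuring bias toward a demographic group: one common starting point is to compare a model's accuracy across groups and report the largest gap. The sketch below is a minimal, hypothetical illustration of that idea (the function name, toy labels, and group tags are all invented for this example), not the API of any specific library:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy and the largest gap between any two groups."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy data: the model is noticeably more accurate for group "F"
# than for group "M" -- a disparity worth investigating.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 1, 1, 0]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]

acc, gap = accuracy_by_group(y_true, y_pred, groups)
print(acc)  # {'F': 1.0, 'M': 0.5}
print(gap)  # 0.5
```

Real bias-testing libraries go much further (statistical significance, many fairness metrics, automatic test generation), but the accuracy gap above is the basic quantity most of them report.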
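For the second aspect, detecting labeling errors: a simplified version of the confident-learning idea (popularized by open-source libraries such as cleanlab) is to flag examples where the model assigns low probability to the label the annotator gave. The function and threshold below are a hypothetical sketch of that filter, not the actual library API:

```python
def flag_label_issues(labels, pred_probs, threshold=0.5):
    """Flag indices whose model probability for the *given* label is low.

    A simplified self-confidence filter: if the model barely believes
    the annotated label, the example deserves human review.
    """
    return [i for i, (label, probs) in enumerate(zip(labels, pred_probs))
            if probs[label] < threshold]

# Toy data: example 2 is labeled class 0 but the model is confident
# it is class 1, and example 3 is a borderline case -- both flagged.
labels = [0, 1, 0, 1]
pred_probs = [
    [0.9, 0.1],
    [0.2, 0.8],
    [0.1, 0.9],  # labeled 0, model strongly predicts 1
    [0.6, 0.4],  # labeled 1, model leans toward 0
]
print(flag_label_issues(labels, pred_probs))  # [2, 3]
```

In practice the probabilities should come from out-of-sample (cross-validated) predictions, so the model cannot simply memorize the noisy labels it is being asked to audit.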
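The third aspect, fragility, is often quantified by perturbing inputs in ways that should not matter (typos, casing, punctuation) and counting how often the model's prediction flips. The toy keyword classifier and perturbation below are invented purely to make the metric concrete; a real robustness tool would apply many perturbation types to a real model:

```python
def flip_rate(model, inputs, perturb):
    """Fraction of inputs whose prediction changes under a small
    perturbation -- a rough fragility score (0 = fully stable)."""
    flips = sum(model(x) != model(perturb(x)) for x in inputs)
    return flips / len(inputs)

# Hypothetical stand-in for a real classifier: routes a question to
# the "medical" queue if it contains certain keywords.
def toy_model(text):
    keywords = {"dose", "symptom", "diagnosis"}
    return "medical" if keywords & set(text.lower().split()) else "other"

# Minor surface change: append a question mark. The naive tokenizer
# then fails to match "dose?", so the first prediction flips.
inputs = ["what is the right dose", "how are you today"]
perturb = lambda text: text + "?"
print(flip_rate(toy_model, inputs, perturb))  # 0.5
```

A flip rate this high on a trivial perturbation is exactly the kind of fragility that matters in the medical domain, where a question's phrasing should not change its routing.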