Keynote: Responsible AI: A Practical Guide
One of the biggest challenges facing AI today is that, despite volumes of guidance on how AI should be safe, ethical, and responsible, many organizations lack established standards and practices to enforce responsible AI.
Responsible AI requires firm model development governance standards — covering which algorithms are allowed, which processes are followed, what testing is completed, and how the work is audited — to ensure accountability for meeting responsible AI standards.
In this keynote address, FICO CAO Scott Zoldi will discuss three pillars of Responsible AI — explainable AI, ethical AI, and auditable AI — in the context of life-altering decisions derived from AI models. Establishing a corporate model development standard and enforcing adherence through auditable AI not only satisfies regulatory rules and guidance but also builds customer trust and ensures safe usage of AI.
Check out more sessions on Responsible ML:
Bias vs. Unfair Discrimination: The difference is more than perception
The terms “bias” and “unfair” have become almost synonymous, but they are not always the same thing. This session opens the discussion on the ethics of fair vs. unfair discrimination, which is the basis for pricing insurance. We have a principled obligation to understand the loss per exposure, which includes understanding when data is or is not representative of that exposure. We’ll walk through a couple of cases to spark that conversation.
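The "loss per exposure" idea the abstract refers to can be sketched as a pure premium calculation. This is a hypothetical illustration with invented numbers, not material from the session:

```python
# Hypothetical illustration: pure premium (expected loss per unit of
# exposure, e.g. per car-year). All figures below are made up.
def pure_premium(total_losses: float, total_exposures: float) -> float:
    """Average loss cost per exposure unit for a cohort."""
    if total_exposures <= 0:
        raise ValueError("exposures must be positive")
    return total_losses / total_exposures

# Two cohorts of equal size can carry very different underlying risk:
cohort_a = pure_premium(total_losses=500_000, total_exposures=10_000)  # 50.0
cohort_b = pure_premium(total_losses=900_000, total_exposures=10_000)  # 90.0
```

Whether pricing that difference is fair or unfair discrimination depends, as the session argues, on whether the underlying data actually represents the exposure.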
Statistical Methods for Imputing Race & Ethnicity
Events in recent years have led to a fresh wave of discussions about racial justice and equality in the United States. This has led to an increased focus in the insurance industry and regulatory community on bias and equity. However, a lack of consistent data collection is often a significant barrier to the study of disproportionate impacts and equity across race/ethnicity cohorts in various contexts.
In this presentation, we describe a range of techniques for developing probabilistic estimates or predictions of individual race and/or ethnicity. We will show how to apply some of these methods to a simulated dataset to illustrate how to use them in practice. In addition, we will share results from a case study that assesses the predictive performance of these probabilistic estimates using an actual dataset from the insurance industry that has self-reported race/ethnicity recorded.
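One widely used family of such techniques combines surname and geography in a Bayesian update (BISG-style imputation). The sketch below is illustrative only — the probability tables are invented, whereas real implementations draw them from Census surname and block-group data:

```python
# Sketch of BISG-style (Bayesian Improved Surname Geocoding) imputation.
# All probabilities are invented for illustration.

SURNAME_PRIORS = {  # P(race/ethnicity | surname)
    "garcia": {"hispanic": 0.90, "white": 0.06, "black": 0.04},
    "smith":  {"hispanic": 0.02, "white": 0.73, "black": 0.25},
}

GEO_LIKELIHOOD = {  # P(geography | race/ethnicity)
    "tract_1": {"hispanic": 0.50, "white": 0.30, "black": 0.20},
    "tract_2": {"hispanic": 0.10, "white": 0.60, "black": 0.30},
}

def impute_race(surname: str, tract: str) -> dict:
    """Posterior P(race | surname, tract) ∝ P(race | surname) * P(tract | race)."""
    prior = SURNAME_PRIORS[surname.lower()]
    likelihood = GEO_LIKELIHOOD[tract]
    unnormalized = {r: prior[r] * likelihood[r] for r in prior}
    total = sum(unnormalized.values())
    return {r: p / total for r, p in unnormalized.items()}

probs = impute_race("Garcia", "tract_2")  # a probability distribution, not a label
```

The output is a probability distribution across cohorts rather than a hard label, which is what allows the disproportionate-impact analyses described above to proceed without self-reported data.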
Explainable AI Component of Responsible and Ethical AI Development
Akshata Moharir, Lead & Data Scientist, Microsoft
As artificial intelligence (AI) becomes increasingly integrated into our lives, it is important to ensure that it is developed and deployed in an ethical and responsible manner. One key component of this is explainable AI, which provides transparency and accountability by enabling users to understand how AI systems make decisions and identify potential biases or errors. This can help mitigate risks and build trust in AI, leading to more ethical and responsible use. In this presentation, we explore the importance of explainable AI and its role in promoting responsible and ethical AI development.
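One simple, model-agnostic explainability technique is permutation importance: shuffle a single feature and measure how much model accuracy drops. The toy hand-written "model" below keeps the sketch self-contained; it is an illustration of the general idea, not a method from the talk:

```python
# Minimal sketch of permutation importance: shuffle one feature and
# measure the drop in accuracy. Toy model and data, for illustration only.
import random

def model(row):
    # Toy scoring rule that relies entirely on feature 0 and ignores feature 1.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    rng = random.Random(seed)
    baseline = accuracy(rows, labels)
    shuffled = [r[feature_idx] for r in rows]
    rng.shuffle(shuffled)
    permuted = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                for r, v in zip(rows, shuffled)]
    return baseline - accuracy(permuted, labels)

rows = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.3), (0.1, 0.9)]
labels = [1, 0, 1, 0]
# Shuffling feature 0 can hurt accuracy; shuffling feature 1 never does,
# revealing which feature the model's decisions actually depend on.
```

A large importance score for an unexpected feature (for example, a proxy for a protected attribute) is exactly the kind of signal that lets users surface potential bias, as the abstract describes.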
Free book: Event registrants will receive a copy of MLW Founder Eric Siegel's new book, The AI Playbook, which presents a six-step practice called bizML that ushers ML projects from conception to deployment.
Tell your colleagues about these conferences alongside MLW
Explore the array of conferences alongside Machine Learning Week. Inform your colleagues about the tailored events that cater to their interests.
👉 Click here to learn more