Session / On-Demand

"It’s Not Fair!": Detecting Algorithmic Bias with Open-Source Tools

Tuesday, June 22
1:30pm - 1:55pm MDT

The harm that misuse of AI/ML can cause is well documented, from ProPublica's 2016 recidivism investigation to Joy Buolamwini's more recent discovery of bias in facial recognition classifiers.

The need for tools to use AI/ML ethically is concentrated in two areas: transparency and fairness. Transparency means knowing why an ML system reached the conclusion it did, which is essential if we are to identify bias; in some forms of ML, this is difficult. We'll cover three tools that assist with transparency: LIME, SHAP, and Google's What-If Tool. We'll highlight where each tool performs well and where it falls short, and recommend how to use them together where appropriate.
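To give a flavor of what these explainers produce, here is a minimal sketch (ours, not session material) that runs SHAP and LIME against a stand-in scikit-learn model; the dataset and model choices are illustrative assumptions.

```python
# Sketch: local explanations with SHAP and LIME on a toy classifier.
# Assumes the shap, lime, and scikit-learn packages are installed.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# SHAP: additive per-feature attributions for a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:1])

# LIME: a local linear approximation around the same instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=data.target_names, mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top features driving this one prediction
```

Both tools answer the same question ("why this prediction?") with different machinery, which is why comparing their output on the same instance is instructive.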

Once transparency is established, we'll pause to evaluate potential sources of bias that would affect the fairness of a particular algorithm. Here the range of available tools is broad. We'll start with an explanation of bias metrics, covering the roles that true/false positives and true/false negatives play in calculating various accuracy and fairness metrics. With the basics of fairness established, we'll run a set of auditing tools against a few publicly available sample ML implementations. Tools in this review will include Aequitas, AIF360, Audit-AI, FairML, Fairness Comparison, Fairness Measures, FairTest, Themis, and Themis-ML. We'll compare these tools through live demonstrations, recommending when to use each and profiling their strengths and weaknesses.
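As a preview of the metrics discussion, this short sketch (our own, on synthetic data) shows how the confusion-matrix counts combine into the per-group rates that fairness definitions such as equalized odds compare:

```python
# Sketch: per-group false positive and false negative rates from raw counts.
import numpy as np

def group_rates(y_true, y_pred, mask):
    """FPR and FNR for the subpopulation selected by mask."""
    t, p = y_true[mask], y_pred[mask]
    tp = np.sum((p == 1) & (t == 1))
    fp = np.sum((p == 1) & (t == 0))
    fn = np.sum((p == 0) & (t == 1))
    tn = np.sum((p == 0) & (t == 0))
    return fp / (fp + tn), fn / (fn + tp)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)   # synthetic ground truth
y_pred = rng.integers(0, 2, 1000)   # synthetic model output
attr = rng.integers(0, 2, 1000)     # a binary protected attribute

for g in (0, 1):
    fpr, fnr = group_rates(y_true, y_pred, attr == g)
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
# Equalized odds asks these rates to match across groups; a large gap is
# exactly the kind of disparity the COMPAS analysis surfaced.
```

The audit libraries wrap this bookkeeping in a friendlier API. As one hedged example, here is roughly how AIF360 reports two common group-fairness metrics; the toy DataFrame and column names are our assumptions, not session material:

```python
# Sketch: statistical parity difference and disparate impact with AIF360.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "feat":  [0.1, 0.4, 0.35, 0.8, 0.7, 0.2],
    "prot":  [0, 0, 0, 1, 1, 1],     # protected attribute (1 = privileged)
    "label": [0, 1, 0, 1, 1, 1],     # favorable outcome = 1
})
ds = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["prot"])
metric = BinaryLabelDatasetMetric(
    ds, privileged_groups=[{"prot": 1}], unprivileged_groups=[{"prot": 0}])
print("statistical parity difference:", metric.statistical_parity_difference())
print("disparate impact:", metric.disparate_impact())
```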
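The other libraries in the list expose similar metrics with different trade-offs in scope and ergonomics, which is what the side-by-side demonstrations in the session are meant to surface.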