This article was written by Natalie Bucklin and originally appeared on the DataRobot Blog here: https://www.datarobot.com/blog/how-to-build-and-govern-trusted-ai-systems-technology/
This is the final post in a three-part series that describes what is needed for companies to properly govern and ultimately trust their AI systems. This article discusses the technologies DataRobot uses to help ensure trust in the AI systems built on our platform. We'll focus on evaluating a model for biased behavior, which can occur during the training process or after the model has been deployed in a production environment.
A model is biased when it predicts systematically different outcomes for different groups defined by a feature in the training dataset. We refer to the features we want to examine for biased behavior as protected features, because they often contain sensitive characteristics about individuals, such as race or gender. As with model accuracy, there are many metrics one can use to measure bias. These metrics can be grouped into two categories: bias by representation and bias by error. Bias by representation examines whether the outcomes the model predicts vary across the classes of a protected feature. For example, do different percentages of men and women receive the positive prediction? Bias by error examines whether the model's error rates differ across the classes of a protected feature. For example, does the false positive rate differ between white and black individuals?
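As a concrete illustration, the minimal sketch below computes one metric from each category on a small hypothetical dataset: the positive-prediction rate per group (bias by representation) and the false positive rate per group (bias by error). The column names and data are assumptions for illustration, not DataRobot's implementation.

```python
import pandas as pd

# Hypothetical scored dataset: actuals, model predictions, and a protected feature.
df = pd.DataFrame({
    "gender":     ["M", "F", "M", "F", "M", "F", "M", "F"],
    "actual":     [1, 0, 1, 1, 0, 0, 1, 0],
    "prediction": [1, 0, 1, 0, 1, 0, 1, 1],
})

# Bias by representation: does the rate of positive predictions differ by group?
positive_rate = df.groupby("gender")["prediction"].mean()
print("Positive prediction rate per group:\n", positive_rate)

# Bias by error: does the false positive rate differ by group?
# (Among true negatives, how often does each group receive a positive prediction?)
negatives = df[df["actual"] == 0]
false_positive_rate = negatives.groupby("gender")["prediction"].mean()
print("False positive rate per group:\n", false_positive_rate)
```

Large gaps in either set of rates between groups are the kind of signal a bias check is designed to surface.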
DataRobot's Bias and Fairness tool allows users to test whether their models are biased and diagnose the root causes of that bias. The image below shows the Per-Class Bias insight with the Proportional Parity bias metric selected. The chart tells us that individuals over 40 and individuals under 40 receive favorable outcomes at different rates, which indicates the model is biased. DataRobot offers five bias definitions to choose from, aligned with the two categories described above.
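One simplified way to reproduce this kind of per-class check outside the platform is to compare each class's favorable-outcome rate against the most favored class and flag any class that falls below a chosen threshold. The sketch below uses a 0.8 cutoff, mirroring the common four-fifths rule; the function, threshold, and data are assumptions for illustration, not DataRobot's internal logic.

```python
import pandas as pd

def proportional_parity(df, protected, prediction, threshold=0.8):
    """Flag classes whose favorable-prediction rate falls below
    `threshold` times the rate of the most favored class."""
    rates = df.groupby(protected)[prediction].mean()
    parity = rates / rates.max()
    return pd.DataFrame({
        "favorable_rate": rates,
        "parity": parity,
        "below_threshold": parity < threshold,
    })

# Hypothetical scores, mirroring the over-40 / under-40 example.
scores = pd.DataFrame({
    "age_group":  ["under_40"] * 6 + ["over_40"] * 6,
    "prediction": [1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0],
})
print(proportional_parity(scores, "age_group", "prediction"))
```

In this toy data, the over-40 class receives the favorable prediction at well under 80% of the under-40 rate, so it would be flagged.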
After bias has been identified, the next step is to understand why. DataRobot's Cross-Class Data Disparity insight helps us understand differences in the training data that might cause bias. The insight evaluates how the distribution of each feature differs when the dataset is partitioned by two classes of a protected feature. The chart below tells us that the feature Internships has a high degree of disparity between individuals over 40 and under 40. That disparity is caused by individuals under 40 having a higher number of internships.
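To get a feel for what a data-disparity check measures, the sketch below compares the distribution of a single feature between two protected classes using the total variation distance between their normalized histograms. This is a stand-in illustration under assumed data and column names, not DataRobot's Cross-Class Data Disparity calculation.

```python
import numpy as np
import pandas as pd

def feature_disparity(df, feature, protected, class_a, class_b, bins=10):
    """Total variation distance (0 = identical, 1 = disjoint) between
    the two classes' distributions of `feature`."""
    a = df.loc[df[protected] == class_a, feature]
    b = df.loc[df[protected] == class_b, feature]
    edges = np.histogram_bin_edges(df[feature], bins=bins)
    hist_a, _ = np.histogram(a, bins=edges)
    hist_b, _ = np.histogram(b, bins=edges)
    p = hist_a / hist_a.sum()
    q = hist_b / hist_b.sum()
    return 0.5 * np.abs(p - q).sum()

# Hypothetical applicant data: younger applicants tend to report more internships.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "age_group":   ["under_40"] * 200 + ["over_40"] * 200,
    "Internships": np.concatenate([rng.poisson(3, 200), rng.poisson(1, 200)]),
})
print(feature_disparity(data, "Internships", "age_group", "under_40", "over_40"))
```

A high disparity score for a feature like Internships points to a difference in the underlying data that the model may be learning from, which is exactly the kind of root cause this insight is meant to surface.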
Understanding bias plays an important role in trusting AI systems, but it’s not the only part. There are many other evaluations that can be performed to further trust and transparency. DataRobot offers a wide range of insights to help facilitate trust, including tools to evaluate performance, understand the effect of features, and explain why a model made a certain prediction. Together, these technologies help ensure that models are explainable and trusted.