This article is by Colin Priest and originally appeared on the DataRobot Trusted AI Blog here: https://blog.datarobot.com/how-to-understand-a-datarobot-model-ebook-1
As more companies rely on AI, people are questioning whether AI can be trusted. Business reputations are damaged when inscrutable black-box AI systems make mistakes or produce biased decisions. To avoid these issues, organizations are adopting best practices in AI governance to ensure that AIs follow business rules and make sensible, trustworthy decisions. Model interpretability is about ensuring that humans can easily understand models and how their decisions are made, because trust in AI can ultimately be achieved only when people can align AI behavior with their organization’s business rules, goals, and values.
Read the eBook, How to Understand a DataRobot Model, to learn the ins and outs of a DataRobot model.
For more on this topic, check out the eight-part blog series, "How to Understand a DataRobot Model":
- How to Understand a DataRobot Model [Part 1]
- How to Understand a DataRobot Model: Comparing Models for Accuracy [Part 2]
- How to Understand a DataRobot Model: Drilling Down into Model Accuracy [Part 3]
- How to Understand a DataRobot Model: Quickly Find What’s Important in Your Data [Part 4]
- How to Understand a DataRobot Model: See Patterns The Model Found in Your Data [Part 5]
- How to Understand a DataRobot Model: When You Absolutely Must Have a Formula [Part 6]
- How to Understand a DataRobot Model: Unlocking How a Model Was Made [Part 7]
- How to Understand a DataRobot Model: Understanding Why a Prediction Has Its Value [Part 8]