This article is by Colin Priest and originally appeared on the DataRobot Blog here: https://blog.datarobot.com/ai-ethics-building-trust-by-following-ethical-practices
As machine learning and artificial intelligence (AI) usher in the Fourth Industrial Revolution, it seems like everyone wants to get in on the action. And who can blame them? AI promises improved accuracy, speed, scalability, personalization, consistency, and clarity in every area of business. With all those benefits, why are some businesses hesitating to move forward?
On the one hand, businesses know that they need to embrace AI innovation to remain competitive. On the other hand, they know that AI can be challenging. Almost everyone has heard news stories of high-profile companies making mistakes with AI, and businesses worry that the same could happen to them, damaging their reputations. In regulated industries, there’s the question of how to explain AI decisions to regulators and customers. Then there’s the challenge of how to engage with staff so that they can embrace organizational change.
How do you manage AI to ensure that it follows your business rules and core values, while reaping the most benefits? It’s all about building trust in AI.
Let’s take a look at the four main principles that govern ethics around AI and how these can help build trust.
- Principle 1: Ethical Purpose
- Principle 2: Fairness
- Principle 3: Disclosure
- Principle 4: Governance
Principle 1: Ethical Purpose
Just like humans, AIs are subject to perverse incentives, perhaps even more so. It stands to reason, then, that you need to choose carefully the tasks, the objectives, and the historical data that you assign to an AI.
When assigning a task to an AI, consider asking questions such as: Does the AI free up your staff to take on more fulfilling human tasks? Does your new AI task improve customer experience? Does it allow you to offer a better product or expand your organization’s capabilities?
In addition, there is more to this than merely considering the impact on your organization’s internal business goals. Consider the negative externalities: the costs suffered by third parties as a result of the AI’s actions. Pay particular attention to situations involving vulnerable groups, such as persons with disabilities, children, and minorities, or situations with asymmetries of power or information.
Principle 2: Fairness
Most countries have laws protecting against some forms of discrimination, covering everything from race and ethnicity to gender, disability, age, and marital status. It goes without saying that companies need to obey the law with regard to protected attributes. But beyond that, it is also good business practice to safeguard certain sensitive attributes, particularly where there is an asymmetry of power or information.
If the historical data contains examples of poor outcomes for disadvantaged groups, then an AI will learn to replicate the decisions that led to those poor outcomes. Bias can also occur when a group is underrepresented in the historical data: if the AI isn’t given enough examples of each type of person, it can’t be expected to learn what to do with each group. In short, the data should reflect the diversity of the population with which the AI will be interacting.
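To make this concrete, here is a minimal sketch of a representation check you might run on training data. The dataset, the `gender` column, and the 30% cutoff are all illustrative assumptions, not universal rules:

```python
import pandas as pd

# Hypothetical training data; in practice, load your own dataset.
train = pd.DataFrame({
    "gender":   ["F", "M", "M", "M", "M", "F", "M", "M", "M", "M"],
    "approved": [1,   0,   1,   1,   0,   0,   1,   1,   1,   0],
})

# Share of each group in the training data.
representation = train["gender"].value_counts(normalize=True)
print(representation)

# Flag groups that fall below a chosen representation threshold.
THRESHOLD = 0.30  # illustrative cutoff, not a universal rule
underrepresented = representation[representation < THRESHOLD]
if not underrepresented.empty:
    print("Underrepresented groups:", list(underrepresented.index))
```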
The good news is that bias is easier to detect and remove in an AI than in a human. Since an AI will behave the same way every time it sees the same data, you can run experiments and diagnostics to discover AI bias.
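One such diagnostic is a disparate-impact check: compare the rate of favorable outcomes across groups in the model’s predictions. The sketch below assumes hypothetical column names and uses the informal "four-fifths rule" (a ratio below roughly 0.8 is often treated as a warning sign), which is a convention rather than a legal test:

```python
import pandas as pd

def disparate_impact(df, group_col, outcome_col, privileged):
    """Ratio of favorable-outcome rates for each group vs. the privileged group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.drop(privileged) / rates[privileged]

# Hypothetical model predictions: one row per applicant.
predictions = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "M"],
    "approved": [0,   1,   0,   1,   1,   0,   1,   1],
})

ratios = disparate_impact(predictions, "gender", "approved", privileged="M")
print(ratios)  # gender F: ~0.42, well below the 0.8 warning level
```

Because the model is deterministic, you can also rerun it on systematically varied inputs, for example the same applicant with only the sensitive attribute changed, and check whether its predictions change.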