AI Ethics: Building Trust by Following Ethical Practices (Part 2)

September 17th, 2019

This article is by Colin Priest and originally appeared on the DataRobot Blog.

In our first blog post on the topic of AI Ethics, we covered the promise that artificial intelligence (AI) holds to improve the speed, accuracy, and operations of businesses across a range of industries. Given that potential, it may seem surprising that businesses hesitate to move forward with AI projects, but fear holds people back: fear of making mistakes that could damage their company’s reputation, or of doing something illegal or unethical.

Many of these pitfalls can be avoided by following the four main principles that govern ethics around AI. In part one, we covered the first two main principles. In this blog, we’ll take a look at principles three and four, Disclosure and Governance.

  1. Ethical Purpose
  2. Fairness
  3. Disclosure
  4. Governance


Principle 3: Disclosure

One of the four fundamental principles of ethics is respect for autonomy. This means respecting the autonomy of other persons and the decisions they make concerning their own lives. Applied to AI ethics, we have a duty to disclose to stakeholders when they are interacting with an AI, so that they can make informed decisions.

In other words, AI systems should not represent themselves as humans to users. Where practical, give users the choice to opt out of interacting with an AI.

Whenever an AI’s decision has a significant impact on people’s lives, it should be possible for them to demand a suitable explanation of the AI’s decision-making process in human-friendly language and at a level tailored to the knowledge and expertise of the person. In some regulatory domains this is a legal requirement, such as the EU’s General Data Protection Regulation (GDPR) “right to explanation” and the “adverse action” disclosure requirements in the Fair Credit Reporting Act (FCRA) in the U.S.
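To make the adverse-action idea concrete, here is a minimal sketch of how “reason codes” might be generated, assuming a simple linear scoring model. The feature names, weights, and values below are hypothetical illustrations, not a real scoring system: the features that pushed an applicant’s score down the most, relative to a baseline, become the plain-language reasons reported back to the person.

```python
def reason_codes(weights, applicant, baseline, top_n=2):
    """Return the features that lowered the score the most,
    relative to a baseline applicant."""
    # Contribution of each feature = weight * deviation from baseline.
    contributions = {
        name: weights[name] * (applicant[name] - baseline[name])
        for name in weights
    }
    # The most negative contributions are the main reasons for a lower score.
    negatives = sorted(
        (item for item in contributions.items() if item[1] < 0),
        key=lambda item: item[1],
    )
    return [name for name, _ in negatives[:top_n]]

# Hypothetical model and applicant data for illustration only.
weights = {"income": 0.5, "debt_ratio": -0.8, "late_payments": -1.2}
baseline = {"income": 50, "debt_ratio": 0.3, "late_payments": 0}
applicant = {"income": 40, "debt_ratio": 0.6, "late_payments": 2}

print(reason_codes(weights, applicant, baseline))
# → ['income', 'late_payments']
```

Real explanation systems are far more sophisticated, but the principle is the same: translate the model’s internal arithmetic into a short, ranked list of human-readable reasons tailored to the decision at hand.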


Principle 4: Governance

An organization’s governance of AI refers to its duty to ensure that its AI systems are secure, reliable, and robust, and that appropriate processes are in place to ensure responsibility and accountability for those systems.

Like any other technology, AI can be used for ethical or unethical purposes, and AI can be secure or dangerous. With the possibility of negative outcomes from AI failures comes the obligation to manage AIs and to apply high standards of governance and risk management.

Humans must be responsible and accountable for the AIs they design and deploy. The comparative advantage of humans over computers in the areas of general knowledge, common sense, context, and ethical values means that the combination of humans plus AIs will deliver better results than AIs on their own.