An Overview of AI Ethics and Trustworthy AI

In Greek myth, Pandora was a woman fashioned out of the earth by a Greek god at the request of Zeus. Pandora was bestowed with powerful gifts and attributes; her name means “all-giving” or “all-gifted.” She brought with her a jar (later mistranslated as a “box”) that contained “countless plagues,” i.e., all types of evil and misery. Pandora opened the jar, and its contents scattered, leaving the earth and sea “full of evils.”[1]

The field of AI ethics has emerged in response to growing concern about the impact of AI and the acknowledgment that while it can deliver great gifts, it could also represent a modern “Pandora’s box.” AI ethics concerns the “psychological, social, and political impact of AI.” Ethical AI aims to use AI in a lawful manner that adheres to a set of ethical principles: it respects human dignity and autonomy, prevents harm, enables fairness, and is transparent and explainable. It must also meet a robust set of technical and social requirements that help ensure AI performs safely, securely, and reliably, without unintended adverse impacts. The European Commission says that if an AI system meets this “overarching value framework” and takes a “human-centric approach” (i.e., it is designed, developed, and used for the betterment of humankind), then it can be considered “trustworthy.”[2]

Image: “Pandora’s Box and Artificial Intelligence,” generated by DALL·E

Lawful AI

The first core component of trustworthy AI is ensuring that AI is built and implemented lawfully. In the US, for example, this means adhering to anti-discrimination laws and regulations in areas such as housing and employment.

Ethical AI Principles

Laws do not always keep pace with the rapid development of technology, so the fact that an AI system is legal does not mean it meets ethical norms or standards. The European Commission defines the core ethical AI principles as respect for human autonomy, harm prevention, fairness, and explicability (i.e., an AI system is transparent and explainable).[3] Let’s look at each of these core foundations of trustworthy AI.

First and foremost, respect for human autonomy means that AI systems must be human-centric: they should be based on a “commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom.” In other words, AI systems should be built for the betterment of humanity, advancing human well-being and dignity and enabling humans to flourish. They must also respect human freedom and autonomy: humans should maintain self-determination over themselves, and AI systems should not treat humans merely as objects to be “sifted, sorted, scored, herded, conditioned, or manipulated.”[4]

The principle of harm prevention means that AI systems should not harm human dignity or mental and physical integrity. For example, AI systems should not cause or exacerbate asymmetries of power between employers and employees, businesses and consumers, or governments and their citizens. In addition, vulnerable populations such as children and the elderly need specific protection.[5]

The design and implementation of AI should also be fair, treating groups equally and free from unfair bias. That said, fairness is difficult to define and measure; as mentioned earlier, fairness is a human determination, not a technological or mathematical one.[6] For example, a moving company that requires a lot of heavy lifting from its employees might hire men at a greater rate than women. Because men are typically stronger than women, there is potentially a job-relevant justification for the differential hiring rates.[7] The broader point is that there should be human involvement in the more sensitive automated decisions.
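To make the measurement problem concrete, here is a minimal Python sketch of one common quantitative proxy: comparing selection rates across groups, as in the “four-fifths rule” used in US employment contexts. The hiring numbers are hypothetical, and as noted above, the metric is only a screening signal; the final fairness judgment remains a human one.

```python
# A minimal sketch, using hypothetical hiring numbers, of one common
# fairness proxy: comparing selection rates across groups
# (the "four-fifths rule" used in US employment contexts).

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants from a group who were hired."""
    return hired / applicants

male_rate = selection_rate(hired=40, applicants=100)    # hypothetical
female_rate = selection_rate(hired=25, applicants=100)  # hypothetical

# Disparate impact ratio: the lower selection rate divided by the higher.
# A ratio under 0.8 is commonly treated as a red flag worth investigating.
ratio = min(male_rate, female_rate) / max(male_rate, female_rate)

print(f"male rate = {male_rate:.2f}, female rate = {female_rate:.2f}")
print(f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Flag for human review: possible adverse impact, "
          "or a documented job-related justification (e.g., heavy lifting).")
```

A ratio below 0.8 does not settle the question either way; it simply tells a human reviewer where to look, which is exactly the human involvement the principle calls for.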

And finally, another core AI ethics principle is transparency and explainability. I talked earlier about “black box” algorithms that caused frustration when neither internal nor external people could figure out why a business’s AI-based automated decision-making system made the decisions it did. This principle requires that the capabilities and purposes of AI systems be clearly communicated and that their decisions and outputs be explainable to those affected. Otherwise, it is challenging to question and contest a decision. Transparency and explainability (also referred to together as “explicability”) are key to building trust in AI.[8]

Robust AI Requirements

Different individuals and organizations have identified key requirements for AI to be trustworthy that flow from the above four principles. I am going to highlight seven of them.[9]

The first requirement is human well-being, which is based on the ethical principle of respect for human autonomy. It involves assessing whether an AI system diminishes the deliberative capacity of humans, e.g., through a lack of human oversight. It also includes informed consent: the AI system should allow humans to withdraw their consent.

The second is safety. AI systems must be resilient to attacks from hackers. In addition, AI systems should be designed so they cannot be misappropriated for a different use (e.g., delivery drones being weaponized). AI systems should also have a “fallback plan”: if the system goes awry, it should be able to self-monitor and correct or stop. Finally, AI should protect the physical and mental integrity of humans.
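As a sketch of what a “fallback plan” might look like in practice, the snippet below wraps a hypothetical model so that low-confidence outputs are escalated to a human instead of being acted on automatically. The model, threshold, and labels are all illustrative assumptions, not a prescribed design.

```python
# A minimal sketch of a "fallback plan": route low-confidence predictions
# to human review rather than acting on them automatically.
from typing import Callable

def predict_with_fallback(
    model: Callable[[list[float]], tuple[str, float]],
    features: list[float],
    min_confidence: float = 0.9,  # hypothetical threshold
) -> str:
    label, confidence = model(features)
    if confidence < min_confidence:
        # Self-monitoring: the system stops and escalates instead of guessing.
        return "ESCALATE_TO_HUMAN"
    return label

# Hypothetical stand-in model returning (label, confidence).
def toy_model(features: list[float]) -> tuple[str, float]:
    score = sum(features)
    return ("approve" if score > 0 else "deny", min(abs(score), 1.0))

print(predict_with_fallback(toy_model, [0.2, 0.1]))  # low confidence -> escalate
print(predict_with_fallback(toy_model, [0.8, 0.7]))  # confident -> approve
```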

The third is privacy. People should know how their data is used and consent to that use. They should have the right to access, correct, or delete their data. Personal information should be protected and not sold or shared without consent.
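As an illustration, here is a minimal sketch of the access, correct, and delete rights described above, using a simple in-memory store. Everything here, including the sample record, is hypothetical; a production system would also need authentication, audit logging, and propagation of deletions to downstream processors.

```python
# A minimal sketch of data-subject rights: access, correct, delete.
from typing import Any

# Hypothetical in-memory store keyed by user ID.
user_data: dict[str, dict[str, Any]] = {
    "user123": {"email": "pat@example.com", "city": "Boston"},
}

def access(user_id: str) -> dict[str, Any]:
    """Return a copy of everything stored about the user."""
    return dict(user_data.get(user_id, {}))

def correct(user_id: str, field: str, value: Any) -> None:
    """Let the user fix an inaccurate field."""
    user_data.setdefault(user_id, {})[field] = value

def delete(user_id: str) -> None:
    """Erase the user's record entirely."""
    user_data.pop(user_id, None)

print(access("user123"))
correct("user123", "city", "Cambridge")
delete("user123")
print(access("user123"))  # {} -> the record is gone
```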

The fourth is transparency. This requirement is linked to the principle of explicability. There needs to be transparency about how the AI-based system works and how it makes its decisions. The goal is to avoid the “black box” scenario where it is unclear how an AI system made its decision; those decisions should be traceable. Furthermore, for AI systems that mimic humans (e.g., a technical support chat interface), it should be clear that the consumer is interacting with an AI system rather than an actual human.
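One simple way to make a decision traceable is to use an inherently interpretable model, where each feature’s contribution can be read off directly. The sketch below does this with a logistic regression in scikit-learn; the loan-approval features and data are hypothetical, and this is just one of many explainability approaches, not one any guideline mandates.

```python
# A minimal sketch: explaining a single prediction from a linear model.
# The loan-approval feature names and data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_employed"]

# Hypothetical, standardized training data (rows = applicants).
X = np.array([
    [1.2, -0.5, 0.8],
    [-0.7, 1.1, -0.3],
    [0.4, -1.2, 1.5],
    [-1.1, 0.9, -0.9],
    [0.9, -0.3, 0.2],
    [-0.5, 0.7, -1.0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the decision is its
# coefficient times its value, which makes the decision traceable.
applicant = np.array([0.3, 0.8, -0.2])
contributions = model.coef_[0] * applicant
for name, c in zip(feature_names, contributions):
    print(f"{name:15s} contribution to log-odds: {c:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```

An affected applicant can then be told which factors pushed the decision toward approval or denial, which is what makes the decision contestable.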

The fifth is fairness. It requires developers of AI systems to avoid AI bias, which is the unjustified differential treatment of, or outcomes for, different subgroups of people. Not surprisingly, this requirement ties in with the ethical principle of fairness.

The sixth is societal and environmental well-being. This ties in with the principle of preventing harm, not only to humans interacting with the AI system but also to broader society and the sustainability of the environment.

The seventh and last is accountability. This links to the principle of fairness. It requires human oversight of AI, i.e., keeping the “human-in-the-loop,” and establishes that humans have ultimate responsibility for any harms caused by AI. It also requires auditing in the form of algorithmic impact assessments that encompass fairness, explainability, and resilience to failures and attacks.

Many of these requirements have made their way into proposed legislation to regulate AI in Europe and the US. I will look at those proposals in a future blog post.


[1] Wikipedia, “Pandora,” https://en.wikipedia.org/wiki/Pandora.

[2] Emre Kazim and Adriano Koshiyama, “A high-level overview of AI ethics,” Patterns, September 10, 2021. European Commission, “Ethics Guidelines for Trustworthy AI,” April 8, 2019, https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf.

[3] European Commission, “Ethics Guidelines for Trustworthy AI,” April 8, 2019, https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf.

[4] Emre Kazim and Adriano Koshiyama, “A high-level overview of AI ethics,” Patterns, September 10, 2021. European Commission, “Ethics Guidelines for Trustworthy AI,” April 8, 2019.

[5] European Commission, “Ethics Guidelines for Trustworthy AI,” April 8, 2019.

[6] European Commission, “Ethics Guidelines for Trustworthy AI,” April 8, 2019. Nicol Turner Lee et al., “Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms,” The Brookings Institution, May 22, 2019, https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/.

[7] Thanks to Airlie Hilliard with Holistic.AI for this example.

[8] European Commission, “Ethics Guidelines for Trustworthy AI,” April 8, 2019.

[9] The seven requirements I list in this section are an amalgamation of “Ethics Guidelines for Trustworthy AI” and “A high-level overview of AI ethics.” European Commission, “Ethics Guidelines for Trustworthy AI,” April 8, 2019. Emre Kazim and Adriano Koshiyama, “A high-level overview of AI ethics,” Patterns, September 10, 2021.
