Ethics and Bias in AI 🧑‍🤝‍🧑

We hope that you found the ImageNet Roulette case study interesting and learned what can go wrong with AI models in general. In this chapter, we will go through yet another example of a powerful technology that has cool applications but can also raise ethical concerns if left unchecked.

Before we take a look at another example of a powerful technology, let's step back and reflect on some questions. In general, is technology good or bad? Is electricity good or bad? Is the internet safe or harmful? Keep these questions in mind as we begin our journey.

Deepfakes 🎥

Imagine you are a recent graduate who wants to learn about deep learning. You enroll in MIT's course "Introduction to Deep Learning (MIT 6.S191)". To make things more interesting, the course team released a really cool video showcasing things that can be done with deep learning. Check the video here:

An introduction to the course MIT 6.S191, where a deepfake was used to give the impression of a welcome by Barack Obama.

Yup, the introduction session was made to give the impression that the students are being welcomed by none other than Barack Obama himself. A really cool application: a course on deep learning with an introduction curated to showcase one of the use cases of deep generative models. For first-timers, this would be really engaging, sparking interest in the technology and making everyone want to try it out. After all, with a decent GPU you can make such videos and images within a few minutes and start posting memes and posts around them.

Let's see another example of this technology, but with different after-effects. Imagine the same kind of deepfake of an influential political leader or actor released during an election or a war. Such a fake video can be used to spread hatred and misinformation, marginalizing different groups of people. Even though the person depicted never spread the misinformation, the video itself can cause massive outrage. This can be horrifying. The main problem is that once misinformation spreads, the harm is already done and people are divided, even if it later becomes clear that the video was manipulated. The harm can only be avoided if the manipulated video is never made public in the first place. This makes the technology seem dangerous, but is the technology itself safe or harmful? Technology itself is never good or bad; its usage (who uses it and for what purpose) can have good or bad effects.

Deepfakes are synthetic media created with the help of deep generative computer vision (CV) models. With them, you can swap one person's face into another person's image and also generate entire videos. Audio deepfakes are a complementary technology that mimics the exact voice of the subject under consideration. This was just one example of how deepfakes can cause havoc; in reality, the implications are far more dangerous, as they can have a lifelong impact on the lives of the victims.

What is Ethics and Bias in AI?

From the previous example, a few aspects of this technology to keep in mind would be:

💡Check out The Consentful Tech Project here. This project raises awareness, develops strategies, and shares skills to help people build and use technology consentfully.

Let us now formalize some definitions based on these examples. So what are ethics and bias? Ethics can be defined simply as a set of moral principles that help us distinguish between right and wrong. AI ethics, then, is the set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI. AI ethics is a multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes. The field involves a variety of stakeholders:

Bias in AI refers to biases in the output of algorithms, which can arise from assumptions made during model development or from the training data itself. These assumptions often stem from the inherent biases of the humans responsible for the development; as a result, AI models and algorithms end up reflecting those biases. Such biases can undermine ethical development and principles and therefore need attention and mitigation strategies. We will cover biases in more detail, how they creep into different AI models, their types, evaluation, and mitigation (with a focus on CV models), in the upcoming chapters of this unit. To understand more about ethics in AI, let us look closely at the principles for ethical AI.
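One concrete way bias creeps in through training data is representation bias: if one demographic group dominates a dataset, a model trained on it will tend to perform worse on the underrepresented groups. As a minimal sketch (not from the course, using a hypothetical toy dataset), here is how you might audit the group distribution of a labeled dataset before training:

```python
from collections import Counter

# Hypothetical toy metadata: (image_id, demographic_group) pairs.
# In a real CV dataset this would come from annotation files.
samples = [
    ("img_001", "group_a"), ("img_002", "group_a"), ("img_003", "group_a"),
    ("img_004", "group_a"), ("img_005", "group_b"), ("img_006", "group_a"),
    ("img_007", "group_a"), ("img_008", "group_b"),
]

def group_distribution(samples):
    """Return the fraction of samples belonging to each group."""
    counts = Counter(group for _, group in samples)
    total = len(samples)
    return {group: count / total for group, count in counts.items()}

dist = group_distribution(samples)
for group, fraction in sorted(dist.items()):
    print(f"{group}: {fraction:.0%}")
# → group_a: 75%
#   group_b: 25%
# group_a dominates the data, so a model trained on it will likely
# generalize worse to group_b — an example of representation bias.
```

This is only a first-pass check; later chapters discuss bias types, evaluation, and mitigation in more depth.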

Ethical AI Principles 🤗 🌎

Asimov’s Three Laws of Robotics 🤖

There have been many historic works that reflect on the development of ethics for technology. The earliest can be traced back to the famous science fiction writer Isaac Asimov, who came up with the Three Laws of Robotics with the potential risks of autonomous AI agents in mind. The laws are:

Asilomar AI Principles 🧑🏻‍⚖️🧑🏻‍🎓🧑🏻‍💻

Asimov’s laws of robotics were among the earliest works on ethics for technology. In 2017, a conference was held at the Asilomar Conference Grounds, California, to discuss the impacts of AI on society. Its outcome was a set of guidelines for the responsible development of AI, comprising 23 principles, which were signed by around 5,000 individuals, including 844 AI and robotics researchers.

The 23 Asilomar AI Principles for responsible AI development.

💡You can check the full list of the 23 Asilomar AI Principles and the signatories here.

These principles are a guide for ethical development and implementation of AI models in general. Let’s now look into a recent work on ethical AI guidelines by UNESCO.

UNESCO’s report: Recommendation on the Ethics of Artificial Intelligence 🧑🏼‍🤝‍🧑🏼🌐

UNESCO produced a global standard on AI ethics in the form of a report named “Recommendation on the Ethics of Artificial Intelligence”, which was adopted by 193 member countries in November 2021. Previous guidelines on ethical AI lacked actionable policy, but UNESCO’s report allows policymakers to translate the core principles into action across domains like data governance, environment, gender, and health. The four core values of the recommendation, which lay the foundation for AI systems, are:

The 11 key policy areas for responsible development in AI.

The ten core principles that lay out UNESCO’s human-rights-centered approach to the ethics of AI are given below:

💡To read the complete report by UNESCO, “Recommendation on the Ethics of Artificial Intelligence”, you can visit here.

As we close the unit, we will also look into Hugging Face’s efforts to ensure ethical AI practices. In the next chapter, we will learn more about biases, their types, and how they creep into different AI models.
