Ethical AI: Basic readings, schools of thought, an overview

Introduction

Over the last few years, everyone I speak to seems to know about AI. Ethical issues around AI are being actively discussed and debated, not just in professional circles but also around coffee tables and in informal conversations among users.

This post is an attempt to provide an overview of the issues and approaches.

What we have now are essentially three interfaces to, or approaches to, ethics in AI.

Besides the differences in budgets, access to VC funding, and who gets favorably written about in the NYT,1 the main difference between the various factions (strange to see factions in ethics, but that is what we get for trying to pre-print our way into a science) is the temporal profile and the concreteness of the problems they are talking about.

Anyway, my irritations aside, the ~~factions~~ approaches or interfaces are:2

  1. The professional ethics people
  2. The AI risk People
  3. The AI alignment ~~nuts~~ people

Professional AI ethics

The professional ethics people are dealing with the immediate and current harms. They focus on identifying such harms and developing frameworks and knowledge that can be used to improve things now and in the future. They are a lot like the bioethics people.

One group of professional ethics people makes guidelines. Another group fights big tech. Back then, when LLMs were not all that AI was, and people were using regression models to predict recidivism, decide who to hire, and identify people from video surveillance, these people were studying the harms of such systems and talking about what to do about it.3

Overview

Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. In Equity and access in algorithms, mechanisms, and optimization (pp. 1-9).

This paper provides a model for understanding the harms and risks that arise in different parts of the model training and deployment process. This is a good high-level overview.4

Recidivism and algorithmic bias

  1. Algorithm is racist: ProPublica
  2. Algorithm’s feelings are complicated, results have a sensitivity/specificity tradeoff that is poorly studied: Washington Post (a small worked example of this tradeoff follows the list)
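To make that tradeoff concrete, here is a minimal sketch with made-up risk scores and outcomes (not data from either article): moving the threshold on a risk score trades sensitivity (catching people who do reoffend) against specificity (not flagging people who don't).

```python
# Toy illustration of the sensitivity/specificity tradeoff in a risk score.
# The scores and labels below are made up for illustration only.
import numpy as np

scores = np.array([0.1, 0.3, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9])  # model risk scores
reoffended = np.array([0, 0, 1, 0, 1, 0, 1, 1])               # ground-truth outcomes

def sens_spec(threshold):
    predicted_high_risk = scores >= threshold
    tp = np.sum(predicted_high_risk & (reoffended == 1))
    fn = np.sum(~predicted_high_risk & (reoffended == 1))
    tn = np.sum(~predicted_high_risk & (reoffended == 0))
    fp = np.sum(predicted_high_risk & (reoffended == 0))
    return tp / (tp + fn), tn / (tn + fp)

for t in (0.35, 0.65):
    sens, spec = sens_spec(t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")

# A lower threshold catches more true reoffenders (higher sensitivity)
# but flags more people who would not have reoffended (lower specificity).
```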

Linguistically encoded biases:

For many language models, as well as image generation models, King – Man + Woman = Queen, and Man is to Computer Programmer as Woman is to Homemaker. These biases are looked into in the following papers (a small sketch of this vector arithmetic follows the references):

  1. Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183-186. DOI:10.1126/science.aal4230 | Preprint on arXiv
  2. Manzini, T., Lim, Y. C., Tsvetkov, Y., & Black, A. W. (2019). Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. arXiv preprint arXiv:1904.04047. (pdf)
  3. Malvina Nissim, Rik van Noord, and Rob van der Goot. 2020. Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor. Computational Linguistics, 46(2):487–497.
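Here is a minimal sketch of that vector arithmetic using gensim. The pretrained-embedding path is a placeholder, and the exact neighbours depend on which embeddings you load, but this is the kind of analogy query the results above come from.

```python
# A minimal sketch of "king - man + woman ≈ queen" style analogy arithmetic
# on word embeddings. The embedding file path is a placeholder; you need a
# pretrained word2vec file locally for this to run.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True  # placeholder path
)

# king - man + woman -> ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The same arithmetic surfaces social stereotypes baked into the training corpus:
# computer_programmer - man + woman, the analogy discussed above.
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=3))
```

Note that the Nissim et al. paper above argues that analogy queries like these can overstate the effect (the query forces the answer to differ from the input words), so treat the output as an illustration rather than a measurement.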

Pictorially encoded biases:

Facial recognition tech has a long history of being super duper racist, creepy, used for oppression, as well as not being very good. Tech companies, especially the superduperbig ones, have been getting into this game and are releasing models that are seemingly better, but only if you’re a white male. (A small sketch of the per-group error-rate audit these papers call for follows the references.)

  1. Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77-91). PMLR.
  2. White, D., Dunn, J. D., Schmid, A. C., & Kemp, R. I. (2015). Error rates in users of automatic face recognition software. PLoS ONE, 10(10), e0139827. DOI:10.1371/journal.pone.0139827
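A minimal sketch, with entirely hypothetical data and column names, of the disaggregated evaluation the Gender Shades paper argues for: report error rates per intersectional subgroup instead of a single overall accuracy.

```python
# Disaggregated evaluation: report error rates per subgroup, not just overall.
# The dataframe columns and values here are hypothetical.
import pandas as pd

results = pd.DataFrame({
    "skin_type": ["lighter", "lighter", "darker", "darker"] * 25,
    "gender":    ["male", "female"] * 50,
    "correct":   [1] * 90 + [0] * 10,  # toy prediction outcomes
})

error_rates = (
    results.groupby(["skin_type", "gender"])["correct"]
    .apply(lambda s: 1 - s.mean())
    .rename("error_rate")
)
print(error_rates)  # one error rate per intersectional subgroup
```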

LLMs and their problems

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623.

Google the authors and what they went through when they came out against LLMs. This will give you a lot of info about the stakeholders involved. The paper itself highlights the issues with large language models.

What about some practical stuff?

A philosophical framework

While the debates go on, a zillion guidelines on how to deal with data and how to do things ethically have come up. Chances are you will be overwhelmed if you start looking at them. I haven’t found any specific frameworks that speak to me. Personally, I recommend and use the medical ethics framework along with the Spider-Man framework, which states that

[Image: close-up of Spider-Man’s eyes from a comic, with captions on either side reading “with great power” and “…comes great responsibility”]

That is as self-explanatory as an ethical dictum gets. The medical framework has the following four principles:

  1. Respect for autonomy – Don’t make stuff that interferes with the autonomy of the user. When you’re in a position to make decisions for them, do this only after clearly explaining the harms to them and with their consent. Consent is king.
  2. Beneficence – an AI programmer should act in the best interest of the end user and not of the employer
  3. Justice – is about the distribution of scarce resources, the decision of who gets what. If your product, algorithm, or system worsens disparities between people and creates inequalities, do better than that; think very hard about how you’re using your power.
  4. Non-maleficence5 – to not be the cause of harm, and to promote more good than harm, to the best of your ability. Also known as “Above all, do no harm.”

Checklists and guidelines

If you’re looking for checklists and stuff that you can start using immediately to do responsible/ethical AI, here is a GitHub repo with the best links: Awesome AI guidelines

If you want just *one* guideline, read this: Mitchell, Margaret, et al. “Model cards for model reporting.” Proceedings of the conference on fairness, accountability, and transparency. 2019.

This paper is great for getting a conceptual understanding of the need for clear declarations about models. But it is too ambitious and a bit too bulky for wide use. Despite this, Hugging Face has implemented it on their website, although it isn’t always used. I recommend developing your own model card/checklist and attaching it wherever you store your models; a minimal sketch of what that could look like follows.
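As a starting point, here is a minimal sketch of a home-grown model card saved next to the model artifact. The fields are my own suggested subset inspired by the paper, not the full Mitchell et al. schema, and the model details are hypothetical.

```python
# A minimal, home-grown model card saved next to the model artifact.
# Fields are a suggested subset inspired by Mitchell et al. (2019), not the full schema.
import json
from datetime import date

model_card = {
    "model_name": "loan-default-classifier",  # hypothetical example model
    "version": "1.2.0",
    "date": date.today().isoformat(),
    "intended_use": "Rank applications for manual review; not for automated denial.",
    "out_of_scope_uses": ["Fully automated credit decisions"],
    "training_data": "Internal applications 2018-2022; see the accompanying data sheet.",
    "evaluation": {
        "overall_auc": 0.81,
        "disaggregated": "AUC reported per age band and gender in the evaluation report.",
    },
    "known_limitations": ["Performance degrades for applicants under 21"],
    "ethical_considerations": "Model may encode historical lending disparities.",
    "contact": "ml-governance@example.com",
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```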

One more resource that focuses on practical advice and frameworks is this GitHub repo with links to responsible AI resources: Awesome Responsible AI

Explainable AI

For me, one foundational source of ethical issues with deep learning models is that the algorithm is a black box. The interpretability of the predictions or results of a neural network is poor. And so people who are working on model interpretability and explainability are also working on things that are critical for ethical decision making.
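One concrete, model-agnostic way to peek into a black box is permutation importance: shuffle one feature at a time and see how much held-out performance drops. Here is a minimal sketch with scikit-learn on toy data, not tied to any particular system discussed here.

```python
# Permutation importance: a simple, model-agnostic peek into a black-box model.
# Toy data and model for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```

This will not explain an individual prediction, but it at least tells you which inputs the model leans on, which is often the first question an affected person or an auditor will ask.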

AI Risk

[Image: the SWORDS military robot, from Wikimedia Commons. The SWORDS system allows soldiers to fire small-arms weapons by remote control from as far as 1,200 meters (3,937 feet) away. This example is fitted with an M249 SAW.]

The risk people are talking about existential, military, and other big risks from AI. There is definitely some overlap between the risk people and the professional ethics people, but a lot of what they discuss is about future or possible harms. Still, these are very concrete harms, things we can definitely see happening.

Think autonomous weaponized drones, robot soldiers, autonomous robot doctors, etc. The key here is to prevent things from getting out of hand when algorithms and robots are deployed to make autonomous decisions. I see this as RoboCop-Dredd-dystopia prevention work.

Healthcare is another area where a great deal of harm could come from automation (a great deal of good too), and it is important that we think hard and work towards systems that safeguard the interests of the patient.

This debate captures a great deal of nuance on AI risk. Melanie Mitchell is a delight to listen to.

I don’t think there is enough work being done on real risk. We have a lot of thought leaders talking about it, but the engineering and the science of it are not getting a lot of traction.

AI Alignment

Imagine a world where an AI that is much smarter than us all has emerged and is currently demanding that everyone call it lord and master and pray to it thrice a day.

The AI alignment people are

  1. Figuring out how to prevent such an AI from emerging and
  2. Figuring out how best to align the interests of such an AI with our own.

I am not kidding you.

There are some actual cults involved and a lot of thought experiments that are so bizarre I cannot even.6

The problem is that these folks use the doom and gloom of this possible outcome to completely avoid addressing current harms.

This is a doomsday-cult kind of ideology. And sadly, some very big names in AI have signed up for this cult.

Worse, this approach is being used by some large players to superficially meet the demands of ethical AI while completely sidestepping accountability for the issues that are relevant today. As you can imagine, there are many important people in governments all over the world for whom the biggest worry is an AI that will replace them. So those guys are also treating this like it’s a real and credible threat right now.

There is no doubt that we need to ask ourselves this question: how do we deal with systems that have more power than us? But I think the answer to those questions lies in building in accountability, transparency, safety, informed consent, and things that we already know how to do pretty well and don’t do because it’s bloody inconvenient and costs money. I definitely believe that we need better engineering research into this, threat assessments, all that. But this issue is not so novel that we need to come up with an entirely new discipline which ignores and laughs at what other experts on harm reduction are saying. That is stupid.

I am not going to link to any of the alignment cultists, but here are two analytical articles about them which I think are great, and they link to plenty of stuff that you can explore.

Leopold Aschenbrenner: Nobody’s on the ball on AGI alignment (this person works on an alignment team for one of the largest players)

Alexey Guzey: AI Alignment Is Turning from Alchemy Into Chemistry

I don’t really fully agree with these authors, but they make some sense, and what would be an ethical guide without stuff that one disagrees with?

Concluding remarks

I greatly admire the activists who are fighting the good fight against Big-LLM, I really do. But I do not like agendas for change and progress being exclusively drafted by activists on social media. Social media is like reality TV: what you get if you do all your intellectual debate there is some form of Donald Trump.

I think that at some point the IT/AI engineering profession is going to realize the same thing that the medical world did: that if you don’t start doing things ethically, you will lose your power, create harms far beyond what you can imagine, and this shit will haunt you. If the tech world looks at the medical world, I guess it can see a lot of unethical stuff. That is our shame. But at the same time, I hope they will investigate the issue historically, and ask just how many checks and balances there are in healthcare to ensure the patient is not harmed. There is a lot to learn from the history of medical ethics.

Ethics makes sense because it improves systems. Great AI will be ethical, just like the best healthcare is ethical. And just like a doctor ultimately works with the patient’s best interest at heart, at some point AI engineers too will adopt this dogma, because it is the rational best choice, it has a proven track record for reducing harm, and no one wants to build stuff that harms people. Also, the workers of the world have a lot more in common with each other than with the bossmen.7

This little fella knows she looks FABULOUS

Image from: Welch S. C. & Metropolitan Museum of Art (New York N.Y.). (1985). India : art and culture 1300-1900. Metropolitan Museum of Art : Holt Rinehart and Winston.

If you wish to cite this post here is the citation

Philip, A. (2023, July 11). Ethical AI: Basic reading, schools of thought, an overview. Anand Philip’s blog. https://anandphilip.com/ethical-ai-basic-reading-schools-of-thought-an-overview/

Ok that is all I have for you now folks, please do comment and subscribe and like and share this on boobtube instamart dreads and feathers as you wish.

Footnotes

  1. i.e., who has what kind of power ↩︎
  2. Some people are calling this a schism, but it is not a schism ↩︎
  3. This work is then being carried forward by the actually-LLMs-are-not-that-great activists. ↩︎
  4. It however lacks the liberal-progressive-activism priorities ↩︎
  5. No relation with Angelina Jolie, having horns or dressing in black ↩︎
  6. I love thought experiments, they teach us a lot of things including that one must not confuse a thought experiment about a distant and remote possibility with something real and applicable now. ↩︎
  7. I am a bourgeois Malayali, can you blame me for bringing up Marx? ↩︎