Foreword

This is an excerpt from a position paper I wrote on the ethics of machine learning and its use in society. In the paper I argue that it is only ethical for machine learning to be used if its decisions are explainable and provably free of bias through the use of interpretability methods.

To read the entire paper, click the link above.

Introduction

These days, artificial intelligence has made its way into every other conversation we have about technology. Whether it is someone marveling over a new innovation or debating its potential to cause the destruction of mankind, AI has established itself as a fully-fledged buzzword. One field of AI that has grown markedly in popularity is machine learning. Machine learning is an analytical technique that infers patterns from trends in a large set of data in order to make predictions about future data. The results that machine learning models have produced in the past decade have been remarkable. For example, researchers have trained models capable of labeling images using a vocabulary of one hundred different labels with an accuracy of around 97% [10]. These image recognition capabilities have been used in the medical field to recognize brain tumors in MRI scans, and in many cases the models achieved accuracy better than that of a human doctor. In another case, this research has been applied in the cybersecurity field, where systems were trained to detect fraud by analyzing a target’s behavior on an application or service [4]. These kinds of applications demonstrate the exciting potential impact of machine learning across many different sectors.
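To make the basic workflow concrete, here is a minimal sketch of the pattern described above: fit a model to known examples, then use it to make predictions about data it has never seen. The dataset and classifier (scikit-learn's built-in handwritten digits and a logistic regression) are illustrative assumptions chosen for brevity, not the systems cited above.

```python
# A minimal sketch of the machine learning workflow: learn patterns
# from labeled examples, then predict labels for unseen data.
# The digits dataset and classifier choice are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# "Learn" patterns from the training data...
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# ...then use those patterns to make predictions about unseen data.
print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2%}")
```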

Machine Learning For Good, For Bad

Although using this technology to diagnose cancer in patients or prevent online fraud seems like a generally acceptable application, there have been other instances in which applications of machine learning have been met with skepticism and even ethical objections. In 2018 it was found that Amazon had been testing out the use of a machine learning algorithm to help vet applicants before bringing them to the interviewing stage [9]. This program, however, was shut down after engineers discovered that the model had learned a bias against women candidates, a clearly objectionable result. In another example, COMPAS, a software system used to recommend prison sentences based on a defendant’s profile and criminal history, received heavy criticism after a third-party analysis claimed that African Americans were statistically more likely to receive a higher prison sentence under COMPAS [6]. This claim was heavily disputed by Northpointe, the company that develops COMPAS, which argues that its software is statistically fair and unbiased.

Ethics of Machine Learning

Outcomes like these make it easy to understand why there is fierce debate about the ethicality of using machine learning models in such critical environments. Companies like Northpointe claim that these tools transform data into decisions using research-backed methods. Opponents, however, argue that this is not enough to justify the potential consequences of relying on these models. This is the ethical divide that is preventing artificial intelligence and powerful machine learning models from being adopted in our everyday lives. Is it ethically objectionable for machine learning to be used in these high-stakes scenarios? Even if a model achieves state-of-the-art accuracy benchmarks or has its methods proven mathematically, that alone is not an indication that its use is ethically sound. Thus, the debate should not be about whether this technology can be effective in these kinds of scenarios, but rather about how to address the fundamental issues that prevent the general public from trusting this powerful technology. The answers to these problems may lie in the field of interpretability. Interpretability is an active area of machine learning research that focuses not on improving the accuracy of predictions, but on explaining why those predictions were made in the first place. For the most part, current machine learning techniques are seen as black boxes: values are fed into the system and new values are read out, but what happens in between is unknown. Though the field is relatively young, interpretability encompasses a number of different topics, drawing inspiration from statistics, economics, and even biology [7].

Interpretability

The topic most pertinent to ethics is feature importance. Feature importance focuses on identifying which aspects of a data set contribute most to a model’s predictions, which can provide a human-understandable explanation for the predictions the model makes. For example, in a basic model that recognizes different kinds of fruit from an image, the presence of red would contribute more to the prediction of an apple than the presence of brown would. While this example is intuitive, the most important features are not always so obvious, and uncovering them can provide important insight into how the model is handling its data. As a result, this information can also be used to detect bias in the model [7]. If it is found that the model is relying on features such as race or gender when it should not be, then scientists can use this information to attempt to debug the system and remove that bias.
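Below is a minimal sketch of one common feature-importance technique, permutation importance: each feature is shuffled in turn, and the resulting drop in accuracy measures how much the model relies on that feature. The dataset and classifier are illustrative assumptions (scikit-learn's built-in breast cancer data and a random forest), not any of the systems discussed above.

```python
# A minimal sketch of feature importance via permutation importance.
# The dataset and model here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Train an ordinary "black box" classifier.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

In output like this, an unexpectedly dominant feature, for instance one correlated with a protected attribute such as race or gender, is exactly the kind of red flag that interpretability methods are meant to surface.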

How Machine Learning Can Be Used Ethically in Society

Does this mean that it is ethical to use machine learning in our society? Machine learning offers enormous potential to improve the world we live in. At the same time, however, it carries the risk of moral disaster. After examining both sides of this debate, I believe that this technology can be used ethically only when it is made transparent and free of bias through these interpretability techniques. When considering the benefits that machine learning offers, one could argue that even absent interpretability, it is ethical for these models to be used in critical decision-making scenarios. From a utilitarian perspective, if these models provide a good for society that outweighs the potential negative consequences, it should be ethical to use them. For applications such as medical diagnosis tools, the value that these models provide clearly outweighs any potential risk posed by using the model. Even if a model were to make a wrong decision on occasion, state-of-the-art models have achieved accuracy beyond that of trained professionals, which means that the risk of misidentification is substantially mitigated.