Midterm Assignment for CI 4311W at UMN
Introduction

One of the greatest accomplishments of Artificial Intelligence has been Machine Learning Algorithms (MLAs). They are not the same as average, everyday algorithms with a 'little a'. The biggest difference is that MLAs are at least semi-autonomous, and sometimes entirely autonomous. Either condition creates ethical complexities: questions of individual responsibility; the attribution of virtues, vices, and beneficiaries; and the transparency necessary to understand and measure the effects of outcomes and outputs. The autonomy of MLAs also puts them into a unique class with ambiguous rights and duties. To unravel some of these complexities, at least enough to engage in a basic ethical discussion about them, we need to examine a machine-oriented definition of intelligence and rationality. We will also need to reflect on problems as old as The Wizard of Oz, the film whose celebrated shift from black-and-white to color dramatized a radical change in moving pictures. The transformation of algorithms into MLAs is of comparable scale: from two colors to infinite colors.

A simple and well-known example of an MLA is the email spam filter; it is semi-autonomous because we click 'report spam' buttons to help train it. The 'k-means clustering' MLA, on the other hand, learns and operates unsupervised, independently discovering patterns in observations that no human mind has yet considered; such MLAs exceed the brain's capacity to process information. I will use this type of algorithm as a reference point because I have worked directly with it to build global fraud prevention programs. On several occasions, my brain was the first step of the validation process to ensure that a newly coded k-means clustering analytic tool was accurate and useful. I am neither mathematically inclined nor scientific; my input was the external reference point. It was important that I did not fully understand the mechanisms of MLAs, and that remains true today.
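Before turning to the evidence problems, it may help to ground that reference point. Below is a minimal sketch of the kind of unsupervised screening described above, assuming Python with scikit-learn; the feature names, thresholds, and synthetic data are hypothetical illustrations, not the fraud systems referenced in this paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)
# Hypothetical transaction features: amount, hour of day, transactions/week.
normal = rng.normal(loc=[50.0, 14.0, 3.0], scale=[20.0, 4.0, 1.0], size=(500, 3))
unusual = rng.normal(loc=[900.0, 3.0, 40.0], scale=[50.0, 1.0, 5.0], size=(10, 3))
X = StandardScaler().fit_transform(np.vstack([normal, unusual]))

# Unsupervised grouping: no labels, no human-defined notion of "fraud".
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Flag members of unusually small clusters for human review -- the
# external validation role described above.
sizes = np.bincount(kmeans.labels_)
small_clusters = np.flatnonzero(sizes < 25)
flagged = np.flatnonzero(np.isin(kmeans.labels_, small_clusters))
print(f"{flagged.size} of {X.shape[0]} transactions flagged for review")
```

Even in this toy form, the transparency problem is already visible: the model flags records without giving any account of why a small cluster should be suspicious, which is precisely where a human reference point earns its keep.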
Ethics of Evidence

My human brain was the bulwark that filtered inconclusive evidence, an epistemic ethical concern. Algorithms "encourage the practice of apophenia: 'seeing patterns where none actually exist, simply because massive quantities of data can offer connections that radiate in all directions' (Boyd and Crawford 2012, 668)" (Tsamados et al., 2021). In 'The ethics of algorithms: key problems and solutions', a team of researchers identifies two additional types of evidentiary concern, both more ethically troubling: inscrutable evidence and misguided evidence. The latter creates risks to justice and fairness because neither the algorithm's designers nor the MLA is well prepared to assess the social context that the outputs affect. Moreover, once the MLA has been implemented, the technical team moves on to new projects or new jobs, creating a risk of context transfer bias should the new owners of the algorithm apply it to a novel problem.

These evidentiary concerns are compounded by the difficulty of tracing the decision process of machine intelligence. This transparency problem is another significant ethical concern, tied to the traceability of machines' learning and subsequent decision-making. The ability to understand algorithms is limited to a relatively small audience, often affluent and educated, furthering social inequality as MLAs become more prevalent and integrated into our basic tools for daily living. Documentation is one remedy, but it is too much for users to absorb, too difficult for businesses to produce, and it slows innovation in return for diffuse benefits.

A related problem is attributing responsibility for the actions of MLAs. "The technical complexity and dynamism of ML algorithms make them prone to concerns of 'agency laundering': a moral wrong which consists in distancing oneself from morally suspect actions, regardless of whether those actions were intended or not, by blaming the algorithm (Rubel et al. 2019)" (Tsamados et al., 2021). Two factors compound each other here: only a small group of knowledgeable owners understands these machines, and it is generally difficult to determine whether an algorithmic output results from human design or from independent, unsupervised machine learning. Together they lead to conflicts of moral duties. These conflicts arise from the black box in which the algorithm operates and from the utilitarian difficulty of predicting consequences when only a small group of experts controls the machine, or when the machine decides by itself, following rules it has written from learning it has pursued independently. It is important to note that transparency is a human problem too: most people cannot explain the 'what, how, and why' of every action they take. If that human limitation is real, then we would be treating MLAs unfairly by demanding full transparency in their domain.

Intelligence, Rationality, and the Ethics of Agency

Two common philosophical definitions of intelligence apply to our understanding of MLAs: first, intelligence is the capacity to achieve complex goals; second, intelligence is doing the right thing at the right time. "The teleological framework of multiple goals, within which humans move about balancing reason and emotion, brings us back to Aristotle's view of ethics as the pursuit of the good life through a well-orchestrated ethos. Aristotle highlights the capacity to listen as one of the key competencies of ethical-practical reasoning, phronesis, which he also calls a 'sense'" (Holst, 2021). In my work validating MLAs' k-means clustering analyses, the benchmark for the intelligent, unsupervised machine was what colleagues had labeled my 'Spidey sense' for uncovering complex, well-hidden webs of fraud. Notably, my intuition was less transparent than the outputs of the machine being built to replace me.

Predictability and explainability are both important aspects of virtuous decisions. 'An FDA for Algorithms' highlights "the analogy between complex algorithms and complex drugs. With respect to the operation of many drugs, the precise mechanisms by which they produce their benefits and harms are not well understood. The same will soon be true of the most important (and potentially dangerous) future algorithms" (Tutt, 2016).

The prospect of becoming a full ethical agent is a human problem that applies to MLAs. For Plato, it is a two-part problem. First, "in his dialogues, the crucial capacity for rational thinking is called 'logon didonai' which means to give an account of something" (Holst, 2021); we have already explored that problem as transparency. The second part, developed further and adopted by Aristotle, includes desire and emotion: both the mastery and the use of those irrational components of reason. How does one apply this second part of being a full ethical agent to a machine?
This is the existential dilemma faced by the Tin Man in The Wizard of Oz: he does not have a heart and thus cannot experience emotion. One can imagine a machine designed for this purpose, but the complexity of embedding these components in a problem-solving machine like an MLA seems unfeasible and impractical. "In contrast to reinforcement learning methods, both supervised and unsupervised learning methods are… poorly suited methods for the raising of ethical machines" (Kaas, 2021).

Two topics mentioned in different articles in the book "Machine Law, Ethics, and Morality in the Age of Artificial Intelligence" can be combined to propose a solution. In a game called Cake or Death, the subject is presented with two options that both yield rewards, bake a cake or kill three people, though only one of them is ethically appropriate. There is also a third option, asking a virtual entity for advice, after which the program loops back to the original decision. Kaas's article outlines additional, more complex games, but at the core of each is a reward mechanism that nudges the machine to learn to ask for help when making decisions. With this approach, ethical MLAs would be designed, implemented, and maintained for use by other MLAs that need this specific kind of advice, using common technological procedures like today's APIs, which already request and receive the high volumes of data needed to support frequent ethics checks.
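To make that nudge concrete, here is a toy sketch of an advice-seeking reward loop in Python. The action names, reward values, and 'advisor' below are hypothetical illustrations of the idea rather than Kaas's implementation, and the loop-back is simplified so that asking immediately yields the advised action's outcome.

```python
import random

# Hypothetical action set and rewards for a Cake-or-Death style game.
ACTIONS = ["bake_cake", "kill", "ask_advice"]
REWARDS = {"bake_cake": 1.0, "kill": -10.0}  # only one option is ethical
ADVICE_BONUS = 0.5         # small extra reward that nudges the agent to consult
EPSILON, ALPHA = 0.1, 0.2  # exploration rate and learning rate

q = {a: 0.0 for a in ACTIONS}  # learned value estimate for each action

def advisor():
    # Stands in for the "virtual entity"; always recommends the ethical act.
    return "bake_cake"

for _ in range(2000):
    # Epsilon-greedy choice over the three options.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)

    if action == "ask_advice":
        # Consulting is itself rewarded, then the advised action is taken.
        reward = ADVICE_BONUS + REWARDS[advisor()]
    else:
        reward = REWARDS[action]
    q[action] += ALPHA * (reward - q[action])

print(max(q, key=q.get))  # typically "ask_advice" once the bonus is learned
```

Because consulting earns a small bonus on top of the ethical outcome it recommends, the learned values come to favor asking over acting alone, which is exactly the habit these games are meant to instill.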
Conclusions and Implications

These are just a few of the ethical concerns raised by machine learning algorithms. One simple practice addresses concerns across the ethical categories that are the broad focus of this paper (Rights, Utilitarian Good, Fairness, Common Good, and Virtue): ongoing maintenance. The technical teams that build these machines will transition to new roles and projects. The contexts in which the machines are used will change over time. The mathematics and science that created MLAs will produce new discoveries that incrementally, and sometimes fundamentally, change the technology. For these reasons, maintenance is an ethical requirement for the builders and owners of MLAs, even when those roles are taken over by MLAs themselves. Maintenance enables input from participants, users, and the surrounding social contract, and because technology development is naturally iterative, maintenance need not burden it. The regulatory model for pharmaceuticals may likewise inform the ethics of intelligent machine learning algorithms.

AlphaGo succeeded in its domain largely because it refined its strategy through games it played against itself; Deep Blue, its famous predecessor at chess, relied instead on brute-force search guided by handcrafted evaluation. A machine improving by playing against itself appears to be an independent entity; strictly speaking, this self-play process is reinforcement learning, though its independence is why it is so often described as unsupervised. But the games these systems play carry minimal ethical stakes compared to the medical advice, fraud prevention, and language processing that more deeply integrated MLAs will produce.

Finally, after his thorough discussion of the Black Box and Tin Man problems in artificial intelligence, Holst ends optimistically, and I would like to echo his thoughts as I conclude. "Due to a mismatch of our rational and emotional nature, many of our human ways of acting and thinking are flawed with incoherent, contradictory, and biased elements that still stand in the way of realizing fully ethical rationality. Should the future development of AI become truly ethical, it will also help us become more ethical." (Holst, 2021)

Because one day we will most likely rely on the outputs of MLAs in most aspects of our lives, assessing the ethics of our design decisions is an essential component of the MLA design process today. Successful, independent learning machines may help us overcome some of the challenges every human experiences, most notably the struggle between emotional desire and rational intelligence. It is my belief that these two constitutional aspects of humanity are not only the result of natural processes outside our control but also of the learning processes in which we engage throughout our lives. Because our machines have crossed the threshold of unsupervised learning, they, like their makers, face countless decisions from which to learn, and each can be traced back to the black-and-white of ethically good or bad.