Final Assignment for JOUR 3751 at UMN
Introduction

Before the rise of digital media in journalism, humans determined what was newsworthy using their prior knowledge and investigative research, constrained by their physical and mental capacity to write articles before a deadline, and guided by their lifetime of experience or the shared experiences and values of a newsroom. As digital media became pervasive during the late 20th and early 21st centuries, news has benefited from the inputs of computer algorithms that can write, research, and moderate at least as efficiently as the human journalists on whom computer scientists have based their designs. Three aspects of news production have changed: article writing has been enhanced by machine learning algorithms, computer science has expanded the capacity of news organizations to leverage research, and technology has improved the efficiency and civility of audience feedback to news organizations. The importance of these technological changes has been measured economically through standard metrics like unique visitors, illustrated by a case study of Le Monde below. Though the value of other improvements, such as greater gender inclusivity and civility, has been difficult to measure, they are equally important to society. Finally, the optimization of basic journalistic processes by digital technology has led to more efficient newsrooms that produce more content and remain relevant to a wider audience.

News production before digital media

Before digital media, local beat writers would cover one sports team or one sport in their area, speaking to athletes and taking notes on a pad or a recording device that required transcription, both of which demanded additional time to reorganize before someone could begin to write a story.
News organizations were constrained by their human resources; as one analysis observes, “AI tools can help journalists tell new kinds of stories that were previously too resource-impractical or technically out of reach” (Hansen et al., 2017). Physical proximity to the subjects of their stories also constrained the organizations. The financial costs of travel, in addition to travel time that was not conducive to productive writing, have been mitigated by technology. Once someone had written a story during the era before digital media, it would be given to another person in the newsroom who focused on production tasks, such as layout, bylines, a headline, and images. These copy editors were experts at writing headlines based on a formula of engagement so that people would buy and subscribe to newspapers. They had a knack for selecting captivating images, too. Unfortunately, “Copy editors have been sacrificed more than any other newsroom category” (Beaujon, 2013). Algorithms are most effective when replacing human activities based on a “knack” because those skills tend to rely on factors that can be captured and built into a computer model. While much of the attention on artificial intelligence in the newsroom has focused on bias within the algorithms, it is important to understand the context of bias in newsrooms before digital media, which continues today. News production has been overwhelmingly white and predominantly male, and remains this way today according to a Women’s Media Center analysis of the “Five Big Sunday Shows” in 2020, whose “findings confirm many years of research… as ample evidence shows, the news is fundamentally male” (Byerly, 2021). The same article refers to a study by Gaye Tuchman from over 40 years ago that quantified gender representation in news organizations as overwhelmingly white and male.
Perspectives offered by the news before digital media tended to follow the ideology of the writers and production staff, who were white, and though the statistics indicate that minority and female representation has not kept pace with demographic changes in the United States, technology is beginning to offer insights about inclusivity and more in-depth research from more points of view. I believe that the first step toward being inclusive is being open to feedback. We are imperfect beings who benefit from listening to others’ viewpoints. In the past, the audience of journalism wrote letters to the newspaper. Large newsrooms invested in mailroom headcount to receive, sort, and route the opinions and feedback of their readers to the parts of the organization that produced a particular section. I experienced a world that required a visit to the post office or a visit from the postman, plus a few days for a letter to travel to the news organization, then a day or more for the letter to be opened, read, and sorted. The feedback loop between audience and news producer required a week or longer. Reacting to feedback about a story was a slow process in most circumstances. Also, rude or nasty letters could be sorted into a separate basket and routed differently, to avoid offending the writer and/or the producers. The feedback loop has been one of the biggest changes in digital media.

Effect of technology in the age of digital media

Many major news organizations maintain a robot writer on staff. “Cyborg… accounts for an estimated one-third of the content published by Bloomberg News” (Contributor, 2019) and “Bertie is part of a broader focus on using artificial intelligence to make publishing more efficient for Forbes staff” (Willens, 2019). Those are just two examples of the many algorithms writing content for news publishers. These companies maintain a robot writer, not a staff of robot writers, to supplement the production of their human writers.
“The AP estimated that it’s freed up 20 percent of reporters’ time spent covering corporate earnings and that AI is also moving the needle on accuracy” (Moses, 2017). I think the most remarkable news writing algorithm is GPT-3, which (or who, depending on one’s bias toward machine learning algorithms) writes for The Guardian. Following a recent article that GPT-3 titled “A robot wrote this entire article. Are you scared yet, human?” the editor provided a brief recap of the process: “prompts were written by The Guardian… GPT-3 produced eight different outputs… Each was unique, interesting, and advanced a different argument” (Editor, 2020). As they would have done for a human writer, the editor picked “the best parts of each… Editing GPT-3’s op-ed was no different to editing a human op-ed. Overall, it took less time to edit than many human op-eds.” Efficiency has been a huge benefit of algorithms, alongside several others that will be discussed later. These robot writers reduce the time needed to write an article, as the AP noted, and they improve accuracy because computers are parameter-based systems that do not make the intuitive leaps where human error creeps in. They stay on topic. A second writing technology reduces the time needed to write headlines and choose images: “reinforcement learning can also be applied to optimize publishing; for example, to help choose the best headlines or thumbnails for a particular story” (Marconi, 2020). The methodology of good headlines was based on the experience of copy editors, and it has been recreated in machine learning algorithms that react to feedback and produce more options, more quickly (Clark, 2019). A case study of the UK Press Association reported that “journalists have developed templates for particular topics and use automation to create multiple variations” (Marconi, 2020), which are presented to editors as suggested headlines, images, and entire stories, just like GPT-3’s outputs.
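The template-plus-structured-data approach described in the Press Association case study can be sketched in a few lines of Python. Everything below — the template wording, field names, and sample record — is invented for illustration and is not drawn from any real newsroom system.

```python
# Minimal sketch of template-driven story generation from structured data,
# in the spirit of the AP's automated earnings coverage. The template and
# the data record are hypothetical.

TEMPLATE = (
    "{company} reported {direction} earnings of ${eps:.2f} per share "
    "for {quarter}, compared with ${prior_eps:.2f} a year earlier."
)

def generate_story(record: dict) -> str:
    """Fill the template from one row of structured earnings data."""
    direction = "higher" if record["eps"] > record["prior_eps"] else "lower"
    return TEMPLATE.format(direction=direction, **record)

record = {"company": "Acme Corp", "quarter": "Q2 2023",
          "eps": 1.42, "prior_eps": 1.10}
print(generate_story(record))
```

In practice, a newsroom system would maintain many templates per topic and generate several variations for an editor to choose from, much as the case study describes.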
After discussing two good examples of technological efficiency, let’s examine how algorithms can affect inclusivity and gender bias. In a predominantly white, male news organization, connecting stories to female and minority points of view can be difficult. The Financial Times uses algorithms to track personal pronoun usage: “As reporters write their piece, the bot will alert them of any imbalance in gender ratios,” and the same technology is used to monitor images to ensure gender equity in coverage, “based on ‘research showing a positive correlation between stories including quotes of women and higher rates of engagement with female readers’” (Marconi, 2020). Human writers are the arbiters of this algorithmic feedback, choosing whether to adjust their story based on its suggestions, but because the correlation with engagement translates to marketing revenue and higher-quality work, the technology will likely continue to be deployed. The computer processing power and databases required to support these on-demand suggestions are immense, but the infrastructure already exists. Known as neural networks because their complexity resembles “the way neurons are wired in the brain” (Hutson, 2021), these algorithms can browse massive datasets in milliseconds to find connections within the data, connections based on templates and prompts created by newsrooms. “Content and news organizations are making increasing use of AI systems to uncover data from multiple sources and automatically summarize them into articles or supporting research for those articles” (Schmelzer, 2019). A case study of Le Monde’s coverage of the 2015 French elections explains how the news organization beat a major competitor “by having more published stories online” (Marconi, 2020), increasing page views, a pattern also seen in real estate and sports writing (Duncan, 2020).
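The pronoun-tracking bot described above can be caricatured in a few lines. The word lists, threshold, and alert rule here are my own simplifying assumptions, not the Financial Times's actual design, which is not publicly specified in detail.

```python
# Toy sketch of a gender-balance alert: count gendered pronouns in a draft
# and flag a large imbalance for the writer to consider. Word lists and the
# 75% threshold are illustrative assumptions.
import re

FEMALE = {"she", "her", "hers"}
MALE = {"he", "him", "his"}

def pronoun_ratio(text: str) -> tuple:
    """Return (female_count, male_count) of gendered pronouns in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return (sum(w in FEMALE for w in words), sum(w in MALE for w in words))

def imbalance_alert(text: str, threshold: float = 0.75) -> bool:
    """True if either gender exceeds `threshold` of all gendered pronouns."""
    f, m = pronoun_ratio(text)
    total = f + m
    return total > 0 and max(f, m) / total > threshold

draft = "He said his plan was ready. He told reporters his team agreed."
print(imbalance_alert(draft))  # all gendered pronouns are male -> True
```

As in the FT workflow, the alert is advisory: the human writer decides whether the imbalance reflects the story's subject or a blind spot worth correcting.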
Quality content that is “automatically generated provides a net value for us – and our readers” and has increased conversions of new paying subscribers (Kalim, 2021), just like the automated coverage of corporate earnings. Finally, the most obvious new technology is the comment feature in digital media. This pervasive technology enables an audience to give feedback immediately, not a week later, and anyone can access that feedback. Algorithms are being used to moderate comments, group them for human review, and score them for civility. The New York Times (NYT) uses a tool called Moderator that has allowed it to open more stories for comments (Etim, 2016; Salganik & Lee, 2020). More than an efficiency tool, its “main goal is to create a safe space for discussions” (Kovalyova, 2021) that drives engagement with readers. Internally, these moderation tools are used to fact-check reporters, too, because both processes depend on accuracy. Collaboration, a strength of networked journalism, improves the entire news-gathering process, from story research and writing to the comments posted by audiences.

Personal Reflection and Conclusion

The ethical obligations of organizations that use algorithms have been altered by digital media. Bylines now include references to the robot reporters behind suggested headlines and images. More news content is being written as audience participation escalates through comments. I appreciate the benefits of efficiency and inclusivity in the form of more articles that offer a broader perspective, even if the writer is a machine, or a machine influences the writer. Egalitarian systems that allow everyone to post feedback improve the user experience of journalism by eliminating the delays created by snail mail and by forcing news organizations to address alternative opinions that are posted online. Many of these systems exist in “black boxes,” though, so only a privileged few understand the inputs and outputs of the algorithms.
Transparency is being prioritized by organizations like the NYT, and my hope is that this open approach continues. I think it will, because business models measure the financial benefits through audience engagement. I believe that algorithms will save us in the end, because they can overcome the limits of the human mind. Only an algorithm can easily track our use of personal pronouns in our articles and mention when we’re off track. News writing, research, and audience participation have been changed by digital media. The slow pace of writing in the past has become exponentially faster with algorithms. As one researcher puts it, “the use of algorithms to automatically generate news from structured data has shaken up the journalism industry” (Graefe, 2016). Disruption of old enterprises is a good thing, though we need to carefully monitor ethical concerns. News is being optimized by computer technology, but it is also being optimized by greater audience participation, the inclusion of more gender and minority perspectives, and the extra time human writers gain to reflect and revise.

References
No Black Boxes (Procedural Transparency): Refers to the transparency necessary to understand and measure the effects of outcomes and outputs. This requires proper documentation of the algorithm as well as proper retention policies to ensure the system designs, database schemas, and major product decisions have been clearly defined in detail. Ethical concerns caused by inconclusive evidence for algorithmic outputs are compounded by challenges with tracing the decision process of machine intelligence; being able to reference documentation that explains the algorithm’s decision tree ensures responsibility for agency through the attribution of action to designers, circumstances, and/or machine intelligence.
Predictability and Explainability: To properly attribute agency for algorithmic outputs, the aforementioned transparency is a useful tool, but additionally, the designers and technical teams that support the development of the algorithm need to ensure that their design has consistently produced the expected outputs, along with the evidence to explain the logic the algorithm used to make its decisions. Most notably in unsupervised machine learning algorithms, it is difficult to differentiate between a designed outcome and an unexpected outcome based on what the algorithm learned independently. As Safiya Noble noted, algorithms are “automated decisions” that we must trust, which is most easily accomplished by inspecting the decision-making process to measure its predictability.

Aggressive Collection of Feedback (user and technical): Solicit input from the most varied group of stakeholders possible. Similar to a Works Council in Germany that includes a representative from every facet of the business – from janitorial to management to senior leadership – consideration of any new updates to the algorithm must be authorized by large, broadly representative groups of users and stakeholders. Users often have more detailed product knowledge than designers, like the poet in this week’s video, Joy Buolamwini, who noticed the problem facial recognition technology has with Black skin. Stakeholders often possess a broader awareness of extra-contextual factors that may influence the algorithm. Both groups must be frequently consulted by the teams that design, build, and maintain the algorithm.

Ongoing Maintenance: Systematic collection of feedback from users will identify problems with the current design and opportunities for improvement based on new technology and the wisdom gained through experience.
Sometimes, an algorithm will do harm to its users, so it is essential to continuously solicit feedback, analyze reporting produced by the algorithm, and aggressively monitor performance trends. Observations or reports of harm to users should generate an immediate maintenance response from the design team and a fair evaluation of the potential need to suspend the algorithm based on the scale, and correctability, of the harm. Since most algorithms are more complex than a checkers game (which, as we learned, has been process-mapped to prevent algorithms from losing a match), the proper balance of accuracy and efficiency, a core value of algorithms, will almost always require more updates in the future.

Data Security: Require a minimum standard for the protection of data created by the algorithm, stored by technical teams, and used by additional parties. To accomplish this goal, using the best hardware, firmware, and software is expected, but most important is using the right technology to minimize the risk of a data breach and any potential misuse of the data. Privacy is a right in some countries, protecting individual autonomy by limiting the unauthorized collection of data as well as improper use. Most algorithms create more data than a human mind can monitor, so implementing the right tools – building them if necessary for special requirements – will protect data throughout the end-to-end process. Processes should be designed to protect data, and all relevant work teams should be fully trained in data security.

Midterm Assignment for CI 4311W at UMN
Introduction

One of the greatest accomplishments of Artificial Intelligence has been Machine Learning Algorithms (MLAs). They are not the same as average, everyday algorithms with a ‘little a’. The biggest difference is that MLAs are at least semi-autonomous, sometimes entirely autonomous, and either condition creates ethical complexities related to individual responsibility; the attribution of virtues, vices, and beneficiaries; and the transparency necessary to understand and measure the effects of outcomes and outputs. Furthermore, the autonomy of MLAs puts them into a unique class with ambiguous rights and duties. To unravel some of these complexities, at least enough to engage in a basic ethical discussion about them, we need to examine a machine-oriented definition of intelligence and rationality. We will need to reflect on problems as old as The Wizard of Oz, the movie through which the world witnessed the transformation of moving pictures from black-and-white to color; a radical change. The transformation of algorithms into MLAs has a comparable scale, i.e., from two colors to infinite colors. A simple and well-known example of an MLA is the spam filter in email: semi-autonomous because we click buttons in unsolicited emails to help train it. On the other hand, the k-means clustering MLA learns and operates unsupervised, independently discovering answers to questions that have not yet occurred in a human mind. These MLAs exceed the brain’s capacity to process information. I will use this type of algorithm as a reference point because I have worked directly with it to build global fraud prevention programs. On several occasions, my brain was the first step of the validation process to ensure that a newly coded k-means clustering analytic tool was accurate and useful. I am neither mathematically inclined nor scientific – my input was the external reference point.
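Since k-means clustering serves as this paper's reference point, here is a minimal from-scratch sketch of the idea (Lloyd's iteration) on toy 2-D points. The points, initial centers, and k=2 are invented for illustration; production fraud-prevention systems run the same iteration over far larger, higher-dimensional datasets.

```python
# Minimal k-means sketch (Lloyd's algorithm): alternate between assigning
# each point to its nearest center and moving each center to the mean of
# its assigned points.
import math

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda j: math.dist(p, centers[j]))
            groups[nearest].append(p)
        # Update step: move each center to the mean of its group.
        centers = [
            tuple(sum(coord) / len(g) for coord in zip(*g)) if g else ctr
            for g, ctr in zip(groups, centers)
        ]
    return centers, groups

# Two visually obvious clusters near (0, 0) and (9, 9).
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, groups = kmeans(points, centers=[(0, 0), (5, 5)])
print(sorted(len(g) for g in groups))  # -> [3, 3]
```

The unsupervised character the paper describes is visible here: no one labels the points; the grouping emerges from the data alone, which is exactly why a human external reference point is valuable for validating the output.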
It was important that I did not fully understand the mechanisms of MLAs, and that remains true today.

Ethics of Evidence

My human brain was the bulwark that filtered inconclusive evidence, an epistemic ethical concern. Algorithms “encourage the practice of apophenia: ‘seeing patterns where none actually exist, simply because massive quantities of data can offer connections that radiate in all directions’ (Boyd and Crawford 2012, 668)” (Tsamados et al., 2021). In ‘The ethics of algorithms: key problems and solutions’, a team of researchers identifies two further types of evidentiary concern, both more ethically troubling: inscrutable evidence and misguided evidence. The latter creates risks to justice and fairness because neither the algorithm’s designers nor the MLA has been well prepared to assess the social context that the outputs affect. In addition, once the MLA has been implemented, the technical team will move on to new projects or new jobs, creating a risk of context transfer bias should the new owners of the algorithm apply it to a novel problem. Ethical concerns with inconclusive evidence are compounded by challenges with tracing the decision process of machine intelligence. This transparency problem is another significant ethical concern, related to the traceability of machines’ learning and subsequent decision-making. The ability to understand algorithms is limited to a relatively small audience, often affluent and educated, furthering social inequality as MLAs become more prolific and integrated into our basic tools for daily living. Documentation is too voluminous for users, too difficult for businesses to produce, and slows innovation in return for ubiquitous benefits. A related problem is attributing responsibility for the actions of MLAs.
“The technical complexity and dynamism of ML algorithms make them prone to concerns of ‘agency laundering’: a moral wrong which consists in distancing oneself from morally suspect actions, regardless of whether those actions were intended or not, by blaming the algorithm (Rubel et al. 2019)” (Tsamados et al., 2021). The compound effect of a small group of knowledgeable owners of machine learning algorithms and the general difficulty of determining whether an algorithmic output results from human design or from independent, unsupervised machine learning leads to conflicts of moral duties. These conflicts are created by the black box in which the algorithm exists, and by the utilitarian difficulty of predicting consequences when only a small group of experts controls the machine, or when the machine makes decisions by itself based on rules it has written from its own independent learning. It is important to note that transparency is a human problem too, because most people cannot explain the ‘what, how, and why’ of every action. Presuming that this human limitation is real, we would be treating MLAs unfairly by expecting full transparency in their domain.

Intelligence, Rationality, and the Ethics of Agency

Two common philosophical definitions of intelligence apply to our understanding of MLAs. First, intelligence is the capacity to achieve complex goals; second, intelligence does the right thing at the right time. “The teleological framework of multiple goals, within which humans move about balancing reason and emotion, brings us back to Aristotle’s view of ethics as the pursuit of the good life through a well-orchestrated ethos. Aristotle highlights the capacity to listen as one of the key competencies of ethical-practical reasoning, phronesis, which he also calls a ‘sense’” (Holst, 2021).
In my work validating k-means clustering outputs, what had been professionally labeled my “Spidey sense” for uncovering complex, well-hidden webs of fraud was being compared to the outputs of the intelligent, unsupervised machine. It is important to note that my intuition was less transparent than the outputs of the machine being built to replace me. Predictability and explainability are both important aspects of virtuous decisions. ‘An FDA for Algorithms’ highlights “the analogy between complex algorithms and complex drugs. With respect to the operation of many drugs, the precise mechanisms by which they produce their benefits and harms are not well understood. The same will soon be true of the most important (and potentially dangerous) future algorithms” (Tutt, 2016). The prospect of becoming a full ethical agent is a human problem that applies to MLAs. For Plato, it is a two-part problem: first, “in his dialogues, the crucial capacity for rational thinking is called ‘logon didonai’ which means to give an account of something” (Holst, 2021). We have explored that problem of transparency already. The second part, developed further and adopted by Aristotle, includes desire and emotion – both the mastery and the use of those irrational components of reason. How does one apply this second part of being a full ethical agent to a machine? This is the existential dilemma faced by the Tin Man in The Wizard of Oz: he does not have a heart and thus cannot experience emotion. One can imagine a machine designed for this purpose, but the complexity of designing a problem-solving machine like an MLA with these components embedded seems unfeasible and impractical.
“In contrast to reinforcement learning methods, both supervised and unsupervised learning methods are… poorly suited methods for the raising of ethical machines” (Kaas, 2021). Two topics mentioned in different articles in the book Machine Law, Ethics, and Morality in the Age of Artificial Intelligence can be combined to propose a solution. In a game called Cake or Death, the subject is presented with two options – bake a cake or kill three people – with rewards assigned according to the ethical appropriateness of each choice. There is a third option, too, after which the program loops back to the original decision: ask a virtual entity for advice. Additional, more complex games were outlined in Kaas’ article, but at the core of each was a nudge for the machine to learn to ask for help when making decisions, with a reward mechanism presenting all decisions as benefitting from this ethical process. With this approach, ethical MLAs would be designed, implemented, and maintained for use by other MLAs that need this specific kind of advice, using common technological procedures like today’s APIs that already request and receive the high volumes of data needed to support frequent ethics checks.

Conclusions and Implications

These are just a few of the ethical concerns with machine learning algorithms. One simple solution addresses concerns across the ethical categories that are the broad focus of this paper (Rights, Utilitarian Good, Fairness, Common Good, and Virtue): ongoing maintenance. The technical teams that build these machines will transition to new roles and projects. Contexts in which these machines are used will change over time. The mathematics and science that created MLAs will produce new discoveries that will incrementally, sometimes fundamentally, change the technology. For these reasons, maintenance is an ethical requirement for the builders and owners of MLAs, even when these roles are taken over by the MLAs themselves.
Maintenance enables input from participants, users, and social contracts. The environment in which technology develops is naturally iterative, so maintenance will not be a burden to development. Also, the ethical guidelines governing pharmaceuticals may inform the ethics of intelligent machine learning algorithms. AlphaGo was successful in its domain because it refined its probability estimates through observing games it played against itself (Deep Blue, by contrast, succeeded earlier through brute-force search). A machine playing a game against itself appears to be an independent entity, which is why such self-play learning is often described as unsupervised. But the games these machines play have minimal ethical stakes compared to the medical advice, fraud prevention, and language processing that more well-integrated MLAs will produce. Finally, after his thorough discussion of the Black Box and Tin Man problems with artificial intelligence, Holst ends optimistically, and I would like to echo his thoughts as I conclude: “Due to a mismatch of our rational and emotional nature, many of our human ways of acting and thinking are flawed with incoherent, contradictory, and biased elements that still stand in the way of realizing fully ethical rationality. Should the future development of AI become truly ethical, it will also help us become more ethical” (Holst, 2021). Because it is most likely that one day we will rely on the outputs of MLAs in most aspects of our lives, assessing the ethics of our design decisions is an essential component of the MLA design process today. Successful, independent learning machines may help us overcome some of the human challenges that we all experience every day, most notably the struggle between emotional desire and our rational intelligence.
It is my belief that these two constitutional aspects of humanity are not only the result of natural processes outside of our control but also the result of the learning processes that we engage in throughout our lives. Because our machines have achieved the threshold of unsupervised learning, like their original makers, the infinite decisions from which we learn can be traced back to the black-and-white of ethically good or bad.

References
Following a century during which the proverbial ‘hand of government’ replaced the fickle assistance meted out by monarchs and oligarchic elites, US policy makers and advocates now have a robust dataset on which to perform impact analysis of social welfare benefits, income insurance policies, and the effect of poverty protection programs on work incentives. The white paper “Why a Universal Basic Income Is a Terrible Idea,” written by Oren Cass and originally published by National Review on June 15, 2016, presents a conservative response to Universal Basic Income (UBI) proposals, which are predominantly liberal solutions to two significant exigencies in 21st-century America – poverty and technological unemployment. Cass’s white paper implies that a close relationship exists between both problems, but the following analysis will focus on the latter issue, as it is the topic about which I am writing policy response speeches.
The cost of any UBI policy will be significant, diminishing the wealth of a nation’s highest earners to redirect income toward the basic needs of people who earn the least. Arguments for such a policy, as well as those against it, attempt to influence the wealthy group who controls the money today and, historically, most often controls the levers of power necessary to implement any such policy change. Whether the possession of money is directly related to the control of power, and the moral or ethical implications of that relationship, has been set aside by this discourse because the focus is always on one side of the exigency or the other. Supportive arguments that favor UBI provide historical analysis, data-driven rationales, and anecdotes that attempt to engage the audience through enthymeme. Oppositional arguments, such as Cass’s white paper, tend to leverage coercive language based on institutionalized fears of change, discussions of social class, and generalized examples of potential financial challenge. In this paper, I argue that Cass’s argument is not persuasive because he constitutes an audience distrustful of facts, ignores basic math when estimating costs, and inaccurately labels everyone outside his ideology as outsiders and less worthy. Ultimately, his terrible argument against UBI disconnects from reality, disengages from the exigency it claims as its topic, and dissolves into false claims and racist tropes that demean American citizens.

Literature Review

Audience as the foundation of argument enables the rhetor to choose the right word, form the most influential argument, and present evidence based on the expectations of the immediate reader as well as the many virtual readers whom the author cannot necessarily presuppose.
“A reenergized citizenry committed to carrying on the fight” (Campbell et al., 2015, p. 30) aptly describes the audience for Cass’s white paper: a group of citizens, historically connected to the society – in this case Americans – who have been enthused to action by progressive arguments that threaten their grip on power and wealth. They engage in the good fight because they believe their power means they have been right all along. Skeptically identifying the best-known speakers, thought leaders, and actors in the movement in favor of UBI is how Cass builds his credibility with his audience. In her book Rhetorical Criticism: Exploration and Practice, Sonja K. Foss devotes a chapter to what she labels Ideological Criticism, described as “the privileging of the ideology of one group over the ideologies of other groups… that represents experience in ways that support the interests of those with more power” (Foss, 2018, p. 239). This is an accurate description of the methodology Cass’s white paper deploys to argue his position. Society’s belief in his version of social norms and values must be maintained, or else his argument unravels into a selfish, greedy, and isolated position that reminds me of Shelley’s poem “Ozymandias,” written in 1817, which describes the statue of the world’s once most powerful leader centuries later, now decrepit: “two vast and trunkless legs of stone/stand in the desert” (Shelley, 2002). Only the legs remain, not the body, like an argument that needs a powerful body but has only the legs to move the once powerful assertions that have been diminished by time. In addition to using norms and values to build his arguments, Cass leverages two additional techniques Foss calls Hegemonic Ideology: “position and group relations” and “ultimate authority” (Foss, 2018, p. 238).
The former concept focuses on the audience as “supporters of the group members” in relation to “their enemies or opponents,” while the latter asks, “what is the sanctioning agent” with the power to arbitrate what is true or should be excluded from the argument (Foss, 2018, p. 238). The implied racism and obvious classism inherent in several of Cass's arguments attempt to redirect factual arguments presented by eminent scholars and productive businesspeople into a narrow discussion of unsubstantiated, idealized conservative opinions of history as it relates to current affairs. From a liberal perspective, this redirection is a necessary function of conservative discourse because the basis of its arguments is a coercive misrepresentation of historical facts necessary to support the dominant ideology. Facts matter, but not to Cass. In the following pages, I will leverage these rhetorical concepts to unravel the prejudiced, classist, elitist, and wrong-headed essay of an archconservative who is far more concerned with maintaining his white privilege than with discussing the merits of progressive proposals that attempt to solve the problem of unemployment, and the related issue of poverty, through an income redistribution program named UBI.

Analysis

The central idea of his essay is effectively conveyed by the title, “Why Universal Basic Income is a Terrible Idea” (Cass, 2016, p. 1), but it reflects the core weakness of his argument, which requires a sympathetic audience and, as such, does not even attempt persuasion based on analysis, anecdote, or data. Cass elaborates on his central idea with a flamboyant, one-line paragraph that opens the white paper – “UBI would only entrench our misconceptions about the relationship between the individual and the state” (Cass, 2016, p. 1) – previewing his ideologically focused argument against wage replacement and social welfare benefits.
He does not write for an audience of policy makers who require analysis to make decisions, but for those who are motivated by ideology and their memory of the past. Unfortunately for his argument, those memories are often incomplete and limited because they include only personal perspectives. Without more complex and nuanced objective viewpoints, his arguments can be more readily consumed by an audience that observes the discourse but does not directly participate in creating the policy response it enacts. Cass establishes his ethos by presenting a roll call of eminent think-tank writers, politicians, and economists, positioning himself as credible because he knows the issue's current arguments. After introducing the idea's promoters like the chorus in a Greek drama – “Columnists,” followed by two more groups he names inside quote marks, “‘data journalists’” and “‘explainers’” – he ends the list by naming the group that stands to provide the most benefit to society while also being a symptom of the unemployment problem UBI would solve: “technologists” (Cass, 2016, p. 1). Cass intends the quotes around the categorical names of his opponents to instill skepticism of those groups in his audience. It is his way of implying that ‘so-called’ or ‘supposed’ should precede their names because they are not part of his, and his audience's, group. He must believe that his group is uninterested in facts but intrigued by statements that sound impressive, because he uses weak data to support his position that a UBI would be cost prohibitive. The second paragraph introduces an elementary math error when estimating the cost per individual for UBI: “a monthly check of $800 or $1,000 to cover basic needs… a couple would receive $20,000 per year” (Cass, 2016, p. 1). Monthly checks at those amounts would total $9,600 and $12,000 per individual per year, but the intention of his argument is not to be objective, rational, or accurate.
Instead, he intends to make arguments with which his audience is already familiar because they are part of a social class and ethnic group that fears any benefit given to someone other than themselves. I agree with Foss's point about ideological rhetoric that a “dominant ideology controls what participants see as natural or obvious by establishing the norm” (Foss, 2018, p. 239), and the sole data-driven example in Cass's paper attempts to establish a false fact that will be meaningful to his audience because it sounds good, without regard for the accuracy of the basic math. Following this transparently inflated cost estimate, Cass transitions to the body of his argument through a ‘They Say/I Say’ move that refutes a progressive labor secretary by implying that he has presented a false obligation: “Former labor secretary Robert Reich, plugging Stern's effort, says, ‘America has no choice.’ Actually, we do have a choice — one that goes far beyond safety-net details to reach the very heart of state and society” (Cass, 2016, p. 2). The argument against Reich is not followed by justification for an alternative opinion; instead, the enthymeme is the conservative reverence for a secular state with heart, based on the historical values of an oppressive Caucasian majority. He reminds his audience of everlasting American norms and values while naming his opponents as defenders of a social safety net that is detrimental to the moral values embodied in American society, as he sees it. As Foss explained, hegemonic ideology privileges one group above another. Cass casts his opponents as breakers of the societal covenant that he believes was established in the prior century. For Cass, values and norms should not change. He claims that government would become a provider if UBI were enacted, implying without evidence that this would be a new role for government.
Just a paragraph earlier, though, he had referenced the “safety-net programs that would no longer be necessary,” yet he now contradictorily states that government does not provide for “individuals, families, or communities” (Cass, 2016, p. 2). Like the elementary math error that initiates his argument, contradicting himself on the same page is not a concern while he reinforces an ideological distrust of facts and logic. Cass reminds his readers of the racist and classist theories of the past, positioning the United States as a country with two sides, “the Haves” and “the poor and the black,” the latter being the beneficiaries of policies “that have absolved people of responsibility for themselves and one another… and thereby eroded the foundational institutions of family and community that give shape to society” (Cass, 2016, p. 2). Cass's narrow point of view, built on racist principles created to maintain wealth and power among a few citizens, is his replacement for concrete evidence supporting his central idea. Campbell points to this directly when she labels an audience a “reenergized citizenry committed to carrying on the fight” (Campbell et al., 2015, p. 30). By grouping those in poverty with the black community, he attempts to ostracize both groups with a norm-breaking label implying that they have not taken responsibility for themselves or their society. Like his use of skeptical quotes to label his detractors ‘so-called’ and unworthy, Cass uses blatantly racist tropes to define his opposition to social welfare programs while attempting to establish his argument and his group as the ultimate authority.
He claims that “unfulfilling” work is nonetheless imbued with meaning, which supports a conservative view of the family as male-dominated – an emphasis reinforced by his reference to the worker as a “breadwinner” who provides essentials for the family – whereas UBI would eliminate work's “essential role as the way to earn a living, work would instead be an activity one engaged in by choice, for enjoyment, or to afford nicer things” (Cass, 2016, p. 4). Yet work is already these things for some people at all levels of the wage scale, especially those who have earned higher wages in the past. I am an example of this type of worker: I have worked hard and sacrificed to generate enough savings to respond to unfulfilling employment by making a life change that I believe will lead to more rewarding work. Mark Zuckerberg and other billionaires have said that a secure financial safety net would have enabled them to invest more energy and time into their big ideas, so that the benefits to society would have arrived sooner: “We should have a society that measures progress not by economic metrics like GDP but by how many of us have a role we find meaningful” (Zuckerberg, 2017, 18:00). Wealth as the result of innovation is a net positive to Cass, but his ideology refuses to accept the societal changes that may accompany it, most notably when there is a cost to his audience's wealth. Zuckerberg's speech, most progressive discourse, and my own experience all acknowledge that work is not only essential but also enjoyable, and a choice determined by bigger factors than a need for more expensive things. Finally, Cass tries to act as the agent that sanctions what is right or wrong in the personal lives of low-income workers. He restates the standard rebuttal to the Luddite Fallacy – that technological progress leads to more opportunity for work – and assumes that farm labor simply transitioned into service labor.
That may or may not be true, but this year we have seen how providing wage support during the pandemic has altered the loyalty of America's service workers to their exploitative, and now potentially health-damaging, employment. As reported by the Chicago Tribune, “Americans quit their jobs at a record pace for the second straight month in September” (Rugaber, 2021). Workers changing jobs at the highest rate in American history most often cite dissatisfaction and risk to health as their reasons. The relatively small additional wages provided to offset job loss during the pandemic have enabled these workers to choose new jobs with better incentives to work. They are not selecting the option that Cass suggests will be the outcome of UBI – not working. As he presents his final arguments against providing benefits that are not tied to work programs for low-income workers, Cass uses Charles Murray's well-known classist, misogynistic argument about the difference in intact nuclear families between 1960 – a time idealized by twenty-first-century conservatives – and the early 2000s. Murray's data report that in 1960 more than 95% of children lived with two parents when their mother was 40 years old, a share that has declined to 60% in this century. He quotes Murray several times, reminding his audience “that it calls into question the viability of white working-class communities” (Cass, 2016, p. 6). The difference in what he labels intact households is more likely related to the fact that stress caused by unfulfilling work is a major contributor to domestic abuse, according to the American Psychological Association (APA).
The APA's guidelines on domestic abuse note that “individuals living with LIEM [Low Income and Economic Marginalization] may suffer from increased mental health symptoms and mental health disorders; limited opportunity for engaging in healthy behaviors; and a decreased capacity to manage stressors both cognitively, mentally and socially” (Parker, 2019). Cass avoids addressing these challenges faced by LIEM households, as well as the APA's finding, an omission indicative of a fundamental problem with the conservative mindset: the assumption that a married woman is necessarily better off. In conclusion, Cass's white paper “Why a Universal Basic Income Is a Terrible Idea” provides an example of a conservative American pundit who knows his audience well. His, and his audience's, fact-free hegemonic ideology leverages weak arguments, elementary math errors, generalized and counterfactual labels for opposing voices, and overt racism to convey its elitist argument. According to Foss (2018), this “constitutes a kind of social control, a means of coercion, or form of domination by more powerful groups over the ideologies of those with less power” (p. 239). The values and norms of the group in power – often defined as the group with the most wealth, and the most wealth to lose via redistribution policies – demand an ideology that assesses their beliefs as good and the opinions of those who disagree as contrary to society. Asserting ultimate authority, his statistics need not be corroborated, nor is any broader context allowed that would complicate the stated opinion of his group.
Despite his reliance on the principle that a citizenry can aggressively defend its ideals in the face of overwhelmingly oppositional facts, I am grateful for a citizenry's ability to mobilize into action, to innovate for the benefit of all society and all workers, and to gather data and anecdotes that can be validated and used to make better decisions about how to support workers through times of unemployment – for example, through an income insurance program like UBI.

References
In our modern technological society, billions of humans devote hours a day to the algorithmic paradises they have created with their friends and family: communication platforms such as Facebook, Yahoo, and Reddit. Every day, humans consume messages from pundits on these platforms stating that this same beloved technology is also stealing our jobs. The white paper “Mounting a Response to Technological Unemployment,” written by Andrew Stettner and published by The Century Foundation on April 26, 2018, is a clarion call that responds to the tension between our love of technology and its power to negatively alter the lives of hard-working people. The danger technology presents to the social well-being of society has been a growing element in Western discourse for over a century, but until the last 34 years, technological advances tended to favor improved working conditions for humans, productivity gains, and net increases in new jobs.
Since 1987, the popularization of the personal computer – which now fits in our pocket and performs exponentially more tasks – has increased the risk to our social welfare and economic well-being. The exigency is obvious to many, and especially to me, because for ten years I have participated in building the automation that causes technological unemployment. Every solution to this problem once seemed improbable at best, but after a deep analysis of Stettner's paper about his proposal, Technology and Trade Adjustment Assistance (TTAA), it no longer seems so complicated. His paper leverages an efficient structure and a legislative persona, and it addresses both sides of the policy discussion through well-documented rhetorical strategies that effectively manage the many dualities in the discourse regarding income replacement to mitigate the economic risk of technological unemployment. Campbell's seven elements of descriptive analysis can be found in this white paper in support of the argument that some form of income replacement is a good solution to the problem created by technological unemployment. I will focus primarily on Structure, secondarily on Persona, and on two frequently used rhetorical techniques: analogy and negation. In addition, I will analyze how the author deploys the “They Say / I Say” argument to address criticisms of his proposal quickly, so that he can devote most of his attention to setting the stage and elaborating the solution. Structure – how “materials are organized to gain attention, develop a case, and provide emphasis” (Campbell et al., 2015, p. 28) – is the primary focus of my analysis because Stettner's document provides a useful template for the final policy speech that I will present at the end of this class.
The exigency that is my focus is familiar to our policy-making audiences without being well known, and as such it requires a greater investment in staging context, current research, and approaches already attempted before progressing to any solution proposal. I believe that one of this document's greatest strengths is how it resolves the complexity inherent in this exigency by allocating space (text) very effectively among the constituent parts of the argument. Stettner's white paper was produced to represent the position of the well-respected think tank that supported its publication. Writing for a virtual audience of policy makers, he makes assumptions about their priorities and objections and addresses them with a persona that implies he is an experienced legislator, even though he is not. The text frequently presumes objections and foresees legislative challenges, addressing them directly with the two rhetorical techniques that I will analyze in detail. Analogy and negation have a natural relationship because both are dichotomous rhetorical techniques, well suited to building similarities and eliminating the complexity created by multiple, opposing entities. The entities in my exigency frequently conflict because they are poorly understood and because they are relatively new observations that lack clear direction for policy proposals. In his white paper, Stettner minimizes complexity to undercut our anxiety about the competing options to resolve the exigency while also insisting that it is real and resolvable. Any worthy presentation – written or spoken to a live audience – must begin immediately and effectively, and this white paper does so.
The title, “Mounting a Response to Technological Unemployment,” is a clear call to action that immediately presents the central idea, which is first contextualized by scoping the problem through evidence, then related to existing inertia observed in parallel policies, and finally elaborated into a detailed solution that targets the stated problem. The paper is structured simply: introduction, problem context with historical and current perspectives, alternative or parallel solutions, proposed solution with answers to rebuttals, and conclusion. I am impressed that his introduction is a mere 425 words and his conclusion even fewer, 131 words, from a total of 15,000 words; only 3.7% of the text surrounds the body of his argument. Because I believe the structure of this paper provides a template for my policy speech, it is important to extend this analysis. After the introduction, the next 4,200 words (28%) provide the context and history, as well as current research about the problem. Stettner's effort to stage the context and history that favor his solution references several of the same citations that I used in my Problem Speech, although I found that his argument about the impact of ATMs on the employment of bank tellers did not include the additional demographic information I had identified in US Labor Statistics data for the relevant period. He stated, “But in fact, they’ve allowed banks to increase the number of branches, each with fewer tellers focused more on customer service” (Stettner, 2018, p. 3), whereas my argument dug deeper. Though our arguments were initiated by the same article in the Journal of Economic Perspectives written by David H.
Autor in 2015, my speech included the following, per the outline: “Seeing that example from Autor, I found the overall US employment statistics for the same period… US had 91M workers in 1980 and 139M workers in 2010, a 55% increase, compared to the 10% increase in bank tellers after ATMs… Negative on jobs at both the micro-level of bank [tellers] and across the industry at a macro-level” [italics added for emphasis] (Frazier, 2021, p. 6). As I learned first-hand, an important aspect of the policy discussion about technological unemployment is the Luddite Fallacy, which I presented in my speech as the counterargument to the position that robots will take away our jobs. It may have been an oversight, but in my opinion Stettner kept the ATM example's data incomplete so that he could reference the Luddite Fallacy counterargument, which rounds out the historical context for technological unemployment even though it detracts from his thesis. After presenting the context and history of the exigency, he transitions to the current research: a thorough index of alternative solutions to employment problems and a deep dive into a program implemented in the 1960s to protect workers from job losses created by employment migrating out of the United States under the trade policies of that era. Of the previously noted 4,200 words prior to the solution proposal, his description of current research uses 2,900 words (nearly 20% of the entire document). This is a long section, which he acknowledges when he writes that his “paper goes into depth about the possibility and merits of expanding TAA, providing more details to an idea that has been briefly mentioned in the literature but not fleshed out,” avoiding the challenges of building something new and deftly titling his solution section “Expanding TAA to Include Technology: The Extra T in TAA” (Stettner, 2018, p. 21).
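The space allocations cited in this analysis can be verified directly from the essay's own word counts (15,000 words total):

```latex
\frac{425 + 131}{15{,}000} = \frac{556}{15{,}000} \approx 3.7\%
\qquad
\frac{4{,}200}{15{,}000} = 28\%
\qquad
\frac{2{,}900}{15{,}000} \approx 19.3\% \;(\text{nearly } 20\%)
```

All three proportions match the figures quoted in the text.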
He suggests that the Trade Adjustment Assistance program legislated in 1962 can be modified to compensate workers for technology-related job losses in addition to the existing benefits for job losses due to trade migration. He positions this program as the best option among the many alternatives in his index of solutions partly because he is merely fleshing out an idea already approved by the United States Congress. This understated language reflects Stettner's persona throughout his policy argument, which aligns with the publisher's stated mission as a “progressive, independent think tank that conducts research, develops solutions, and drives policy change to make people’s lives better” (The Century Foundation, About Us, 2021) by focusing the audience of policy makers on people about whom they may have minimal direct personal knowledge: the middle class and low-skilled workers in the United States. The last paragraph of his introduction begins with an assumption, voiced like a legislator, that reflects the inertia his argument is trying to leverage – “the growing consensus on the types of tasks that are vulnerable to displacement” – and finishes with a list of policy jargon such as “displacement… trade-related job losses… adjustment assistance… elements of a policy response to this risk… pilot program… underlying TAA program” (Stettner, 2018, p. 1). I hear him signaling to his audience his credibility as a policy thought leader who writes like a legislative insider. Aside from this strong presence at the beginning of the argument, though, he steps back to let the context, history, and current research tell the story, and the data provide the evidence supporting the central idea captured in the title of his white paper.
To mount a response to technological unemployment, Stettner builds a linguistic analogy between the existing TAA program and his solution, using semantic likeness to generate confidence in his proposal. Because this “analogy is a fresh comparison that strikes the audience as highly apt, it has considerable persuasive force” (Campbell et al., 2015, p. 99). By investing almost 1,700 words (11%) in a detailed description of the benefits, successes, and improvement opportunities of the TAA program, Stettner provides a good example of Campbell's observation about analogy: “[a]s a rule, a figurative analogy is more powerful when it is more comprehensive; that is, when there are many points of similarity” (Campbell et al., 2015, p. 99). Stettner has been comprehensive, including extensive historical and contextual perspective as well as current research on the existing TAA program, which enables him to bridge to his solution proposal by simply adding an “Extra T in TAA” (Stettner, 2018, p. 21). After building many strong connections between the existing program and his proposed response to the problem of technological unemployment, Stettner dedicates the next 25% of his text to the solution. To reduce the apparent complexity of his solution proposal, and in contrast to the detailed lists of policy requirements, goals, and mitigations, he describes his proposed “TTAA” program this way: “[t]his straightforward expansion would make the same mix of services available to both trade- or technology-certified workers” (Stettner, 2018, p. 25). This is a good example of negating as described in our textbook – “negation underlies all comparisons and contrasts, including those involved in literal and figurative analogies… but comparisons are also involved in definitions” (Campbell et al., 2015, p.
170) – in this case trying to reset any anxiety that may have developed in his audience as they progressed through his solution proposal. Almost immediately after using “straightforward,” he again uses negation “to characterize a person or thing” (Campbell et al., 2015, p. 188) by describing the benefits provided by the existing program he wants to mimic as a “current menu of standard options” (Stettner, 2018, p. 25), as if the audience were seated at a restaurant reviewing a list of entrees from which to choose their meal. This familiar language understates the complexity of his argument, effectively leveraging the analogy to simplify the policy argument. The solution section closes with a rebuttal to what he believes are the two primary ambiguities that might stymie his plan. Writing for policy makers, he addresses their first concern like a legislator: “many forms of artificial intelligence and automation, like autonomous vehicles, may need Congressional or regulatory permission to expand. These may provide political opportunities for enacting a program like TTAA in exchange for government approval of the technology” (Stettner, 2018, p. 26). Any concerns about how to qualify industries and workers for the program he calls “a seemingly minor but important detail… unemployment records typically track the workers industry” (Stettner, 2018, p. 26), unifying both ends of the classification problem with those final two words and presenting the fraught relationship between workers and industry as minor but solvable through the well-known unemployment assistance program.
I found a comparable semantic problem examined in an article published by the MIT Technology Review, which described Andrew Yang's challenge in presenting Universal Basic Income to Americans as resolved when “he workshopped multiple options before landing on ‘freedom dividend,’” qualified by the author's opinion: “After all, capitalism has become synonymous with the American dream, and what’s more capitalistic than a dividend? And freedom ... well, that part speaks for itself” (Guo, 2021). Stettner's and Yang's figurative analogies – workers industry and freedom dividend – characterize the effect of the programs to create a definition of the solution and the impact it will have on people. I see this as a good example of Campbell's definition of negation. Finally, to address several negative findings related to the existing TAA program, Stettner uses “They Say / I Say” as a swift rebuttal, removing counterarguments quickly. For example, “This is not simply a failing of the program or its model, but rather reflects the fact that TAA recipients are laid off in communities with few good jobs” (Stettner, 2018, p. 18) deepens our concern about the unemployment problems related to his argument. He goes further when he adds, “In short, it’s time for a closer look at the results of TAA, and to put to bed the idea that it’s an ‘ineffective’ program” (Stettner, 2018, p. 18). Like his use of “menu” cited earlier, and like “workers industry” and “freedom dividend,” phrasing the opposition's argument as ready for bedtime diminishes its power. These semantic analogies link the concepts being discussed to familiar, non-threatening aspects of daily life. He has heard the counterarguments to his proposal, which focus on perceived challenges to the existing TAA program's success rate, and he quickly responds by noting that those challenges already existed, and continue to exist, in the affected communities.
A great strength of this paper is that it does not try to reinvent the wheel to solve a problem that is roughly 34 years old. Often, new problems lead one to think deeply about new solutions. Instead, this argument eloquently relates the technological unemployment problem to an issue that has existed for almost twice as long: trade-related unemployment. This approach enables frequent use of analogy, instead of mind-numbing explanations of complicated new solutions, to support his argument and focus on his central idea, mounting a response to technological unemployment, as proclaimed by the title of his paper. There are many risks in engaging the problem of technological unemployment in a way that mitigates it effectively and equitably without interfering with the capitalistic imperative to create new gadgets, bots, and algorithms that simplify our lives while enriching their creators. As I have discussed above, this paper published by The Century Foundation avoids those risks by investing its semantic resources in an effectively designed structure that allocates the most text where it is most needed, exposing and resolving the dualities inherent in this exigency with implicitly dichotomous rhetorical techniques, and quickly addressing disagreements and presumed challenges with “They Say / I Say” arguments. The solution to this problem has always seemed complex to me, as it does to most people, but perhaps not so much anymore. The author's Technology and Trade Adjustment Assistance (TTAA) program does not exist yet, but I can see it clearly now. That is the strength of analogy. Andrew Stettner's proposed program addresses the risk to social cohesion created by rapid advances in technology without limiting the opportunities for wealth and convenience demanded by our capitalistic, freedom-loving American society.

References
Author: Student of Education, English, and Learning Technology at UMN.
May 2022