Module 4 – Intelligence Introduction and Testing
Having grown up in an era when “learning styles” were the new fad in education, I remember hearing about different kinds of intelligence without understanding exactly what that meant. The approach shaped classroom management, influencing the groups teachers created for shared assignments – sometimes mixing types, other times grouping students of a similar type. In my experience, the types were applied inconsistently between classrooms, did not apply in all classrooms, and different teachers held different opinions about a student’s type. Although the concept always seemed ambiguous, there was a surface reasonableness to the idea that kept my skepticism at bay. The brief section called “Multiple Intelligences” in Stuart Ritchie’s book helped me understand the concept and, at the same time, doubt its authenticity. Intuitively, my experience has been that individuals with high intelligence tended to be top performers in most subjects, if not all of them, and the contrary was also true. This was my first finding: the concept appears to be an educational fad that has not been supported by reliable evidence.

This module’s deep dive into the components of intelligence testing provided many insights into the mental models used to evaluate intelligence through multi-part sub-tests that measure the specific skills contributing to the broad definition provided at the start of the book: our understanding is currently strong enough to distinguish learned knowledge from innate problem-solving ability. Like Aristotle’s Five Wits, the sub-divisions of intelligence measured by tests tend to group into five to seven categories that can be allocated to one of those two groups in the definition – Gc for learned knowledge and Gf for innate, measurable mental ability – my second finding.

The Positive Manifold’s high correlation was impressive. Having studied correlation in a prior Educational Psychology class focused on statistics, where I learned how difficult strong correlations are to find, I take the high correlation across multiple samples as strong evidence for the data produced by general intelligence testing. This may result from the measurements relying heavily on the same competencies (generally five to seven in any test, often rolling up to two primary categories), or it may be what was suggested: that IQ testing is reliable over time. As we explore the types of testing in more detail in upcoming modules, I will pay close attention to this question. In this module, Guilford was the one example of a different approach, with many more categories being tested (up to 120 factors!), though without information about how his approach correlates with other tests.

In the past, I have worked on “cohort analysis” for business needs – generally to segment groups of employees, customers, or vendors – so I was somewhat skeptical of a scoring system that factors test results by age to produce a useful result. Another finding was Wechsler’s test, the most widely used according to the lecture notes, which makes a sensible adjustment to that scoring factor. By dividing the obtained score by the expected score for the test-taker’s age, the chronological element becomes embedded neatly in the result, simplifying comparisons over time and across cohorts; an ingenious solution that reduces effort and waste in the scoring process.
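As a minimal sketch of that scoring arithmetic (my own illustration of the classic ratio-style formula, not something quoted from the lecture notes):

\[
\mathrm{IQ} = \frac{\text{score obtained}}{\text{score expected at the test-taker's chronological age}} \times 100
\]

Under this framing, a ten-year-old who performs at the level expected of a typical twelve-year-old would score \( \tfrac{12}{10} \times 100 = 120 \), so the age adjustment is built directly into the single reported number.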
In our modern technological society, billions of humans devote hours a day to the algorithmic paradises they have created with their friends and family: communication platforms such as Facebook, Yahoo, and Reddit. Every day, humans consume messages from pundits on these platforms stating that the same beloved technology is stealing our jobs. The white paper “Mounting a Response to Technological Unemployment,” written by Andrew Stettner and published by The Century Foundation on April 26, 2018, is a clarion call that responds to the tension between our love of technology and its power to negatively alter the lives of hard-working people. The danger technology presents to the social well-being of society has been a growing element in Western discourse for over a century, but until the last 34 years, technological advances tended to favor improved working conditions for humans, productivity gains, and net increases in new jobs.
In 1987, the popularization of the personal computer – which now fits in our pocket and performs exponentially more tasks – increased the risk to our social welfare and economic well-being. The exigency is obvious to many, and especially to me, because I have spent ten years building the kind of automation that causes technological unemployment. Every solution to this problem once seemed improbable at best, but after a deep analysis of Stettner’s paper about his proposal, called Technology and Trade Adjustment Assistance (TTAA), it does not seem so complicated. His paper leverages an efficient structure and a legislative persona, and it addresses both sides of the policy discussion with well-documented rhetorical strategies that effectively manage the many dualities in the discourse over income replacement as a way to mitigate the economic risk of technological unemployment.

Campbell’s seven elements of descriptive analysis can all be found in this white paper, supporting the argument that some form of income replacement is a good solution to the problem created by technological unemployment. I will focus primarily on structure, secondarily on persona, and on two frequently used rhetorical techniques: analogy and negation. In addition, I will analyze how the author deploys the “They Say / I Say” argument to address criticisms of his proposal quickly, so that he can devote most of his attention to setting the stage and elaborating the solution.

Structure, how “materials are organized to gain attention, develop a case, and provide emphasis” (Campbell et al., 2015, p. 28), is the primary focus of my analysis because Stettner’s document provides a useful template for the final policy speech I will present at the end of this class. The exigency that is my focus is familiar to policy-making audiences without being well-known, and as such it requires a greater investment in staging context, current research, and approaches already attempted before progressing to any solution proposal. One of this document’s greatest strengths is how it resolves the complexity inherent in this exigency by allocating space (text) very effectively to the constituent parts of the argument.

Stettner’s white paper was produced to represent the position of the well-respected think tank that supported its publication. Writing for a virtual audience of policy makers, he makes assumptions about their priorities and objections and addresses them with a persona that implies he is an experienced legislator, even though he is not. Frequently, the text presumes objections and foresees legislative challenges, directly addressing them using the two rhetorical techniques I will analyze in detail. Analogy and negation have a natural relationship because both are dichotomous rhetorical techniques, well suited to building similarities and eliminating the complexity created by multiple, opposing entities. The entities in my exigency frequently conflict because they are poorly understood and because they are relatively new observations that lack clear direction for policy proposals. In his white paper, Stettner minimizes complexity to undercut our anxiety about the competing options while insisting that the exigency is real and resolvable. Any worthy presentation – written or spoken to a live audience – must begin immediately and effectively, and this white paper does it well.
The title, “Mounting a Response to Technological Unemployment,” is a clear call to action that immediately presents the central idea, which is then contextualized by scoping the problem through evidence, related to the inertia of parallel policies, and finally elaborated into a detailed solution that targets the stated problem. The paper is structured simply: introduction; problem context with historical and current perspectives; alternative or parallel solutions; proposed solution with answers to rebuttals; and conclusion. I am extremely impressed that his introduction is a mere 425 words and his conclusion even shorter at 131 words, out of roughly 15,000 words in total; only 3.7% of the text surrounds the body of his argument. Because I believe the structure of this paper provides a template for my policy speech, it is worth carrying this analysis further. After the introduction, the next 4,200 words (28%) provide the context, history, and current research about the problem.

Stettner’s staging of the context and history that favor his solution references several of the same citations I used in my Problem Speech, although I found that his argument about the impact of ATMs on the employment of bank tellers did not include the additional demographic information I had identified in the US Labor Statistics data for the relevant period. He stated, “But in fact, they’ve allowed banks to increase the number of branches, each with fewer tellers focused more on customer service” (Stettner, 2018, p. 3), whereas my argument dug deeper. Though our arguments were initiated by the same 2015 article by David H. Autor in the Journal of Economic Perspectives, my speech included the following, per the outline: “Seeing that example from Autor, I found the overall US employment statistics for the same period… US had 91M workers in 1980 and 139M workers in 2010, a 55% increase, compared to the 10% increase in bank tellers after ATMs… Negative on jobs at both the micro-level of bank [tellers] and across the industry at a macro-level” [italics added for emphasis] (Frazier, 2021, p. 6).

As I learned first-hand, an important aspect of the policy discussion about technological unemployment is the Luddite Fallacy. I presented it in my speech as the counterargument to the position that robots will take away our jobs. It may have been an oversight, but in my opinion, Stettner used the incomplete ATM data in order to work in a reference to the Luddite Fallacy counterargument, rounding out the historical context for technological unemployment even though it detracts from his thesis.

After presenting the context and history of the exigency, he transitions to the current research: a thorough index of alternative solutions to employment problems and a deep dive into a program implemented in the 1960s to protect workers from job losses created by employment migrating out of the United States under the progressive trade policies of that time. Of the previously noted 4,200 words prior to the solution proposal, his description of current research takes up 2,900 words (nearly 20% of the entire document).
This is a long section, which he acknowledges when he writes that his “paper goes into depth about the possibility and merits of expanding TAA, providing more details to an idea that has been briefly mentioned in the literature but not fleshed out” (Stettner, 2018, p. 21). He avoids the challenges of building something new, deftly titling his solution section “Expanding TAA to Include Technology: The Extra T in TAA.” He suggests that the Trade Adjustment Assistance program legislated in 1962 can be modified to compensate workers for technology-related job losses in addition to the existing benefits for job losses due to trade migration. He positions this program as the best option among the many alternatives in his index of solutions partly because he is merely fleshing out an idea already approved by the United States Congress.

This understated language typifies Stettner’s persona throughout his policy argument, which aligns with the publisher’s stated mission to be a “progressive, independent think tank that conducts research, develops solutions, and drives policy change to make people’s lives better” (The Century Foundation, About Us, 2021) by focusing the audience of policy makers on people about whom they may have minimal direct personal knowledge: the middle class and low-skilled workers in the United States. The last paragraph of his introduction begins, in a legislative voice, with an assumption that reflects the inertia his argument is trying to leverage, “the growing consensus on the types of tasks that are vulnerable to displacement,” and finishes with a list of policy jargon such as “displacement… trade-related job losses… adjustment assistance… elements of a policy response to this risk… pilot program… underlying TAA program” (Stettner, 2018, p. 1). I hear him signaling to his audience his credibility as a policy thought leader who writes like a legislative insider. Aside from this strong presence at the beginning of the argument, though, he steps back to let the context, history, and current research tell the story, and the data provide the evidence, supporting the central idea captured in the title of his white paper.

To mount a response to technological unemployment, Stettner builds a linguistic analogy between the existing TAA program and his solution, using semantic likeness to generate confidence in his proposal. Because this “analogy is a fresh comparison that strikes the audience as highly apt, it has considerable persuasive force” (Campbell et al., 2015, p. 99). By investing almost 1,700 words (11%) in a detailed description of the benefits, successes, and improvement opportunities of the TAA program, he offers a good example of Campbell’s observation that “[a]s a rule, a figurative analogy is more powerful when it is more comprehensive; that is, when there are many points of similarity” (Campbell et al., 2015, p. 99). Stettner is comprehensive, including extensive historical and contextual perspective as well as current research on the existing TAA program, which lets him bridge to his solution proposal by simply adding an “Extra T in TAA” (Stettner, 2018, p. 21). After building many strong connections between the existing program and his proposed response to technological unemployment, Stettner dedicates the next 25% of his text to the solution.
To reduce the apparent complexity of his solution proposal, and in contrast to the detailed lists of policy requirements, goals, and mitigations, he describes the TTAA program that is his proposed solution in plain terms: “[t]his straightforward expansion would make the same mix of services available to both trade- or technology-certified workers” (Stettner, 2018, p. 25). This is a great example of negation as described in our textbook: “negation underlies all comparisons and contrasts, including those involved in literal and figurative analogies… but comparisons are also involved in definitions” (Campbell et al., 2015, p. 170). Here it works to defuse any anxiety that may have developed in his audience as they progressed through the solution proposal. Almost immediately after using “straightforward,” he again uses negation “to characterize a person or thing” (Campbell et al., 2015, p. 188) by describing the benefits of the existing program he wants to mimic as a “current menu of standard options” (Stettner, 2018, p. 25), as if the audience were seated at a restaurant reviewing a list of entrees. This familiar language understates the complexity of his argument, effectively leveraging analogy to simplify the policy argument.

The solution section closes with a rebuttal to what he believes are the two primary ambiguities that might stymie his plan. In a paper directed to policy makers, he addresses the first concern like a legislator: “many forms of artificial intelligence and automation, like autonomous vehicles, may need Congressional or regulatory permission to expand. These may provide political opportunities for enacting a program like TTAA in exchange for government approval of the technology” (Stettner, 2018, p. 26). Any concerns about how to qualify industries and workers for the program he calls “a seemingly minor but important detail… unemployment records typically track the workers industry” (Stettner, 2018, p. 26), unifying both ends of the classification problem in those final two words and framing the challenging relationship between workers and industry as minor but solvable through the well-known unemployment assistance program.

I found a comparable semantic problem examined in an article published by the MIT Technology Review, which described Andrew Yang’s challenge in presenting Universal Basic Income to Americans as resolved when “he workshopped multiple options before landing on ‘freedom dividend,’” qualified by the author’s opinion: “After all, capitalism has become synonymous with the American dream, and what’s more capitalistic than a dividend? And freedom ... well, that part speaks for itself” (Guo, 2021). Stettner’s and Yang’s figurative analogies, “workers industry” and “freedom dividend,” characterize the effects of the programs, creating a definition of each solution and the impact it will have on people. I see this as a great example of Campbell’s definition of negation.

Finally, to address several negative findings related to the existing TAA program, Stettner uses “They Say / I Say” as swift rebuttal, removing counterarguments quickly. For example: “This is not simply a failing of the program or its model, but rather reflects the fact that TAA recipients are laid off in communities with few good jobs” (Stettner, 2018, p. 18), which further deepens our concern about the unemployment problems related to his argument.
He goes further when he adds, “In short, it’s time for a closer look at the results of TAA, and to put to bed the idea that it’s an ‘ineffective’ program” (Stettner, 2018, p. 18). Like his use of “menu” cited earlier, and like “workers industry” and “freedom dividend,” phrasing the opposition’s argument as being ready for bedtime diminishes its power. These semantic analogies link the concepts under discussion to familiar, non-threatening aspects of daily life. He has heard the counterarguments to his proposal, which focus on perceived challenges to the existing TAA program’s success rate, and he quickly responds that those challenges already existed, and continue to exist, in the affected communities.

A great strength of this paper is that it does not try to reinvent the wheel to solve a problem that is roughly 34 years old. Often, new problems lead one to think deeply about new solutions. Instead, this argument eloquently relates the technological unemployment problem to an issue that has existed for almost twice as long: trade-related unemployment. This approach enables frequent use of analogy, instead of mind-numbing explanations of complicated new solutions, to support the argument and keep the focus on his central idea, mounting a response to technological unemployment, as proclaimed by the title of his paper.

There are many risks in engaging the problem of technological unemployment: mitigating it effectively, with an equitable solution, without interfering with the natural capitalistic imperative to create new gadgets, bots, and algorithms that simplify our lives while enriching their creators. As I have discussed above, this paper published by The Century Foundation avoids those risks by investing its semantic resources in an effectively designed structure that allocates the most text where it is most needed, exposing and resolving the dualities inherent in this exigency with implicitly dichotomous rhetorical techniques, while quickly addressing disagreements and presumed challenges with “They Say / I Say” arguments. The solution to this problem has always seemed complex to me, as it does to most people, but maybe not so much anymore. The author’s Technology and Trade Adjustment Assistance (TTAA) program does not exist yet, but I can see it clearly now. That is the strength of analogy. Andrew Stettner’s proposed program addresses the risk to social cohesion created by rapid advances in technology without limiting the opportunities for wealth and convenience that our capitalistic, freedom-loving American society demands.
Module 3 – Creativity and Personality, Knowledge, Alternative Viewpoints
During the first few modules, I felt skeptical about the eminence assigned to the opinions of judges, though it was easy to overlook while exploring the individualist version of creativity. In this module, the counterpoint was exemplified by collectivist cultures, which value accommodation of precedent and deny the measurable differences that individualists would emphasize as representing the depth of their creativity. The historical oral tradition of poetry was the starting point for my studies in that domain. Reading about the Serbo-Croatian oral poets who refused to acknowledge “that they represented significant differences” (Sawyer, 2021, p. 275) when presented with measurable differences in their performances of the same traditional material illustrated the contrast between individualist and collectivist cultures. Another great example was how the value of a patent may be the same across the world, while the attribution of its success will be shared by more contributors in a collectivist culture; a more individualist culture would reclassify those contributors as mere participants, allocating the recognition to a smaller group of people. In short, my first finding was that eminence is understood very differently by cultures that prioritize adherence to tradition and that value age and experience, such as the Asmat wood carvers (Sawyer, 2021, p. 276) and the markets for creative facades that spread, measurably, south along the Nile during the 20th century.

This mental model of creativity through sharing and repetition – still original, but understood differently than the “originality” measured by the standardized tests in Module 2 – was well described by the evolution of the PC as an email processor and by the development of the mouse. Today we may have no creative products as valuable as the mouse and email, and neither requires eminent judges: both are widely adopted, frequently used, and needed by enormous numbers of people. Because “tracing sole authorship is nearly impossible” (Sawyer, 2021, p. 251), these extremely creative solutions to communication challenges (interpersonal and technological) are ontologically valuable creative artifacts of the 20th century, though not in the same way as the decorative facades. The Sawyer reading began with a quote about the 14-man team who helped Edison realize his inventions. It reminded me of the artist Andy Warhol and of the evolution of the “fine arts” over the last few hundred years, additional examples of creative collectives – even though they existed in an individualistic culture, and sometimes to the much greater benefit of one person.

I have been deeply concerned with the need “to organize groups so that they’ll be maximally creative” (Sawyer, 2021, p. 232) in my work life for the last twenty years. Both effects described in Chapter 12 – Input/Output and Process – elaborate mechanisms to inspect the work being done by groups and the larger organization. A balanced approach to diversity leads to better results. Superstars are a necessary challenge that requires transparency across the group, an awareness I have tried to establish by sharing with my management teams a wonderful TED Talk by Margaret Heffernan about a “Super Chickens” experiment. I also have extensive first-hand experience of the benefit of the nominal group technique prior to larger brainstorming sessions.
Individual inputs to the group activity are greater and more independently developed, which creates intuitive gaps that are filled when the group shares its ideas. The approaches to group design and to brainstorming described by Sawyer are familiar tools that I plan to transplant into my new toolbox when I am a teacher; a solid second finding among many in this module.
Module 2 – Creativity Models and Research
The incubation effect, first described in the Sawyer reading as “unguided and unconscious” (Sawyer, 2012, p. 97), quickly developed into a more active process when observed through a focus on good creators. The dual-process theory of cognitive psychologists indicates that a combination of the unconscious and the conscious incubates the creative idea, and this was shown to be well supported by the evidence. This approach is a good fit for my plans to create a social-cognitive classroom after university. The six theories of incubation, especially the second and third concepts, Rest and Selective Forgetting, were well supported by the experimental evidence reported by the teams that created the opportunistic-assimilation theory. I plan to explore in my classroom the idea that “being interrupted and forced to work on an unrelated task” may be disruptive but may lead to an “increase[d] solution rate for creativity-related problems” (Sawyer, 2012, p. 100).

Externalization – my second finding – is one of the most exciting and productive stages of the individualistic creative process. As a leader of software development teams for the last 15 years, I learned first-hand the importance of working in two- and four-week Sprints, during which programmers would be assigned to multiple problems. We met briefly each day to discuss challenges, then again at the end of the Sprint for the most important, large review discussion, during which each individual presented their results. Often, we invited external stakeholders to this review so they could observe and provide feedback. Many of my teams identified a snowball effect during review meetings, described in this reading as “an idea often results in other ideas and follow-on ideas” (Sawyer, 2012, p. 134). After individuals on my teams had collaborated for a while, they developed a shared “inner short-hand” and used it, along with visualizations of their results, to simplify follow-up iterations on projects, much like Einstein’s approach. Also, being assigned to multiple projects produced better results, as noted by Simonton’s evidence showing that for great creators “the most productive periods were the times when a creator was most likely to have generated a significant work” (Sawyer, 2012, p. 131); i.e., any multiple-assignment burden was offset by cross-domain creativity.

This relates to a third finding: the structure-mapping concept, which utilizes knowledge from one domain to solve or restructure a concept or problem in another (Sawyer, 2012, p. 119). In my first response last week, I mentioned that when hiring for my team, we prioritized talents outside the scope of the role as a positive indicator for a candidate. The “conceptual combination” and “enhancing creativity of combinations” sections provided strong evidence that this was a sound hiring approach, because we were likely to find better, and more creative, problem solvers. Just as Externalization provides the opportunity for feedback from others that will improve the result, “emergence” (Sawyer, 2012, p. 117) during the sixth stage, when creative ideas are being combined, strengthens an individual’s creative solution. The outcome of emergence can be enhanced in individuals with knowledge of an outside domain. Interestingly, though, it is not the trait of being an outsider, but being an outsider with external domain knowledge, that improves creative problem solving.
Both my second finding, Externalization, and this sixth stage, Combine Ideas, build on the final stage of Wallas’s model, Verification. Over time, it seems that Wallas’s end state was further refined to include an individual’s end state (emergence) as well as a more social or public end state (externalization).

Module 1 – Creativity Introduction
I chose this class because Creativity and Intelligence have been primary benchmarks I have used for the last decade to hire and develop the managers needed to deliver on my responsibilities at work. In addition to core competencies for the job, we hired for “talent,” with the rule of thumb being a “software developer who is also a spelling bee champion” (Bezos, 1998, #3). Despite relying on it to hire, I have never studied Creativity beyond reflecting on my experiences. With many preconceived ideas about Creativity to test and disconfirm, I hope to build a foundation that will be useful as I transition to teaching high school English, Technology, and Gifted programs. As a poet, I have many other ideas about Creativity that do not align with what I needed to accomplish in the business world. These two concepts of Creativity – the artistic versus the pragmatic, utilitarian, corporate – overlap in my belief that consistent effort applied to problems, inspiration, and ambiguity leads to individual experiences of creativity, which in turn lead to solutions, artistic objects, and mechanisms that minimize or eliminate ambiguity; big-C and little-c “C/creativity” in the text.

My first finding was that an historical definition of creativity aligned with the intuitions and experiences described above: “In the 18th century, the term genius was first used to describe creative individuals… associated with rational, conscious processes (Gerard, 1774/1966; Tonelli, 1973)” (Sawyer, 2012, p. 23). Gerard’s pragmatic definition of creativity was overwhelmingly influenced by the Enlightenment concept of imagination, which added the element of “generating novelty” (Sawyer, 2012, p. 23) to his simple, “rational” definition. In my opinion, despite the intent of many of the tests for creativity detailed in chapter three, “novelty” may be a red herring. Evidence for this was provided by Sawyer’s example of photography, which existed for decades before it was recognized as a creative art form. Even though photography did not change, the social “valuation of originality… the system of galleries… supporting network of experts” elevated the medium to the status of a creative art form. Prior to that recognition, photography had been perceived as the equivalent of legal documentation, like birth records and property deeds: “[P]hotographers themselves did not change at all; rather, the sociocultural system around them changed” (Sawyer, 2012, p. 28).

In chapter four, Darwin represents another example of “extended activity” leading to a creative breakthrough (Sawyer, 2012, p. 75), which may have been the result of his being in a flow state, my second finding. In my final reflection paper for another EPSY class (Learning and Cognition), I wrote that Vygotsky’s zone of proximal development was the paradigm that most aligned with my aspirations for the classroom. Characteristics of the flow state such as “balance between level of ability… sense of personal control” (Sawyer, 2012, p. 78) closely parallel Vygotsky’s methodology of cognitive development. In the lecture notes, the Geneplore model and Sternberg’s triangulation of analytic effort, synthetic ability, and practical contextual awareness also complement Vygotsky’s model, in which teachers present a task and then observe an individual’s progress, engaging only when the student struggles or is not being challenged. Self-efficacy is the desired outcome.
Sawyer’s section about self-efficacy ends with Gist and Mitchell’s idea that “self-efficacy can be enhanced through training.” I agree that those findings are interesting, and I look forward to exploring “the potential role of creative self-efficacy in creative performance” (Sawyer, 2012, p. 82) so that I can apply what I learn here to help my students reach a flow state while doing their schoolwork.