AI is automated decision-making, and it accelerates century-old algorithmic methods

Abstract: Artificial intelligence (AI) is automated decision-making, and it builds on quantitative methods which have been pervasive in our society for at least a hundred years. This essay reviews the historical record of quantitative and automated decision-making in three areas of our lives: access to consumer financial credit, sentencing and parole guidelines, and college admissions. In all cases, so-called “scientific” or “empirical” approaches have been in use for decades or longer. Only in recent years have we as a society recognized that these “objective” approaches reinforce and perpetuate injustices from the past into the future. Use of AI poses new challenges, but we now have new cultural and technical tools to combat old ways of thinking.

Introduction

Recently, concerns about the use of Artificial Intelligence (AI) have taken center stage. Many are worried about the impact of AI on our society.

AI is the subject of much science fiction and fantasy, but simply put, AI is automated decision-making. A bunch of inputs go into an AI system, and the AI algorithm declares an answer, judgment, or result.

This seems new, but quantitative and automated decision-making has been part of our culture for a long time—100 years, or more. While it may seem surprising now, the original intent in many cases was to eliminate human bias and create opportunities for disenfranchised groups. Only recently have we begun to recognize that these “objective” and “scientific” methods actually reinforce the structural barriers that underrepresented groups face.

This essay reviews our history in three areas in which automated decision-making has been pervasive for many years: decisions for awarding consumer credit, recommendations for sentencing or parole in criminal cases, and college admissions decisions.

Consumer credit

The Equal Credit Opportunity Act, passed by the U.S. Congress in 1974, made it unlawful for any creditor to discriminate against any applicant on the basis of “race, color, religion, national origin, sex, marital status, or age” (ECOA 1974).

As described by Capon (1982), “The federal legislation was directed largely at abuses in judgmental methods of granting credit. However, at that time judgmental methods that involved the exercise of individual judgment by a credit officer on a case-by-case basis were increasingly being replaced by a new methodology, credit scoring.”

As recounted by Capon, credit scoring systems were first introduced in the 1930s to extend credit to customers as part of the burgeoning mail order industry. With the availability of computers in the 1960s, these quantitative approaches accelerated. The “credit scoring systems” used anywhere from 50 to 300 “predictor characteristics,” including features such as the applicant’s zip code of residence, status as a homeowner or renter, length of time at present address, occupation, and duration of employment. The features were processed using state-of-the-art statistical techniques to optimize their predictive power, and make go/no-go decisions on offering credit.
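To make this concrete, here is a minimal sketch, in Python, of the kind of points-based scorecard Capon describes. All point values, the cutoff, and the example applicants are invented for illustration; the aim is only to show how “predictor characteristics” are summed into a go/no-go decision, and how a proxy feature like zip code can quietly stand in for a proscribed characteristic.

# Hypothetical points-based credit scorecard (illustration only).
# Each predictor characteristic contributes points; the total is
# compared against a cutoff to make a go/no-go decision.

CUTOFF = 200

HOMEOWNER_POINTS = {True: 80, False: 20}
ZIP_POINTS = {"02138": 90, "01852": 40}   # zip code: a strong proxy for race/ethnicity

def score(applicant):
    total = HOMEOWNER_POINTS[applicant["homeowner"]]
    total += min(applicant["years_at_address"] * 5, 50)   # capped at 50 points
    total += ZIP_POINTS.get(applicant["zip_code"], 30)    # default for unlisted zips
    return total

def decide(applicant):
    return "approve" if score(applicant) >= CUTOFF else "deny"

print(decide({"homeowner": True, "years_at_address": 12, "zip_code": "02138"}))  # approve
print(decide({"homeowner": False, "years_at_address": 2, "zip_code": "01852"}))  # deny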

As Capon explains, in the years immediately after passage of the ECOA, creditors successfully argued to Congress that “adherence to the law would be improved” if these credit scoring systems were used. They contended that “credit decisions in judgmental systems were subject to arbitrary and capricious decisions” whereas decisions made with a credit scoring system were “objective and free from such problems.”

As a result, Congress amended the law with “Regulation B,” which allowed the use of credit scoring systems on the condition that they were “statistically sound and empirically derived.”

This endorsed companies’ existing use of actuarial practices to indicate which predictor characteristics had predictive power in determining credit risk. Per Capon: “For example, although age is a proscribed characteristic under the Act, if the system is statistically sound and empirically derived, it can be used as a predictive characteristic.” Similarly, zip code, a strong proxy for race and ethnicity, could also be used in credit scoring systems.

In essence, the law of the United States ratified the use of credit scoring algorithms that discriminated, so long as the algorithms were “empirically derived and statistically sound”—subverting the original intent of the 1974 ECOA law. You can read the details yourself—it does actually say this (ECOA Regulation B, Part 1002, 1977).

Of course, denying credit, or offering only expensive credit, to groups that historically have had trouble obtaining credit is a sure way to propagate the past into the future.

Recommendations for sentencing and parole

In a deeply troubling, in-depth analysis, ProPublica, an investigative research organization, showed that a commercial, proprietary software system being used to make parole recommendations to judges for persons who have been arrested is biased (Angwin et al., 2016).

As ProPublica reported, even though a person’s race/ethnicity is not part of the inputs provided to the software, the commercial software (called COMPAS, part of the Northpointe suite) is more likely to predict a high risk of recidivism for black people. In a less well-publicized finding, their work also found that COMPAS was more likely to over-predict recidivism for women than for men.

What was not evident in the press surrounding ProPublica’s work is that the US has been using standardized algorithms to make predictions about recidivism for nearly a century. According to Frank (1970), an early and classic work is a 1931 study by G. B. Vold, which “isolated those factors whose presence or absence defined a group of releasees with a high (or low) recidivism rate.”

Contemporary instruments include the Post Conviction Risk Assessment, which is “a scientifically based instrument developed by the Administrative Office of the U.S. Courts to improve the effectiveness and efficiency of post-conviction supervision” (PCRA, 2018); the Level of Service (LS) scales, which “have become the most frequently used risk assessment tools on the planet” (Olver et al., 2013); and Static-99, “the most commonly used risk tool with adult sexual offenders” (Hanson and Morton-Bourgon, 2009).

These instruments have undergone substantial and ongoing research and development, and their efficacy and limitations are studied and reported upon in the research literature. It is profoundly disturbing, then, that commercial software that is closed, proprietary, and not based on peer-reviewed studies is now in widespread use.

It is important to note that Equivant, the company behind COMPAS, published a technical rebuttal of ProPublica’s findings, raising issues with their assumptions and methodology. According to their report, “We strongly reject the conclusion that the COMPAS risk scales are racially biased against blacks” (Dieterich et al., 2016).

Wherever the truth may lie, the fact that the COMPAS software is closed source prevents an unbiased review, and this is a problem.

College admissions decisions

At nearly one hundred years old, the SAT exam (originally known as the “Scholastic Aptitude Test”) is a de facto national exam in the United States used for college admission decisions. In short, it “automates” some (or much) of the college admissions process.

What is less well-known is that the original developers of the exam intended it to “level the playing field”:

When the test was introduced in 1926, proponents maintained that requiring the exam would level the playing field and reduce the importance of social origins for access to college. Its creators saw it as a tool for elite colleges such as Harvard to use in selecting deserving students, regardless of ascribed characteristics and family background (Buchmann et al., 2010).

Of course, we all know what happened. Families with access to financial resources hired tutors to prep their children for the SAT, and a whole industry of test prep centers was born. The College Board (publisher of the SAT) responded in 1990 by renaming the test the Scholastic Assessment Test, reflecting the growing consensus that “aptitude” is not innate, but something that can be developed with practice. Now, the test is simply called the SAT—a change which the New York Times reported on with the headline “Insisting it’s nothing” (Applebome, 1997).

Meanwhile, contemporary research continues to demonstrate that children’s SAT scores correlate tightly with their parents’ socioeconomic status and education levels (“These four charts show how the SAT favors rich, educated families,” Goldfarb, 2014).

The good news is that many universities now allow students to apply for admission as “test-optional”; that is, without needing to submit SAT scores or those from similar standardized tests. Students are evaluated using other metrics, like high school GPA and a portfolio of their accomplishments. This approach allows universities to admit a more diverse set of students while still verifying that they are academically qualified and college-ready.

What are the takeaways?

There are three main lessons here:

1. Automated decision-making has been part of our society for a long time, under the guise of it being a “scientific” and “empirical” method that produces “rational” decisions.

It’s only recently that we are recognizing that this approach does not produce fair outcomes. Quite to the contrary: these approaches perpetuate historical inequities.

2. Thus today’s use of AI is a natural evolution of our cultural proclivities to believe that actuarial systems are inherently fair. But there are differences: (a) AI systems are becoming pervasive in all aspects of decision-making; (b) AI systems use machine learning to evolve their models (decision-making algorithms), and if those decision-making systems are seeded with historical data, the result will necessarily be to reinforce the structural inequities of the past; and (c) many or most AI models are opaque—we can’t see the logic inside of them used to generate decisions.

It’s not that people are intentionally designing AI algorithms to be biased. Instead, it’s a predictable outcome of any model that’s trained on historical data.
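A toy example (with invented data) makes the point: a “model” that simply learns historical approval rates will project past patterns forward, even though group membership is never an explicit input.

# Toy illustration with invented data: a model that learns historical
# approval rates by zip code will keep approving the historically favored
# zip and denying the historically disfavored one.

from collections import defaultdict

history = [                      # (zip_code, was_approved)
    ("02138", True), ("02138", True), ("02138", True), ("02138", False),
    ("01852", False), ("01852", False), ("01852", True), ("01852", False),
]

counts = defaultdict(lambda: [0, 0])          # zip -> [approvals, applications]
for zip_code, approved in history:
    counts[zip_code][0] += int(approved)
    counts[zip_code][1] += 1

def predict(zip_code):
    approvals, total = counts[zip_code]
    return approvals / total >= 0.5           # approve if most past applicants were approved

print(predict("02138"))   # True  -- the favored zip stays favored
print(predict("01852"))   # False -- the disfavored zip stays disfavored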

3. Now that we are realizing this, we can have an intentional conversation about the impact of automated decision-making. We can create explicit definitions of fairness—ones that don’t blindly extend past injustices into the future.

In general, I am an optimist. Broadly, technology has vastly improved our world and lifted many millions of people out of poverty. Artificial Intelligence is presently being used in many ways that create profound social good. Real-world AI systems perform early, non-invasive detection of cancer, improve crop yields, achieve substantial energy savings, and do many other wonderful things.

There are many initiatives underway to address fairness in AI systems. With continued social pressure, we will develop technologies and a social contract that together create the world we want to live in.

Acknowledgments: I am part of the AI4K12 Initiative (ai4k12.org), a joint project of the Association for the Advancement of Artificial Intelligence (AAAI) and the Computer Science Teachers Association (CSTA), and funded by National Science Foundation award DRL-1846073. We are developing guidelines for teaching artificial intelligence in K-12. With my collaborators, I have had many conversations that have contributed to my understanding of this field. I most especially thank David Touretzky, Christina Gardner-McCune, Deborah Seehorn, Irene Lee, and Hal Abelson, and all members of our team. Thank you to Irene and Hal for feedback on a draft of this essay. Any errors in this essay are mine alone.

Fred Martin, Chair of Board of Directors

References

Applebome, P. (1997). Insisting it’s nothing, creator says SAT, not S.A.T. The New York Times, April 2. Retrieved from https://www.nytimes.com/1997/04/02/us/insisting-it-s-nothing-creator-says-sat-not-sat.html.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica, May 23. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Buchmann, C., Condron, D. J., & Roscigno, V. J. (2010). Shadow education, American style: Test preparation, the SAT and college enrollment. Social forces, 89(2), 435–461.

Capon, N. (1982). Credit scoring systems: A critical analysis. Journal of Marketing, 46(2), 82–91.

Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on privacy enhancing technologies, 2015(1), 92–112.

Dieterich, W., Mendoza, C., & Brennan, T. (2016). COMPAS risk scales: Demonstrating accuracy equity and predictive parity. Northpoint Inc. Retrieved from http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf.

ECOA (1974). Equal Credit Opportunity Act, 15 U.S. Code § 1691. Retrieved from https://www.law.cornell.edu/uscode/text/15/1691.

Frank, C. H. (1970). The prediction of recidivism among young adult offenders by the recidivism-rehabilitation scale and index (Doctoral dissertation, The University of Oklahoma).

Goldfarb, Z. A. (2014). These four charts show how the SAT favors rich, educated families. The Washington Post, March 5. Retrieved from https://www.washingtonpost.com/news/wonk/wp/2014/03/05/these-four-charts-show-how-the-sat-favors-the-rich-educated-families/.

Hanson, R. K., & Morton-Bourgon, K. E. (2009). The accuracy of recidivism risk assessments for sexual offenders: a meta-analysis of 118 prediction studies. Psychological assessment, 21(1), 1.

PCRA (2018). Post Conviction Risk Assessment. Retrieved from https://www.uscourts.gov/services-forms/probation-and-pretrial-services/supervision/post-conviction-risk-assessment.

Introducing Cybersecurity Concepts in the K-12 Classroom

As a Career and Technical Educator, equipping students with career-readiness skills, like communication, problem-solving, and collaboration, is my first-order priority in the classroom. While these skills focus on preparing students to be successful in the workforce, we as educators have an increasing responsibility to prepare our students to be safe, respectful, and responsible digital citizens. Digital citizenship can be broadly understood as membership and participation in an online community, such as the internet or its various sub-areas. In this way, being a “good” digital citizen means, as the Digital Citizenship Institute defines it, having “norms of appropriate, responsible behavior with regard to technology use” [1].

One key behavior in the set of good digital citizen norms involves taking sufficient precautions to foster strong personal and community digital security. This goes far beyond telling your students not to talk to strangers online or not to share their personal information on social media sites. Students need to understand the kind of information that is being passively collected from them when they visit or create accounts on websites, and what value it has to them, to those who want to collect it, and potentially to others if it gets leaked or released. Understanding the potential threats that they might face when sharing personal information on any website, including social media sites, is also important. As an example, I’ve taught many students who didn’t know that their photos contained geotags (longitude and latitude numbers) that could be used by attackers to figure out where they live or places they frequent. Finally, equipping students with the skills they need to identify potential attacks and avoid being a victim of scams, such as phishing and identity theft, is also paramount.
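As a quick demonstration of how little effort this takes, here is a short sketch using the Pillow imaging library (the file name is hypothetical, and the photo must actually contain EXIF data) that pulls the GPS tags out of a JPEG.

# Read the GPS tags embedded in a photo's EXIF metadata (requires Pillow).
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def gps_info(path):
    exif = Image.open(path)._getexif() or {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            # Translate numeric GPS tag ids into names like GPSLatitude, GPSLongitude
            return {GPSTAGS.get(k, k): v for k, v in value.items()}
    return None

print(gps_info("vacation_photo.jpg"))   # hypothetical file name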

Even if you see the value of digital citizenship preparation in your classroom, you may feel like you don’t know where to start or how to tie topics like security and online safety into your existing curricula. Don’t worry! There are many online resources that can help. First, decide what cybersecurity concepts you want to teach in your classroom. You can find lists of topics online ranging from social media safety to types of malware to password complexity. The bottom line is that there are plenty of lessons and curricula to choose from. You can integrate a single lesson, a module made up of several lessons, or even a whole semester or year-long curriculum. To help you move forward, I have listed some of the resources that have helped me along the way as I have integrated more cybersecurity concepts into my classroom.

Cybersecurity Curriculum

This curriculum was designed by a friend of mine for a high school computer science course with a focus on cybersecurity. I really like how his curriculum design is customizable. The activities that he provides can be used as single one-day lessons or as a complete semester course. You can take a look at https://derekbabb.github.io/CyberSecurity/

Common Sense Media

Common Sense Media provides a complete K-12 Digital Citizenship Scope and Sequence. Privacy & Security is one of the topics they focus on, and there are a variety of lessons on various cybersecurity topics. I really like how topics are introduced in the K-2 grade band and then expanded on in higher grade bands. Find more at: https://www.commonsense.org/education/scope-and-sequence

UNO GenCyber Modules

I had the opportunity last summer to teach at a GenCyber Camp hosted by the University of Nebraska at Omaha. This camp provided several modules that span a variety of cybersecurity topics. The modules are available online at www.nebraskagencyber.com and have a Creative Commons license. (Side note: if you’ve never attended a GenCyber Teacher Camp, you should check to see if one is being offered in your state.)

Other Resources:

CodeHS Cybersecurity Course – This entirely web-based curriculum is made up of a series of learning modules that cover the fundamentals of cybersecurity. You can take a look at https://codehs.com/info/curriculum/cybersecurity

Cybersecurity Nova Labs – This Cybersecurity Lab is a game that allows players to discover how they can keep their digital lives safe and develop an understanding of cyber threats and defenses. You can take a look at https://www.pbs.org/wgbh/nova/labs/lab/cyber/

CyberPatriot – The National Youth Cyber Education Program created by the Air Force Association (AFA) to inspire K-12 students toward careers in cybersecurity. You can take a look at https://www.uscyberpatriot.org.

Citations:
[1] http://www.digitalcitizenship.net/nine-elements.html


Kristeen Shabram
K-8 representative


Situated Computational Thinking

Intro


The research group that I’m a part of, Re-Making STEM, is looking at ways that computational thinking (CT) practices intersect with creative, collaborative human activities. This has led to some really interesting explorations in computing, cognition, and culture. Our practical goals include: discovering ways that teachers and their students can engage with and learn CT, and discovering design principles for learning and applying CT in interesting ways. In this post, we’ll look at some of those explorations and hopefully leave you with some things to think about.

Computational thinking

I think this definition of CT is as good a starting point as any:

Computational Thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent (Cuny, Snyder, Wing, 2010).

Wing (2010) says she’s not just using problem / solution to refer to mathematically well-defined problems but also to complex real-world problems. She also says that the solutions can be carried out by humans, computers, or combinations of humans and computers. This definition places the emphasis on representation, but it raises the question: what are forms that can be effectively “carried out” by information-processing agents? What does “carried out” mean anyway?

Let’s pin these down for the sake of discussion. We might say that the forms we’re talking about are abstract representations (abstractions, the noun). Indeed, abstraction (the verb) is widely recognized as an essential component of CT (Grover and Pea, 2013). Let’s say abstractions are formal representations (e.g. formal logic, mathematical equations, computer code), and “carry out” means execute. So we’re talking about executing algorithms. And let’s be real – we are only going to write formal algorithms if we intend to automate them with a computer.

So if CT in practice is “writing algorithms that can be executed by computers,” then we are really talking about programming. This contradicts Wing’s clarifications about “problems” and “agents,” described above. Furthermore, the field is saying loud and clear that CT is not just programming. Since 2013, the concept of CT has expanded (e.g. Weintrop et al., 2015), and for most people it is certainly not limited to executing algorithms on computers.

Opening it up

Let’s look at this piece by piece, starting with the “carrying out.”  Even if we’re talking about formal representations and computers, CT involves formulating data as well.  Data is not “carried out,” or executed, like an algorithm – it is structured, processed, analyzed, synthesized, and interpreted (by humans and computers).  

Now let’s look at formality and agents as computers / humans.  We already saw what happens when we are strict about formality and computers.  If we loosen the restriction on formality, but still think of agents as computers (or virtual agents), then we allow pretty much any human-computer interaction.  If we keep formality strict, but allow for people as agents, then we allow for things like math to count. The latter might work for some, but I would ask: do we care about distinguishing between CT and mathematical thinking?  Is CT == mathematical thinking + computers? Do we want to allow for less formal expressions of CT?

Let’s put these two axes (more or less formal, extent of computer use) on a table:

                   Tied to computer use           Little or no computer use
  More formal      programming                    mathematics, formal logic
  Less formal      human-computer interaction     everyday human communication

We in the CS community might have a tendency to think about CT as living in the upper-left corner of the table (formal, tied to computer use).  In reality, creative collaborative human activity blends all of these types of communication, and CT (whatever it is) intersects with all of these other areas.  Authentic computational practice also involves multiple people and computers working together – there are more than two agents in the system. So, as a general case, we have systems with: agents (humans, computers, and virtual agents), situated in environments (physical, social / cultural, virtual), interacting using systems of representation (sounds, images, diagrams, natural and formal languages, etc.).  

One CT, many CTs

What are the implications of this?  I think there are two clear options for how we define CT:

  • (A) Restrict what we mean by CT.  This is perfectly reasonable and probably necessary for most practical purposes.  However, this has the inevitable consequence of fragmenting our understanding of CT.  There will be different CTs in different disciplines / fields. We will do this, but we should try to understand the restrictions that we are imposing, and the consequences of imposing them.
  • (B) Break our concept of CT wide open.  I think the scientific community (at least, those who are studying the construct of CT and how it plays out in real cultural contexts) should do this, so that we can explore how CT is understood and practiced in a variety of contexts and for a wide range of purposes.  

This is not a binary choice that we need to make, individually or collectively, once and for all.  The processes of imposing structures and breaking them apart will enrich our understandings of CT. In closing, I ask you to consider how you construct CT with your students and colleagues, and what effects this might have on who engages with and learns CT at your school.

These ideas in this post are part of a collaborative research effort with the Re-Making STEM PIs, Brian Gravel, Eli Tucker-Raymond, Maria Olivares, Amon Millner, Tim Atherton, and James Adler, and the dedicated research team, Ada Ren, Dionne Champion, Ezra Gouvea, Kyle Browne, and Aditi Wagh. 
This material is based upon work supported by the National Science Foundation under Grant Numbers DRL-1742369, DRL-1742091. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References and further reading:

David Benedetto, At-Large Representative


Contexts and Roles in CS Education

To make the case for computer science and to develop an effective program, educators must understand the context and the roles people play.

First, a few clarifications:

  • Computer science (CS) here refers to educational experiences where the primary objective is to develop computing skills and knowledge.
  • Computational thinking (CT) refers to using computational practices in CS and in other disciplines.  For example, using computer modeling and simulation in science and engineering courses.

In this post I’ll focus on the subjects and teachers with an especially strong affinity with CS / CT. These are people who might already be teaching CS, and who most definitely should be incorporating CT. They include:

  • Digital Literacy / technology integration
  • STEM  (math, science, engineering, and computer science).
  • Career & technical education (e.g. IT, engineering, business)

Other teachers, for example social studies and humanities teachers, are less likely to teach CS – it’s certainly possible, but would be a more significant departure from their regular teaching duties than the above.  However, there are ample opportunities to incorporate CT practices.

Digital literacy, educational technology, digital citizenship.

These areas, in general, are about the safe and effective use of technology in a variety of contexts. Business teachers will focus on using technology for business purposes. Technology integration specialists or library / media specialists tend to support the integration of technology across the curriculum. While these areas are not strictly CS, these educators are excellent candidates to become CS teachers. They are also great candidates to support the integration of CT.

Note:  A comparison between these areas and computer science is available at the K12 CS Framework.  (K12CS, Defining Computer Science.)

STEM education.

In a nutshell: science is about systematically exploring phenomena; engineering is about designing and developing technologies.  Mathematics and computing are tools that are used to do science and engineering.

Computer science is fundamentally mathematical, rooted in formal logic.  Math educators are great candidates to teach CS, but it’s important to consider the primary objective.  CS can have a strong mathematical focus, for example, when you are working with data and analysis.

Computer science is a science – it is the study of the principles and use of computers.  CS is also an engineering discipline – it includes the design and development of computer hardware and software.  It’s important to remember that using computing tools to advance other science and engineering disciplines is not exactly CS – this is computational science and engineering, which is very much in the realm of CT.

I think it’s important to remember that teaching CS and incorporating CT into other subjects are different things, albeit both very important.

Note: The K12 CS Framework also includes great information on computational thinking, including a Venn diagram that connects CS / CT practices with math, science, and engineering practices. (K12 CS, Computational Thinking.)

Career & Technical Education

Career & Technical Education (CTE) is an umbrella term that includes a number of career clusters (occupation groups) and pathways (leading to specific occupations).  CTE programs are generally two-year programs that students may take towards the end of K-12.

The Information Technology (IT) cluster in CTE includes the following pathways:

  • Network Systems Pathway
  • Information Support & Services Pathway
  • Web & Digital Communications Pathway
  • Programming & Software Development Pathway

While the IT pathways are clearly closely related to CS, there are a few important points worth making.  CTE programs are upper-HS level, and focus on specific skills for a certain career pathway. This is different from a K-12 CS program, which focuses on core CS skills and knowledge that are applicable in many careers not only in IT, but also in other clusters such as engineering, business, etc.

The bottom line is that students should learn fundamental CS skills and knowledge earlier in K-12 so that they can apply them to whatever pathway they pursue, whether or not they participate in a CTE program.

For more information about career clusters, please visit: Advance CTE, Career Clusters.

The social sciences and humanities

While I focused on strong affinity groups here, I don’t want to entirely leave out other groups.  Like other sciences, the social sciences are increasingly data-driven and rely on computational methods.  The humanities are a great place to explore the impacts of technology on society. Arts educators employ practices that are related to engineering design and development processes.  The discussion could go on into other disciplines.

Aside from systemic constraints, the degree of CT incorporation into these areas is limited only by the knowledge and imagination of the educators, and can be strongly aided by effective collaboration.  

So… where does CS / CT go?!?

Educators need to determine how to build a CS program that fits their needs. Schools need a strong CS program, and they also need to incorporate CT across the curriculum. There is no magic bullet, but any solution starts with a solid understanding of the context and possibilities, and a concrete plan to move forward.

David Benedetto

David Benedetto, At-Large Representative

Rethinking Computational Thinking

Over the past 18 months, I’ve had the opportunity to be part of a team led by Joyce Malyn-Smith of EDC for her NSF grant, Computational Thinking from a Disciplinary Perspective. The project was inspired by earlier work that Joyce conducted with Irene Lee. (Irene is the creator of the Project GUTS curriculum for learning science and computational thinking via modeling and simulation.)

In their work, Joyce and Irene interviewed a variety of practicing scientists to reveal how they used computing to do science. Through these interviews, they elaborated a variety of practices which include profound and creative uses of computing, often invented by the scientists themselves.

Since the publication of Jeannette Wing’s 2006 paper on computational thinking, our community has been engaged in a sense-making process: what exactly is it? The initial description of “thinking like a computer scientist” is a bit tautological—and not terribly helpful for someone who isn’t already a computer scientist.

I have personally been struggling with understanding the relationships among the broad categories of computer science, programming, and computational thinking. For example:

Q. Can you do computer science without programming?
A: Yes of course; we can analyze the complexity of a search algorithm, realize the need to use hashing to speed a table-lookup, etc.

Q. Can you do programming without computer science?
A. Probably. Beginners’ spaghetti code might be an example. “Hacking” in general suggests building things without an underlying theory (though there may be an implicit one). But let’s say yes to this too.

So, where does CT fit in? Is it in the intersection? Many people think you can do CT without doing programming, so perhaps not. How is CT not just another word for computer science then?

Venn diagram of programming and CS. Where does CT fit?

Jeannette Wing’s more recent paper (2011) provided this definition of CT: “Computational thinking is the [human] thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent [a computer].”

To me, this still sounds like “thinking like a computer scientist.” This is what we do! We formulate problems and their solutions so that a computer can carry them out!

So what’s the difference between doing CT and doing computer science?

Thanks to my collaboration with Joyce and Irene (and our whole team), I now see an answer.

Computational thinking is about connecting computing to things in the real world.

Here are some examples.

A starter program we may often have our students write is to model a checking account. Our students will use a variable to represent the bank balance, and build transactions like deposits and withdrawals. Maybe they’ll represent the idea of an overdraft, or insufficient funds.

Let me argue that this simple example captures the essence of computational thinking.

What makes it so is that we are connecting a concept in the world—money in a bank account—to its representation in a computational system. This sounds pretty simple. But there is surprising complexity. What sort of numerics should we use—e.g., should we represent fractional pennies? For a beginning student, we could ignore this. But in a more elaborated solution, this intersection of computational considerations and real-world concerns is crucial—and this is computational thinking.
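Here is a minimal sketch of such a starter program. Following the numerics question above, one common resolution is to keep the balance in whole cents, so that amounts are exact integers rather than floating-point approximations.

# A checking account with deposits, withdrawals, and an insufficient-funds
# check. The balance is stored in integer cents so that money amounts are exact.

class CheckingAccount:
    def __init__(self, balance_cents=0):
        self.balance_cents = balance_cents

    def deposit(self, cents):
        self.balance_cents += cents

    def withdraw(self, cents):
        if cents > self.balance_cents:
            raise ValueError("insufficient funds")
        self.balance_cents -= cents

    def balance(self):
        return "${:.2f}".format(self.balance_cents / 100)

account = CheckingAccount()
account.deposit(10000)     # deposit $100.00
account.withdraw(2550)     # withdraw $25.50
print(account.balance())   # $74.50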

Here is another example. Consider how we usually represent colors. We use three bytes of information: 0 to 255 amounts of red, green, and blue (RGB) light. For web HTML, we’d use the hexadecimal notation. For example, #8020C0 is 128 (decimal) of red, 32 (decimal) of green, and 192 (decimal) of blue, or this color:

A purple swatch which is #8020C0.

This RGB representation was created at the intersection of the neurophysiology of human vision, the physics of how we build displays, and practical considerations of computing. Why do we mix only these three wavelengths of light? Because of the way our eyes and brains work, we can mimic practically any color with just these three. Why use just one byte of information for each color intensity? It turns out the ~16 million colors which can be represented this way are quite powerful—and good enough—for how we use computers now.
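A short sketch in Python of the decoding described above: splitting a web color string into its one-byte red, green, and blue components.

def hex_to_rgb(color):
    # "#8020C0" -> (128, 32, 192): one byte each of red, green, blue
    color = color.lstrip("#")
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#8020C0"))   # (128, 32, 192)
print(256 ** 3)                # 16777216 -- the ~16 million representable colors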

So the whole notion of the RGB representation of color is computational thinking in action.

For a more elaborate example, let’s consider the JPEG file format—created by the Joint Photographic Experts Group. This team included computer scientists, neurophysiologists, and artists. Their insight was that we could compress images by a factor of ten or more by discarding information that the human eye doesn’t see anyway. What a fabulous insight—and the very essence of computational thinking, because it connects concepts in computing (like compression algorithms) to understandings of our physical and perceptual worlds.
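You can see this tradeoff directly with a few lines of Python (assuming the Pillow library and a hypothetical source image): saving the same picture at a lower JPEG quality setting discards perceptual detail and produces a much smaller file.

import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")   # hypothetical source image
img.save("photo_q95.jpg", quality=95)          # near-visually-lossless
img.save("photo_q50.jpg", quality=50)          # much smaller; usually hard to tell apart

for name in ("photo.png", "photo_q95.jpg", "photo_q50.jpg"):
    print(name, os.path.getsize(name), "bytes")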

To revise our illustration, now CT is the “connecting tissue” between the world of computer science / programming expertise and the world of disciplinary knowledge:

Visualization of CT as “connecting tissue” between CS/programming and disciplinary knowledge of the world

To “do CT,” you need to know about both worlds. You need to know how to create solutions using computing. You need to know something about a domain in the world. And CT is the knowledge, skill set, and disposition for mediating between these two.

Now, Jeannette Wing’s 2011 definition makes perfect sense: “Computational thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent.”

Yes! The key is recognizing that there is a non-computational domain—something in the world that we care about—which is being transformed (represented computationally) in this process.

To close the loop back to Joyce’s project: In addition to myself and Irene Lee, Joyce’s team had project advisers Michael Evans and Shuchi Grover, her EDC colleagues Paul Goldenberg, Lynn Goldsmith, Marian Pasquale, Sarita Pillai, and Kevin Waterman, and project evaluator David Reider.

In a series of planning meetings and then a pair of 2-day workshops with K-12 CS practitioners and researchers from around the country, we developed the idea of how computational thinking is transformed by connecting it to scientific disciplinary practice.

We created a framework with a set of five “elements” which illustrate the integration of computational thinking into disciplinary understanding.

Please stay tuned for work to come from our group, presenting this idea of “Computational Thinking From a Disciplinary Perspective.”

It’s given me a whole new way to think about what computational thinking can mean.

It’s about connecting computing to the world.

Fred Martin, chair of board of directors

Just released: Video interviews on computational thinking

What is computational thinking?

How is computational thinking distinct from other thinking skills?

How can teachers assess computational thinking skills?

Have you ever wanted to ask an expert these questions? The CSTA Computational Thinking Task Force is creating a series of video interviews in which we do just that!

Listen in on our conversation with Chris Stephenson, Director of Computer Science Education Programs at Google, as she answers our questions and describes cross-curricular computational thinking applications in the task of preserving native languages (https://youtu.be/FuN6g8NmuHc).

Listen to our conversation with Eric Snow, Education Researcher in the Center for Technology in Learning at SRI International as he answers our questions and describes his research in assessing computational thinking (https://youtu.be/92pv8dPItjE).

We have several more interviews with experts in the field planned for later this fall.

All of the interviews are archived here: csteachers.org/page/CompThinkInterviews.

Computational Thinking — What does it mean to you?

How do you integrate computational thinking (CT) concepts and strategies into your teaching? Have you heard your colleagues talk about it and wondered if they have accurate and useful understandings of how CT can be used across the curriculum? Are you curious about how other schools, or even other countries, are implementing CT strategies? Wondering where you can get more information?
Well, consider the March issue of the CSTA Voice as your CT 101 Primer! Take a look and then let us know what you’re thinking about the topic.

  • Take a step back to the conceptual foundations of CT with a review of the “roots of CT” with Irene Lee, Co-chair of the CSTA CT Task Force.
  • Discover how England is embedding CT into the national computing curriculum with John Woollard, leading member of Computing At School (CAS).
  • Compare the problem-framing strategies that help students connect math to everyday problems with MEAs (Model-Eliciting Activity) to CT strategies with Fred G. Martin, Co-chair of the CSTA CT Task Force.
  • Explore the list of CT resources gathered by Joe Kmoch, CS consultant and retired educator.

AND OF UTMOST IMPORTANCE…

  • VOTE! Read the statements from the 10 candidates running for the 5 open seats on the CSTA Board of Directors in the March Voice. The affairs and property of the Organization are managed, controlled, and directed by a Board of Directors elected by you. A huge amount of work through committees and task forces is also completed by these Board members.
  • REGISTER for the 2016 CSTA Conference. Read more about the plans for “Making Waves in San Diego” in the March Voice.

Pat Phillips, Editor
CSTA Voice

A review of Google’s Exploring Computational Thinking resources

By Joe Kmoch

In Spring 2015, Google began work on revamping their CT website. Their materials are available at a website called Exploring Computational Thinking:

https://www.google.com/edu/resources/programs/exploring-computational-thinking/

A team at Google developed a template for lessons which would be made available on their site. They took the large number of lessons that were already on their site and rewrote them into this new lesson plan format. They hired a group of educators to review all of those lessons and now have about 130 lessons and other materials available.

These lessons have specific plans, are interactive and inquiry-based, and include additional resources. There are lessons in 17 subject areas mostly in math and the sciences. These lessons are also cross-referenced to various sets of international standards (Common Core, NGSS, CSTA K-12, and standards from UK, Australia, New Zealand and Israel).

At about the same time, another Google team developed a group of six videos called CT@Google which focus on the Seven Big Ideas from the CS Principles course, and how Google uses them in their work.

Finally, Google developed an interactive, online course, CT for Educators, where teachers learn what CT is and how it can be integrated into a variety of subject areas. It is quite good and can help a teacher work CT concepts into their regular lessons.

All of these are quality resources.

 

Disclosure: the author was compensated by Google for assistance in editing the collection of lesson plans mentioned in this article.

Design Thinking in K-12

During my recent trip to India, I visited the American Embassy School (AES) in New Delhi. During my visit, I was able to talk with members of the technology integration team about how they are combining design thinking, computational thinking, and maker space ideas to allow students to become creative users of computing technologies. More on the AES tech vision can be read here. While computational thinking in K-12 schools has gotten a lot of attention, design thinking has the potential to further enhance students’ creative problem solving.

The Institute of Design (d.school) at Stanford University offers a virtual crash course that exposes learners to the five aspects of design thinking: empathize, define, ideate, prototype, and test. Teachers interested in learning more about how to embed design thinking in their K-12 classroom can find more resources on the d.school’s K12 lab network wiki.

CSTA Computational Thinking (CT) Task Force

Why was the Computational Thinking (CT) Task Force formed?

One of the primary purposes of the CSTA is to support K-12 CS educators. Thus, it’s important that the CSTA be aware of current developments in computer science education, including Computational Thinking (CT), so we can take advantage of new opportunities and new partnerships. The CT Task Force was formed to advise the organization about how to connect with and respond to new Computational Thinking initiatives.

Who are the members of the CT Task Force?

In July 2014, the CT Task Force re-assembled with these members:

Irene Lee, Chair (Santa Fe Institute, Project GUTS)
Fred Martin, Co-Chair (University of Massachusetts Lowell)
J. Philip East (University of Northern Iowa)
Diana Franklin (University of California, Santa Barbara)
Shuchi Grover (Stanford University)
Roxana Hadad (Northeastern Illinois University)
Joe Kmoch (University of Wisconsin-Milwaukee)
Michelle Lagos (American School of Tegucigalpa)
Eric Snow (SRI International)

What does the CT Task Force do?

This year, we are focusing on CT in K-8 teaching and learning. This is a pressing need, and we would like to understand the scope of what is being called “computational thinking” in K-8: how it is being defined, what tools and curricula are being used to teach computational thinking, and how it is being assessed. Task Force members also participate on related efforts, such as developing proposals for providing professional development in CT through the CSTA.

How does the CT Task Force serve the CSTA membership?

We serve the membership by:

1) Writing, publishing and disseminating papers on CT

2) Coordinating efforts to inform K-8 educators about CT

3) Making presentations on CT at educational conferences

4) Updating the CT webpage on the CSTA website

We welcome suggestions and contributions from the CSTA membership on ways the CT Task Force can better serve you.