Now that I’m retired (and busier than ever!), I often reflect on how I learned to love science and computing. Back then, computing didn’t really exist – it was math. I remember a middle school math class where I had to figure out how to turn on and off red, yellow, and green lights. That was probably my first programming experience, but they called it logic. I remember other similar activities in middle school and high school, such as when we acted out directions EXACTLY as someone had written them. Thinking concisely and with order was great fun and challenging! I loved it before I went to college, and through a variety of twists and turns in my academic life, I ended up back at what is now known as computer science.
The recent exciting news of being able to visualize a black hole because of an algorithm developed by a team led by computer scientist Katie Bouman has certainly captured my imagination. If you’ve had the chance to read more about Katie, she credits her love of computing to her high school experiences. In graduate school, she didn’t even know what a black hole was, but once she got involved, she was hooked on figuring out how computing could capture all of the data and integrate the information from the many different telescopes to produce an image. “If you study things like computer science and electrical engineering, it’s not just building circuits in your lab,” she says. “You can go out to a telescope at 15,000 feet above sea level, and you can use those skills to do something that no one’s ever done before.” (https://www.cnbc.com/2019/04/12/katie-bouman-helped-generate-the-first-ever-photo-of-a-black-hole.html)
Encouraging more students to try computing is one of the reasons I volunteer my time working with CSTA. I believe it’s K-12 that guides us toward identifying what we find challenging and rewarding. Another organization that I have worked with extensively is the National Center for Women & Information Technology (NCWIT). While their primary focus is on girls and women in computing, many of their resources are applicable and valuable to all. For the K-12 audience, they have whitepapers with research references, podcasts that are appropriate for high school students, toolkits that can help you organize events, great information in language that everyone can understand, and more.
Computer Science-in-a-Box: Unplug Your Curriculum (2018 Update) – Computer Science-in-a-Box: Unplug Your Curriculum introduces fundamental building blocks of computer science — without using computers. Use it with students ages 9 to 14 to teach lessons about how computers work, while addressing critical mathematics and science concepts such as number systems, algorithms, and manipulating variables and logic.
There are many, many more resources of all forms that target the many facets of the K-12 world. The NCWIT website (NCWIT.org) has an easily searchable K-12 resource section. Take some time and take a look. I’ll bet you’ll find some interesting things.
We are lucky to be living in a time where computing plays such an important role in our daily lives. We’re even luckier to be able to help students learn just how cool computing can be!
In the past, I have typically used my blog space as a Computer Science Teachers Association (CSTA) Board Member to advance policy or focus on initiative ideas. With this blog I will focus on the main purpose of CSTA: supporting computer science educators. On December 6, 2018, as part of the 2018 CS Education Week announcements, Gov. Asa Hutchinson announced the creation of the Arkansas Computer Science Educator of the Year (CS-EOY) Award. During the planning and development of this award, we wanted it to be on par with the state’s Teacher of the Year award in terms of prestige and recognition.
My office launched the application request system on February 4, 2019, and over the next month we received 30 completed applications. The state’s #CSforAR / #ARKidsCanCode Computer Science Specialists (Jim Furniss, Tammy Glass, Kelly Griffin, Lori Kagebein, Eli McRae, Jigish Patel, Leslie Savell, and Zack Spink) completed the first-level review under my facilitation. This review focused on the overall quality of the applications, each of which included a resume, letters of recommendation, and an applicant-selected artifact; the applicant’s vision for and understanding of the value of computer science education for the current and future generations of Arkansas students; the applicant’s understanding of how their implementation of computer science education exemplifies quality teaching; and the applicant’s current and long-term impact on computer science education locally, statewide, and nationally. It resulted in the selection of the five CS-EOY State Finalists:
Carl Frank; Computer Science Teacher – Arkansas School for Mathematics, Sciences, and the Arts; Hot Springs, AR
Josefina Perez; Business/Computer Science Teacher – Springdale High School; Springdale, AR
Brenda Qualls; Computer Science Teacher – Bryant High School; Bryant, AR
Kimberly Raup; Computer Science Teacher – Conway High School; Conway, AR
Karma Turner; Computer Science Teacher – Lake Hamilton High School; Pearcy, AR
Many of you probably recognize these names, as they have been significant members of the CSTA and the greater computer science education community for some time, both in Arkansas and nationally.
The second-round review focused on the same criteria and was conducted by Anthony Owen, Arkansas State Director of Computer Science Education; Don Benton, ADE Assistant Commissioner of Technology; G.B. Cazes, Metova Executive Vice President; Jake Baskin, Executive Director of the Computer Science Teachers Association; Dr. Sarah Moore, Arkansas State Board of Education; and Sheila Boyington, Thinking Media/Learning Blade President/CEO.
On Thursday, May 2, 2019, Gov. Hutchinson held a press conference to recognize the work and selection of these five finalists. In addition, Gov. Hutchinson recognized Ms. Karma Turner as the 2018-2019 Arkansas Computer Science Educator of the Year. During the press conference, each of the finalists received $2,500 and a recognition plaque. Ms. Turner received an additional $12,500 and the 2019 Computer Science Educator of the Year trophy from Gov. Hutchinson. These awards were provided through funding from the ADE Office of Computer Science, a Special Project Unit formed to implement Gov. Hutchinson’s visionary Computer Science Education initiative. Arkansas is recognized nationally and internationally as leading the computer science for all education movement through Gov. Hutchinson’s #CSforAR / #ARKidsCanCode initiative.
Abstract: Artificial intelligence (AI) is automated decision-making, and it builds on quantitative methods which have been pervasive in our society for at least a hundred years. This essay reviews the historical record of quantitative and automated decision-making in three areas of our lives: access to consumer financial credit, sentencing and parole guidelines, and college admissions. In all cases, so-called “scientific” or “empirical” approaches have been in use for decades or longer. Only in recent years have we as a society recognized that these “objective” approaches reinforce and perpetuate injustices from the past into the future. Use of AI poses new challenges, but we now have new cultural and technical tools to combat old ways of thinking.
Introduction
Recently, concerns about the use of Artificial Intelligence (AI) have taken center stage. Many are worried about the impact of AI on our society.
AI is the subject of much science fiction and fantasy, but simply put, AI is automated decision-making. A bunch of inputs go into an AI system, and the AI algorithm declares an answer, judgment, or result.
This seems new, but quantitative and automated decision-making has been part of our culture for a long time—100 years, or more. While it may seem surprising now, the original intent in many cases was to eliminate human bias and create opportunities for disenfranchised groups. Only recently are we recognizing that these “objective” and “scientific” methods actually reinforce the structural barriers that underrepresented groups face.
This essay reviews our history in three areas in which automated decision-making has been pervasive for many years: decisions for awarding consumer credit, recommendations for sentencing or parole in criminal cases, and college admissions decisions.
Consumer credit
The Equal Credit Opportunity Act, passed by the U.S. Congress in 1974, made it unlawful for any creditor to discriminate against any applicant on the basis of “race, color, religion, national origin, sex, marital status, or age” (ECOA 1974).
As described by Capon (1982), “The federal legislation was directed largely at abuses in judgmental methods of granting credit. However, at that time judgmental methods that involved the exercise of individual judgment by a credit officer on a case-by-case basis were increasingly being replaced by a new methodology, credit scoring.”
As recounted by Capon, credit scoring systems were first introduced in the 1930s to extend credit to customers as part of the burgeoning mail order industry. With the availability of computers in the 1960s, these quantitative approaches accelerated. The “credit scoring systems” used anywhere from 50 to 300 “predictor characteristics,” including features such as the applicant’s zip code of residence, status as a homeowner or renter, length of time at present address, occupation, and duration of employment. The features were processed using state-of-the-art statistical techniques to optimize their predictive power, and make go/no-go decisions on offering credit.
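To make the mechanics concrete, here is a minimal sketch of the kind of scorecard Capon describes: weighted predictor characteristics combined into a score and compared with a cutoff for a go/no-go decision. The feature names, weights, and cutoff below are invented purely for illustration; real systems used 50 to 300 characteristics with statistically fitted weights.

```python
# Toy scorecard: weighted predictor characteristics -> score -> go/no-go.
# All feature names, weights, and the cutoff are hypothetical.
def credit_score(applicant):
    weights = {
        "years_at_address": 3,
        "is_homeowner": 25,
        "years_employed": 4,
        "zip_code_factor": 20,  # zip-code-derived factor; note that this can act as
                                # a proxy for race/ethnicity even when race is excluded
    }
    return sum(weights[key] * applicant.get(key, 0) for key in weights)

def decide(applicant, cutoff=100):
    return "approve" if credit_score(applicant) >= cutoff else "deny"

print(decide({"years_at_address": 5, "is_homeowner": 1,
              "years_employed": 10, "zip_code_factor": 2}))  # -> approve
```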
As Capon explains, in the years immediately after passage of the ECOA, creditors successfully argued to Congress that “adherence to the law would be improved” if these credit scoring systems were used. They contended that “credit decisions in judgmental systems were subject to arbitrary and capricious decisions” whereas decisions made with a credit scoring system were “objective and free from such problems.”
As a result, Congress amended the law with “Regulation B,” which allowed the use of credit scoring systems on the condition that they were “statistically sound and empirically derived.”
This endorsed companies’ existing use of actuarial practices to indicate which predictor characteristics had predictive power in determining credit risk. Per Capon: “For example, although age is a proscribed characteristic under the Act, if the system is statistically sound and empirically derived, it can be used as a predictive characteristic.” Similarly, zip code, a strong proxy for race and ethnicity, could also be used in credit scoring systems.
In essence, the law of the United States ratified the use of credit scoring algorithms that discriminated, so long as the algorithms were “empirically derived and statistically sound”—subverting the original intent of the 1974 ECOA law. You can read the details yourself—it does actually say this (ECOA Regulation B, Part 1002, 1977).
Of course, denying credit, or offering only expensive credit, to groups that historically have had trouble obtaining credit is a sure way to propagate the past into the future.
Recommendations for sentencing and parole
In a deeply troubling, in-depth analysis, ProPublica, an investigative research organization, showed that a commercial, proprietary software system being used to make parole recommendations to judges for persons who have been arrested is biased (Angwin et al., 2016).
As ProPublica reported, even though a person’s race/ethnicity is not part of the inputs provided to the software, the commercial software (called COMPAS, as part of the Northpointe suite) is more likely to predict a high risk of recidivism for black people. In a less well-publicized finding, their work also found that COMPAS was more likely to over-predict recidivism for women than men.
What was not evident in the press surrounding ProPublica’s work is that the US has been using standardized algorithms to make predictions on recidivism for nearly a century. According to Frank (1970), an early and classic work is a 1931 study by G. B. Vold, which “isolated those factors whose presence or absence defined a group of releasees with a high (or low) recidivism rate.”
Contemporary instruments include the Post Conviction Risk Assessment, which is “a scientifically based instrument developed by the Administrative Office of the U.S. Courts to improve the effectiveness and efficiency of post-conviction supervision” (PCRA, 2018); the Level of Service (LS) scales, which “have become the most frequently used risk assessment tools on the planet” (Olver et al., 2013); and Static-99, “the most commonly used risk tool with adult sexual offenders” (Hanson and Morton-Bourgon, 2009).
These instruments have undergone substantial and ongoing research and development, with their efficacy and limitations studied and reported upon in the research literature, and it is profoundly disturbing that commercial software that is closed, proprietary, and not based on peer-reviewed studies is now in widespread use.
It is important to note that Equivant, the company behind COMPAS, published a technical rebuttal of ProPublica’s findings, raising issues with their assumptions and methodology. According to their report, “We strongly reject the conclusion that the COMPAS risk scales are racially biased against blacks” (Dieterich et al., 2016).
Wherever the truth may lie, the fact that the COMPAS software is closed source prevents an unbiased review, and this is a problem.
College admissions decisions
At nearly one hundred years old, the SAT exam (originally known as the “Scholastic Aptitude Test”) is a de facto national exam in the United States used for college admission decisions. In short, it “automates” some (or much) of the college admissions process.
What is less well-known is that the original developers of the exam intended it to “level the playing field”:
When the test was introduced in 1926, proponents maintained that requiring the exam would level the playing field and reduce the importance of social origins for access to college. Its creators saw it as a tool for elite colleges such as Harvard to use in selecting deserving students, regardless of ascribed characteristics and family background (Buchmann et al., 2010).
Of course, we all know what happened. Families with access to financial resources hired tutors to prep their children for the SAT, and a whole industry of test prep centers was born. The College Board (publisher of the SAT) responded in 1990 by renaming the test the Scholastic Assessment Test, reflecting the growing consensus that “aptitude” is not innate but something that can be developed with practice. Now, the test is simply called the SAT—a change which the New York Times reported on with the headline “Insisting it’s nothing” (Applebome, 1997).
Meanwhile, contemporary research continues to demonstrate that children’s SAT scores correlate tightly with their parents’ socioeconomic status and education levels (“These four charts show how the SAT favors rich, educated families,” Goldfarb, 2014).
The good news is that many universities now allow students to apply for admission as “test-optional”; that is, without needing to submit SAT scores or those from similar standardized tests. Students are evaluated using other metrics, like high school GPA and a portfolio of their accomplishments. This approach allows universities to admit a more diverse set of students while verifying that they are academically qualified and college-ready.
What are the takeaways?
There are three main lessons here:
1. Automated decision-making has been part of our society for a long time, under the guise of it being a “scientific” and “empirical” method that produces “rational” decisions.
It’s only recently that we are recognizing that this approach does not produce fair outcomes. Quite to the contrary: these approaches perpetuate historical inequities.
2. Thus today’s use of AI is a natural evolution of our cultural proclivities to believe that actuarial systems are inherently fair. But there are differences: (a) AI systems are becoming pervasive in all aspects of decision-making; (b) AI systems use machine learning to evolve their models (decision-making algorithms), and if those decision-making systems are seeded with historical data, the result will necessarily be to reinforce the structural inequities of the past; and (c) many or most AI models are opaque—we can’t see the logic inside of them used to generate decisions.
It’s not that people are intentionally designing AI algorithms to be biased. Instead, it’s a predictable outcome of any model that’s trained on historical data.
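To see that mechanism concretely, here is a small synthetic sketch (invented data, not any real system): the data generator bakes a historical bias against “group B” into the training labels, and a standard classifier that never sees the group label still reproduces the disparity, because a zip-code-like proxy feature carries the signal.

```python
# Synthetic illustration only (no real system or data): a classifier trained on
# historically biased decisions reproduces the bias, even though the protected
# attribute is never a feature, because a proxy feature correlates with it.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_applicant():
    group = random.choice(["A", "B"])
    # Proxy feature (think zip code): usually matches group membership
    proxy = (1 if group == "B" else 0) if random.random() < 0.8 else random.randint(0, 1)
    income = random.gauss(60, 15)
    # Historical decision: similarly qualified, but group B approved less often
    approved = income > 50 and not (group == "B" and random.random() < 0.5)
    return group, [proxy, income], int(approved)

data = [make_applicant() for _ in range(5000)]
X = [features for _, features, _ in data]
y = [label for _, _, label in data]

model = LogisticRegression().fit(X, y)  # the model never sees the group label

for g in ["A", "B"]:
    rows = [features for group, features, _ in data if group == g]
    rate = sum(model.predict(rows)) / len(rows)
    print(f"Predicted approval rate for group {g}: {rate:.2f}")
```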
3. Now that we are realizing this, we can have an intentional conversation about the impact of automated decision-making. We can create explicit definitions of fairness—ones that don’t blindly extend past injustices into the future.
In general, I am an optimist. Broadly, technology has vastly improved our world and lifted many millions of people out of poverty. Artificial Intelligence is presently being used in many ways that create profound social good. Real-world AI systems perform early, non-invasive detection of cancer, improve crop yields, achieve substantial savings of energy, and many other wonderful things.
There are many initiatives underway to address fairness in AI systems. With continued social pressure, we will develop technologies and a social contract that together create the world we want to live in.
Acknowledgments: I am part of the AI4K12 Initiative (ai4k12.org), a joint project of the Association for the Advancement of Artificial Intelligence (AAAI) and the Computer Science Teachers Association (CSTA), and funded by National Science Foundation award DRL-1846073. We are developing guidelines for teaching artificial intelligence in K-12. With my collaborators, I have had many conversations that have contributed to my understanding of this field. I most especially thank David Touretzky, Christina Gardner-McCune, Deborah Seehorn, Irene Lee, and Hal Abelson, and all members of our team. Thank you to Irene and Hal for feedback on a draft of this essay. Any errors in this essay are mine alone.
Buchmann, C., Condron, D. J., & Roscigno, V. J. (2010). Shadow education, American style: Test preparation, the SAT and college enrollment. Social forces, 89(2), 435–461.
Capon, N. (1982). Credit scoring systems: A critical analysis. Journal of Marketing, 46(2), 82–91.
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on privacy enhancing technologies, 2015(1), 92–112.
Frank, C. H. (1970). The prediction of recidivism among young adult offenders by the recidivism-rehabilitation scale and index (Doctoral dissertation, The University of Oklahoma).
Hanson, R. K., & Morton-Bourgon, K. E. (2009). The accuracy of recidivism risk assessments for sexual offenders: a meta-analysis of 118 prediction studies. Psychological assessment, 21(1), 1.
These past weeks I have been thinking about how Computer Science education and the way we teach it has evolved. I have been a teacher for about 19 years now, and much of the time my students ask the most interesting questions, which get me thinking and researching certain topics. That is how this blog was conceived. I am currently teaching my 9th graders how to work with BBC micro:bits. (By the way, micro:bits are awesome!) To introduce them, I start by giving very detailed instructions about how the micro:bits work and how to get acquainted with the MakeCode interface. When I say detailed, I mean very detailed: a step-by-step guide including screenshots of where to find the necessary blocks, how to save, download, and upload the program to the micro:bit, and how to use the micro:bit simulator included in the MakeCode interface. Once we do several projects in which we learn how to make the micro:bit sing, how to work with the LED screen, and how to connect alligator clips, I assign a project in which they have to come up with a character and incorporate the micro:bit as part of it, adding at least 2 actions with it. That’s when it all goes south!!!!
Many kids seem lost. It’s like they have never used a micro:bit before. That got me thinking. When I started learning programming, I learned using Pascal on a green-and-black screen, and all programming was text based. It was hard!!! But I also remember a professor telling us that if we learned the hard way, any programming language afterward would not be as hard to learn, because we would already have the base and the logic of programming. At the time I really hated that comment, as any student would have, but today as a teacher I wonder if I am onto something here. Am I, as a teacher, allowing my students to really think on their own? To really grasp the logic of creating a program? Or are they just little robots following my instructions?
I decided to analyze the progression my students follow to get to ninth grade Computer Science. Throughout their early years we want to engage them and get them to like and be interested in Computer Science and all the possibilities it offers. As we introduce them to all the wonderful things that we can achieve with Computer Science, we look for tools that are engaging and fun. Many companies have helped produce such introductory tools, which make it so easy for kids to learn that they start enjoying programming. However, they get so used to these tools that the progression to more complex programming then seems harder. Emphasis on “seems.”

The transition from block programming to text programming is supported by many of these tools, including the micro:bit, which can be programmed using blocks, JavaScript, or Python, so that part is covered. But there is something only teachers can do: manage the balance between giving guidelines so specific that students are really just copying a program, which truncates their creativity, and giving them a task to solve on their own, which promotes the ability to create and discover. I realize that although I am teaching Computational Thinking skills, my kids are used to getting very specific instructions for programming. This is not bad; it’s just that the transition is not as seamless as it seems.

So how should the transition take place? I believe a good starting point is to cut down on the screenshots in the instruction guide and limit them to the instructional part of the lesson, going through the steps with the students and letting them take their own notes. Then, when a project is assigned, they can look back at their notes as a reference. Another tip is to include videos as additional help, but to move away from overly detailed step-by-step instructions starting in middle school, so that when students are presented with these kinds of projects in high school, they have a base for how to solve them. Let the instructions be a guide and not a solved problem for them to copy.
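For what it’s worth, here is the kind of minimal text-based starter I have in mind when students move from blocks to Python on the micro:bit. It uses the standard micro:bit MicroPython modules (display, buttons, music); the specific tune and image are just examples.

```python
# Minimal micro:bit MicroPython sketch: scroll a prompt on the LED screen,
# and play a built-in tune when button A is pressed.
from microbit import display, button_a, Image
import music

while True:
    if button_a.is_pressed():
        display.show(Image.MUSIC_QUAVER)
        music.play(music.BIRTHDAY)   # built-in speaker on newer boards, or pin 0
    else:
        display.scroll("Press A")
```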
As a Career and Technical Educator, equipping students with career-readiness skills, like communication, problem-solving, and collaboration, is my first-order priority in the classroom. While these skills focus on preparing students to be successful in the workforce, we as educators have an increasing responsibility to prepare our students to be safe, respectful, and responsible digital citizens. Digital citizenship can be broadly understood as membership and participation in an online community, such as the internet or its various sub-areas. In this way, being a “good” digital citizen means, as the Digital Citizenship Institute defines it, having “norms of appropriate, responsible behavior with regard to technology use” [1].
One key behavior in the set of good digital citizen norms involves taking sufficient precautions to foster strong personal and community digital security. This goes far beyond telling your students not to talk to strangers online or not to share their personal information on social media sites. Students need to understand the kind of information that is being passively collected from them when they visit or create accounts on websites, and what value it has to them, to those who want to collect it, and potentially to others if it gets leaked or released. Understanding the potential threats that they might face when sharing personal information on any website, including social media sites, is also important. As an example, I’ve taught many students who didn’t know that their photos contained geotags (longitude and latitude numbers) that could be used by attackers to figure out where they live or places they frequent. Finally, equipping students with the skills they need to identify potential attacks and avoid being a victim of scams, such as phishing and identity theft, is also paramount.
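If you want to demonstrate the geotag point, a short script is often more convincing than a lecture. Here is a rough sketch using the Pillow library: “photo.jpg” is a placeholder path, not every photo contains GPS tags, and `_getexif()` is the older Pillow accessor for EXIF data.

```python
# Print any GPS metadata embedded in a photo's EXIF tags.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("photo.jpg")          # placeholder path
exif = img._getexif() or {}

for tag_id, value in exif.items():
    if TAGS.get(tag_id) == "GPSInfo":
        # GPSInfo is itself a dict of sub-tags (latitude, longitude, refs, ...)
        for gps_id, gps_value in value.items():
            print(GPSTAGS.get(gps_id, gps_id), gps_value)
```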
Even if you see the value of digital citizenship preparation in your classroom, you may feel like you don’t know where to start or how to tie topics like security and online safety into your existing curricula. Don’t worry! There are many online resources that can help. First, decide what cybersecurity concepts you want to teach in your classroom. You can find lists of topics online ranging from social media safety to types of malware to password complexity. The bottom line is there are plenty of lessons and curricula to choose from. You can choose to integrate a single lesson, a module made up of several lessons, or even a whole semester- or year-long curriculum. To help you move forward, I have listed some of the resources that have helped me along the way as I have integrated more cybersecurity concepts into my classroom.
Cybersecurity Curriculum
This curriculum was designed by a friend of mine for a high school computer science course with a focus on cybersecurity. I really like how his curriculum design is customizable: the activities he provides can be used as single one-day lessons or as a complete semester course. You can take a look at https://derekbabb.github.io/CyberSecurity/
Common Sense Media
Common Sense Media provides a complete K-12 Digital Citizenship Scope and Sequence. Privacy & Security is one of the topics they focus on, and there are a variety of lessons on various cybersecurity topics. I really like how topics are introduced in the K-2 grade band and then expanded on in higher grade bands. Find more at: https://www.commonsense.org/education/scope-and-sequence
UNO GenCyber Modules
I had the opportunity last summer to teach at a GenCyber Camp hosted by the University of Nebraska at Omaha. This camp provided several modules that span a variety of cybersecurity topics. The modules are available online at www.nebraskagencyber.com and have a Creative Commons license. (Side note: if you’ve never attended a GenCyber Teacher Camp, you should check to see if one is being offered in your state.)
Other Resources:
CodeHS Cybersecurity Course – This entirely web-based curriculum is made up of a series of learning modules that cover the fundamentals of cybersecurity. You can take a look at https://codehs.com/info/curriculum/cybersecurity
Cybersecurity Nova Labs – This Cybersecurity Lab is a game that allows players to discover how they can keep their digital lives safe and develop an understanding of cyber threats and defenses. You can take a look at https://www.pbs.org/wgbh/nova/labs/lab/cyber/
CyberPatriot – The National Youth Cyber Education Program created by the Air Force Association (AFA) to inspire K-12 students toward careers in cybersecurity. You can take a look at https://www.uscyberpatriot.org.
These are quotes from our 2018 CSTA / Infosys Foundation USA Teaching Excellence Award winners: a group of teachers who have not only made an outstanding impact within their own classrooms but have also started new district-wide programs; built engaging, student-led, inter-school partnerships; and led the team revising the AP CS A exam! The truth is that even the most effective teachers find themselves facing doubt. Teaching is a HARD job, especially as a computer science teacher.
CSTA is here to make sure we take time to recognize the amazing work that’s happening in computer science classrooms across the country. This week we launched the application for the 2019 CSTA / Infosys Foundation USA Teaching Excellence Award with a few updates:
The application is split into two parts, making it easier to apply and only requiring additional steps, like letters of recommendation, after an initial review. We hope this will encourage more teachers to apply before that self-doubt we all have creeps in.
We’ve doubled the number of awards, because there are so many outstanding teachers and we want to acknowledge them all. Starting this year there will be five winning teachers and five honorable mentions.
You can now nominate a great teacher, encouraging them to complete the application and letting them know that you think they are an excellent computer science teacher.
The first round of the application is open through April 14 and shouldn’t take more than 45 minutes to complete. For more information and to apply now visit the award page.
The SIGCSE (the ACM Special Interest Group for Computer Science Education) Technical Symposium is the largest computing education conference worldwide. While the majority of sessions target higher education, there is a growing focus on K-12 education. I’m excited to share some learnings and research nuggets relevant to K-12 CS teachers from SIGCSE 2019.
EFFECTIVE TEACHING PRACTICES
In his keynote, Mark Guzdial made several recommendations for improving computing education:
Teach CS in other courses/contexts. Mark used an analogy of visiting a foreign country: how much language do you need to know to get by? It’s better to know more, but you don’t need to be fluent to enjoy your time. There is amazing learning power even knowing a small subset of CS.
Ask students to make predictions during live code demos. Get them to explicitly commit to a prediction, then test, and prompt reflection.
You don’t have to write code to learn from code.
Subgoal labeling improves understanding, retention, and transfer, in both blocks- and text-based programming, for both high school and undergraduate students. In fact, just adding text labels to video tutorials makes a significant difference. (A rough sketch of subgoal labels appears after this list.)
Do what works: pair programming, worked examples, Parsons problems, media computation.
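If subgoal labels are new to you, here is a rough sketch of what they can look like in code: the comment headers name each step of the task, and the same labels can be attached to a worked example or a video tutorial. The task itself (averaging quiz scores) is just an invented example.

```python
# Task: read quiz scores and report the average.
# The comment headers below are the subgoal labels.

# Subgoal 1: get the input into a usable form
scores_text = "88, 72, 95, 60"
scores = [int(s) for s in scores_text.split(",")]

# Subgoal 2: combine the values
total = sum(scores)

# Subgoal 3: compute and report the result
average = total / len(scores)
print(f"Average score: {average:.1f}")
```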
Helen Hu presented a POGIL (process oriented guided inquiry learning) lesson that guides teams of students in constructing their own style conventions for naming variables and writing expressions. See full activity and role cards. See also additional POGIL activities for CS Principles courses.
David Weintrop and colleagues presented research comparing high school students’ performance on blocks-based and text-based questions (similar to the formats used on the AP CS Principles exam). Students across all racial and gender groups performed better on the questions presented in blocks-based form, for all of the concepts studied.
Reading and tracing code is useful in understanding how program code actually works. PRIMM is an approach to planning programming lessons and activities and includes the following stages: Predict, Run, Investigate, Modify, and Make. See sample PRIMM activity sheets.
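As a sketch of how a PRIMM sequence can wrap a tiny program (the example below is my own invention, not one of the published activity sheets), the prompts travel with the code:

```python
# Predict: before running, write down exactly what this program will print.
# Run:     run it and compare the output with your prediction.
# Investigate: why does the countdown stop at 1 rather than 0?
# Modify:  change it to count down from 10, two at a time.
# Make:    write your own countdown that stops at a number the user chooses.

for n in range(5, 0, -1):
    print(n)
print("Liftoff!")
```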
INCLUSION
In her keynote, Marie desJardins identified five pernicious myths that impede diversity in CS:
“Anybody can be a computer scientist – girls just don’t want to”
“It’s just a joke – don’t you have a sense of humor?”
“ ‘Diversity programs’ are just political correctness”
Colleen Lewis created an Apples to Apples-like game for teachers to identify opportunities for inclusive teaching strategies and practice responding to microaggressions. View the printable cards and instructions. See also the critical listening guide from NCWIT (the National Center for Women & Information Technology).
The 2018 National Survey of Science and Mathematics Education (NSSME+) surveyed over 2,000 U.S. schools and asked targeted questions about computer science for the first time. A key finding is that most current PD efforts focus on deepening teachers’ CS content knowledge, and there needs to be a greater focus on pedagogy and supporting students from diverse backgrounds. See detailed report and slide deck.
DEBUGGING
An interesting panel on debugging included several useful tidbits:
Deborah Fields suggested that teachers celebrate a “favorite mistake of the day” to create in-time teaching moments and encourage students to ask questions and share their mistakes. This can lower the stakes of failure and normalize mistakes as part of the process.
Colleen Lewis encouraged educators to live code in front of classes and explain their thinking, testing, and debugging processes. Model immediate and frequent testing, and promote growth mindset by learning from mistakes. See CS Teaching Tips for debugging.
Gary Lewandowski synthesized common types of bugs in programs.
The Everyday Computing team presented their newest K-8 learning trajectory on debugging. (See other learning progressions on sequence, repetition, conditionals, and decomposition).
Zack Butler and Ivona Bezakova have curated many different pencil puzzle types and ideas that can be used as context for many high school CS concepts such as arrays, loops, recursion, GUIs, inheritance, and graph traversal. View a sample of puzzles.
TeachingSecurity.org introduces foundational ideas of cybersecurity, built on threat modeling and the human-centered nature of authentication. The lessons are designed to meet the cybersecurity learning objectives in the AP CS Principles (CSP) framework, but they are flexible enough to be used in any high school CS class.
Shuchi Grover and SRI developed a series of unplugged and non-programming, computer-based activities to develop strong conceptual understanding of variables, expressions, loops, and abstraction.
PROGRAMMING ENVIRONMENTS & CURRICULA
p5.js is a Processing JavaScript library and web editor. Processing is a programming language developed specifically for visual artists; p5.js enables web-based programming in Processing. The New York City Department of Education has developed an introduction to media computation course using p5.js.
MYR is an online editor for editing and viewing virtual 3-dimensional worlds. The Engaging Computing Group’s goal is to make programming virtual reality (VR) accessible to beginners. Real-time sync allows users to program and enjoy their work almost instantaneously on a VR headset.
EarSketch is a programming environment that teaches (JavaScript or Python) coding through composing and remixing music in a format similar to Garage Band. The environment enables students to create studio-quality music using over 4,000 samples created by professionals (including Jay Z’s DJ!).
MakeCode from Microsoft is an online, blocks- and text-based programming environment for micro:bits. It has an ever-increasing number of tutorials and courses, including a new set of science experiments designed by Carl Lyman to help middle school and early high school students better understand the forces and behavior of the physical world. Another course uses micro:bits to teach the basics of computer networks.
BlockPy is a web-based, blocks- and text-based Python environment designed for data science and for allowing users to authentically solve real-world problems.
The Exploring Computer Science (ECS) team recently published a new e-textiles unit and resources called Stitching the Loop. Students learn to create paper circuits, wristbands, a collaborative mural, and wearables with sensors.
ARTIFICIAL INTELLIGENCE (AI)
The AI4K12 Initiative is a joint project of CSTA and AAAI (the Association for the Advancement of Artificial Intelligence) to develop national guidelines for teaching AI in K-12. The working group has developed five big ideas in AI and has begun developing a curated AI resource directory for K-12 teachers. See slide deck.
One example of an 11th/12th grade resource in the directory: TensorFlow allows users to tinker with neural networks in the browser.
Of course, this is only a small glimpse of the content presented at SIGCSE 2019. If you want to learn more, view the ACM Digital Library and consider joining SIGCSE in Portland next year.
In recognition of Women’s History month, I’ve been reflecting on the teachers who work tirelessly to bring computer science education to their students. In particular, I wanted to acknowledge and appreciate the important role of the women who teach computer science in schools and in communities around the world.
We know that research tells us that mentors and role models are a key ingredient for success – as they say – “you can’t be it if you can’t see it”. Having a strong female role model teaching computer science – whether that is in school or out of school – is one way to help girls dispel myths about who belongs in computer science – and helps them clearly see that they do belong in this field.
Another great way to continue to build inclusive computer science education and help girls – and all students – see and grow the impact of women in computer science is to share the stories and impact of women who’ve pioneered the way. Women’s History Month is the perfect time to do this since there are so many great resources created, shared and highlighted.
I’m sharing a few resources that I found interesting and hope you will add to this list. I know that by sharing a short list I risk leaving things out, but the goal is to start somewhere… here we go! I’m sure you have some you want to share. Please do! Post them on Twitter, tagging @csteachersorg with the hashtag #CSforAll so others can see them too.
NCWIT has so many great resources! The Pioneers in Tech Award announced its newest recipient – and you can check out past recipients for even more inspiration!
Want to inspire your students in person? Check out opportunities to attend the Grace Hopper Celebration, the largest gathering of women technologists, which is a part of the Anita Borg Institute.
The IEEE Computer Society has a range of resources to both promote and support women in computing, as well as links to other great programs and resources.
Check out the list of women-led, women-focused computer science organizations created by Ruthe Farmer, Chief Evangelist for CSforAll. Find out who is operating in your community and see how you can partner!
San Francisco Unified School District’s Celebration of Women in Computing shares a GREAT list of resources (including lesson plans and posters!) they’ve compiled. (Thanks to my fellow CSTA board member Bryan Twarek for sharing!)
Through inspirational student interviews with a range of diverse female and male CS professionals, Roadtrip Nation’s Code Trip shows students that there are many pathways they can follow in pursuit of computer science education and computing.
And finally, help inspire the women we see on these lists, posters and history books in the future! Help make sure more girls have strong female role models by nominating a female teacher you know to receive a scholarship to attend a code.org training!
The research group that I’m a part of, Re-Making STEM, is looking at ways that computational thinking (CT) practices intersect with creative, collaborative human activities. This has led to some really interesting explorations in computing, cognition, and culture. Our practical goals include discovering ways that teachers and their students can engage with and learn CT, and discovering design principles for learning and applying CT in interesting ways. In this post, we’ll look at some of those explorations and hopefully leave you with some things to think about.
Computational thinking
I think this definition of CT is as good a starting point as any:
Computational Thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent (Cuny, Snyder, Wing, 2010).
Wing (2010) says she’s not just using problem / solution to refer to mathematically well-defined problems but also to complex real-world problems. She also says that the solutions can be carried out by humans, computers, or combinations of humans and computers. This definition places the emphasis on representation, but raises the question: what are forms that can be effectively “carried out” by information-processing agents? What does “carried out” mean anyway?
Let’s pin these down for the sake of discussion. We might say that the forms we’re talking about are abstract representations (abstractions, the noun). Indeed, abstraction (the verb) is widely recognized as an essential component of CT (Grover and Pea, 2013). Let’s say abstractions are formal representations (e.g., formal logic, mathematical equations, computer code), and “carry out” means execute. So we’re talking about executing algorithms. And let’s be real – we are only going to write formal algorithms if we intend to automate them with a computer.
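To ground the distinction, here is one everyday example (mine, purely for illustration) of the same “solution” in two forms: an informal representation a human agent could carry out, and a formal representation a computer can execute.

```python
# Informal representation (for a human agent):
#   "Split the bill evenly, then round each person's share up to the next dollar."
#
# Formal representation (for a computer agent) of the same solution:
import math

def share_per_person(bill_total, people):
    return math.ceil(bill_total / people)

print(share_per_person(87.40, 4))  # -> 22
```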
So if CT in practice is “writing algorithms that can be executed by computers,” then we are really talking about programming. This contradicts Wing’s clarifications about “problems” and “agents,” described above. Furthermore, the field is saying loud and clear that CT is not just programming. Since 2013, the concept of CT has expanded (e.g., Weintrop et al., 2015), and for most people it is certainly not limited to executing algorithms on computers.
Opening it up
Let’s look at this piece by piece, starting with the “carrying out.” Even if we’re talking about formal representations and computers, CT involves formulating data as well. Data is not “carried out,” or executed, like an algorithm – it is structured, processed, analyzed, synthesized, and interpreted (by humans and computers).
Now let’s look at formality and agents as computers / humans. We already saw what happens when we are strict about formality and computers. If we loosen the restriction on formality, but still think of agents as computers (or virtual agents), then we allow pretty much any human-computer interaction. If we keep formality strict, but allow for people as agents, then we allow for things like math to count. The latter might work for some, but I would ask: do we care about distinguishing between CT and mathematical thinking? Is CT == mathematical thinking + computers? Do we want to allow for less formal expressions of CT?
Let’s put these two axes (more or less formal, extent of computer use) into a table.
We in the CS community might have a tendency to think about CT as living in the upper-left corner of the table (formal, tied to computer use). In reality, creative collaborative human activity blends all of these types of communication, and CT (whatever it is) intersects with all of these other areas. Authentic computational practice also involves multiple people and computers working together – there are more than two agents in the system. So, as a general case, we have systems with: agents (humans, computers, and virtual agents), situated in environments (physical, social / cultural, virtual), interacting using systems of representation (sounds, images, diagrams, natural and formal languages, etc.).
One CT, many CTs
What are the implications of this? I think there are two clear options for how we define CT:
(A) Restrict what we mean by CT. This is perfectly reasonable and probably necessary for most practical purposes. However, this has the inevitable consequence of fragmenting our understanding of CT. There will be different CTs in different disciplines / fields. We will do this, but we should try to understand the restrictions that we are imposing, and the consequences of imposing them.
(B) Break our concept of CT wide open. I think the scientific community (at least, those who are studying the construct of CT and how it plays out in real cultural contexts) should do this, so that we can explore how CT is understood and practiced in a variety of contexts and for a wide range of purposes.
This is not a binary choice that we need to make, individually or collectively, once and for all. The processes of imposing structures and breaking them apart will enrich our understandings of CT. In closing, I ask you to consider how you construct CT with your students and colleagues, and what effects this might have on who engages with and learns CT at your school.
These ideas in this post are part of a collaborative research effort with the Re-Making STEM PIs, Brian Gravel, Eli Tucker-Raymond, Maria Olivares, Amon Millner, Tim Atherton, and James Adler, and the dedicated research team, Ada Ren, Dionne Champion, Ezra Gouvea, Kyle Browne, and Aditi Wagh. This material is based upon work supported by the National Science Foundation under Grant Numbers DRL-1742369, DRL-1742091. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
References and further reading:
Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. https://goo.gl/MQKG4F
According to the World Economic Forum’s (WEF) highly recommended meta-study “21st Century Skills,” schools need to prepare students to have a “future-based mindset” with skills such as collaboration, creativity, and adaptability. Their answer: project-based learning (PBL). While PBL is gaining momentum in schools, managing projects can be a challenge: Who is doing most of the work? Who isn’t participating fully? How do you assess who has done what?
In the computer science field, one means of project management is the Agile software development paradigm, which, among other aspects, includes Scrum, a methodology for dividing the work to be completed into short iterations (sprints) made up of stories. In the Scrum environment, the team is considered capable of completing the task on its own. While the team is self-directed and is encouraged to problem-solve independently, there are two clearly defined roles that facilitate the process: the Scrum Master (in the classroom, the teacher) and the Product Owner (the students). The role of the Scrum Master is to help the team when there is some impediment to their completion of a task, such as a bug or a design flaw. The Product Owner’s (students’) job is to keep the vision of the solution and manage the daily tasks. Scrum has recently been adopted in schools as a way to manage projects in both computer science and non-computer science classrooms.
Scrum meetings, which are short meetings occurring each day the class meets, consist of asking three essential questions: What did you accomplish since the last scrum? What do you expect to accomplish before the next? And, is anything blocking you (blocks are solved outside the scrum meeting)? This level of accountability is essential for setting goals, prioritizing project tasks, assigning roles and jobs for team members, and keeping students on track for project completion. In 2016, the University of the Pacific conducted a study on using Scrum in three computer science courses. Their conclusion was that, overall, students found the above benefits to be true and helpful, while a few found the Scrum process to be cumbersome.
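One way to make the three questions routine is to have teams log their answers each day. Here is a minimal sketch of such a log; the class and field names and the CSV format are my own invention rather than any standard Scrum tooling.

```python
# Append one team member's daily scrum answers to a CSV log.
import csv
from dataclasses import dataclass
from datetime import date

@dataclass
class ScrumEntry:
    member: str
    done_since_last: str   # What did you accomplish since the last scrum?
    plan_before_next: str  # What do you expect to accomplish before the next?
    blockers: str          # Is anything blocking you?

def log_entry(entry: ScrumEntry, path: str = "scrum_log.csv") -> None:
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), entry.member,
                                entry.done_since_last, entry.plan_before_next,
                                entry.blockers])

log_entry(ScrumEntry("Ana", "Finished the LED animation",
                     "Wire up the buzzer", "None"))
```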
I have been using Scrum in my own classroom for several years now with great success. Students know what they are expected to do and are held accountable not only to me, but to each other. Two components stand out as key to the process. The first is student articulation and presentation of their project status. This forces them to really pay attention to what they are doing and how their code is working, and to understand what they need to do next and where they are struggling. These are essential skills for their future as software programmers and engineers. The second is teacher feedback. Daily feedback is essential for keeping them on track for successful project completion and for addressing problems quickly.
While there are many ways to manage project based learning in an educational setting, it makes sense that in a software development course, learning to work in an environment that mimics the “real world” teaches valuable skills, in addition to preparing students for their future.
Jimenez, Osvaldo, and Daniel Cliburn. “Scrum in the Undergraduate Computer Science Curriculum.” Journal of Computing Sciences in Colleges, Volume 31, Issue 4, April 2016, pp. 108-114.