My SPLASH 2016 Keynote

Above is a practice version of my SPLASH 2016 keynote. If you don’t want to watch the whole 40 minutes, you can read my slides (100 MB of images!). If you don’t want to read the slides, here’s a super condensed version of my argument:

  • The mathematical view of programming languages is powerful and productive
  • However, that view is also narrow, limiting the research questions that we ask
  • Moreover, it limits who participates in computing, because the dominant culture of computing only projects interest in the mathematical view.
  • If embracing other views is important for discovery and equity, what other views exist and how can we explore them?
  • These views include PL as power, interface, design, notation, language, and communication, but also other surprising lenses such as glue, legalese, infrastructure, and even a path out of poverty.
  • These views, however, also embody values, meaning that by investigating these metaphors for programming languages, we also embrace new values.
  • My work has explored many of these metaphors, including interface, notation, and communication, to great effect.
  • I suggest that every PL researcher not only consider these new views, but also accept them as valid alternative perspectives for PL research.

The response to the keynote was quite positive! People found the ideas interesting, provocative, thoughtful, and in a few cases, brilliant. It was such a privilege to have the attention of so many great programming language and software engineering researchers. I hope I’ve given them a few tools and ideas about how to consider their future work, and perhaps reconsider their past work.

What does $600K in NSF research funding buy?

Eight years into my career as a professor, I finally wrote my first NSF final report (it would have come earlier, but taking two years of leave to do a startup led to several no-cost extensions). Because science and technology have been under political attack in the United States for quite a while now, this seemed like a great time to step back and consider what NSF funding actually buys America.

The project was funded out of the now canceled “Computing Education for the 21st Century” program at NSF. With my collaborators Margaret Burnett and Catherine Law, I wrote a proposal to investigate whether framing programming as a game would equitably engage learners in more productive learning than other approaches to learning to code. We were one of several teams to be funded, in the amount of $600,000 for three years of research.

Here’s what we did with that money:

  • We designed, implemented, and deployed the Gidget game
  • We designed, built, and deployed the Idea Garden into the Gidget game
  • We designed and built the Idea Garden for JavaScript and the Cloud9 IDE
  • We designed, built, and evaluated a Problem Solving Tutor (not yet published)
  • We designed, built, and evaluated a Programming Language Tutor (not yet published)
  • We studied the role of data representation on engagement
  • We studied the effect of in-game assessments on engagement
  • We studied the effect of Gidget on attitudes toward learning to code
  • We studied the role of the design principles incorporated into the game on learning
  • We studied the learning gains in the game relative to two other learning paradigms
  • We studied the role of the Idea Garden in engagement and learning
  • We studied the role of self-regulation in programming problem solving
  • We studied the effect of self-regulation instruction on programming problem solving
  • We studied the effect of the problem solving tutor on problem solving productivity (not yet published)
  • We conducted a pedagogical analysis of online coding tutorials (to appear)
  • We held four summer camps in Oregon and Washington based on Gidget, reaching over 80 rural and female high school teens.
  • We held four Gidget open house sessions during CS Education Week (2013-2016)
  • We held a one day workshop with 25 Native American teen girls.
  • We taught an Upward Bound Web Design course to 11 diverse high school students.
  • We disseminated the results to several Microsoft product teams building computing education products.
  • We served on advisory boards to two NSF-funded computing education teams
  • We served on a panel on Women in Computing at the Educause conference.
  • We served on a panel on Women in HCI at CHI.
  • We held a workshop at CHI about gender-inclusiveness issues in software.
  • We disseminated results to
  • We created an extensive Computing Education Research FAQ.
  • We attended a Dagstuhl workshop on assessment in computing education.

Across all of these activities, we taught over 10,000 people how to code via Gidget, ranging from ages 13-80, half of them girls and women. We trained 4 post docs, 6 Ph.D. students, 2 masters students, 18 undergraduates and 6 high schoolers in how to do computer science and computing education research. We produced Gidget, an online game for learning to code, that will be available for at least the next decade as a public resource. We published about 20 research papers, contributing an evidence base for better teaching and learning of computing through online learning technologies.

Is all of this worth $600K? One way to judge this is to quantify how much all of this education cost. If we look just at the 36 students we mentored, the public spent an average of $16K per student to teach them rigorous research skills. If we include the 10,000 players of Gidget, plus the future players of Gidget over the next decade, the public spent an average of $2 per player to teach them a bit about computer programming and potentially engage them in future learning. From this perspective, the grant was one big education subsidy, promoting the development of highly-skilled STEM workers.
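To make that arithmetic concrete, here’s a quick back-of-the-envelope sketch in Python. The 300,000-player figure is an assumption on my part (the 10,000 players reached so far plus a projected decade of future players), chosen so that the implied per-player cost matches the $2 estimate above; it is not a measured number.

```python
# Back-of-the-envelope cost-per-learner estimates for the $600K grant.
GRANT = 600_000            # total NSF award, in dollars
STUDENTS_MENTORED = 36     # 4 postdocs + 6 PhD + 2 masters + 18 undergrads + 6 high schoolers
PROJECTED_PLAYERS = 300_000  # assumed: 10K players so far plus a decade of future players

cost_per_student = GRANT / STUDENTS_MENTORED
cost_per_player = GRANT / PROJECTED_PLAYERS

print(f"Per mentored student: ${cost_per_student:,.0f}")  # ≈ $16,667
print(f"Per Gidget player:    ${cost_per_player:,.2f}")   # $2.00
```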

Another way to judge this is to anticipate the future impact of this training. Many of the Gidget players will be more likely to pursue STEM education because of playing the game (according to our research), which may have a net impact on the growth of the economy. Of the 36 students we trained in research that have graduated, many are faculty, UX researchers, and software engineers, filling much needed jobs in industry. The downstream impact of all of this training may be to fill unmet needs in the economy, allowing it to grow more efficiently.

Of course, the other important way to assess the return on investment of the work is to predict the long-term impact of the knowledge we produced. We’ve already disseminated the work to, which is reaching tens of millions of high school learners through its curriculum. Our research essentially serves to ensure that the learning those students are already doing is more effective than it would have been otherwise. That ultimately means a better, smarter, more effective STEM workforce in the future, which ultimately impacts the growth and productivity of the U.S. economy.

Across the 243 million U.S. taxpayers, each contributed a median of about 1/10th of one penny for this research to happen. What’s the return of that fraction of a penny? Will every American be more than 1/10th of a penny richer in 20 years because of our work?
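As a sanity check on the per-taxpayer figure, here’s the one-line computation, assuming the grant is spread over 243 million taxpayers. The even split (the mean) comes to about a quarter of a cent; the median contribution cited above is lower because tax payments are heavily skewed toward higher earners.

```python
# Spreading the $600K award across 243 million U.S. taxpayers.
GRANT_DOLLARS = 600_000
TAXPAYERS = 243_000_000

mean_cents = GRANT_DOLLARS / TAXPAYERS * 100  # dollars → cents
print(f"Mean contribution per taxpayer: {mean_cents:.3f} cents")  # ≈ 0.247 cents
```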

Clearly, this exercise in trying to model and predict the impact of science funding is hopelessly fraught with reductive ideas about science. It even plays into the framing that House Republicans have used to attack science, accepting the premise that NSF is an investment in America, as opposed to something more idealistic, such as the betterment and survival of humanity. But in reflecting on all of these activities and the actual impacts they’ve had on the world already, I find the sheer scale of potential impact to be compelling in its own right and well worth the price. I can’t wait to see in 10 years what these impacts might be!

The invisibility of prior knowledge

When you watch an Olympic sprinter run 50 yards in 5 seconds, what’s your first thought?

  1. That must have taken an incredible amount of practice, or
  2. Wow, that is some incredible DNA.

Now, we know both nature and nurture matter. But in watching sprinters, we see nurture matter because we can see sprinters practice. Olympics broadcasts show us hours of practice. We see their coach. We know that Nike has sponsored their thousands and thousands of hours and hundreds of pairs of shoes. We know that as much as someone might be born with a genetic head start, the only way to really get to the top is to practice more and better than anyone else in the world.

And yet, for other kinds of human ability, few people, if any, consider the role of practice, instead attributing ability to genetics. This is especially true in software. People assume Bill Gates must have been a genius. The news frames Mark Zuckerberg as a boy prodigy. Hacker culture of the ’90s, and to a large extent still today, divides people up into “real” coders and posers, treating computing as if it is something natural, innate, inborn, and gifted to a privileged few.

The reality, of course, is that the majority of variation in ability in computing and every other field is due to practice, not genetics. As K. Anders Ericsson studied for years, most of the variation in expert performance is explained by how well and how much people practice a skill. Coding (clearly) isn’t something people are born knowing how to do, nor is it likely something people are born with a predisposition for. It is something people learn, and it is our experiences, our other skills, and our environment that develop and sustain the motivation to learn, and likely our predispositions to learn.

Why do people gravitate so easily to theories of ability grounded in genetics rather than practice? I think it’s because practice, and in particular, the prior knowledge that practice produces, is invisible. When you meet someone, you can’t see what they know, how well they know it, how many years they’ve been practicing it, or how well they’ve been practicing it. In fact, even when scientists like myself try really, really hard to measure what someone knows, we struggle immensely to reliably and accurately capture ability. It’s really only in a narrow set of domains, like sports, where we’ve created elaborate systems of structured measurement in order to quantify ability.

This invisibility of prior knowledge, and the attribution of ability to innate qualities rather than practiced skill, has many consequences throughout software engineering and computing education. When a company tries to hire someone, they have very weak measurements of what an engineer knows, and have to rely on pretty pathetic coding tests that likely correlate little with actual skill. Worse yet, when a CS1 teacher gets a classroom of new students, they often attribute success in the class not to the quality of the practice they have provided to students, or to the vast differences in practice that students engaged in prior to class, but instead divide students up into those who “get it” and those who don’t. Because hiring managers and teachers can’t see into each person’s mind, they can’t comprehend the vast complexity of prior knowledge that shapes each individual’s behavior.

Because of these challenges, measuring knowledge of computing really is one of the most pressing and important endeavors of computing education research. We need robust, precise instruments that tell us what someone knows and can do. We need the decathlon of coding, helping us observe ability across a range of skills, each event finely tuned to reveal the practice that lurks beneath someone’s cognition. Only then will we have any clue how to support and develop skills in computing and know that we’re doing it successfully.

So far, the closest thing to this in computing education research is the series of language-independent assessments of CS1 program tracing skills that have come out of Mark Guzdial’s lab. These are great progress, but we need so much more. My former Ph.D. student Paul Li did a study of software engineering expertise, finding dozens of attributes of engineering skill, none of which we know how to adequately measure. Just as lenses revolutionized astronomy and biology, we need instruments that allow us to peer into people’s computational thinking to revolutionize the learning of computing.

Ready to help? Come do a Ph.D. with me or the dozens of other faculty in the world trying to see invisible things in people’s heads. Let’s figure out how to transform humanity’s understanding and utilization of computing by seeing what they know about it.

A defense of sabbatical

This is my last day of sabbatical. I should be preparing for class next week, drafting the sections of that grant I’m helping on, writing meta reviews for that conference, and finding an instructor for that masters program I chair. But the only thing really on my mind is how wonderfully pivotal the last 6 months of paid professional leave have been to my role as a researcher and a teacher.

Sabbatical in academia has a long and turbulent history. It’s not really in great shape right now. Many private universities still guarantee a full year of sabbatical to tenure-track faculty, but others have abandoned it entirely, or have eroded it so heavily that it’s given out sparingly. My university is somewhere in the middle: I can get two-thirds of my salary for 9 months every 7 years if I’m in good standing. That’s pretty solid, and our university is committed to keeping it that way.

But is it really worth it? Isn’t sabbatical just a year of paid vacation? Why should a university, especially a public university, release professors from teaching and service, only to have them sit on a beach for 9 months pretending to think hard about research? What does the world get for this investment?

There might be a few beach sabbaticals, but most of my colleagues don’t do that. Mine certainly wasn’t like that. Instead, mine was open time to chart the course of the next 7 years of my professional life in research, teaching, and even service. This is a huge privilege—who in industry gets to do that?—but I also think it’s essential to the job.

Here’s why. As a scholar, my job is to think about the coming decades of human civilization. My discoveries should stand the test of time. Some of them might not even be relevant for a couple of decades. And the discoveries that I teach students should be robust to change as well, preparing students not just for today, but for the next two to three decades of their careers. Even service, which is about running the university and the global research enterprise, can be about the future, investing in new academic programs, improvements to peer review, the efficiency of federal funding, and the political forces that cause it to rise and fall. Every single one of these jobs has a time horizon of at least 1 year out, if not 5, 10, or even 100 years out.

Most of faculty life leaves no room for this type of thinking. Getting that grant, publishing that paper, advising that doctoral student, teaching that class: there is so much detail in each of these, the long view on discovery and learning is a mere backdrop to the daily work. Sabbatical is a time to deal directly with these longer term concerns, framing and structuring a professor’s next 6 years of work.

When I started my sabbatical this past January, that’s how I set out to use it. My highest level goal was to figure out what my mission would be for the next six years and how I would accomplish it. My lower level goals involved choosing the work I’d do this year to help prepare me to accomplish my mission. I took 6 months of leave at 75% salary, plus I had my three months of summer. I funded the rest of my salary out of our NSF grants.

The mission I chose was this: advance and mature what we know about computing education to prepare the world to understand the disruptive changes coming from computing. I wrote about this on this blog, so I won’t go into detail about it here. Instead, I want to share the results of my sabbatical year, to give you a sense of how I spent all of this tuition and taxpayer subsidy. Here we go:

  • I wrote a retrospective on my three years as a startup CTO and cofounder (in review soon!). This has transformed how I understand the software industry, how I teach about it, and how I see my efforts in computing education relating to industry.
  • I wrote an NSF proposal, framing my lab’s work on how to teach programming problem solving. Even though it was rejected, it’s helped us map fundamental questions about the nature of problem solving in programming and how to teach it.
  • I attended a week long Dagstuhl workshop on computing education research, deepening my relationships with researchers in this growing field. That week was pivotal in connecting me to the small world of computing education researchers, which has empowered me to become an advocate for the field in other areas of HCI, software engineering, and policy making.
  • I started several new student projects ranging from evaluations of coding tutorials, programming language tutors, problem solving tutors, investigations into coding bootcamps, and studies of equity in computing education. I think we’re doing some really exciting, powerful work, and can’t wait to share it with the world in the coming years.
  • I recruited new Ph.D. students to my lab from computer science, information science and education, developing a new pipeline.
  • I wrote a now widely trafficked FAQ on computing education research, which has been important not only in helping students see a path to participating in research, but also in helping my colleagues in HCI, Software Engineering, and Programming Languages understand its importance.
  • I recruited three undergraduate researchers to my lab to support the new projects, mentoring them on graduate school, research, and software engineering careers.
  • I redesigned my faculty website to better convey my focus, contributions, and impact in research for the next seven years.
  • I developed new collaborations with our new iSchool faculty Jason Yip, Katie Davis, and Negin Dahya, learning about new research areas of learning science, identity, and education. This has stretched my expertise, teaching me not only about new foundations of learning and education, but it has made me a more effective and inclusive teacher.
  • I partnered with a large group of computing education researchers to plan a big NSF project on advanced learning technologies for computing education. Here’s to hoping we get the funding to fuel those disruptive innovations I mentioned above.
  • I started working with Richard Ladner’s AccessComputing project to further equity in access to computing education amongst people with diverse physical and cognitive abilities. This connected me to yet another research community of accessibility researchers, and led to some exciting work by my student Amanda Swearngin on web accessibility that has the power to bring the web to everyone regardless of physical ability.
  • I read and wrote a lot about privilege, searching for the underlying privileges in my own life that allowed me to succeed academically. This has been emotionally draining, but empowering, helping me to see my role as a teacher from a more structural, systems view, and making me more excited about the leadership roles I’ll inevitably take on as senior faculty.
  • I attended CHI 2016 in San Jose, reconnecting with the research community after two years of startup life, reminding me of how wonderfully diverse and interdisciplinary our community is.
  • I taught my first high school computer science course to understand more about the challenges of teaching CS electives in schools, and made 11 mentoring relationships with South Seattle teens that I hope will reshape their paths toward college.
  • I went to the Snowbird conference to advocate for computing education research, connecting with hundreds of chairs and deans. This connected me to dozens of leaders at universities around North America, but also gave me a larger view of the barriers that computing education research will face in maturing.
  • I went to ICER 2016 and strengthened my ties with the growing computing education research community, finding partners in my long-term mission.
  • I connected with, Microsoft, and Google efforts in computing education, disseminating my research and others’ to their product design and policy efforts.
  • I redesigned my design thinking course, INFO 360, to incorporate everything I learned about learning this year. It’ll be a better class, while taking less time to teach. It will also be a solid foundation for the other sections of the course, and perhaps HCI courses around the world.
  • I got married, went on a beautiful honeymoon to Croatia and Slovenia, and bought a house in Seattle’s crazy housing market. Don’t worry, no public dollars in any of those.

What’s the ROI of my six months of two-thirds salary to the university and the public? Was all of the above worth ~$80K in salary, benefits, and guest faculty in my absence? I think so. In the short term, I mentored and taught dozens of students and made important discoveries that have already impacted efforts at, Microsoft, and Google. And like I claimed above, this is a long-term investment. Because I had this sabbatical, in 10 years, you’ll start to see more effective and inclusive teaching of computer science, which means a more computing-literate humanity and a more effective workforce of software engineers. I hope this will mean a better, safer world that creates computing infrastructures and institutions that reflect all of us, rather than just the privileged few. I predict that the $80,000 the university spent on my time will easily return at least 10x in economic productivity over the next decade.

Yes, not every sabbatical is like this. There are some faculty that have extended vacations and get paid a small portion of their salary to do so. Sometimes, rest is what busy professors need to be great researchers and teachers. That said, I think that every sabbatical has the potential to be extremely valuable to society, and it’s a professor’s responsibility to make it so.

Disagree? I want to hear from you!

ICER 2016 trip report

ICER 2016 (the ACM International Computing Education Research conference) just ended a few hours ago and I’m enjoying a quiet Sunday afternoon in Melbourne, reflecting on what I learned. Since you probably didn’t get to attend, here’s my synthesis of what I found notable.

First, a meta comment. As I’ve noted in past years, I still find ICER to consistently be the most inclusive, participatory, rigorous, and constructive computer science research conference I attend (specifically relative to my multiple experiences at CHI, UIST, CSCW, ICSE, FSE, OOPSLA, SIGCSE, and VL/HCC). There are a few specific norms that I think are responsible for this:

  • Attendees sit at round tables and after each talk, discuss the presentation with each other for 5 min. After, people ask questions that emerged. This filters out nasty and boring questions, but can also lead to powerful exchange of expertise and interesting new ideas. It also forces attendees to pay attention, lest they lose social capital from having nothing to say.
  • Session chairs explicitly celebrate new attendees, first-time speakers, and first-time authors publicly, creating a welcoming spirit for many new attendees.
  • The program committee regularly accepts excellent replications, creating a tone of scientific progress rather than cult of identity
  • The conference gives two awards: one for rigor and one for provocative risk taking, incentivizing both kinds of research necessary for progress
  • The conference has a culture of senior faculty as mentors to the community, not to just their students. All doctoral consortium all the time.
  • The end of each conference is an open session for getting feedback on research designs for ongoing work and new ideas, creating a community feeling to discovery.

There are always things to improve about a conference, but most of the things I’d improve about ICER are things that other conferences should do too: shorter talks, more networking, move to a revise and resubmit journal-style model, allow authors of journal papers to present, and find ways to include authors of rejected work.

Now, to the content. The program was diverse and rigorous. A number of papers further reinforced that perceptions, stereotypes, and beliefs are powerful forces not only in engagement in computing education, but also in learning. Shitanshu Mishra showed that this is true in India as well, where culture creates even stronger stereotypes around the primacy of computer science as the “best” degree to pursue. Kara Behnke provided some nice evidence that AP CS Principles, with its focus on the relationship between CS and the world, powerfully changed these perceptions and reshaped the conceptions of computing that students took into their post-secondary studies, whereas Sabastian Ericson talked about the role of 1-year industry experiences during college in students’ reframing of the skills they were acquiring in school. This year’s best paper award went to Alex Lishinki et al., who provided convincing evidence that self-efficacy is reshaped by course performance, and that this reshaping occurs differently across men and women, and across exams and projects.

Many papers investigated problem solving through different lenses. Our own paper explored student self-regulation, which we found was infrequent and shallow, but predictive of problem solving success. Efthimia Aivaloglou presented an investigation into Scratch programs, replicating prior work that showed that most Scratch programs are simple, lack much use of programming constructs, and lack conditionals almost entirely, suggesting that the majority of use lacks many of the features of programming found in other learning contexts. Several also presented more evidence that students don’t understand programming language semantics very well, learn programming language constructs at very different rates (and sometimes not at all), but that subgoal labeled examples and peer assistance can be powerful methods for increasing problem solving success and learning.

My favorite paper, and the paper that won the John Henry award for provocative risk-taking in research, was a study by Elizabeth Patitsas et al., which gathered convincing evidence from almost two decades of intro CS grades at UBC that, despite broad beliefs among faculty that grades are bimodal because of the existence of a “geek gene,” there is virtually no empirical evidence of this. In fact, grades were quite normally distributed. Elizabeth also found evidence that the more teachers believed in a “geek gene,” the more likely they were to label distributions as bimodal (even when those distributions were not bimodal, statistically). This is strong evidence that not only do students’ prior beliefs powerfully shape learning, but teachers’ prior beliefs (and confirmation bias in particular) powerfully shape teachers’ attitudes toward their students.

The conference continues to raise the bar on rigor, and continues to grow in submissions and accepted papers. It’s exciting to be part of such a dedicated, talented community. If you’re thinking about doing some computing education research, or already are and are publishing it elsewhere, consider attending ICER 2017 in Tacoma next year, August 17-20. My lab is planning several submissions, and UW is likely to bring a large number of students and faculty. I hope to see you there!

Textbooks are awesome

I’ve been writing a lot about big ideas like research, policy, and expertise lately. Today, I’d like to take a step down from the big ideas and lightheartedly discuss something real, tangible, and ubiquitous that I love: textbooks.

Yeah, those big, heavy, out-of-date, expensive printed books that contain a substantial portion of all of the human knowledge ever discovered. Not those e-books, not these e-textbooks, and definitely not websites masquerading as textbooks. I’m talking about the pile of books in millions of students’ bags, desks, and bookshelves.

Awesome? How are textbooks anything but terrible, let alone awesome? Let me count the ways:

  • You can read at your own pace. Slides, videos, and other time-based media out of your control are a pain to navigate. Textbooks move at exactly the pace you want to move.
  • You can see everything all at once. All the content is there, always. There are no animations, transitions, or other segmentation of content that you have to wait for, and so you can browse it.
  • You can access any piece of content at any time. Want to skip ahead to chapter 5? Go for it! No need to wait for the professor to get to week 7 of class, for the e-book to “unlock” the next section, or for that slide transition to finish animating. Satisfy your curiosity now.
  • You can memorize the location of any content. No need to recall which day a teacher discussed an idea and ask for their slides. No need to search for that video you remember on Khan Academy and try to find that segment that was particularly instructive. In fact, there’s probably still a stain from that meatball sub on the page that will give you a subtle cue of the places you’ve already been and what content it was related to.
  • The screens are HUGE. Some of these buggers are up to 16″, which is basically like having a large laptop to view your content.
  • The screens are very high contrast. You can read in pretty much any light except for no light. There’s no glare. And that brings me to…
  • The battery life is infinite. Crank that screen brightness up all the way, cuz this thing will last forever. (Unless you drop it in the toilet).
  • The information density is off the charts when compared to slides, handouts, and whiteboards. You can fit text, images, diagrams, commentaries, citations, and a million other kinds of content that these other media are terrible at supporting.
  • You can zoom into content by just moving your head. No buttons, no keyboard shortcuts to memorize, no awkwardly standing up in the middle of class to get closer to the projector screen. It’s like a VR headset: you just move your body and it works.

Yes, the Internet has some advantages. It has a lot more information and you can carry it around more easily. It can also be updated more easily, which is handy, since science and knowledge are always evolving. But isn’t a textbook plus a smartphone even better? Imagine an augmented reality textbook application that allows you to see edits from newer editions or interactive content in diagrams. Imagine social commentary on textbooks for your current page. Imagine scanning a citation and seeing the research paper the statement was based on.

Who does research on these things? Where’s my augmented reality textbook app? Do any of these features better promote learning? Where is my textbook hoverboard?

Snowbird trip report: automation, education, and academia


This past Sunday, Monday, and Tuesday I attended the biennial Snowbird Conference, which is sponsored by the Computing Research Association and brings together chairs, deans, directors, and other leaders of North American computer and information science units, as well as leaders of industry research labs. I was very fortunate to have been invited to speak on a panel about computing education research (which I’ll discuss shortly). This also meant that I had the privilege of interacting with hundreds of the world’s most powerful leaders in computer and information science. The trip was more than enlightening; it was pivotal: I learned just how small a world CS is in practice, especially relative to the massive impact that computing is having on society. The trip will change how I conduct my research, how I view my academic community, and how I view myself as a scholar and a leader.

Let’s dig in to some of the experiences I had. First and foremost was the peculiar dynamic of being one of the few people at the conference who was not a current or former chair or dean. Imposter syndrome has been on my mind a lot lately in my teaching, and so I decided before coming that I’d view myself (and present myself) as what I want to be: an eager early-career faculty member excited to learn and contribute and passionate about teaching computing leaders about computing education research (rather than a quiet, intimidated fish out of water that I’m prone to being). By now I’m used to my baby face subverting any effort to appear authoritative on anything, but I was really pleased with how open and supportive the group was, and how receptive they were to my contributions and my expertise. After all, why would I be there if I had nothing to contribute?

Throughout the three days, I had conversations with about 40 chairs and deans, ranging from those at prestigious institutions such as Harvard, Yale, Princeton, and Northwestern, to the long tail of excellent CS workhorses such as Texas A&M, Purdue, Harvey Mudd, CU Boulder, and UC Santa Cruz. I also connected with chairs more local to the University of Washington, including Joe Sventek at the University of Oregon and James Hook at Portland State University. This was a networking fantasy for anyone interested in the infrastructure of academia and research policy, not to mention an incredible chance to have substantive research and policy conversations across computing with some of the best researchers in the world.

I learned much about the life of computing leaders. For example, most chairs and deans, at the most basic level, are necessarily concerned with the slow building of academic infrastructure. The problems they solve are complex, over-constrained, and long term, including issues of space, faculty renewal, the future of the field, and the politics of federal research funding, along with internal faculty governance and conflict resolution. Second, the majority of chairs and deans that I spoke to did not describe starting their roles with a particular passion or interest in solving these problems, but many found unexpected joys in their ability to have a big impact on their campus and the broader academic community. Some described the long three- or even four-year struggle to even understand the scope of the job they’d accepted, and felt that once they were truly competent at the job, they were often at the end of their terms, passing the baton to the next person. These are positions that require constant learning, just like faculty positions, but with bigger, often permanent consequences.

It’s difficult to discuss the experience of chairs and deans without mentioning some of the stark differences between those at top-ranked universities and everyone else. Leaders at these premier institutions tackled big, complex, often global initiatives, whereas those at lower ranked institutions were focused on faculty recruiting, retention, and student enrollment issues. These sounded to me like substantially different roles, one facing outward to the world and the other facing inward toward operations. I actually found it emotionally difficult to watch these differences manifest in the social context of a conference, with hallway conversations often segregated along these different responsibilities and university reputations. The irony of rankings reified spatially, just before a session on the flaws of university rankings, was too much for me.

The conference was not all networking. The content of the prepared panels and talks was explicitly focused on three big topics: 1) the future of computing in academia, 2) the political and policy factors that might affect computing in academia, and 3) disruptive perspectives on computing.

Let’s start with the future of computing. Speakers across the conference discussed two big trends: the middle class is disappearing because of automation, and the world wants to learn computing. These two factors are related to each other: as more jobs are displaced by automation, more people are eager to learn computing so that they don’t get displaced, and this is leading to massive increases in enrollment in CS programs at the undergraduate and graduate levels (not to mention bootcamps, summer camps, etc.). This has also caused many other disciplines to begin to require computer science classes, which has dramatically increased the number of non-majors in computer science classes across the curriculum. All of this upstream activity is leading to serious downstream pressure on teaching load and class sizes, which is leading to fears about lower student retention and decreased student diversity. Simultaneously, many CS departments are engaging in new initiatives with the goal of increasing enrollment further, such as UIUC’s CS+X initiative, which creates a whole collection of mixed computing degrees with other disciplines, and dozens of new professional data science programs.

At the same time, trends in federal funding are putting additional pressure on CS departments. Peter Harsha from CRA explained that federal funding is flat and likely to stay flat given the U.S. political climate in Congress. A university relations rep from Microsoft Research read a statement that basically suggested Microsoft and the rest of industry is unlikely to compensate. There are opportunities to raise more money from NIH and foundations, but these often require much more applied, problem-driven research than the type of basic research most faculty prefer. This status quo would be survivable, but computer science departments are growing to respond to the massive increases in enrollment, and much of this growth is in tenure-track faculty. This means that there is going to be increased pressure on everyone to raise money from a pie that isn’t growing. This increased competition means that faculty will simply be raising less money, funding fewer Ph.D. students, and getting less research done, while also spending more of their time teaching and fundraising. Martha Pollack strongly recommended that chairs and deans start to carefully reconsider what constitutes a tenurable record, that they sensitize their faculty to these new, potentially lower standards, and that we all start writing tenure and promotion evaluation letters with these systemic effects in mind.

Many people at the conference believed that the engine for all of this change is actually computing itself. Tom Mitchell from Carnegie Mellon presented convincing evidence from labor economists (among others) that the stereotype about computing killing jobs is most certainly true: it does not appear that for every job automated, another is created. This is redistributing wealth more than ever to the wealthiest in the world, leaving only jobs that require extremely rarefied skills and thus pay extremely high salaries, or service jobs that are simply too difficult for computing to yet perform, but are not particularly valuable, and therefore not well paid. The middle class jobs that we still have are all likely to be displaced by automation in the coming decades. In a circular way, this erosion of the middle class is leading to the boom in CS enrollment, which in turn fuels further erosion. The panel discussing these trends admitted that no one knows the end game. It may be that the invention of replacement jobs is laggy (as it has been with past technological change), and that it will happen eventually after a long period of social strife. To survive this social strife, we may face the necessity of raising the education bar for society again, which will involve transforming both K-12 and higher education for this new reality. Alternatively, education may only exacerbate the problem further, and we’re going to have to reinvent how we live, exploring minimum incomes and other new forms of public safety nets to compensate for the net decrease in demand for human labor. Beth Mynatt of Georgia Tech suggested a testable hypothesis: fully automated systems, rather than mixed-initiative systems with humans in the loop, may be more prone to eliminating jobs instead of augmenting the jobs that already exist.

As an explanation for these trends, Kentaro Toyama gave a rousing, witty talk on “amplification” (a theory I believe is from science and technology studies). The basic claim is that technology is rarely the cause of social change, but rather an amplifier for social change, whether that change happens to be positive or negative. For example, if a democratic uprising is fomenting in an autocratic country, Twitter is not the cause of the uprising, but rather an amplifier, augmenting human ability to communicate, organize, and effect the change they desire. According to amplification theory, all of this organizing would have happened without Twitter, but it may have happened on a smaller scale and been less visible and effective. Similarly, if educational technology like MOOCs or some of my recent research on coding tutorials advances human ability to learn, it will primarily enhance those who have access and opportunity to learn, amplifying the structural inequities that already exist in society, rather than disrupting them. Kentaro’s argument, therefore, was that computing needs to move past its belief in technology as an unqualified agent of social change, and start approaching its research with more sober, realistic expectations of possible impact. He suggested this might start with collaborations with those in other disciplines who better understand social change.

All of this discussion of social change, enrollment, and education was perfectly complemented by a broad undercurrent of discussion on computing education and computing education research (my reason for attending). I spent much of my three days at Snowbird advocating for the research area, for policy change, and explicitly for the hiring of tenure-track computing education research faculty. On Monday, I sat on a panel with Mark Guzdial (Georgia Tech), Scott Klemmer (UCSD), Ben Shapiro (CU Boulder), and Diana Franklin (Chicago). To a full room of nearly eighty chairs and deans we made a systematic argument that computing education research is a new, exciting, fundable, fundamental, rigorous, and essential area of computer science. We focused on more tactical reasons to hire tenure-track faculty (e.g., invest while a rising stock is cheap, claim research area territory while you can, take global leadership on an issue of significant public interest), while the rest of the themes at the conference bolstered the more fundamental concerns in social change that demand an increased focus on computing education excellence and scale. To my surprise, the audience was not skeptical, but rather curious and eager to envision what a computing education researcher might bring to their department.

Complementing our panel and my many one-on-one discussions was a reception organized by Jan Cuny of NSF helping to educate chairs and deans about the White House’s CS For All budget proposal. This initiative aims to train tens of thousands of high school CS teachers, giving access to computing education to every public school in the U.S. Each table at the reception had an expert and a topic, and attendees sat with cookies while the expert discussed what the funding would mean to K-12 education, how this change would necessarily transform college-level computing education for the better, and what chairs and deans could do to ensure the funding and the policy are passed and successful. I had a table full of mostly skeptics, but found that a few key points about computing education research were persuasive: 1) the basic research is vastly more rigorous than it used to be, and 2) there are fundamentally interesting computer science research questions to answer. I think most left convinced that, with the right candidate, it might be a smart move for their department to hire at least one tenure-track junior professor in the next decade.

I was heartened by the reactions I received about computing education research. Most of the chairs and deans I met at top departments were surprisingly supportive, and some even said they could see hiring in the area if it starts regularly attracting and producing world class research talent. Most of the other chairs and deans I met ranged from curious to aggressively interested, seeing clear opportunities to collaborate with colleges of education to create joint positions, or even full positions in CS as a way of differentiating themselves from their peer institutions. Most were connecting the dots between the next decade of social change discussed at the conference, the impending policy changes in K-12, and the severe lack of expertise that most CS departments have in dealing with these coming changes.

These conversations led to a clear call to action for myself and my computing education research peers. The conditions are right for computing education research to take its next big leap in maturity, but this only happens if there are Ph.D. students who can show that computing education research is exciting, groundbreaking, and rigorous. It’s up to me and the many computing education research faculty in the world to show, in the next five years, what the field is capable of, so that computer science permanently invests in it, giving it the stable leadership needed to effectively retrain the world in computing.

Stepping back, I can’t imagine a more productive three days in my professional life. I have an entirely new image of academia, I made (hopefully good) impressions with hundreds of leaders in computing, and I have a much greater clarity of my research and service goals for the next decade. What a perfect way to end my exhilarating return to academic life from industry and sabbatical!

My sabbatical stretch goal: teaching high school CS

Sabbaticals are usually a time for faculty to escape from the daily grind of teaching and service to read, write, and discover new perspectives on their scholarship. Some people travel to other universities to immerse themselves in other cultures and ideas. Others go to industry to find new collaborators. Others still forego a full salary and just use it as time to recover from six exhausting years of modern tenure-track faculty life.

I decided to use it to teach high school students computer science. Eek!

Now, I knew this would be crazy. I knew it wouldn’t be recovery time. And I knew it could be exhausting. At some earlier point in my career, I thought it might be fun to do a whole year of high school teaching, just to really stretch my mind and support my pivot to computing education research. I eventually decided to reduce my ambitions, and instead teach a short six-week summer elective for the University of Washington’s Upward Bound program.

Upward Bound is a federally-funded program that helps first-generation college students prepare for and gain admission to college. Our program primarily serves south Seattle, which tends to have lower socioeconomic status and greater racial diversity than the rest of mostly white Seattle. This also fit with some of my sabbatical forays into privilege, giving me a chance to encounter students without most of the privileges I had as a high school student.

I won’t go into too much detail about the course itself, as I’m planning on publishing some work on data I’m gathering as part of the students’ experience. But I will say that it’s a web design course and I’m investigating the benefits of explicitly bringing ideas of community of practice and identity development into the classroom, teaching the students about privilege, imposter syndrome, stereotype threat, and prior knowledge. As I write this, I’ve just finished our third week, with three more weeks to go.

I’d like to share some observations I’ve made as a first time high school CS teacher. Most of these are going to read as pretty obvious to anyone who’s done this before. The notable thing to me was how salient these issues have become in my mind.

  • There is a massive amount of prior knowledge required to successfully learn HTML and CSS. To name some of it: English, touch typing, spelling, case, the concept of files and file extensions, typography, the idea that browsers request files, what “quotes” are, the idea that content can be represented symbolically, the idea that content can be retrieved by name, the concept of separating content from presentation, etc.
  • There’s nothing natural or obvious about any of this required prior knowledge. It’s all artificial and if students haven’t picked it up in classes or at home, they’re going to have to pick it up while they learn.
  • Some students need to spin in a chair to pay attention. Others need to make eye contact with the teacher. Most won’t pay attention unless they’re extremely motivated or extremely compliant. Using a student’s name gets their attention for a minute, but not much longer. The Gates Foundation was right: teacher classroom management skills are a fundamental prerequisite for effective teaching.
  • Most concepts in computer science and programming languages are incredibly boring and arbitrary to students, even when they’re placed in personally meaningful contexts. One of my students is deeply passionate about shoes and is therefore making a website about shoes, but even this isn’t enough to make the difference between a “div” and a “p” tag interesting.
  • Showing students a rich, authentic picture of the larger world and connecting it to their knowledge seems to be a really powerful way of motivating students to learn boring things. When I can draw a line between CSS class selector rules and the path to a job that might bring a student’s family out of poverty, that student will actively engage in learning.
  • Teachers are powerful. When I genuinely care about each student and their learning, and I show this to them, students respond with engagement, respect, and trust. When I can find the single statement of positive feedback that a student needs to change their self-concept, I can pivot their interests and passions. That’s a scary amount of power.
  • Everything above breaks down considerably when there are more than about 10 students in class. I have 11, and when someone is absent, my ability to attend to each individual feels profoundly different.
  • My 15 years of experience with everything that can possibly go wrong in web development matters. It allows me to debug any problem a student encounters quite quickly, and to formulate, on demand, an explanation that builds new, more robust knowledge in the student’s head, helping them not only fix the problem now, but prevent it in the future.
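To make the div-versus-p point above concrete, here is the kind of minimal page a student’s project might start from (the file name, class name, and content are illustrative, not from an actual student’s work). Notice how much prior knowledge even these few lines assume: quotes, the dot in a class selector, nesting, and the separation of style from content.

```html
<!DOCTYPE html>
<html>
  <head>
    <style>
      /* A class selector: styles every element whose class attribute is "highlight" */
      .highlight { background-color: gold; }
    </style>
  </head>
  <body>
    <!-- <div> is a generic container with no meaning of its own -->
    <div class="highlight">
      <!-- <p> specifically marks a paragraph of text; the distinction is semantic, not visual -->
      <p>Air Jordans changed sneaker culture forever.</p>
    </div>
  </body>
</html>
```

From a student’s perspective, the div and the p look identical on screen until styles distinguish them, which is exactly why the difference feels arbitrary without a larger motivating context.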

I’ve worked many jobs in my lifetime, from blue collar manual labor, to service, to engineering and management. Teaching high schoolers computer science is by far the most difficult thing I have ever done, despite my vast experience as a computer scientist and a software developer.

This has changed some of my perspectives in computing education research. For example, I’ve actually become more skeptical of the feasibility of training high school computer science teachers. I think there’s too much domain knowledge required to teach it well and at the scale of most 25-30 student classrooms. I also think that teaching software engineers to be CS teachers (even if they were willing to take a massive pay cut) seems hard too. Teaching is such a difficult skill, I think an engineer would need to make a serious pivot to become a great teacher. Even if we get 25,000 teachers, I suspect they’re going to have a very difficult time teaching well. We’ll have a lot of ineffective teaching. (Which is I suppose what we have in other disciplines too).

Evidence-based, interactive, highly intelligent curriculum may help, where the instructional materials themselves have the domain knowledge, and teachers can focus on classroom management. I’m investigating some of these opportunities in my lab. But it’s going to be a long road.

Software and globalism sitting in a tree, k-i-s-s-i-n-g

Immigration, globalism, and nationalism have been in the news a lot lately. Now, this blog is usually about code and so I wouldn’t normally talk about politics, but there are interesting intersections between code and the global economy that I think are worth surfacing. I haven’t thought through them very carefully yet, so forgive the fuzziness of some of these ideas, and help me sharpen them if you can.

First, code is what makes much of our global economy possible. The ships, trains, boats, planes, and trucks that move goods and people between our cities, countries, and continents are only possible because of the optimization possible from digital logistics. I’ve learned about countless local Seattle companies that either manage shipping logistics or create software to manage and streamline shipping logistics. Software is at the heart of accelerating the movement of goods and people, making it possible to have large volumes of products and people move through multiple countries, creating those jobs in developing countries that so many people are sorry to see move elsewhere, and moving people to those jobs. This means that the rise of information technology as a key infrastructure places it at the center of migration and global competition.

Another trend is the dislocation of work, home, and commerce made possible by software and the internet. Many people no longer have to live where they work, working 100% remotely, as many teams are already distributed across the globe. Many people can work where they live, writing, editing, designing, and developing at a distance, where 100% of the value they provide can be shoved through fiber optics. With businesses like Airbnb, some people even have property that they don’t live or work in, renting it out without ever meeting the renters face to face. My mom recently started hosting an Airbnb, and she set up a digital lock that she can update over the internet. She wants to meet her guests, but she doesn’t have to: they can slip in and out without ever meeting her, allowing her to move anywhere in the world and still collect her rental income. If we eventually see driverless cars (and buses, etc.), people will have the same ability to rent out their vehicle (or fleet of vehicles) and live anywhere, just as German companies like Car2Go and ReachNow already do in the United States.

That people still migrate to jobs is a sign that not every kind of work can (yet) be digitized. Otherwise, why would they leave their home? Simultaneously, the number of job types that aren’t being digitized is rapidly dwindling (relative to the speed of history), meaning that migration will likely slow as well. This is going to force an interesting paradox on pay: should people be compensated for information work based on the work, or based on the local economy in which they reside? Should a Mexican who owns a driverless Uber car in San Francisco be paid the same rate as someone in San Francisco who owns the same kind of driverless Uber car, or should they be paid less, because their cost of living is so much lower?

What is the end game with all of this? I predict fewer jobs (because of automation), lower migration (because of fewer jobs and a higher proportion of digitized jobs), and even greater income inequality as those with neither physical jobs nor digital jobs struggle to find work. Those with physical jobs will face even fiercer competition for those jobs as the world population increases, migration becomes easier, and the number of those jobs declines, whereas those with information jobs will face increased competition as people realize that education is the only path to getting those information jobs. There will be a large number of those in the middle class who can’t get lower paid jobs (because they’re elsewhere in the world) and can’t get higher paid jobs (because they don’t have the education).

What will these people do? Some have suggested that humanity will continue its shift toward investment in leisure. But does the world really have infinite capacity for new television, games, and books? It seems that globalization has only led to a consolidation of attention, with 80% of the world watching the same three television shows, plus a long tail of niche art that doesn’t sustain most people’s income.

My prediction? The end game of software-driven globalization is a return to local economies. Politically, I’m not at all against globalization and see huge benefits for increasing the global quality of life, but I think the only possible path is that most people start local small businesses offering goods and services that are geographically relevant. Software will also make this more feasible, reducing barriers to starting and sustaining those local businesses. Think of Portland, OR, for example, with its abundance of local restaurants, cafes, bike maintenance shops, and boutique local manufacturing shops. These are businesses that work because supply and demand are scoped to a city; there’s nothing someone in Frankfurt can do to compete with Screen Door. The end game could easily end up improving local communities, bringing people closer to where they live, and increasing the cost of moving anywhere else.

If this is right, the upheaval we see in politics is more due to the friction of transition, rather than some permanent instability. I do believe that software is at the heart of this transition, and that it’s just going to take a century before the implications of the above trends work their way through everyday life. The industrial revolution took 50 or 60 years; there’s no reason to think that the revolution from information technology won’t take just as long or longer, given its far reaching applicability. Also, given the slow change of culture and human behavior, there’s also no reason to believe we can accelerate this change. After all, human psychology is the key driver of culture change; technology is just a catalyst.

So let’s start the clock at 1980 and count. I predict that by the time I’m 80 (in 2060), things will be stable. Until then, we’re going to continue to see rapid change, political upheaval, and profound transformation of how we live. It’ll be an exciting, but also frightening time to live. I hope that by the time I die, we’ll see the end of the tunnel, and a glimpse of the next phase of human civilization.

What makes a great software engineer?

A few years ago my Ph.D. student Paul Li was looking for a dissertation topic. At the time, he was fascinated by the decisions that software engineers have to make and the data they use to make them. At the same time, I was starting to manage a team of six engineers myself at AnswerDash and thinking about the broader issues of developer productivity and software team management. As it turns out, few researchers had holistically investigated what skills a software engineer needs, beyond vaguely talking about “good communication skills” and the ability to write “good code quickly”. Paul decided that this would make a great dissertation topic, and I agreed, and he set out to investigate: what makes a great software engineer?

Paul defended his dissertation last week and just put the finishing touches on the document, which describes three major studies of software engineering expertise: an interview study of 59 senior engineers and architects at Microsoft, a survey of almost 2,000 Microsoft engineers, and another interview study of 46 non-engineers on their views of software engineers and their required skills. What did he discover?

While it’s hard to summarize the incredible nuance in his 250 pages of discovery, several key findings jumped out to me:

  • Across all three studies, it was clear that both software engineers and non-software engineers viewed the ability to quickly and correctly write code that met requirements as a baseline requirement. It wasn’t enough to qualify someone as a “great” engineer, but rather a necessary but insufficient condition for working as an engineer.
  • Engineers need to be pliable in their beliefs and their skills. In their view, the software industry moves fast enough that they quickly have to discard old knowledge and replace it with new knowledge, constantly learning not only about new languages, libraries, and architectures, but also new business ideas, market priorities and other workflows. An engineer that struggled to learn, learn quickly, and learn enthusiastically (rather than reluctantly) could not be a great engineer.
  • Great engineers can reason about multiple levels of abstraction seamlessly and lucidly, linking the lowest levels of implementation to the highest levels of product impact in a marketplace. This ability was considered rare but extremely valuable, because it supported the central decision-making tasks of engineers, which required this multi-level abstract reasoning. Engineers who got lost in the details and couldn’t connect them to the larger system of concerns in the business were not considered great.
  • Because software engineering was an inherently collaborative endeavor, engineers viewed the constructive facilitation of the work of others as critical. This was not only in a technical sense (providing information and updates to other engineers who depended on code an engineer maintained), but in a social sense (creating a psychologically safe environment in which ideas could be shared, helping others to learn and be productive, etc.).
  • Non-software engineers described many of the same desirable attributes for software engineers as software engineers did, but they also raised the issue of respect. They observed a clear ranking of different types of expertise in terms of value to the organization, with engineers of all kinds viewed as essential to shipping a product, but other types of expertise as peripheral. This revealed a bias toward functionality as value, overlooking other types of UX and product marketing value.
  • Across the diverse organizations of Microsoft, there was broad agreement on the above, suggesting that either these are fundamental skills to software engineering, or fundamental skills to the culture of Microsoft.

Importantly, many of the attributes of great engineers that Paul discovered are not attributes that we know how to train for, measure, or relate to actual software engineering outcomes. Much of his work raises questions about how to use these discoveries in education, hiring, promotion, and research.

Of course, because the work was focused on Microsoft, there are a number of questions about its generalizability. It’s possible that these notions of engineering greatness are particular to Microsoft and might be different in other companies, particularly in non-software companies that have in-house software teams. It’s also possible that there are cultural differences in settings with different norms and values on interpersonal relationships. We need more research on other types of software companies to know just how fundamental these attributes are to software engineering expertise.

Paul’s work has already had significant impact, reaching tens of thousands of engineers through an ACM Learning Webinar and impacting the design of the Academy for Software Engineering curriculum in New York. Microsoft has also been eager to apply some of his findings to hiring and recruiting.

Do these findings seem consistent for software companies you work in or are familiar with? If not, what’s different? We’re eager to hear your reactions!