My sabbatical stretch goal: teaching high school CS

Sabbaticals are usually a time for faculty to escape from the daily grind of teaching and service to read, write, and discover new perspectives on their scholarship. Some people travel to other universities to immerse themselves in other cultures and ideas. Others go to industry to find new collaborators. Others still forego a full salary and just use it as time to recover from six exhausting years of modern tenure-track faculty life.

I decided to use it to teach high school students computer science. Eek!

Now, I knew this would be crazy. I knew it wouldn’t be recovery time. And I knew it could be exhausting. At an earlier point in my career, I thought it might be fun to spend a whole year teaching high school, just to really stretch my mind and support my pivot to computing education research. I eventually decided to reduce my ambitions and instead teach a short six-week summer elective for the University of Washington’s Upward Bound program.

Upward Bound is a federally-funded program that helps first-generation college students prepare for and gain admission to college. Our program primarily serves south Seattle, which tends to be lower in socioeconomic status and more racially diverse than the rest of mostly white Seattle. This also fit with some of my sabbatical forays into privilege, giving me a chance to encounter students without most of the privileges I had as a high school student.

I won’t go into too much detail about the course itself, as I’m planning on publishing some work on data I’m gathering as part of the students’ experience. But I will say that it’s a web design course and I’m investigating the benefits of explicitly bringing ideas of community of practice and identity development into the classroom, teaching the students about privilege, imposter syndrome, stereotype threat, and prior knowledge. As I write this, I’ve just finished our third week, with three more weeks to go.

I’d like to share some observations I’ve made as a first-time high school CS teacher. Most of these will read as pretty obvious to anyone who’s done this before. The notable thing to me was how salient these issues have become in my mind.

  • There is a massive amount of prior knowledge required to successfully learn HTML and CSS. To name some of it: English, touch typing, spelling, case, the concept of files and file extensions, typography, the idea that browsers request files, what “quotes” are, the idea that content can be represented symbolically, the idea that content can be retrieved by name, the concept of separating content from presentation, etc.
  • There’s nothing natural or obvious about any of this required prior knowledge. It’s all artificial and if students haven’t picked it up in classes or at home, they’re going to have to pick it up while they learn.
  • Some students need to spin in a chair to pay attention. Others need to make eye contact with the teacher. Most won’t pay attention unless they’re extremely motivated or extremely compliant. Using a student’s name gets their attention for a minute, but not much longer. The Gates Foundation was right: teacher classroom management skills are a fundamental prerequisite for effective teaching.
  • Most concepts in computer science and programming languages are incredibly boring and arbitrary to students, even when they’re placed in personally meaningful contexts. One of my students is deeply passionate about shoes and is therefore making a website about shoes, but even this isn’t enough to make the difference between a “div” and a “p” tag interesting.
  • Showing students a rich, authentic picture of the larger world and connecting it to their knowledge seems to be a really powerful way of motivating students to learn boring things. When I can draw a line between CSS class selector rules and the path to a job that might bring a student’s family out of poverty, that student will actively engage in learning.
  • Teachers are powerful. When I genuinely care about each student and their learning, and I show this to them, students respond with engagement, respect, and trust. When I can find the single statement of positive feedback that a student needs to change their self-concept, I can pivot their interests and passions. That’s a scary amount of power.
  • Everything above breaks down considerably when there are more than about 10 students in class. I have 11, and when someone is absent, my ability to attend to each individual feels profoundly different.
  • It’s important that I have 15 years of experience with everything that can possibly go wrong in web development. It allows me to debug any problem a student encounters quite quickly, and to formulate, on demand, an explanation that builds new, more robust knowledge in a student’s head, helping them not only fix the problem but prevent it in the future.

I’ve worked many jobs in my lifetime, from blue-collar manual labor to service, engineering, and management. Teaching high schoolers computer science is by far the most difficult thing I have ever done, despite my vast experience as a computer scientist and a software developer.

This has changed some of my perspectives in computing education research. For example, I’ve actually become more skeptical of the feasibility of training high school computer science teachers. I think there’s too much domain knowledge required to teach it well at the scale of most 25-30 student classrooms. I also think that teaching software engineers to be CS teachers (even if they were willing to take a massive pay cut) seems hard too. Teaching is such a difficult skill, I think an engineer would need to make a serious pivot to become a great teacher. Even if we get 25,000 teachers, I suspect they’re going to have a very difficult time teaching well. We’ll have a lot of ineffective teaching. (Which, I suppose, is what we have in other disciplines too.)

Evidence-based, interactive, highly intelligent curriculum may help, where the instructional materials themselves have the domain knowledge, and teachers can focus on classroom management. I’m investigating some of these opportunities in my lab. But it’s going to be a long road.

My sabbatical research pivot

Since I started research back in 1999 as an undergraduate, I’ve always been intrigued by the goal of helping people write code more productively. Sometimes I ran studies that tried to identify barriers to their productivity. Sometimes I made tools that helped them navigate faster, debug faster, test faster, and communicate faster. Every one of my research efforts was aimed explicitly at speed: the more a developer can do per unit time, the better off the world is, right?

Then something changed. I founded a software startup and led its technical and product endeavors. And I watched: how much did developer productivity matter? What influenced the quality of the software? What was the real value of faster developers?

In my experience, faster development didn’t matter much. Developers’ speed mattered somewhat, but only to the extent that we made effective product design choices based on a valid understanding of customer, user, and stakeholder needs. It didn’t matter how fast they were moving if they were moving in the wrong direction. And developer tools—whether a language, API, framework, IDE, process, or practice—mattered only to the extent that developers could learn to use these tools and techniques effectively. Many times they could not. Rather than faster developers, I needed better developers, and better developers came through better and faster learning.

Furthermore, I couldn’t help but wonder: what part of this job was fulfilling to them? It certainly wasn’t writing a lot of code. There was always more code to write. In fact, it was the moments they weren’t coding—when they were reading about a new framework, picking up a new language, trying a new process—that they enjoyed most. These were moments of empowerment and moments of discovery. These were moments of learning. Around the same time, my student Paul Li was beginning to investigate software engineering expertise, and finding that, much as I had experienced, it was the quality of a developer’s code, and their ability to learn new coding skills, that were critical facets of great engineers. Better learning allows developers to be not only more productive and more effective, but also more fulfilled. Better learning makes better developers, who envision better tools, better processes, and better ideas. As obvious as it should have been to someone with a Ph.D. in HCI, it was the human in this equation that was the source of productivity, not the computer. Like most things computational, developer tools are garbage in, garbage out.

After I stepped down as AnswerDash CTO and began my post-tenure sabbatical, it became clear I had to pivot my research focus. No more developer tools. No more studies of productivity. I’m now much less interested in accelerating developers’ work, and much more interested in shaping how developers (and developers-in-training) learn and shape their behavior.

This pivot from productivity to learning has already had profound consequences for my research career. For a long time, I’ve published in software engineering venues that are much more concerned with productivity than learning. That might mean I have less to say to that community, or that I start contributing discoveries that they’re not used to reading about, evaluating, or prioritizing. It means that I’ll be publishing more in computing education conferences (like ACM’s International Computing Education Research conference). It means I’ll be looking for students who are less interested in designing tools that help developers code faster, and more interested in designing tools that help developers of all skill levels code better. And it means that my measures of success will no longer be about the time it takes to code, but how long it takes to learn to code and how well someone codes.

This pivot wasn’t an easy choice. Computing education research is a much smaller, much less mature, and much less prestigious community within computing research. There’s less funding, there are fewer students, and honestly, the research is much more difficult than HCI and software engineering research, because measuring learning and shaping how people think and behave is more difficult than creating tools. Making this pivot means making real sacrifices in my own professional productivity. It means seeing the friends I made in the software engineering research community less often. It means tackling much trickier, more nuanced problems, and having to educate my doctoral students in a broader range of disciplines (computer science, social science, and learning science).

But here’s the upside: I believe my work will be vastly more important and impactful in the arc of my career. I won’t just be making an engineer at Google ship product faster; I’ll be inventing learning technologies and techniques that make the next 10,000 Google engineers more effective at their jobs. I’ll be helping to transform the hundreds of thousands of horrific experiences that people have learning to code into fulfilling and empowering ones, potentially giving the world an order of magnitude more capable engineers. Creating a massive increase in the supply of well-educated engineers might even slow some of the unsustainable growth of software engineering salaries, which is at least part of the unsustainable gentrification of many of our great American cities. And most importantly, I’ll be helping to give everyone who learns to code the belief that they can succeed at learning something that is shaping the foundational infrastructures of our societies.

I’ll continue to be part of the software engineering research community. But don’t be surprised if my work begins to focus on helping developers write better code rather than simply write code faster. I’ll continue to be part of the HCI research community, but you’ll see my work focus on interactive learning technologies that accelerate learning, promote transfer, and shape identity. And for now, you’ll see me invest much more in building the nascent community of computing education researchers, helping it blossom into the field it needs to become to transform society’s ability to use and reason about code as it weaves itself deeper into our world.

I’m so excited to engage in this new trajectory, and hope to see many of you join me!

If learning to code were like learning to write…

If learning to code were like learning to write, we’d start with words, first teaching children what a token is and how to read them. “Madison, look at that billboard, see the ‘if’? That’s a token in a lot of programming languages.” “Daniel, did you know that the numbers you practiced writing in kindergarten today are called ‘integers’? Python has integers too. They’re a sequence of digits. Want to go tokenize some digits while we shop at the farmers market?”

If learning to code were like learning to write, we’d move on to sentences, teaching children how to parse statements. “Look Madison, I brought you a new book from the library called ‘Python and other beasts’. Let’s read the first page: ‘print(‘ssssssss!’)’. Can you read those tokens? What kind of tokens are they? That’s right, an identifier, a parenthesis, a string, and another parenthesis. Together, they make a function call, which has a name, and a list of arguments. Good job!”
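Python’s standard library can actually play along with this imaginary reading lesson. Here is a minimal sketch using the built-in `tokenize` module to read the tokens of that first page:

```python
import io
from tokenize import generate_tokens, tok_name

# Tokenize the "sentence" from the imaginary book.
source = "print('ssssssss!')"
tokens = [(tok_name[t.type], t.string)
          for t in generate_tokens(io.StringIO(source).readline)]
for kind, text in tokens:
    print(kind, repr(text))
# The first four tokens are:
#   NAME 'print', OP '(', STRING "'ssssssss!'", OP ')'
```

Those four tokens are exactly the identifier, parenthesis, string, and parenthesis named in the story, before they’re assembled into a function call.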

If learning to code were like learning to write, we’d next teach children how to read sentences, showing them how computers execute each one. “Want to play computer Daniel? I got a new board game for us. It’s like the game Simon Says: you read a statement and you try to do what the computer is going to do. If you do it exactly like the computer does, you stay in the game, but if you do something different, you’re out. Check out this card, it says: ‘secret.index(“treasure”)’. Want to play?”

If learning to code were like learning to write, we’d next teach children how to read short books, giving them programs to read, exposing them to all of the computational possibilities of the language they were learning. “Madison, what did you choose for your book project this month? Oh, an Instagram post indexing algorithm, interesting! Are you liking it? What’s your favorite idiom?”

If learning to code were like learning to write, we’d ask children to start writing sentences, creating simple statements that accomplish small tasks. “Daniel, we keep forgetting to turn the light off in the garage. Can you log into the IoT portal and write a rule that turns it off every night at 9 pm?”

If learning to code were like learning to write, we’d then ask kids to write short essays, scaffolding their problem solving with design patterns for various genres of computational problems. “Okay, before you leave, let’s discuss your homework for this week: I want you to write a simple Python script that takes President Trump’s Twitter feed and finds all tweets that denigrate a person, place, or thing. I’ve provided the list of tweets that do this, so your job is to find an algorithm to do this automatically. This is a classification problem, so the last two weeks of content should set you on a good path.”
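A toy sketch of what such a “short essay” might look like, with a tiny invented set of labeled tweets (a real assignment would supply the labeled list, and a real solution would use a proper text classifier, but the shape of the problem is the same):

```python
import string

def words(text):
    """Split text into lowercase words, stripping surrounding punctuation."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

# Invented labeled examples: (tweet, is_denigrating)
labeled = [
    ("What a total disaster!", True),
    ("Such a loser. Sad!", True),
    ("What a great day!", False),
    ("Thank you, everyone!", False),
]

# "Learn" cue words that appear only in the denigrating examples.
denigrating = set().union(*(words(t) for t, label in labeled if label))
benign = set().union(*(words(t) for t, label in labeled if not label))
cue_words = denigrating - benign

def classify(tweet):
    """Predict True if the tweet shares any cue word with the examples."""
    return bool(words(tweet) & cue_words)
```

With this toy data, `classify("A total disaster")` returns `True` and `classify("Thank you for a great day")` returns `False`, since only the former shares a cue word with the denigrating examples.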

If learning to code were like learning to write, kids would go to college and participate in writing workshops, doing code reviews with each other to improve their encapsulation, clarity, performance, and other software qualities. “Madison, I really like this abstraction here; it’s so simple, but you’ve managed to use it everywhere, reducing a lot of boilerplate and probably preventing a lot of defects. I bet if you added some polymorphism to this function you could simplify it even further.”

Unfortunately, with today’s computing education, learning to code is nothing like learning to write. Computer science courses never teach students about words. They give a few examples of grammar, but nothing comprehensive. They never teach students how to read, expecting them to pick it up independently. Then, in week two of intro CS, they immediately ask students to write whole essays. It’s no wonder so many students are overwhelmed and drop out.

The invisibility of failure in computing education

Over the past few years I’ve pivoted from research on developer tools to a new focus on computing education research (CER). I was tired of seeing learners fail, drop out, or worse yet, self-select out of computing altogether because they viewed it as too hard, too boring, too irrelevant.

Four years in, I’m still surprised by how rare this sentiment is in academia, particularly in Computer Science departments. In fact, most faculty in CS departments I know view CER as “just teaching”, “not computer science”, or “not hard”. Indeed, I used to have this opinion of CER before I jumped into it.

Where do these negative and dismissive opinions about computing education research come from? I’ve been compiling a list:

  • Most CS researchers are only familiar with the SIGCSE conference, and if they know anything about it, they know that it’s attended primarily by instructors, publishes short 6-page papers, and in its history has mostly published anecdotal observations about teaching innovations. If this is all someone knew about the field, they’d be right: most of the work at SIGCSE is not research, or at least not rigorous research. This has changed slightly over the past five years, but not much.
  • Many CS faculty subscribe to the “geek gene” theory, believing that some people are born with the ability to code, and others not. If this is true (which it’s obviously not), there’s really nothing to be done to improve computing education, since learning doesn’t depend on the quality of instruction. That short circuits any interest in investigating better ways to teach and learn computing.
  • CS researchers value computational things that haven’t existed before, that expand the power of computing. Contributions in CER generally aren’t new computational things, and even when they are, their power is in shaping how learners think, reason, and problem solve, not in creating new computational possibilities.
  • The high-performing students that survive CS programs mask the failures of CS programs. Students get jobs, they create powerful software, they get paid more than anyone else, and they become productive members—and sometimes leaders—of the software industry. This survivorship bias makes faculty forget about the 50% of students who dropped out of CS1, the students who graduated without the ability to write a program from scratch, and the tens of thousands of students in our universities that would never even consider CS because of the racial and gender homogeneity or the unwelcoming culture.
  • When students fail to learn, we often don’t see these failures because we don’t have good measures of learning. Most exams in CS classes test for declarative knowledge about syntax, semantics, and algorithms, and for program execution tracing skills. They seldom test for the ability to do the things that CS faculty actually value: elegant, modular design; efficiency; algorithmic problem solving; task decomposition. And they rarely test for the things that the software industry cares about: clear communication, planning skills, decision-making, self-awareness, reliability, and so on.
  • Students successfully create software. That means they’re learning, right? Not necessarily. It’s very hard to see what went into creating a program. Most students make it through CS programs by leveraging TAs, classmates, and StackOverflow, and sometimes by cheating. If the goal is to educate students who can independently solve computational problems, the fact that students have created things is no evidence of this ability. (That’s not to say that students should work alone—they shouldn’t—it’s just that teamwork and the Internet tend to confound our measurements of learning, and make us think we’re succeeding when we’re not.)
  • There aren’t many examples of tenure-track CER faculty, creating a chicken and egg problem. Why would a CS department hire tenure-track CER faculty when there aren’t many Ph.D. students in CER doing amazing research? But why would there be any students if there aren’t any tenure-track faculty? Even if CS departments did value CER—and some do—there aren’t yet many researchers to hire.

Despite all of these problems, I’m optimistic. And there are concrete things we can all do to eliminate all of the biases above:

  • Read the CRA white paper I helped write on the importance of CER and share it with your CS chairs, deans, and colleagues. We wrote it to make the case.
  • Make sure your colleagues know about ICER (the ACM International Computing Education Research conference). This conference, along with the TOCE and CSE journals, is where the most rigorous research is being published.
  • Invite computing education researchers to speak at seminars, so departments can get to know what great research looks like.

Slowly but surely, we’ll bootstrap this thing into existence.

Learning contexts across the lifetime

One of the wonderful things about public education is that it provides children a dedicated context for learning. Even more than that, it really defines a child’s purpose structurally and socially: their job is to acquire skills, knowledge, and wisdom before entering the “real” world to contribute.

For me, this world of learning was something I never wanted to leave. The world of ideas and skills was the real world, and I wanted to find a place where I could keep learning. A life of research and teaching was a dream. The fact that I get paid to make and share discoveries still astounds me.

But in the rest of my adult life, learning contexts are exceedingly rare. Here’s a short list of contexts where I learn outside of my job:

  • News. To an extent, journalists teach. I learn about the world, what’s happening in it, and why it is happening. I occasionally learn some history.
  • Podcasts. I listen to Marketplace and learn about economics. I listen to Death, Sex & Money and learn about mortality and ethics. I listen to the Savage Lovecast to learn about relationships and sexuality. I listen to the Slate Culture Gabfest to learn about the human condition. These media spaces are places where analysis, ideas, and wisdom thrive, and are often grounded in research.
  • Books and movies. In these I learn to empathize, seeing the world and the world’s conditions through the stories of others.
  • YouTube. Know-how abounds, from how to tie knots to how to have difficult conversations with your teen.

The wonderful thing about these media is that they are explicitly framed as learning contexts. The news is intended to teach. Podcasts are designed for lecture and analysis. I listen to them because I’m ready and eager to learn about the world and its people and ideas.

In other areas of my adult life, I find that people are completely uninterested in learning. They don’t want to learn other people’s perspectives, learn new skills, or understand how the world works. They want to get their work done. They want to feel safe. They want confirmation that their beliefs are right. They want to be reassured about the future. It’s only when they enter a learning context—a newspaper, a theater, a 20-minute podcast—that these anxieties melt away for a brief time, and they become open to knowledge.

How can we create more of these learning contexts? How do we create them in workplaces? How do we teach children to create their own learning contexts throughout their lifetimes? Is there something about our formal educational systems that makes people believe that schools are the only place where people learn? How do we help people value lifelong learning?

Privilege and CS1

With all of the recent discoveries about the unequal access of MOOCs, the bias in the lecture format, and access to computer science education in general, I’ve been thinking a lot about privilege.

One of the places where privilege crops up most in my environment is undergraduate admissions. In The Information School’s Informatics program, for example, we try really hard to account for all of the sources of bias in our evaluations: in our admissions form, our recruiting, and our decision processes. One of the most problematic parts of our process, however, is our reliance on UW’s CSE 142 grades.

Now, UW’s CSE 142 is an amazing accomplishment in many ways. The department that offers it is one of the few making significant efforts and inroads into diversifying its student body, and doing so in a way that best leverages our university’s strengths. I commend everyone in CSE who not only innovates in broadening participation, but is developing sustainable practices for keeping participation broad.

On the other hand, 142 is inescapably a “weed-out” course. There simply aren’t enough faculty to teach all of the students who want to be CS or Informatics majors, and so admissions committees rely heavily on student performance in 142 to predict future success in our programs. If we believe that the grades assigned in 142 reflect aptitude—and I believe they largely do—it seems entirely reasonable to use this as a significant factor in admissions.

And yet, if we dig deeper into what these grades actually reflect, I’m not convinced that introductory courses like 142 are really a fair test. The vast majority of students who succeed in the course came in with prior programming experience, and access to this prior experience is a highly privileged resource. The students who took a CS class in high school probably came from one of the few high schools in the United States that invest in computing education teachers and courses, which tend to be highly affluent. The students who had experiences from summer camps in middle and high school only enrolled because 1) they or their parents learned about them somehow, and 2) they could afford the registration fees. The students who were self-taught needed 1) free time to learn, rather than work part-time jobs, 2) access to the internet, and 3) someone to introduce them to the possibilities in computing. All of these are things that the vast majority of Americans don’t have.

There are some fantastic efforts to rectify these inequities; CS NYC and even No Child Left Behind 2.0, among others, are attempting to level the playing field. Until these efforts pay off, however, what do we do in the meantime? Is it reasonable to continue to just admit the mostly wealthy, mostly white and Asian, and mostly urban students who succeed because of their prior exposure to computing? And if it’s not, is it really fair to exclude some students from these groups to make room for a more diverse cohort, even though the more diverse cohort has less practice?

I don’t know. The U.S. Supreme Court seems to have an opinion on the matter, at least for college admissions broadly. But at some point, we need to have a serious discussion about the balance between likelihood of success in our programs, the diversity of our workforce, and the more advanced types of teaching that it might take to achieve both.

Gidget, a 21st century approach to programming literacy

Over the past two years, my Ph.D. student Mike Lee has been working on Gidget, a new way to give teens self-efficacy in programming. I’m proud to announce that Gidget is now available for anyone to play. Give it a try!

The game takes a very different approach from existing learning technologies for programming. Rather than trying to motivate kids through creativity (as in Scratch and Alice), provide instruction through tutorials (as in Khan Academy and Codecademy), or inject programming into traditional game mechanics (as in CodeCombat or LightBot), Gidget attempts to translate programming itself into a game, providing a sequence of debugging puzzles for learners to solve. It does this, however, with a particular learning objective in mind: to teach players that computers are not omniscient, flawless, and intelligent machines, but rather fast, reliable, and mostly ignorant machines that are incredibly powerful problem-solving tools. The game’s goal is not necessarily for players to learn to code (though this does happen in spades), but to teach players that programmers are the ones who give software its magic, and that they could be a code magician.

Our efforts are part of a much larger national conversation about programming and digital literacy. The basic observation, which many have noted over the past two decades, is that professional programmers aren’t the only people who program. Anyone who has to manipulate large volumes of information is at some point going to write a program. Gidget is explicitly designed to give children, teens, and really anyone with an interest in knowing more about programming, the confidence they need to learn more.

Try the game yourself. Share it with your kids. If you teach a CS1 class, try giving it to your students as their first assignment. Send us feedback in the game directly or write me with ideas.

Computer science, information science, and the TI-82


I first learned to code in 7th grade. Our math teacher required a graphing calculator and in the first few weeks of class, he showed us briefly how it could solve math problems if we gave it the right set of instructions. Awesome, right? Not really. What could possibly be more boring than learning a cryptic, unforgiving set of machine instructions to simply do the math I could already do by hand?

That all changed one day when a classmate showed me how to play Tetris on his TI-82. When I asked him how this was possible, he said his brother had made it using that “Program” button that seemed so useless. Suddenly, the calculator became much more to me. It wasn’t a device for doing calculations, it was a Game Boy. I found my owner’s manual, then cut out and mailed the form for a LINK cable so that I could transfer the program to my calculator (my classmate had transcribed the program by hand from his brother’s calculator!). Four weeks later the cable arrived in the mail, and I was in business, ready to play my very own poor man’s version of a Game Boy in math class!

Unfortunately, the game was abysmally slow. I could watch the pieces erase and redraw themselves, pixel by pixel. The glacial pace of the rendering made the game impossible to play. Something in me found this unacceptable and I spent the next several weeks trying to figure out precisely how the game worked, hoping that I might find some way to make it faster and more playable.

I learned about variables, which were for storing information about the state of the game. I learned about control statements, which allowed the game to change its response based on state. I learned about user interfaces, which governed how information from the player could be provided, structured, and re-presented. I learned about data structures, which helped organize information about the shape of Tetris pieces and the game grid. Most of all, I learned about software architecture, which not only helped keep the monstrous 5,000 lines of TI-BASIC, viewed through the 8-line display, organized and understandable, but also determined how information flowed from the player, to the game, and back to the player.

I emerged from those arduous weeks not only with a much faster version of the game (using the text console instead of the graph to reach interactive speeds), but also a realization that has shaped my life for over two decades: information is a thing, and algorithms and data structures are how we create, structure, process, search and reason about it.
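One of those lessons, data structures, can be sketched in a few lines. Here is a hypothetical Python rendering (the original, of course, was thousands of lines of TI-BASIC): a Tetris piece represented as a grid of cells, with rotation as a transformation of that grid.

```python
# An S-piece as a 2x3 grid of cells: 1 marks a filled cell, 0 an empty one.
S_PIECE = [
    [0, 1, 1],
    [1, 1, 0],
]

def rotate_clockwise(piece):
    """Rotate a piece 90 degrees clockwise: reverse the rows, then transpose."""
    return [list(row) for row in zip(*piece[::-1])]

print(rotate_clockwise(S_PIECE))
# prints [[1, 0], [1, 1], [0, 1]]
```

Once the shape is data rather than drawing code, everything else, collision checks, line clearing, rendering, becomes a computation over that structure.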

But what is the relationship between computing and information? For instance, is all programming really about information, or are there some kinds of programming that are about something else? How much overlap is there between computer science and information science as academic disciplines? Should departments of computer science and information science really be separate? As a 12-year-old, I found the relationship between these two perspectives powerful and exciting, because they represented untapped potential for creativity. Twenty-one years later, as a professor of information science fundamentally concerned with computer science, I find the relationship between computing and information deeply fascinating as an intellectual perspective.

Let’s consider the disciplinary relationship first. Both computer science and information science think about search, both study how to organize information, both are concerned with creating systems that help people access, use, and create information. Many of the fundamental questions in computer science are fundamentally about information. What information is computable? How can information be stored and retrieved? How can information be processed? If computer science is about more than information, what else is it about?

Perhaps the difference isn’t the phenomena they investigate, but the media and methods through which the phenomena flow. Computer scientists are strictly concerned with information as it is represented and processed in Turing machine equivalents, whereas information scientists are more concerned with how information is represented in minds, on paper, through metadata, and other, older media. Computer scientists also often view information as context free, whereas information scientists are often more interested in the context of the information than the information itself. Because of this different emphasis, the disciplines’ methods are also quite different, with computer scientists reasoning about information through mathematical abstractions, and information scientists reasoning about it through the empirical methods of social science. It’s not that the fields study something different; they just study it differently.

And what of the role of information in programming and software engineering? I’ve been writing software for twenty years now, and the more code I write, the more it becomes clear that great software ultimately arises from a deep understanding of the information that will flow through it and the contexts in which that information is produced and consumed. The rest of the details in programming—language, architecture, paradigm—are conveniences that facilitate the management of this information flow. The right programming language, framework, or library is the one that best represents the information being processed and the types of reasoning that must be done about that information. Perhaps it’s not surprising that the biggest, most impactful software organizations in the world, such as Google, Facebook, and Baidu, are explicitly interested in organizing the world’s information, factual, social, or otherwise.
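As a small, hypothetical illustration of that point (my own example, not from the original text): the same piece of information affords very different reasoning depending on its representation. A publication date stored as a string is opaque to the program, while a structured value makes temporal reasoning trivial:

```python
from datetime import date

# The same information ("when was this published?") in two representations.
published_text = "July 20, 2012"   # opaque: only string operations apply
published = date(2012, 7, 20)      # structured: temporal reasoning is easy

# Reasoning the structured representation makes trivial:
days_old = (date(2012, 8, 1) - published).days
is_recent = days_old < 30
```

Choosing the representation that matches the reasoning you need is the small-scale version of the claim above about languages, frameworks, and libraries.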

I don’t know yet whether intellectual questions like these matter so much to the world. Something inside me, however, tells me that they do, and that understanding the nature of computing, information, and their overlap might be key to understanding how and why these two ideas are having such a dramatic impact on our world. Somehow, it also feels like these ideas aren’t simply human tools for thought, but something much more fundamental, something more natural. Give me twenty more years and maybe I’ll have the words for it.

The economics of computing for all has been getting some great press, and rightfully so: it’s full of great videos, great stats, and great resources. I also think it has a great mission: there are hundreds of thousands of businesses who need talented software developers in order to grow and provide value, but these businesses can’t find the engineers they need. Moreover, people need jobs and software development jobs are abundant and high quality. Hence the need for more students, more teachers, and more classes in computing. Win, win, right?

I don’t think so. I do believe in this mission. I do research on this mission. I feel strongly that if we don’t massively increase the number of teachers in computing, we’ll get nowhere. But I don’t think that simply increasing the number of people who can code will address this gap. This is because the problem, as it is usually framed, is one of quantity, whereas the problem is actually one of quality.

To put it simply, companies don’t need more developers, they need better developers. The Googles, Facebooks, Apples, and Microsofts of the world get plenty of applicants for jobs; they just don’t get applicants who are good enough. And the rest of the companies in the world, while they can hire, are forced to hire developers who often lack the talent to create great software, leading to a world of poor quality, broken software. Sure, just training more developers might increase the tiny fraction who are great, but that seems like a terribly inefficient way of producing more great developers.

This brings us back to teaching. We absolutely need more teachers, but more importantly we need more excellent teachers and excellent learning opportunities. We need the kind of learning infrastructure that empowers every 15 year old who’s never seen a line of code to become as good as your typical CMU, Stanford, Berkeley, or UW CS grad, without necessarily having to go to those specific schools. (They don’t have the capacity for that kind of growth, nor should they.) We need to understand what excellent software development is, so we can discover ways to help developers achieve it.

This infrastructure is going to be difficult to create. For one, only a tiny fraction of excellent developers will choose to take a 50% pay cut to teach in a high school or university, and yet we need those engineers to impart their expertise somehow. We need to understand how to create excellent computing teachers and how to empower them to create excellent developers. We need to learn how to make computing education efficient, so that graduates in computing and information sciences have 4 years of actual practice, rather than 4 years of ineffective lectures. We need an academic climate that recognizes current modes of computing education as largely broken and ineffective for all but the best and brightest self-taught.

Unfortunately, all of this is going to take significant investment. The public and the most profitable of our technology companies must reach deep into their pockets to fund this research, this training, and this growth that they and our world so desperately need. And so kudos to every bottom-up effort to democratize computing, but it’s not enough: we need real resources from the top to create real change.

UW MSR Summer Institute on Crowdsourcing Personalized Online Education

For the past three days I’ve been at the 2012 UW MSR Summer Institute, which is an annual retreat on an emerging research topic. This year’s topic was “Crowdsourcing Personalized Online Education”. What this really meant in practice was two questions: what is the future of education, and how can we leverage the connectedness and observability of learning online? The workshop was mainly talks, but there were an impressive number of great speakers and attendees that kept everyone engaged.

Here are some of the important things I observed across these discussions and talks:

  • The first thing that was apparent is just how different the motives and values are in the different communities that attended. The majority of the attendees were coming from a computing perspective, with primary interests in creating new, more powerful, and more effective learning technologies. There were a smaller number of learning scientists, with interests in explaining learning and devising better measurements of learning, much more rigorously than any of the computing folks had done. Two representatives from the Gates Foundation also visited briefly, and it was clear that their primary interests were much less in specific technologies and much more in creating educational infrastructure and new, sustainable markets of educational technologies. There were also representatives from Khan Academy and Coursera, who were broadly interested in providing access to content, and mechanisms to enable experts to share content. My view on what’s really new behind all of this press on online learning is that computing researchers are newly interested in learning and education: almost everything else, except for the scale of access, has been done in online learning before.
  • Jennifer Widom, Andrew Ng, and Scott Klemmer (all at Stanford) talked about their experiences creating MOOCs for their courses. The key takeaway message is that creating a course is very time consuming, with each spending countless hours recording high quality lectures before the course began, negotiating rights for copyrighted material, and working out bugs in the platform. All of them implied that running the course the first time was more than a full time job. On the other hand, many were confident it would take much less time for later offerings and that most aspects of the class can scale to be arbitrarily large (even design critiques, in Scott Klemmer’s case, through calibrated peer assessment). The one part that doesn’t scale is student-specific concerns (for example, students getting injured and needing an extension on an assignment). Scott also suggested that every order of magnitude increase in the number of students demands an order of magnitude increase in the perfection of the materials (because there are so many more eyes on them), but again, this is a decaying cost, assuming the materials don’t change frequently.
  • In many of the conversations I had around how MOOCs might change education, many faculty believed that the sheer availability and accessibility of instructional content would shift the responsibilities of instructors. Today, most individual instructors are responsible for making their own materials, making them accessible, and then using them to teach. In a world where great materials are available for free, these first two responsibilities disappear. The new job of a higher ed instructor may therefore much less about designing materials and providing access to them, but correcting misconceptions, motivating students, designing good measurements, and building learning communities. One could argue that this is an overall improvement (and also that it actually mirrors the way that textbooks work, which are written by a small number of experts and used as is by instructors).
  • Interestingly, most of the MOOC teachers reported that the social experience of students online was critical, including forum conversations, ad hoc study groups in different cities around the world, and peer assessments. This might quell a lot of the concerns that higher ed teachers have about the loss of interaction in online courses—it might just be that the interaction shifts from instructor/student interaction to student/student interaction and student/intelligent tutor interaction. Some of the preliminary data suggests that students actually greatly prefer this, since they don’t get that much instructor interaction already, but they’re getting much more student/student interaction than in a traditional co-located course. This might therefore be an improvement over traditional lecture-based classes, but not over classes in which teachers interact closely with students (such as small studio courses).
  • No one knows what will happen to the education market, including the people running Khan Academy and Coursera. However, there were some predictions. First, these platforms are going to make it so easy to share and access content, in the same way that the web has for everything else, that finding and choosing content is going to become a critical challenge for students. Therefore, one new role that instructors might play is in selecting and curating content in a way that is coherent and personalized for the populations that they teach.
  • Most of the interest related to crowdsourcing is in (1) enabling classes to be taught at scale (by finding ways to free instructors and TAs from having to grade and assess all of the work), (2) improving the effectiveness, efficiency, and/or engagement of learning activities, or (3) creating new opportunities for informal learning, such as through oDesk or Duolingo. Researchers are thinking about how to use data to optimize the sequence of instruction, give just the right hints to correct misconceptions, or select a task that is challenging but not too challenging. In my view, this is leading to a renewed interest in intelligent tutoring systems.
  • As usual, most of this new research work suffers from a lack of grounding in and leveraging of prior literature in learning sciences and intelligent tutoring systems. There is tons of research on all of these challenges that computing researchers are tackling, but I don’t see them really using any of the work. This happens over and over in computing research, since the interests are often in creating new things and not understanding the things themselves. I was impressed, however, by how much Andrew Ng had leveraged findings in learning sciences to support certain design decisions in Coursera.
  • There was a big undercurrent of data science at the workshop. Everyone was excited about big data sets and how they might be leveraged to improve learning technologies. Most of the methods reported were fairly primitive (A/B testing, retention rates), but I’m hoping this new energy behind learning will lead to much better methods and tools for doing educational data mining.
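The "challenging but not too challenging" task selection mentioned in the list above can be sketched in a few lines. This is a hypothetical illustration, not a method any speaker described; the function name and difficulty scale are my own invention. Given estimated task difficulties on the same scale as a learner's ability, pick the task closest to the ability plus a small stretch:

```python
def pick_task(difficulties, ability, stretch=0.5):
    """Pick the task whose estimated difficulty is closest to the learner's
    ability plus a small 'stretch': challenging, but not too challenging.

    difficulties: dict mapping task name -> difficulty, on the same
    (made-up) scale as `ability`.
    """
    target = ability + stretch
    return min(difficulties, key=lambda task: abs(difficulties[task] - target))

# Example: a learner at ability 1.2 is steered toward the next-harder topic.
tasks = {"loops": 1.0, "recursion": 2.0, "pointers": 3.0}
chosen = pick_task(tasks, 1.2)
```

Real intelligent tutoring systems use far richer student models, but even this toy version shows why data on task difficulty and learner state is so central to the crowdsourcing interests above.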

Phew! Sorry for the lack of coherence here. We covered a lot of ground in 2.5 days and this is just a sliver of it.