The invisibility of failure in computing education

Over the past few years I’ve pivoted from research on developer tools to a new focus on computing education research (CER). I was tired of seeing learners fail, drop out, or worse yet, self-select out of computing altogether because they viewed it as too hard, too boring, too irrelevant.

Four years in, I’m still surprised by how rare this sentiment is in academia, particularly in Computer Science departments. In fact, most faculty in CS departments I know view CER as “just teaching”, “not computer science”, or “not hard”. Honestly, I used to have this opinion of CER myself before I jumped into it.

Where do these negative and dismissive opinions about computing education research come from? I’ve been compiling a list:

  • Most CS researchers are only familiar with the SIGCSE conference, and if they know anything about it, they know that it’s attended primarily by instructors, publishes short 6-page papers, and in its history has mostly published anecdotal observations about teaching innovations. If this is all someone knew about the field, they’d be right: most of the work at SIGCSE is not research, or at least not rigorous research. This has changed slightly over the past five years, but not much.
  • Many CS faculty subscribe to the “geek gene” theory, believing that some people are born with the ability to code, and others not. If this is true (which it’s obviously not), there’s really nothing to be done to improve computing education, since learning doesn’t depend on the quality of instruction. That short circuits any interest in investigating better ways to teach and learn computing.
  • CS researchers value computational things that haven’t existed before, that expand the power of computing. Contributions in CER generally aren’t new computational things, and even when they are, their power is in shaping how learners think, reason, and problem solve, not in creating new computational possibilities.
  • The high-performing students that survive CS programs mask the failures of CS programs. Students get jobs, they create powerful software, they get paid more than anyone else, and they become productive members—and sometimes leaders—of the software industry. This survivorship bias makes faculty forget about the 50% of students who dropped out of CS1, the students who graduated without the ability to write a program from scratch, and the tens of thousands of students in our universities that would never even consider CS because of the racial and gender homogeneity or the unwelcoming culture.
  • When students fail to learn, we often don’t see these failures because we don’t have good measures of learning. Most exams in CS classes test for declarative knowledge about syntax, semantics, and algorithms, and for program execution tracing skills. They seldom test for the ability to do the things that CS faculty actually value: elegant, modular design; efficiency; algorithmic problem solving; task decomposition. And they rarely test for the things that the software industry cares about: clear communication, planning skills, decision-making, self-awareness, reliability, and so on.
  • Students successfully create software. That means they’re learning, right? Not necessarily. It’s very hard to see what went into creating a program. Most students make it through CS programs by leveraging TAs, classmates, and StackOverflow, and sometimes by cheating. If the goal is to educate students who can independently solve computational problems, that students have created things is no evidence of this ability. (That’s not to say that students should work alone—they shouldn’t—it’s just that teamwork and the Internet tend to confound our measurements of learning, and make us think we’re succeeding when we’re not).
  • There aren’t many examples of tenure-track CER faculty, creating a chicken and egg problem. Why would a CS department hire tenure-track CER faculty when there aren’t many Ph.D. students in CER doing amazing research? But why would there be any students if there aren’t any tenure-track faculty? Even if CS departments did value CER—and some do—there aren’t yet many researchers to hire.

Despite all of these problems, I’m optimistic. And there are concrete things we can all do to counter the biases above:

  • Read the CRA white paper I helped write on the importance of CER and share it with your CS chairs, deans, and colleagues. We wrote it to make the case.
  • Make sure your colleagues know about ICER (the ACM International Computing Education Research conference). This conference, along with the TOCE and CSE journals, is where the most rigorous research is being published.
  • Invite computing education researchers to speak at seminars, so departments can get to know what great research looks like.

Slowly but surely, we’ll bootstrap this thing into existence.

Making money versus making knowledge

I’ve spent the past three years doing two very different things. As CTO of AnswerDash, my goal was to make money. As an Associate Professor at the University of Washington, my goal was to make knowledge. What’s the difference?

In my experience, making money is fundamentally about relating to people. To convince anyone to give you money, you have to understand their needs, their desires, their fears, and their anxieties. If you’re marketing to them, you have to find words, images, and experiences that provoke these emotions and stir them to action. If you’re selling to them, you have to understand them interpersonally and find a way to influence their behavior through your words and actions. And if you’re designing product for them, you have to envision experiences that change their life in a dramatic enough way that they’re willing to give you time or money. Making money is fundamentally about changing people’s behavior by understanding their emotions.

In contrast, making knowledge is fundamentally about understanding things that are much more abstract: nature, truth, reality, humanity. To make knowledge, you have to understand how the world works, how people work, how society works. And attaining this understanding doesn’t involve having to know any people in particular or how their emotions work. Instead, you need to understand ideas, what makes good ones, how to come up with them, evaluate them, critique them, explain them. Making knowledge feels like walking around in the dark, looking around for the light switch, until you stumble upon it, fumble to flip it, and suddenly everything is clear. Making knowledge is fundamentally about bringing clarity to chaos.

Now, some people who make money might argue that they’re making knowledge too. Certainly anyone in a large, well-resourced company investigating future products is creating new know-how. And I actually believe that many people in companies are making the same discoveries that researchers are. What’s different, however, is that those discoveries are not expressed and they are not shared as often. By not expressing them, there’s not an opportunity to evaluate how clearly the ideas are understood, which leaves the ideas weak, tacit, and fragmented. By not sharing them, there’s no way for these discoveries to impact the things that people create and the choices that people make. This is changing, as more people in industry blog and share their ideas online, but the clarity of the ideas is lacking, because the people sharing are often not trained to bring clarity, and because they don’t have as much incentive to share clear ideas.

Of course, people who make knowledge make money too. I get paid to share the knowledge that I and others have made through teaching. Sometimes I get paid to share my knowledge with companies or juries. Sometimes, I don’t understand the enterprise of knowledge: how is it rational for someone to pay me money for knowledge they don’t have, they can’t describe, and they can’t imagine, on the promise that it will bring clarity to the chaos they see in the world? And why is that clarity worth so much to them? If there weren’t jobs on the other end, would they pay as much to have that clarity? Sometimes, I think that professors forget that part of their job is to bring clarity to students, and not just to themselves.

I prefer to make knowledge. I find it more personally interesting, more intellectually challenging, and more meaningful. That doesn’t mean that I think making money is bad, it’s just not something I enjoy as much. That’s partly because I don’t enjoy the puzzle of understanding someone’s emotions. I find that I can see the structure behind an idea more easily than I can see the heartbeat behind someone’s behavior.

In a way, programmers are also people who deal with ideas more than they deal with people. That’s because code is a form of knowledge: it’s an expression of how to do something, one that embodies beliefs about the world. In some ways, that’s why it’s hard for so many people who enjoy writing code to understand who they’re writing it for and why: that requires understanding people’s feelings. It’s strange that something so logical and so formal as code is still fundamentally about feelings.

The service implications of interdisciplinarity

I am what academics like me like to call an “interdisciplinary researcher”. This basically means that rather than investigate questions within the traditional boundaries of established knowledge, I reach across boundaries, creating bridges between disparate areas of work. For example, the research questions I try to answer generally span computer science, psychology, design, education, engineering, and organizational science. I use theories from all of these, I build upon the knowledge from all of these, and occasionally I even contribute back to these areas of knowledge.

There are some wonderful things about interdisciplinary work, and some difficult things. The wonderful things mostly stem from being able to usefully apply knowledge to the world, synthesizing ideas from multiple disciplines to the problems of today. This is possible because I don’t have the duty to a discipline to deepen its understanding. Instead, my charge is to invent technologies, policies, methods and processes that embody the best ideas from more basic research. In a way, interdisciplinary research is necessary for those basic research discoveries to ever make it into the world. This is highly rewarding because I get to focus on everyday problems, learn a ton about many different fields of research, and can easily show people in my life how my work is relevant to theirs.

Where interdisciplinary work gets difficult is in the nitty gritty of academic life. Because I know a little bit about a lot of things, I get asked to participate on a lot of committees. I get invitations to software engineering committees, computing education committees, and HCI committees, since they all touch on aspects of people’s interactions with code. I get invited to curriculum committees more often because my work seems more directly applicable to what we teach (because it is). People from industry contact me more often because they can see how my work informs their work, more so than the basic research.

And of course, my time isn’t infinite, so I have to pick and choose which of these bridges to make. I find myself with some difficult choices: should I create a link to industry, or bridge two fields of academia? Should I invest in disseminating a discovery through a startup, an academic conference talk, or a YouTube video? Or should I just focus on my research, slowly transforming my fuzzy interdisciplinary research area into something more disciplinary, with all of its strengths and weaknesses?

Someone probably does research on these research policy questions. Maybe they can help!

Design and the limits of automation

One of the central themes of U.S. President Barack Obama’s final State of the Union address was the idea that wages are flat because of automation. He argued that automation, and in particular computing, is rapidly eliminating jobs, especially those that involve routine, proceduralized, deterministic tasks. And with machine learning, AI, and deep learning, many of the tasks that require judgment and decision-making are also being automated.

I was talking about this—forebodingly—with my 14-year-old daughter at dinner the other night, and she had a surprising reaction:

“That’s great! Then we can all be artists, designers, and inventors!”

I probed:

“Why won’t those be automated too?”

“Because computers aren’t creative, and even when they are, they don’t have any taste.”

What an interesting hypothesis! We talked about this a bit longer, and arrived at an interesting conclusion. Computers may be able to generate a lot of ideas (because of their speed and scalability), but when it comes time to select which of those ideas are good, they will always struggle, since notions of what makes an idea good are so subtle, multidimensional, and often subjective. This is especially true in art, where emotional response has primacy over functionality. For evidence, look at any review of a movie, album, or exhibit. Could a machine predict the critiques, let alone act upon them to improve the art?

Now, even if a computer were able to leverage humanity to make these judgements (posting its ideas on Mechanical Turk for feedback), and even if it were able to synthesize this feedback into new ideas, would humanity tolerate the scale of critique necessary for computers to independently arrive at good designs and good art? It’s hard to imagine. Furthermore, wouldn’t it still be humanity making the judgements of what is right? We would still need critics to offer feedback and constructive critique. Without us, computers would not know what to choose.
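To make this thought experiment a bit more concrete, here’s a minimal, purely illustrative Python sketch of the generate-then-critique loop described above. Every name in it is hypothetical, and the “human ratings” are simulated with random numbers, which is exactly the point: the generation step scales effortlessly, but the judgement of which ideas are good would still have to come from people (say, crowd workers on Mechanical Turk or human critics).

```python
import random

def generate_candidates(n):
    # Stand-in for a generative system: produce n candidate "designs",
    # here represented as arbitrary parameter vectors.
    return [[random.random() for _ in range(4)] for _ in range(n)]

def collect_human_ratings(candidates):
    # Stand-in for crowdsourced critique (e.g., a Mechanical Turk task).
    # Simulated here; in reality, people would have to supply these judgements.
    return [random.random() for _ in candidates]

def select_best(candidates, ratings, k=3):
    # Keep the k candidates that human raters liked most.
    ranked = sorted(zip(ratings, candidates), key=lambda pair: pair[0], reverse=True)
    return [candidate for _, candidate in ranked[:k]]

# One round of the loop: the machine supplies volume,
# but "good" is still defined by human critics.
candidates = generate_candidates(100)
ratings = collect_human_ratings(candidates)
favorites = select_best(candidates, ratings)
print(favorites)
```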

Perhaps the implication of this little thought experiment is that the asymptote of computational automation leads to a society of people who do not create, but do critique, constructively. In some domains, we already see this. For example, in electronic dance music, much of the sonic material comes from other pre-existing recordings. Or in DJing, where much of the art is in selecting what to play. Algorithms may take over the task of generating the new art and designs, but we will be the editors and critics.

Learning contexts across the lifetime

One of the wonderful things about public education is that it provides children a dedicated context for learning. Even more than that, it really defines a child’s purpose structurally and socially: their job is to acquire skills, knowledge, and wisdom before entering the “real” world to contribute.

For me, this world of learning was something I never wanted to leave. The world of ideas and skills was the real world, and I wanted to find a place where I could keep learning. A life of research and teaching was a dream. The fact that I get paid to make and share discoveries still astounds me.

But in the rest of my adult life, learning contexts are exceedingly rare. Here’s a short list of contexts where I learn outside of my job:

  • News. To an extent, journalists teach. I learn about the world, what’s happening in it, and why it is happening. I occasionally learn some history.
  • Podcasts. I listen to Marketplace and learn about economics. I listen to Death, Sex & Money and learn about mortality and ethics. I listen to the Savage Lovecast to learn about relationships and sexuality. I listen to the Slate Culture Gabfest to learn about the human condition. These media spaces are places where analysis, ideas, and wisdom thrive, and are often grounded in research.
  • Books and movies. In these I learn to empathize, seeing the world and the world’s conditions through the stories of others.
  • YouTube. Know-how abounds, from how to tie knots to how to have difficult conversations with your teen.

The wonderful thing about these media is that they are explicitly framed as learning contexts. The news is intended to teach. Podcasts are designed for lecture and analysis. I listen to them because I’m ready and eager to learn about the world and its people and ideas.

In other areas of my adult life, I find that people are completely uninterested in learning. They don’t want to learn other people’s perspectives, learn new skills, or understand how the world works. They want to get their work done. They want to feel safe. They want confirmation that their beliefs are right. They want to be reassured about the future. It’s only when they enter a learning context—a newspaper, a theater, a 20-minute podcast—that these anxieties melt away for a brief time, and they become open to knowledge.

How can we create more of these learning contexts? How do we create them in workplaces? How do we teach children to create their own learning contexts throughout their lifetime? Is there something about our formal educational systems that makes people believe that schools are the only place where people learn? How do we help people value lifelong learning?

Startup good and evil

“Software is eating the world,” said Marc Andreessen. This is truer every day, especially here in Seattle, as I watch the job postings pile up, the resumes pour in, and the old apartments get torn down to be replaced with expensive condos for the engineer elite. I go to parties and everyone talks about software, what they want to do with it, or if they aren’t in software, how they feel about their world being eaten by it.

As I sit here in this posh Pioneer Square coffee shop, surrounded by both poverty and a hundred and one startups full of wealthy engineers, I can’t help but wonder: what should developers and software companies in this neighborhood be doing with all of this power? Do they have any responsibility to use it for good? Or is profit motive enough?

One conclusion I’ve come to after founding a software startup of my own is that it’s certainly hard enough to profit that factoring in any other consideration is nearly impossible. There’s just not time once a business already exists.

There is a magical point at which there is time to consider good and evil, and that’s before someone has chosen an opportunity to pursue. We don’t have to choose the “evil” opportunities. We can choose the “good” ones. (We’ll save definitions of good and evil for another post).

Yes, knowing which opportunities are good and which are bad is hard. Unintended side effects abound. Finding opportunities that are both good and profitable is often an over-constrained problem. And what’s good to one person isn’t always good to another.

But we software people solve hard problems all the time, don’t we? Why can’t we solve the problem of finding profitable, “good” businesses?

One reason is that the software industry doesn’t like to talk about the opportunities it’s pursuing. We operate in stealth, fearing theft of our precious opportunities, and wait until they’re fully formed and ready to share before we get feedback from the world.

But is there really all that much risk of someone capitalizing on an idea before us? There are tons of profitable ideas. There are only so many people with the timing, resources, and risk profile to pursue them, and so few do.

Take the energy industry. Saving energy is “good” by some definitions, for us, for the planet, and for business. There are lots of ideas about how to do this, many of them shared publicly in the academic archive; many are even disclosed in patents. That industry seems to be doing just fine. Growing many businesses, with lots of competition, seems to help everyone, even if most businesses go under.

So the next time you’re thinking about starting a business, consider talking to people about your opportunity. Yes, learn about its profit potential, but also learn about its other potential. What harm will it do? What jobs will it kill? What joy will it bring? What is the full spectrum of side effects it will have, good or bad? Are there ways to tweak the opportunity to bend the curve for the better, while also encouraging profitability?

And don’t just ask the software people in your software bubble. Ask an ethicist. Solicit a sociologist. Argue with an anthropologist. Inquire of an information scientist. Persuade a political scientist. Try getting the perspective of every discipline rather than just your own. Does everyone see a win? Go for it. Does someone see an evil? At least you’ll know it, going in.

Privilege and CS1

With all of the recent discoveries about unequal access to MOOCs, the bias in the lecture format, and access to computer science education in general, I’ve been thinking a lot about privilege.

One of the places that privilege crops up most in my environment is around undergraduate admissions. In The Information School’s Informatics program, for example, we try really hard to account for all of the sources of bias in our evaluations: in our admissions form, our recruiting, and our decision processes. One of the most problematic parts of our process, however, is our reliance on UW’s CSE 142 grades.

Now, UW’s CSE 142 is an amazing accomplishment in many ways. The department behind it is one of only a few making significant efforts and inroads into diversifying its student body, and doing so in a way that best leverages our university’s strengths. I commend everyone in CSE who not only innovates in broadening participation, but is developing sustainable practices for keeping participation broad.

On the other hand, 142 is inescapably a “weed-out” course. There simply aren’t enough faculty to teach all of the students who want to be CS or Informatics majors, and so admissions committees rely heavily on student performance in 142 to predict future success in our programs. If we believe that the grades assigned in 142 reflect aptitude—and I believe they largely do—it seems entirely reasonable to use this as a significant factor in admissions.

And yet, if we dig deeper on what these grades actually reflect, I’m not convinced that introductory courses like 142 are really a fair test. The vast majority of students who succeed in the course came in with prior programming experience, and access to this prior experience is a highly privileged resource. The students who took a CS class in high school probably came from one of the few high schools in the United States that invest in computing education teachers and courses, which tend to be highly affluent. The students who had experiences from summer camps in middle and high school only enrolled because 1) they or their parents learned about them somehow, and 2) they could afford the registration fees. Students who were self-taught needed 1) free time to learn, rather than work part-time jobs, 2) access to the internet, and 3) someone to introduce them to the possibilities in computing. All of these are things that the vast majority of Americans don’t have.

There are some fantastic efforts to rectify these inequities. code.org, CS NYC, and even No Child Left Behind 2.0 are attempting to level the playing field. Until these efforts pay off, however, what do we do in the meantime? Is it reasonable to continue to just admit the mostly wealthy, mostly white and Asian, and mostly urban students who succeed because of their prior exposure to computing? And if it’s not, is it really fair to exclude some students from these groups to make room for a more diverse cohort, even though the more diverse cohort has less practice?

I don’t know. The U.S. Supreme Court seems to have an opinion on the matter, at least for college admissions broadly. But at some point, we need to have a serious discussion about the balance between likelihood of success in our programs, the diversity of our workforce, and the more advanced types of teaching that it might take to achieve both.

The black hole of software engineering research

Over the last couple of years as a startup CTO, I’ve made a point of regularly bringing software engineering research into practice. Whether it’s been bleeding edge tools, or major discoveries about process, expertise, or bug triage, it’s been an exciting chance to show professional engineers a glimpse of what academics can bring to practice.

The results have been mixed. While we’ve managed to incorporate much of the best evidence into our tools and practices, most of what I present just isn’t relevant, isn’t believed, or isn’t ready. I’ve demoed exciting tools from research, but my team has found them mostly useless, since they aren’t production ready. I’ve referred to empirical studies that strongly suggest the adoption of particular practices, but experience, anecdote, and context have usually won out over evidence. And honestly, many of our engineering problems simply aren’t the problems that software engineering researchers are investigating.

Why is this?

I think the issue is more than just improving the quality and relevance of research. In fact, I think it’s a system-level issue in the interaction between academia and industry. Here’s my argument:

  • Developers aren’t aware of software engineering research.
  • Why aren’t they aware? Most explicit awareness of research findings comes through coursework, and most computer science students take very little coursework in software engineering.
  • Why don’t they take a lot of software engineering? Software engineering is usually a single required course, or even just an elective. There also aren’t a large number of software engineering master’s programs through which much of the research might be disseminated.
  • Why are there so few courses? Developers don’t need a professional master’s degree in order to get high-paying engineering jobs (unlike other fields, like HCI, where professional master’s programs are a dominant way to teach practitioners the latest research and engage them in the academic community). This means less need for software engineering faculty, and fewer software engineering Ph.D. students.
  • Why don’t students need coursework to get jobs? There’s huge demand for engineers, even complete novice ones, and many of them know enough about software engineering practice through open source and self-guided projects to quickly learn software engineering skills on the job.
  • Why is it sufficient to learn on the job? Partly because the research doesn’t offer much beyond what practice already teaches: most software engineering research focuses on advanced automated tools for testing and verification. While these are part of software engineering practice, there are many other aspects of software engineering that researchers don’t investigate, limiting the relevance of the research.
  • Why don’t software engineering researchers investigate more relevant things? Many of the problems in software engineering aren’t technical problems, but people problems. There aren’t a lot of faculty or Ph.D. students with the expertise to study these people problems, and many CS departments don’t view social science on software engineering as computer science research.
  • Why don’t faculty and Ph.D. students have the expertise to study the people problems? Faculty and Ph.D. students ultimately come from undergraduate programs that inspire students to pursue a research area. Because there aren’t that many opportunities to learn about software engineering research, there aren’t that many Ph.D. students that pursue software engineering research.

The effect of this vicious cycle? There are few venues for disseminating software engineering research discoveries, and few reasons for engineers to study the research themselves.

How do we break the cycle? Here are a few ideas:

  1. Software engineering courses need to present more research. Show off the cool things we invent and discover!
  2. Present more relevant research. Show the work that changes how engineers do their job.
  3. Present and offer opportunities to engage in research. We need more REU students!

This obviously won’t solve all of the problems above, but it’s a start. At the University of Washington, I think we do pretty well with 1) and 3). I occasionally teach a course in our Informatics program on software engineering that does a good job with 2). But there’s so much more we could be doing in terms of dissemination and impact.

The watch

The Apple Watch on my wrist.

Yes, I bought the watch.

As an HCI researcher, I couldn’t resist knowing: what’s this thing good for?

I’ve worn it for 24 hours now and found that it’s good for many a small thing, but no big things. For example:

  • As someone who often has a day full of meetings at random times and places, the killer app for me is being able to glance at my wrist to see where I’m supposed to be, without awkwardly pulling my phone out in the middle of a meeting. It’s hard to overstate how valuable this is to me. It turns the social meaning of pulling out my phone to check my calendar from “I’m checking email/browsing the internet/texting a friend/and generally disinterested in this conversation” into a brief glance that means “I think I have somewhere to be, but I’m listening”.
  • When I’m driving, I frequently have thoughts that I’ll lose if I don’t write them down. This creates a critical dilemma: do I grab my phone and try to have Siri transcribe it, but risk my life and a traffic ticket, or risk losing the thought? With the watch, I can dictate thoughts hands free with a quick flick of my wrist and a “Hey Siri, remind me to…”. This is particularly handy for OmniFocus, where I externalize all of my prospective memory.
  • Text messages are much less disruptive socially. No more loud phone vibrations or accidental sounds to disrupt my coworkers. Instead, the watch tells my wrist and I glance down briefly.
  • This is the best UX for Uber. “Hey Siri, open Uber”, tap the request button, and wait 5 minutes. Yes, you can do it on a phone, where you can get far more information, but I’m usually using Uber in unfamiliar places where I don’t necessarily know how safe it is to pull out a big bright iPhone and tap on the screen for a minute. This makes me feel safer.
  • This one is completely idiosyncratic to me, but I absolutely love the Alaska Airlines glance view, which simply shows a countdown of the number of days until my next flight. I hate flying, and somehow, being able to quickly see how many more days of freedom I have before I climb into a tiny box and suffer sinus pain, dry air, and cranky people, gives me a sense of freedom and appreciation for being a land mammal.

So far, everything else is of little value. I don’t like reading Twitter on the device and certainly don’t want to feel every tweet on my wrist. I’m active enough and haven’t ever wanted any device to support exercise. I get way too much email to want to triage on the device for very long. Most of the third party apps aren’t that useful yet (although as companies learn what information and services are most valuable to their users, I believe they’ll improve in their utility).

As a 1.0 device, it has all of the problems you might expect. It’s slow at times, while it talks to my iPhone. The navigation model is clunky and inconsistent. Sometimes Siri hears me say her name, sometimes she doesn’t. These will probably all be ironed out in a version or two, just as with most devices.

These issues aside, if you look at the list of benefits to me above, they fall into some unexpected categories. I thought the value to me would mostly be getting information faster, but most of the value is actually in reducing friction in social interactions and a sense of safety in various situations. This is not a device to get digital stuff done. It’s a device to get digital stuff out of the way.

I am tenured

I am now a tenured professor.

After working towards this for 15 years, it’s surreal to type that simple sentence. When I first applied to Ph.D. programs in 2001, it felt like a massive, nearly unattainable goal, with thousands of little challenges and an equal number of opinions about how I should approach them. Tenured professors offered conflicting advice about how to survive the slog. I watched untenured professors crawl towards the goal, working constant nights and weekends, only to get their grant proposal declined or paper rejected, or, worst of all, their tenure case declined. I had conversations with countless burned-out students, turned off by the relentless, punishing meritocracy, regretting the time they put into a system that doesn’t reward people, but ideas.

Post-tenure, what was a back-breaking objective has quickly become a hard-earned state of mind. Tenure is the freedom to follow my intellectual curiosity without consequence. It is the liberty to find the right answers to questions rather than the quick ones. It’s that first step out of the car, after a long drive towards the ocean, beholding the grand expanse of unknown possibilities knowable only with time and ingenuity. It is a type of liberty that exists in no other profession, and now that I have and feel it, it seems an unquestionably necessary part of being an effective scientist and scholar.

I’ve talked frequently with my colleagues about the costs and benefits of tenuring researchers. Before I had tenure, it always seemed unnecessary. Give people ten-year contracts, providing enough stability to allow for exploration, but reserving the right to boot unproductive scholars. Or perhaps do away with it altogether, requiring researchers to continually prove their value, as untenured professors must. A true meritocracy requires continued merit, does it not?

These ideas seem naive now. If I were to lose this intellectual freedom, it would constrain my creativity, politicize my pursuits, and in a strange way, depersonalize my scholarship, requiring it to be acceptable to my colleagues, in all the ways that it threatened to do, and sometimes did, before tenure. Fear warps knowledge. Tenure is freedom from fear.

For my non-academic audience, this reflection must seem awfully privileged. With or without tenure, professors have vastly more freedom than really any other profession. But having more freedom isn’t the same as having enough. Near absolute freedom is an essential ingredient of the work of discovery, much like a teacher must have prep time, a service worker must have lunch breaks, an engineer must have instruments, and a doctor must have knowledge.

And finally, one caveat: tenure for researchers is not the same as tenure for teachers. Freedom may also be an ingredient for successful teaching, in that it allows teachers to discuss unpopular ideas, and even opinions, without retribution. But it may be necessary for different reasons: whereas fear of retribution warps researchers’ creation of knowledge, it warps teachers’ dissemination of that knowledge.