My sabbatical research pivot

Since I started research back in 1999 as an undergraduate, I’ve always been intrigued by the goal of helping people write code more productively. Sometimes I ran studies that tried to identify barriers to their productivity. Sometimes I made tools that helped them navigate faster, debug faster, test faster, and communicate faster. Every one of my research efforts was aimed explicitly at speed: the more a developer can do per unit time, the better off the world is, right?

Then something changed. I founded a software startup and led its technical and product endeavors. And I watched: how much did developer productivity matter? What influenced the quality of the software? What was the real value of faster developers?

In my experience, faster development didn’t matter much. Developers’ speed mattered somewhat, but only to the extent that we made effective product design choices based on a valid understanding of customer, user, and stakeholder needs. It didn’t matter how fast they were moving if they were moving in the wrong direction. And developer tools—whether a language, API, framework, IDE, process, or practice—mattered only to the extent that developers could learn to use these tools and techniques effectively. Many times they could not. Rather than faster developers, I needed better developers, and better developers came through better and faster learning.

Furthermore, I couldn’t help but wonder: what part of this job is fulfilling to them? It certainly wasn’t writing a lot of code. There was always more code to write. In fact, it was the moments they weren’t coding—when they were reading about a new framework, picking up a new language, trying a new process—that they enjoyed most. These were moments of empowerment and moments of discovery. These were moments of learning. Around the same time, my student Paul Li was beginning to investigate software engineering expertise, and finding that, much as I had experienced, it was the quality of a developer’s code, and their ability to learn new coding skills, that were critical facets of great engineers. Better learning allows developers to be not only more productive and more effective, but also more fulfilled. Better learning makes better developers, who envision better tools, better processes, and better ideas. As obvious as it should have been to someone with a Ph.D. in HCI, it was the human in this equation that was the source of productivity, not the computer. Like most things computational, developer tools are garbage in, garbage out.

After I stepped down as AnswerDash CTO and began my post-tenure sabbatical, it became clear I had to pivot my research focus. No more developer tools. No more studies of productivity. I’m now much less interested in accelerating developers’ work, and much more interested in shaping how developers (and developers-in-training) learn and shape their behavior.

This pivot from productivity to learning has already had profound consequences for my research career. For a long time, I’ve published in software engineering venues that are much more concerned with productivity than learning. That might mean I have less to say to that community, or that I start contributing discoveries that they’re not used to reading about, evaluating, or prioritizing. It means that I’ll be publishing more in computing education conferences (like ACM’s International Computing Education Research conference). It means I’ll be looking for students who are less interested in designing tools that help them code faster, and more interested in designing tools that help developers of all skill levels code better. And it means that my measures of success will no longer be about the time it takes to code, but how long it takes to learn to code and how well someone codes.

This pivot wasn’t an easy choice. Computing education research is a much smaller, much less mature, and much less prestigious community within computing research. There’s less funding, fewer students, and honestly, the research is much more difficult than HCI and software engineering research, because measuring learning and shaping how people think and behave is more difficult than creating tools. Making this pivot means making real sacrifices in my own professional productivity. It means seeing the friends I made in the software engineering research community less often. It means tackling much trickier, more nuanced problems, and having to educate my doctoral students in a broader range of disciplines (computer science, social science, and learning science).

But here’s the upside: I believe my work will be vastly more important and impactful in the arc of my career. I won’t just be making an engineer at Google ship product faster, I’ll be inventing learning technologies and techniques that make the next 10,000 Google engineers more effective at their jobs. I’ll be helping to transform the hundreds of thousands of horrific experiences that people have learning to code into more fulfilling and empowering ones, potentially giving the world an order of magnitude more capable engineers. Creating a massive increase in the supply of well-educated engineers might even slow down some of the unsustainable growth of software engineering salaries, which are at least part of the unsustainable gentrification of many of our great American cities. And most importantly, I’ll be helping to give everyone who learns to code the belief that they can succeed at learning something that is shaping the foundational infrastructures of our societies.

I’ll continue to be part of the software engineering research community. But don’t be surprised if my work begins to focus on helping developers write better code rather than simply write code faster. I’ll continue to be part of the HCI research community, but you’ll see my work focus on interactive learning technologies that accelerate learning, promote transfer, and shape identity. And for now, you’ll see me invest much more in building the nascent community of computing education researchers, helping it blossom into the field it needs to become to transform society’s ability to use and reason about code as it weaves itself deeper into our world.

I’m so excited to engage in this new trajectory, and hope to see many of you join me!

The service implications of interdisciplinarity

I am what academics like me like to call an “interdisciplinary researcher”. This basically means that rather than investigate questions within the traditional boundaries of established knowledge, I reach across boundaries, creating bridges between disparate areas of work. For example, the research questions I try to answer generally span computer science, psychology, design, education, engineering, and organizational science. I use theories from all of these, I build upon the knowledge from all of these, and occasionally I even contribute back to these areas of knowledge.

There are some wonderful things about interdisciplinary work, and some difficult things. The wonderful things mostly stem from being able to usefully apply knowledge to the world, synthesizing ideas from multiple disciplines to the problems of today. This is possible because I don’t have the duty to a discipline to deepen its understanding. Instead, my charge is to invent technologies, policies, methods and processes that embody the best ideas from more basic research. In a way, interdisciplinary research is necessary for those basic research discoveries to ever make it into the world. This is highly rewarding because I get to focus on everyday problems, learn a ton about many different fields of research, and can easily show people in my life how my work is relevant to theirs.

Where interdisciplinary work gets difficult is in the nitty gritty of academic life. Because I know a little bit about a lot of things, I get asked to participate on a lot of committees. I get invitations to software engineering committees, computing education committees, and HCI committees, since they all touch on aspects of people’s interactions with code. I get invited to curriculum committees more often because my work seems more directly applicable to what we teach (because it is). People from industry contact me more often because they can see how my work informs their work, more so than the basic research.

And of course, my time isn’t infinite, so I have to pick and choose which of these bridges to make. I find myself with some difficult choices: should I create a link to industry, or bridge two fields of academia? Should I invest in disseminating a discovery through a startup, an academic conference talk, or YouTube video? Or should I just focus on my research, slowly transforming my fuzzy interdisciplinary research area into something more disciplinary, with all of its strengths and weaknesses?

Someone probably does research on these research policy questions. Maybe they can help!

The black hole of software engineering research

Over the last couple of years as a startup CTO, I’ve made a point of regularly bringing software engineering research into practice. Whether it’s been bleeding edge tools, or major discoveries about process, expertise, or bug triage, it’s been an exciting chance to show professional engineers a glimpse of what academics can bring to practice.

The results have been mixed. While we’ve managed to incorporate much of the best evidence into our tools and practices, most of what I present just isn’t relevant, isn’t believed, or isn’t ready. I’ve demoed exciting tools from research, but my team has found them mostly useless, since they aren’t production ready. I’ve referred to empirical studies that strongly suggest the adoption of particular practices, but experience, anecdote, and context have usually won out over evidence. And honestly, many of our engineering problems simply aren’t the problems that software engineering researchers are investigating.

Why is this?

I think the issue is more than just improving the quality and relevance of research. In fact, I think it’s a system-level issue in the interaction between academia and industry. Here’s my argument:

  • Developers aren’t aware of software engineering research.
  • Why aren’t they aware? Most explicit awareness of research findings comes through coursework, and most computer science students take very little coursework in software engineering.
  • Why don’t they take a lot of software engineering? Software engineering is usually a single required course, or even just an elective. There also aren’t a large number of software engineering masters programs, through which much of the research might be disseminated.
  • Why are there so few courses? Developers don’t need a professional masters degree in order to get high-paying engineering jobs (unlike other fields, like HCI, where professional masters programs are a dominant way to teach practitioners the latest research and engage them in the academic community). This means less need for software engineering faculty, and fewer software engineering Ph.D. students.
  • Why don’t students need coursework to get jobs? There’s huge demand for engineers, even complete novice ones, and many of them know enough about software engineering practice through open source and self-guided projects to quickly learn software engineering skills on the job.
  • Why is it sufficient to learn on the job? Most software engineering research focuses on advanced automated tools for testing and verification. While this is part of software engineering practice, there are many other aspects of software engineering that researchers don’t investigate, limiting the relevance of the research.
  • Why don’t software engineering researchers investigate more relevant things? Many of the problems in software engineering aren’t technical problems, but people problems. There aren’t a lot of faculty or Ph.D. students with the expertise to study these people problems, and many CS departments don’t view social science on software engineering as computer science research.
  • Why don’t faculty and Ph.D. students have the expertise to study the people problems? Faculty and Ph.D. students ultimately come from undergraduate programs that inspire students to pursue a research area. Because there aren’t that many opportunities to learn about software engineering research, there aren’t that many Ph.D. students that pursue software engineering research.

The effect of this vicious cycle? There are few venues for disseminating software engineering research discoveries, and few reasons for engineers to study the research themselves.

How do we break the cycle? Here are a few ideas:

  1. Software engineering courses need to present more research. Show off the cool things we invent and discover!
  2. Present more relevant research. Show the work that changes how engineers do their job.
  3. Present and offer opportunities to engage in research. We need more REU students!

This obviously won’t solve all of the problems above, but it’s a start. At the University of Washington, I think we do pretty well with 1) and 3). I occasionally teach a course in our Informatics program on software engineering that does a good job with 2). But there’s so much more we could be doing in terms of dissemination and impact.

I am tenured

I am now a tenured professor.

After working towards this for 15 years, it’s surreal to type that simple sentence. When I first applied to Ph.D. programs in 2001, it felt like a massive, nearly unattainable goal, with thousands of little challenges and an equal number of opinions about how I should approach them. Tenured professors offered conflicting advice about how to survive the slog. I watched untenured professors crawl towards the goal, working constant nights and weekends, only to get their grant proposal declined or paper rejected, or worst of all, tenure case declined. I had conversations with countless burned out students, turned off by the relentless, punishing meritocracy, regretting the time they put into a system that doesn’t reward people, but ideas.

Post-tenure, what was a back-breaking objective has quickly become a hard-earned state of mind. Tenure is the freedom to follow my intellectual curiosity without consequence. It is the liberty to find the right answers to questions rather than the quick ones. It’s that first step out of the car, after a long drive towards the ocean, beholding the grand expanse of unknown possibilities knowable only with time and ingenuity. It is a type of liberty that exists in no other profession, and now that I have and feel it, it seems an unquestionably necessary part of being an effective scientist and scholar.

I’ve talked frequently with my colleagues about the costs and benefits of tenuring researchers. Before having tenure, it always seemed unnecessary. Give people ten-year contracts, providing enough stability to allow for exploration, but reserving the right to boot unproductive scholars. Or perhaps do away with it altogether, requiring researchers to continually prove their value, as untenured professors must. A true meritocracy requires continued merit, does it not?

These ideas seem naive now. If I were to lose this intellectual freedom, it would constrain my creativity, politicize my pursuits, and in a strange way, depersonalize my scholarship, requiring it to be acceptable by my colleagues, in all the ways that it threatened to do, and sometimes did, before tenure. Fear warps knowledge. Tenure is freedom from fear.

For my non-academic audience, this reflection must seem awfully privileged. With or without tenure, professors have vastly more freedom than really any other profession. But having more freedom isn’t the same as having enough. Near absolute freedom is an essential ingredient of the work of discovery, much like a teacher must have prep time, a service worker must have lunch breaks, an engineer must have instruments, and a doctor must have knowledge.

And finally, one caveat: tenure for researchers is not the same as tenure for teachers. Freedom may also be an ingredient for successful teaching, in that it allows teachers to discuss unpopular ideas, and even opinions, without retribution. But it may be necessary for different reasons: whereas fear of retribution warps researchers’ creation of knowledge, for teachers it warps the dissemination of that knowledge.

Startup life versus faculty life

As some of you might have heard, this past summer I co-founded a company based on my former Ph.D. student Parmit Chilana’s research on LemonAid, along with her co-advisor Jake Wobbrock. I’m still part time faculty, advising Ph.D. students, co-authoring papers, and chairing a faculty search committee, but I’m not teaching, nor am I doing my normal academic service load. My dean and the upper administration have been amazingly supportive of this leave, especially given that it began in the last year of my tenure clock.

This is a fairly significant detour from my academic career, and I expected the makeup of daily activities that I’m accustomed to as a professor would change substantially. In several interesting ways, I couldn’t have been more wrong: doing a startup is remarkably similar to being a professor in a technical domain, at least with respect to the skills it requires. Here’s a list of parallels I’ve found striking as a founder of a technology company:

  • Fundraising. I spend a significant amount of my time seeking funding, carefully articulating problems with the status quo and how my ideas will solve these problems. The surface features of the work are different—in business, we pitch these ideas in slide decks and elevator pitches, whereas in academia, we pitch them as NSF proposals and DARPA white papers—but the essence of the work is the same: it requires understanding the nature of a problem well enough that you can persuade someone to provide you resources to understand it more deeply and ultimately address it.
  • Recruiting. As a founder, I spend a lot of time recruiting talent to support my vision. As a professor, I do almost the exact same thing: I recruit undergrad RAs, Ph.D. students, and faculty members, trying to convince them that my vision, or my school or university’s vision, is compelling enough to join my team instead of someone else’s.
  • Ambiguity. In both research and startups, the single biggest cognitive challenge is dealing with ambiguity. In both, ambiguity is everywhere: you have to figure out what questions to ask, how to answer them, how to gather data that will inform these questions, how to interpret the data you get to make decisions about how to move forward. In research, we usually have more time to grapple with this ambiguity and truly understand it, but the grappling is of the same kind.
  • Experimentation. Research requires a high degree of iteration and experimentation, driven by carefully formed hypotheses. Startups are no different. We are constantly generating hypotheses about our customers, our end users, our business plan, our value, and our technology, and conducting experiments to verify whether the choice we’ve made is a positive or negative one.
  • Learning. Both academia and startups require a high degree of learning. As a professor, I’m constantly reading and learning about new discoveries and new technologies that will change the way I do my own research. As a founder, and particularly as a CTO, I find myself engaging in the same degree of constant learning, in an effort to perfect our product and our understanding of the value it provides.
  • Teaching. The teaching I do as a CTO is comparable to the teaching I do as a Ph.D. advisor in that the skills I’m teaching are less about specific technologies or processes, and more about ways of thinking about and approaching problems.
  • Service. The service that I do as a professor, which often involves reviewing articles, serving on curriculum committees, and providing feedback to students, is similar to the coffee chats I have with aspiring entrepreneurs, the feedback I provide to other startups about why I do or don’t want to adopt their technology, and the discussions I have with Seattle area angels and VCs about the type of learning that aspiring entrepreneurs need to succeed in their ventures.

Of course, there are also several sharp differences between faculty work and co-founder work:

  • The pace. In startups, time is the scarcest resource. There are always way too many things that must be done, and far too few people and hours to get things done. That makes triage and prioritization the most taxing and important parts of the work. In research, when there’s not enough time to get something done, there’s always the freedom to take an extra week to figure it out. (To my academic friends and students: it may not feel like you have extra time, but you have much more flexibility than those in business do.)
  • The outcomes. The result of the work is one of the most obvious differences. If we succeed at our startup, the result will be a slight shift in how the markets we’re targeting will work and hopefully a large profit demonstrating the value of this shift. In faculty life, the outcomes come in the form of teaching hundreds, potentially thousands, of students lifelong thinking skills, and in producing knowledge and discoveries that have lasting value to humanity for decades, or even centuries. I still personally find the latter kinds of impact much more valuable, because I think they’re more lasting than the types of ephemeral changes that most companies achieve (unless you’re a Google, Facebook, or Twitter).
  • The consequences. When I fail at research, at worst it means that a Ph.D. student doesn’t obtain the career they wanted, or taxpayers have funded some research endeavor that didn’t lead to any new knowledge or inventions. That’s actually what makes academia so unique and important: it frees scholars to focus on truth and invention without the artificial constraint of time. If I fail at this startup, investors have lost millions of dollars, several people will lose their jobs, and I’ll have nothing to show for it (other than blog posts like this!). This also means that it’s necessary to constantly make decisions on limited information with limited confidence.

Now, you might be wondering which I prefer, given how similar the skills required in the two jobs are. I think this is actually a matter of very personal taste that has largely to do with the form of impact one wants to have. You can change markets or you can change minds, but you generally can’t change both. I tend to find it much more personally rewarding to change minds through teaching and research, because the changes feel more permanent and personal to me. Changing a market is nice and can lead to astounding financial rewards and a shift in how millions of people conduct a part of their lives, but this change feels more fleeting and impersonal. I think I have a longing to bring meaning and understanding to people’s lives that’s fundamentally at odds with the profit motive.

That said, I’m truly enjoying my entrepreneurial stint. I’m learning an incredible amount about capitalism, business, behavioral economics, and the limitations of research and industry. When I return as full time faculty, I think I’ll be in a much better position to do the things that only academics can do, and argue for why universities and research are of critical importance to civilization.

Reflections on conference papers and journals

For the first time in my academic career this week, I was working on a journal paper and a conference paper at the same time. This wasn’t entirely intentional; both of these papers were going to be CHI papers, but as the results and writing for one of them materialized, it became clear that not only was the audience not a fit, but I actually couldn’t fit all of the important results into the 10-page SIGCHI format. This realization, and the fact that I was working on both simultaneously, led to several realizations about how the two kinds of submissions differ.

First and foremost, the lack of a strict length restriction on the journal paper was surprisingly freeing. While on the CHI paper, every other discussion with my student was about what to cut and what to keep, discussions about the journal paper were much more about what details and results were missing. Obviously, there are advantages to each: with the CHI paper we were probably forced to be much more concise and selective about the most significant results; similarly, the journal paper was slightly more verbose than it needed to be, because I didn’t have the threat of desk rejection to force more careful editing. At the same time, there were many interesting things that we had to leave out of the CHI paper that could have fit into just one additional page. With the journal paper, the question was not “what’s most significant?” but “is this complete?”

The length differences also had a significant effect on how much space we gave to details necessary for reproducibility. For the journal paper, I felt like our task was to enable other researchers to understand exactly what we did and how we did it. With the CHI paper, our task was to provide enough detail for reviewers to see the rigor of what we did, but the amount of detail we ended up including really wasn’t enough to actually reproduce our study. In the long term, this is not good science.

Although the journal paper didn’t have a deadline, I did impose one on my lab in order to align with the end of summer, since the undergrad research assistants on the paper would have to resume classes (as would I). The deadline worked well enough to motivate us to finish the paper, but it also freed us to take an extra day or two to read through the manuscript a few extra times, improve some of the figures, and verify some of the results that we felt had been produced too hastily. The CHI paper, in contrast, was rushed, as most CHI submissions are. There was just enough time to edit thoroughly yesterday and submit today, but there’s an extensive list of to-dos that we have if the paper is accepted. Sure, we could do them now, but why not wait until reviewers provide more feedback? With the journal paper, we submitted when we felt it was ready.

Of course, the biggest difference between the two submissions has yet to come. In November, we’ll get CHI reviews back and likely know with some certainty whether the paper will be accepted or rejected. There will be no major revisions, no guidance from reviewers about what they’d like to see added or changed, and certainly no opportunity for major improvements if it is accepted. Instead, the reviews will focus on articulating a rationale for acceptance or rejection. With the journal paper, I’ll (hopefully) get three extensive reviews in a few months on what is missing or wrong with the paper and what the reviewers would like me to change in a revision. The process will likely take longer, but in trade, I hope the paper will be much better than the original manuscript.

One of these processes is designed for speed, the other is designed for quality. I’ll let you guess which is which. And let me be clear: I’m a big fan of conferences. Most of my work is published at major HCI and software engineering venues, not journals, and I truly enjoy the fact that nearly everyone in our community rallies together at the same time of year to contribute our latest and greatest for review. But as someone who has the freedom to publish in either, I’m really starting to question whether the average conference paper can actually be of comparable quality to the average journal paper. There might just be inherent limits to a review process that is optimized for selecting papers for presentation rather than improving them.

Of course, this isn’t a necessary dichotomy. I’ve talked to many people in my research community about blending the two. For example, if we simply had journals of infinite capacity and no conference papers, and instead put all of our reviewing effort into our journals, we could easily design an annual conference where people present the best work from recent journal publications (and work in progress, as we already do). In fact, CHI already lets ToCHI authors present their recently published papers, so we’re part way there. With changes like this, we might find a nice balance between a review process designed for improving papers and a conference designed for fostering discussion about them.

UW MSR Summer Institute on Crowdsourcing Personalized Online Education

For the past three days I’ve been at the 2012 UW MSR Summer Institute, which is an annual retreat on an emerging research topic. This year’s topic was “Crowdsourcing Personalized Online Education”. What this really meant in practice was two things: what is the future of education, and how can we leverage the connectedness and observability of learning online? The workshop was mainly talks, but there were an impressive number of great speakers and attendees that kept everyone engaged.

There are a lot of important things that I took away from all of these discussions and talks:

  • The first thing that was apparent is just how different the motives and values are in the different communities that attended. The majority of the attendees were coming from a computing perspective, with primary interests in creating new, more powerful, and more effective learning technologies. There were a smaller number of learning scientists, with interests in explaining learning and devising better measurements of learning, much more rigorously than any of the computing folks had done. Two representatives from the Gates Foundation also came briefly, and it was clear that their primary interests were much less in specific technologies and much more in creating educational infrastructure and new, sustainable markets of educational technologies. There were also representatives from Khan Academy and Coursera, who were broadly interested in providing access to content, and mechanisms to enable experts to share content. My view on what’s really new behind all of this press on online learning is that computing researchers are newly interested in learning and education: almost everything else, except for the scale of access, has been done in online learning before.
  • Jennifer Widom, Andrew Ng, and Scott Klemmer (all at Stanford) talked about their experiences creating MOOCs for their courses. The key takeaway message is that it is very time consuming to create the course, with each spending countless hours recording high-quality lectures, negotiating rights for copyrighted material, and working out bugs in the platform. All of them implied that running the course the first time was more than a full time job. On the other hand, many were confident it would take much less time for later offerings and had confidence that most aspects of the class can scale to be arbitrarily large (even design critiques, in Scott Klemmer’s case, through calibrated peer assessment). The one part that doesn’t scale is student-specific concerns (for example, students getting injured and needing an extension on an assignment). Scott also suggested that every order of magnitude increase in the number of students demands an order of magnitude increase in the perfection of the materials (because there are so many more eyes on the material), but again, this is a decaying cost, assuming the materials don’t change frequently.
  • In many of the conversations I had around how MOOCs might change education, many faculty believed that the sheer availability and accessibility of instructional content would shift the responsibilities of instructors. Today, most individual instructors are responsible for making their own materials, making them accessible, and then using them to teach. In a world where great materials are available for free, these first two responsibilities disappear. The new job of a higher ed instructor may therefore be much less about designing materials and providing access to them, and much more about correcting misconceptions, motivating students, designing good measurements, and building learning communities. One could argue that this is an overall improvement (and also that it actually mirrors the way that textbooks work, which are written by a small number of experts and used as-is by instructors).
  • Interestingly, most of the MOOC teachers reported that the social experience of students online was critical, including forum conversations, ad hoc study groups in different cities around the world, and peer assessments. This might quell a lot of the concerns that higher ed teachers have about the loss of interaction in online courses—it might just be that the interaction shifts from instructor/student interaction to student/student interaction and student/intelligent tutor interaction. Some of the preliminary data suggests that students actually greatly prefer this, since they don’t get much instructor interaction already, but they do get much more student/student interaction than in a traditional co-located course. This might therefore be an improvement over traditional lecture-based classes, though not over classes in which teachers interact closely with students (such as small studio courses).
  • No one knows what will happen to the education market, including the people running Khan Academy and Coursera. However, there were some predictions. First, these platforms are going to make it so easy to share and access content, in the same way that the web has for everything else, that finding and choosing content is going to become a critical challenge for students. Therefore, one new role that instructors might play is in selecting and curating content in a way that is coherent and personalized for the populations that they teach.
  • Most of the interests related to crowdsourcing are in (1) enabling classes to be taught at scale (by finding ways to free instructors and TAs from having to grade and assess all of the work), (2) improving the effectiveness, efficiency, and/or engagement of learning activities, or (3) creating new opportunities for informal learning, such as through oDesk or Duolingo. Researchers are thinking about how to use data to optimize the sequence of instruction, give just the right hints to correct misconceptions, and select tasks that are challenging but not too challenging. In my view, this is leading to a renewed interest in intelligent tutoring systems.
  • As usual, most of this new research work suffers from a lack of grounding in, and leveraging of, prior literature in the learning sciences and intelligent tutoring systems. There is tons of research on all of these challenges that computing researchers are tackling, but I don’t see them really using any of the work. This happens over and over in computing research, since the interests are often in creating new things and not in understanding the things themselves. I was impressed, however, by how much Andrew Ng had leveraged findings in the learning sciences to support certain design decisions in Coursera.
  • There was a big undercurrent of data science at the workshop. Everyone was excited about big data sets and how they might be leveraged to improve learning technologies. Most of the methods reported were fairly primitive (A/B testing, retention rates), but I’m hoping this new energy behind learning will lead to much better methods and tools for educational data mining.

Phew! Sorry for the lack of coherence here. We covered a lot of ground in 2.5 days and this is just a sliver of it.

ageism in academia

I have a young face, especially for a professor. Other faculty assume I’m an undergrad, Ph.D. students assume I’m an undergrad, even undergrads assume I’m an undergrad. In some ways this is nice. I can be stealth on campus, blending in with the rest of the students. When I’m teaching, I have to earn my authority rather than getting it simply because I look wise (and I like earning things). And the undergrads I teach probably relate to me differently simply because I look their age, even though I’m a decade older than most of them.

As a researcher, however, looking young can feel like a disadvantage, since the wisdom and knowledge one has typically grows with age (at least in academia). Sometimes I feel like people discount my opinions because I look young, perhaps because my face communicates inexperience. Sometimes I feel like I have to compensate by being extraordinarily articulate or insightful, just to get people to listen to me. At conferences, people always ask me what I’m studying, who my advisor is, or where I go to school. I suspect that when people who don’t know me see me at a conference, they think, “just another student” instead of “I wonder who that important researcher is” like I do when I see older researchers at conferences.

Not that this has held me back. If anything, it means that any success I’ve had has been earned, which makes it all the more rewarding. And it shows that academia is still indeed some form of meritocracy, where it is the ideas and knowledge that one produces that ultimately shape our reputations. In fact, when I’m 60, I’ll probably look like I’m in my 40s (as my parents do), which will help me avoid all of the ageism directed at older professors, so any disadvantage I have now might turn into an advantage later in life. That should enable me to have a nice long career into my 70s (assuming my brain still works!).

Ultimately, I feel lucky that ageism is the only real discrimination that I face. There are faculty who face ageism, sexism, and racism, which seems like an incredible amount of bias for one person to struggle against. Facing a bit of ageism here and there makes me empathize with people who face even more discrimination and makes it easier for me to avoid assuming anything about a person until I talk with them. And it helps me respect their successes even more, because I have a tiny glimpse into what it took to earn them.

What do professors do all day?

I get this question a lot from students, friends, and family, but I’m never quite sure how to answer it. Research? Teaching? Service? Those don’t really capture what the job is like. So at the end of today, I decided to write everything down in a big long list, capturing every single goal accomplished, every single e-mail I sent, and every single conversation I had. See if you can find the research!

  • 6:30-6:31 Woke up at 6:30 am still feeling under the weather and with another recurring corneal abrasion in my right eye. Applied eye-drops.
  • 6:32-6:40 Read e-mail in bed for a few minutes.
  • 7:00-7:15 Read more e-mail while eating breakfast, set meeting with tech transfer department for next week.
  • 7:15-7:30 Loaded CHI PC chairing site and assigned a few 2ACs
  • 7:35-7:50 Left for work and arrived in the central garage
  • 7:50-8:00 Got coffee from Mary Gates cafe and briefly said hi to Karen Fisher and Joe Tennis in the hallway.
  • 8:00-8:02 Sent e-mail about paper conflicts for CHI PC for assigning 2ACs.
  • 8:00-8:05 Scheduled doctor’s appointment about eye.
  • 8:05-8:35 Spent 30 minutes assigning 2ACs to 25 papers
  • 8:35-9:10 Spent 35 minutes crafting epic e-mail to junior Ph.D. student who is worried about his Ph.D. topic and unsure about next steps.
  • 9:10-9:30 Revised slides for lecture for 18 minutes, improving clarity over last year’s version.
  • 9:30-9:32 Responded to followup e-mail from director of corporate relations about corporate connections I made at last week’s research fair.
  • 9:32-9:35 Responded to e-mail about affiliate status renewal for affiliate faculty
  • 9:35-9:45 Spent 10 more minutes assigning 2AC reviewers for CHI papers
  • 9:45-10:10 Spent more time improving slides for lecture
  • 10:20-11:00 Left for class and lectured for 30 minutes about limitations and assumptions in designs
  • 11:00-11:20 Led activity using simulated impairments on mobile devices to elicit assumptions
  • 11:20-12:15 Led activity in which teams brainstormed assumptions in their own design projects
  • 12:15-12:25 Finished class at 12:15 and spent 10 minutes eating lunch
  • 12:25-12:30 Responded to e-mail about planning dub retreat
  • 12:30-12:35 Responded to e-mail about when 2ACs need to finish their reviews by
  • 12:35-12:40 Responded to e-mail about corporatizing universities
  • 12:40-12:45 Spoke briefly with Scott Barker about how many in-class hours the new capstone should include
  • 12:45-12:50 E-mails to student who wants into INFO 461 next quarter
  • 12:45-1:15 Drove to Kirkland to work at library to avoid traffic from 1
  • 1:15-1:18 Responded to e-mail approving new member to EUSES consortium
  • 1:18-3:18 Copied proposal draft to hard drive for writing and worked on diagrams for Cyberlearning proposal draft.
  • 3:18-3:23 Driving to car line
  • 3:23-3:30 Waiting in car line
  • 3:30-4:20 Getting snacks for Elle before swim practice
  • 4:20-4:35 Writing e-mail to colleagues about frustration around methodological rifts among faculty
  • 4:35-4:45 Taking Elle into swim practice
  • 4:45-4:58 Writing student services staff about Spring capstone event
  • 4:58-5:00 Replying to student about spring capstone team
  • 5:00-5:08 Replying to request about all school meeting participation conflicting with my final exam schedule
  • 5:08-5:15 Writing co-PI about updated figures in proposal draft
  • 5:15-5:25 Writing this list
  • 5:25-5:30 Call with ex about Thanksgiving plans
  • 5:35-5:41 Finishing this list
  • 5:41-5:48 Editing this list and pressing the publish post button.

future me gets all the attention

According to my OmniFocus database, I have 1,272 active to do items spanning 197 projects and 93 zip files. Their due dates range from tomorrow to retirement. Every day, I open OmniFocus and it tells me what to do today, so I don’t have to worry about tomorrow. And yet, I have this nagging feeling of dissociation from the present. I think past me planning for future me has left present me with nothing to do.

Case in point: last Friday was a miscellaneous day, where I took all of those to dos that I’d pushed off over the past couple of weeks and got them done. I had a nice tidy list of 17 of them, each with carefully chosen deadlines and terse but effective notes reminding me what I was doing when I last worked on it, why it was important, and what was left to finish. I spent the day firing off e-mails, editing stale paragraphs in paper submissions, submitting travel forms, planning grant spending, and setting up Trac for my class in the fall. Yet by the end of the day, I wasn’t really sure what I’d accomplished. When my girlfriend asked me about the highlights of my day, I was at a complete loss. What had I done? Was any of it fun? Was it frustrating? Were there any memorable moments at all? At least to my conscious self, it felt like I’d really only done one thing that day: clear my to do list. The rest was a blur. Past me had present me so prepared to mechanistically work through those 17 items, I hadn’t even formed memories of the work I’d done or the emotions I’d felt. I was a to do bot whose sole mission was placing checkmarks on a virtual list, all in service of future me’s ever-growing workload.

Now that I reflect on this, I think what’s going on is that I’ve separated all of the thinking and deciding about what I should be doing now from the now itself. At least at work, I rarely find a moment where deciding what to do and actually doing it co-occur in any meaningful way. This is never clearer than on the weekends, when I try to let present me make the decisions instead of past me. Past me had nothing to say about brunch this morning; he didn’t give me a list of chores. It was present me who got to play with my 16 waking hours and decide how to break them down and how to fill them. And being so involved in deciding about today has led to so many wonderful memories: the arugula hollandaise risotto benedict, the peach dutch baby pancakes, writing this blog post at Uptown Espresso in the Belltown sunshine with my girlfriend. Aren’t these the kind of memories and experiences that life is about?

I suppose it’s a tradeoff, like anything else. What’s more important at work, getting things done or remembering getting things done? I like my job, or at least I like the idea of it, but lately my obsession with efficiency is transforming work that used to be so satisfying into a hazy blur of typing and talking. To combat this, maybe I’ll try inserting little moments of reflection into my day, where I pause for 5 minutes and maybe write a bit, just to crystallize the day in my mind. In fact, I’ll add it to my to do list right now!