CHI 2017: Automation, Agency, and Learning

The CHI conference (the ACM Conference on Human Factors in Computing Systems) is a strange beast. From 1,000 feet, it’s an incredible gathering of thousands of researchers, teachers, and practitioners interested broadly in how people and technology interact. From 10 feet, however, there’s massive diversity. Some people come to share new ways for people to interact with and through technology; others come to critique how technology is shaping and perhaps even eroding our humanity. Attendees somehow coexist and learn from each other despite our dramatic differences in interests.

Because of the scale of the conference, there’s no way to summarize everything that happened. At any given time, there are dozens of parallel sessions and hundreds of hallway conversations happening. Any trip report is therefore mostly a personal account of ideas that were salient, interesting, and impactful.

With my recent pivot to computing education research, learning and education were the focus of many of my conversations. Many of these were about the practice of being a teacher. For example, I had a great conversation with Scott Hudson (Carnegie Mellon) about their new undergraduate degree in HCI, and some of the challenges with trying to recruit students into a major from high school, where there’s barely any visibility of computing, let alone HCI. I also talked to Thomas Fritz (University of British Columbia) about the challenges of incorporating active learning into software engineering education at scale. Brian Dorn (University of Nebraska, Omaha) shared the many unique challenges of state- and local-level K-12 computing education policy development.

My other conversations about learning were grounded in research. David Karger (MIT) shared his views with me on programming language learning and programming problem solving. I had a great conversation with Nathalie Riche (Microsoft Research) about the role of learning in harnessing the power of interactive information visualizations. I also engaged in a riveting deconstruction with Jonathan Grudin (Microsoft Research) of the history of education policy in the U.S. and the implications of that history for present-day flaws in high school and college.

Because many attendees knew of my foray into startups, I had many interesting conversations about technology transfer, the role of design in product management, and my own personal experiences with these things. Jason Hong (Carnegie Mellon) shared details about a new master’s program in product management, while Bonnie John (Bloomberg) shared her practical experiences with product managers and how they interact with designers. I talked to many of our MHCI+D students about startup life, mentoring them on both how to compare startup and non-startup jobs and how to negotiate offers. Danyel Fisher (Microsoft Research) described his new work with Andy Begel on understanding the vast diversity of barriers in technology transfer between Microsoft Research and Microsoft proper. Geraldine Fitzpatrick (TU Wien) also interviewed me for her podcast on changing academic life about my recent blog post on work-life balance, where I discussed how the time stressors inherent to building a business motivated me to develop more rigorous time management skills.

Each year, we throw a DUB party in collaboration with Georgia Tech and Michigan, usually attracting hundreds of attendees who want to network, drink, and reconnect with friends. I had a great time learning about many of our former doctoral students’ experiences with faculty life.

I don’t usually go to talks at CHI, mostly because I find them to have too low an information density to be valuable. I did go to a few great ones, however, two of which concerned accessibility. One was my student Amanda’s talk on Genie, in which she described several clever techniques for automatically transforming interactive websites to support multiple forms of input. At the beginning of the conference, my colleague and friend Jake Wobbrock accepted his CHI Social Impact Award, discussing ability-based design, which is the idea that we should be designing for what people can do, not what they can’t do, adapting systems to individual abilities.

The other two notable talks I attended were the opening and closing plenaries, both of which critiqued commercial software and its impact on society. Wael Ghonim (Quora) dissected social media, enumerating the numerous consequences for our media diet of driving traffic through popularity metrics such as “likes” and ad impressions. He argued that news feeds that editorialize content via these metrics result in mob rule, where whoever is loudest and most controversial controls the conversation. Nicholas Carr, in his closing plenary, argued that automation actually disrupts our ability to learn, which creates dependency and ultimately a less capable humanity rather than a more capable one. He argued that commercial software enterprises, whether they realize it or not, must automate in order to create this dependency and make profit. Both of these talks aligned well with a birds-of-a-feather session run by Jonathan Grudin (Microsoft Research) and Umer Farooq (Microsoft) on the topic of human-computer integration, or the idea that digital agents will become so autonomous that they will come to act as our assistants and friends. Unlike the two talks, this session was framed more optimistically, trying to uncover compelling examples of integration, but also open problems.

While some people might find the breadth and diversity of the topics above a bit overwhelming and potentially irrelevant to their work, I always find it energizing. It contextualizes my work and offers methods, techniques, and perspectives that help shape, motivate, and refine my work. I never leave CHI with new knowledge about the questions I’m trying to answer in my research, but I always leave it with a new way of asking and answering the questions.

How I (sometimes) achieve academic work life balance

I was a young father. Just twenty-one and a senior in college when my daughter was born in 2001. I probably don’t have to say this, but having a child at 21 wasn’t a smart move, generally: my (then) wife and I had basically no income, lots of student debt, and only an impression of who we were as people. Fortunately, we were both also pretty mature and goal-driven. Why not have a kid while in grad school? As students, we’d have fewer responsibilities and being under the poverty line, we wouldn’t get caught up in materialism. It’d be us, our love for our child, and our professional dreams.

This might sound overly romantic, but it was true. As a doctoral student, I really only had two responsibilities: 1) learn to do research and 2) do a lot of it, well. What’s usually hard about this is focus: there’s so much to read, so many projects that one can work on, and so many paths to take that students can get stuck trying to find perfect projects, trying to motivate themselves, and trying to find ways to have the greatest impact. I had endless peers in grad school who were lost in this soup, and often spent 10-12 hours a day searching.

This search can be a wonderful part of grad school. But as a poor young father who had a wife in school too, I didn’t really have the luxury of such expansive time. I had different priorities: 1) be a great father and 2) get a job that I loved and that would provide my family stability (tenure-track professor). You’d think (as I thought) that these two might be incompatible. How is there possibly enough time in the day to achieve both goals?

The only way I found to reconcile the two was to budget time. I gave myself 9 hours per day to make progress on research. I gave the rest of my time to my family. I negotiated some exceptions with my wife (paper deadlines, conference travel, late meetings that others scheduled), but generally, I committed to a 45 hour work week throughout grad school.

This had several positive effects:

  1. I worked the hell out of those 9 hours each day. As most of my grad school peers can tell you, I was always working. I took breaks to stay healthy, went to class, and met with advisors and collaborators. But I spent every minute of every weekday practicing research.
  2. I leveraged required activities for research. One of my more widely cited papers, for example, was based on data I gathered as a teaching assistant. Another class project led to award-winning CHI and ICSE papers. None of this was luck; I went into these classes with plans, knowing that I had to make the most of the experiences.
  3. With the help of my advisor, I became ruthlessly critical of the potential outcomes of research opportunities. I learned to pursue projects that would result in discovery regardless of the outcomes, so I wouldn’t have any dead ends.

This 45-hour weekly cap also had some negative effects. For example, I spent too little time making friends and maintaining friendships. Because I gave the rest of my time to family, and I wanted to make the most of my work time, I passed on parties, extracurricular activities, and other social time that really would have grown me as a person, and would have grown my network of colleagues and collaborators.

To be productive in these constraints, I had to develop some robust time management skills:

  • I learned to religiously maintain my calendar, protecting research time
  • I used professional to-do list management tools and built practices of reviewing my lists multiple times a day
  • I became extremely disciplined about capturing to-do items so I’d never forget ideas I generated or anything I’d committed to
  • I became facile at decomposing tasks, breaking large, difficult-to-start tasks (e.g., write this paper) into hundreds of smaller tasks, making it easy to squeeze progress into 5-10 minute chunks.

All of these practices also helped me develop better self-regulation skills, giving me more awareness of which skills I was developing, which I excelled at, and which kinds of tasks would require more undivided attention than others. This self-knowledge helped me better select when to do particular types of tasks and plan my time accordingly.

All of this carried over into faculty life. Because my responsibilities were more numerous (adding teaching, service, Ph.D. student management, grant management, fundraising, impact efforts, etc.), I’ve had to learn some new skills over the past nine years of faculty life:

  • I keep logs about the multiple projects I’m engaged in, spending a few minutes before a context switch to capture where I left off, allowing me to switch back more easily.
  • I set quotas on commitments and time, giving myself a maximum number of papers to review each year, a maximum time commitment to committee work, a maximum number of talks to go to each week, and a maximum amount of time to spend on classes I’m teaching.
  • I maintain a “commitment” calendar for each month into the next two years, to keep track of all categories of activities I’m engaged in. This helps me assess whether I’ve run out of time to say yes to something (e.g., so I can write emails like “I want to say yes to this review, but I have no hours left to do it in May”).
  • To the extent that I can, I organize the tasks each day around a single role (teaching, research, service). A typical week has 2.5 research days, 1.5 teaching days, and 1 service day.
  • I don’t schedule meetings with doctoral students. Instead, I have a weekly lab meeting for reporting and block off entire half days for ad hoc advising. My students know when these times are and that they’ll be able to talk to me then. This means I meet with students for less time overall, but the meetings are focused and therefore far more useful. (I do schedule quarterly mentoring meetings to proactively discuss career planning, networking, milestones, etc.).
  • I (try to) read email only once per day.
  • I revise my courses each quarter to identify ways of streamlining my time while improving learning outcomes.
  • When possible, I schedule 30-minute meetings, forcing attendees to come prepared. This often has the effect of resolving the meeting topic over email.
  • I try to use Slack instead of email, since it gives me a more visible context for a conversation with a person or group.

I still aim for 45 hours a week. I still have exceptions, but they mostly come from collaborations now, rather than my own work (e.g., students working up to a deadline, collaborators working up to a deadline). And even then, I work hard to teach my students good time management skills of their own, both to help them have better work-life balance, and to help me maintain mine. The key to avoiding these exceptions is to not overcommit and to always pre-crastinate, preparing papers, grants, and other deliverables at least a few days before they’re due.

All of this, of course, takes one big commitment: I have to commit to doing less. There’s a constant pressure in academia to publish as much as possible. It’s really hard to say no to an opportunity. It takes a lot of discipline (and desire) to turn down a collaboration or not pursue a grant, especially since I usually want to do these things. That’s a natural by-product of loving what I do.

Why do I set a limit if I like what I do? Lots of reasons:

  • My family and friends matter to me more than ever.
  • I have to take care of myself, both physically and mentally (exercise and sleep matter!)
  • I believe I’m genuinely more creative when I have open, unrestricted time to think. (I don’t count this as work time; if my mind is wandering at the grocery store or on a walk, so be it).

As much as I love nearly everything about my job, I’ve learned to enjoy my free time just as much. It makes me feel like a fuller, more integrated citizen and human, and unquestionably a better father. In a surprising way, I feel like maintaining this discipline over my time makes me a better scholar too. Others agree that open time is actually a critical resource for strong, deep scholarship.

Of course, I fail at capping my time all the time. I failed multiple times in the past few months while engaging in faculty searches. I fail when I’m not sufficiently ahead of deadlines. I fail when a student fails to be ahead of a deadline, and I’ve committed to helping them. And I’m failing right now, writing this blog post at 9 pm!

That’s okay. The point isn’t to be flawless; it’s to draw a line and try to stay on one side of it.

My sabbatical research pivot

Since I started research back in 1999 as an undergraduate, I’ve always been intrigued by the goal of helping people write code more productively. Sometimes I ran studies that tried to identify barriers to their productivity. Sometimes I made tools that helped them navigate faster, debug faster, test faster, and communicate faster. Every one of my research efforts was aimed explicitly at speed: the more a developer can do per unit time, the better off the world is, right?

Then something changed. I founded a software startup, and led its technical and product endeavors. And I watched: how much did developer productivity matter? What influenced the quality of the software? What was the real value of faster developers?

In my experience, faster development didn’t matter much. Developers’ speed mattered somewhat, but only to the extent that we made effective product design choices based on a valid understanding of customer, user, and stakeholder needs. It didn’t matter how fast they were moving if they were moving in the wrong direction. And developer tools—whether a language, API, framework, IDE, process, or practice—mattered only to the extent that developers could learn to use these tools and techniques effectively. Many times they could not. Rather than faster developers, I needed better developers, and better developers came through better and faster learning.

Furthermore, I couldn’t help but wonder: what part of this job was fulfilling to them? It certainly wasn’t writing a lot of code. There was always more code to write. In fact, it was the moments they weren’t coding—when they were reading about a new framework, picking up a new language, trying a new process—that they enjoyed most. These were moments of empowerment and moments of discovery. These were moments of learning. Around the same time, my student Paul Li was beginning to investigate software engineering expertise, and finding that, much as I had experienced, it was the quality of a developer’s code, and their ability to learn new coding skills, that were critical facets of great engineers. Better learning allows developers to not only be more productive and more effective, but also more fulfilled. Better learning makes better developers, who envision better tools, better processes, and better ideas. As obvious as it should have been to someone with a Ph.D. in HCI, it was the human in this equation that was the source of productivity, not the computer. Like most things computational, developer tools are garbage in, garbage out.

After I stepped down as AnswerDash CTO and began my post-tenure sabbatical, it became clear I had to pivot my research focus. No more developer tools. No more studies of productivity. I’m now much less interested in accelerating developers’ work, and much more interested in shaping how developers (and developers-in-training) learn and shape their behavior.

This pivot from productivity to learning has already had profound consequences for my research career. For a long time, I’ve published in software engineering venues that are much more concerned with productivity than learning. That might mean I have less to say to that community, or that I start contributing discoveries that they’re not used to reading about, evaluating, or prioritizing. It means that I’ll be publishing more in computing education conferences (like ACM’s International Computing Education Research conference). It means I’ll be looking for students who are less interested in designing tools that help developers code faster, and more interested in designing tools that help developers of all skill levels code better. And it means that my measures of success will no longer be about the time it takes to code, but how long it takes to learn to code and how well someone codes.

This pivot wasn’t an easy choice. Computing education research is a much smaller, much less mature, and much less prestigious community within computing research. There’s less funding, fewer students, and honestly, the research is much more difficult than HCI and software engineering research, because measuring learning and shaping how people think and behave is more difficult than creating tools. Making this pivot means making real sacrifices in my own professional productivity. It means seeing the friends I made in the software engineering research community less often. It means tackling much trickier, more nuanced problems, and having to educate my doctoral students in a broader range of disciplines (computer science, social science, and learning science).

But here’s the upside: I believe my work will be vastly more important and impactful in the arc of my career. I won’t just be making an engineer at Google ship product faster; I’ll be inventing learning technologies and techniques that make the next 10,000 Google engineers more effective at their jobs. I’ll be helping to transform the hundreds of thousands of horrific experiences that people have learning to code into fulfilling and empowering experiences, potentially giving the world an order of magnitude more capable engineers. Creating a massive increase in the supply of well-educated engineers might even slow down some of the unsustainable growth of software engineering salaries, which are at least part of the unsustainable gentrification of many of our great American cities. And most importantly, I’ll be helping to give everyone that learns to code the belief that they can succeed at learning something that is shaping the foundational infrastructures of our societies.

I’ll continue to be part of the software engineering research community. But don’t be surprised if my work begins to focus on helping developers write better code rather than simply write code faster. I’ll continue to be part of the HCI research community, but you’ll see my work focus on interactive learning technologies that accelerate learning, promote transfer, and shape identity. And for now, you’ll see me invest much more in building the nascent community of computing education researchers, helping it blossom into the field it needs to become to transform society’s ability to use and reason about code as it weaves itself deeper into our world.

I’m so excited to engage in this new trajectory, and hope to see many of you join me!

The service implications of interdisciplinarity

I am what academics like me like to call an “interdisciplinary researcher”. This basically means that rather than investigate questions within the traditional boundaries of established knowledge, I reach across boundaries, creating bridges between disparate areas of work. For example, the research questions I try to answer generally span computer science, psychology, design, education, engineering, and organizational science. I use theories from all of these, I build upon the knowledge from all of these, and occasionally I even contribute back to these areas of knowledge.

There are some wonderful things about interdisciplinary work, and some difficult things. The wonderful things mostly stem from being able to usefully apply knowledge to the world, synthesizing ideas from multiple disciplines to the problems of today. This is possible because I don’t have the duty to a discipline to deepen its understanding. Instead, my charge is to invent technologies, policies, methods and processes that embody the best ideas from more basic research. In a way, interdisciplinary research is necessary for those basic research discoveries to ever make it into the world. This is highly rewarding because I get to focus on everyday problems, learn a ton about many different fields of research, and can easily show people in my life how my work is relevant to theirs.

Where interdisciplinary work gets difficult is in the nitty-gritty of academic life. Because I know a little bit about a lot of things, I get asked to participate on a lot of committees. I get invitations to software engineering committees, computing education committees, and HCI committees, since they all touch on aspects of people’s interactions with code. I get invited to curriculum committees more often because my work seems more directly applicable to what we teach (because it is). People from industry contact me more often because they can see how my work informs their work, more so than the basic research.

And of course, my time isn’t infinite, so I have to pick and choose which of these bridges to make. I find myself with some difficult choices: should I create a link to industry, or bridge two fields of academia? Should I invest in disseminating a discovery through a startup, an academic conference talk, or a YouTube video? Or should I just focus on my research, slowly transforming my fuzzy interdisciplinary research area into something more disciplinary, with all of its strengths and weaknesses?

Someone probably does research on these research policy questions. Maybe they can help!

The black hole of software engineering research

Over the last couple of years as a startup CTO, I’ve made a point of regularly bringing software engineering research into practice. Whether it’s been bleeding-edge tools or major discoveries about process, expertise, or bug triage, it’s been an exciting chance to show professional engineers a glimpse of what academics can bring to practice.

The results have been mixed. While we’ve managed to incorporate much of the best evidence into our tools and practices, most of what I present just isn’t relevant, isn’t believed, or isn’t ready. I’ve demoed exciting tools from research, but my team has found them mostly useless, since they aren’t production ready. I’ve referred to empirical studies that strongly suggest the adoption of particular practices, but experience, anecdote, and context have usually won out over evidence. And honestly, many of our engineering problems simply aren’t the problems that software engineering researchers are investigating.

Why is this?

I think the issue is more than just improving the quality and relevance of research. In fact, I think it’s a system-level issue in the interaction between academia and industry. Here’s my argument:

  • Developers aren’t aware of software engineering research.
  • Why aren’t they aware? Most explicit awareness of research findings comes through coursework, and most computer science students take very little coursework in software engineering.
  • Why don’t they take a lot of software engineering? Software engineering is usually a single required course, or even just an elective. There also aren’t many software engineering master’s programs through which much of the research might be disseminated.
  • Why are there so few courses? Developers don’t need a professional master’s degree in order to get high-paying engineering jobs (unlike other fields, like HCI, where professional master’s programs are a dominant way to teach practitioners the latest research and engage them in the academic community). This means less need for software engineering faculty, and fewer software engineering Ph.D. students.
  • Why don’t students need coursework to get jobs? There’s huge demand for engineers, even complete novice ones, and many of them know enough about software engineering practice through open source and self-guided projects to quickly learn software engineering skills on the job.
  • Why is it sufficient to learn on the job? Because research adds little beyond practice here: most software engineering research focuses on advanced automated tools for testing and verification. While this is part of software engineering practice, there are many other aspects of software engineering that researchers don’t investigate, limiting the relevance of the research.
  • Why don’t software engineering researchers investigate more relevant things? Many of the problems in software engineering aren’t technical problems, but people problems. There aren’t a lot of faculty or Ph.D. students with the expertise to study these people problems, and many CS departments don’t view social science on software engineering as computer science research.
  • Why don’t faculty and Ph.D. students have the expertise to study the people problems? Faculty and Ph.D. students ultimately come from undergraduate programs that inspire students to pursue a research area. Because there aren’t that many opportunities to learn about software engineering research, there aren’t that many Ph.D. students that pursue software engineering research.

The effect of this vicious cycle? There are few venues for disseminating software engineering research discoveries, and few reasons for engineers to study the research themselves.

How do we break the cycle? Here are a few ideas:

  1. Software engineering courses need to present more research. Show off the cool things we invent and discover!
  2. Present more relevant research. Show the work that changes how engineers do their job.
  3. Present and offer opportunities to engage in research. We need more REU students!

This obviously won’t solve all of the problems above, but it’s a start. At the University of Washington, I think we do pretty well with 1) and 3). I occasionally teach a course in our Informatics program on software engineering that does a good job with 2). But there’s so much more we could be doing in terms of dissemination and impact.

I am tenured

I am now a tenured professor.

After working towards this for 15 years, it’s surreal to type that simple sentence. When I first applied to Ph.D. programs in 2001, it felt like a massive, nearly unattainable goal, with thousands of little challenges and an equal number of opinions about how I should approach them. Tenured professors offered conflicting advice about how to survive the slog. I watched untenured professors crawl towards the goal, working constant nights and weekends, only to get their grant proposal declined or paper rejected, or, worst of all, their tenure case declined. I had conversations with countless burned out students, turned off by the relentless, punishing meritocracy, regretting the time they put into a system that doesn’t reward people, but ideas.

Post-tenure, what was a back-breaking objective has quickly become a hard-earned state of mind. Tenure is the freedom to follow my intellectual curiosity without consequence. It is the liberty to find the right answers to questions rather than the quick ones. It’s that first step out of the car, after a long drive towards the ocean, beholding the grand expanse of unknown possibilities knowable only with time and ingenuity. It is a type of liberty that exists in no other profession, and now that I have and feel it, it seems an unquestionably necessary part of being an effective scientist and scholar.

I’ve talked frequently with my colleagues about the costs and benefits of tenuring researchers. Before having tenure, it always seemed unnecessary. Give people ten-year contracts, providing enough stability to allow for exploration, but reserving the right to boot unproductive scholars. Or perhaps do away with it altogether, requiring researchers to continually prove their value, as untenured professors must. A true meritocracy requires continued merit, does it not?

These ideas seem naive now. If I were to lose this intellectual freedom, it would constrain my creativity, politicize my pursuits, and in a strange way, depersonalize my scholarship, requiring it to be acceptable to my colleagues, in all the ways that it threatened to do, and sometimes did, before tenure. Fear warps knowledge. Tenure is freedom from fear.

For my non-academic audience, this reflection must seem awfully privileged. With or without tenure, professors have vastly more freedom than really any other profession. But having more freedom isn’t the same as having enough. Near absolute freedom is an essential ingredient of the work of discovery, much like a teacher must have prep time, a service worker must have lunch breaks, an engineer must have instruments, and a doctor must have knowledge.

And finally, one caveat: tenure for researchers is not the same as tenure for teachers. Freedom may also be an ingredient for successful teaching, in that it allows teachers to discuss unpopular ideas, and even opinions, without retribution. But it may be necessary for different reasons: whereas fear of retribution warps researchers’ creation of knowledge, it warps teachers’ dissemination of that knowledge.

Startup life versus faculty life

As some of you might have heard, this past summer I co-founded a company based on my former Ph.D. student Parmit Chilana’s research on LemonAid, along with her co-advisor Jake Wobbrock. I’m still part-time faculty, advising Ph.D. students, co-authoring papers, and chairing a faculty search committee, but I’m not teaching, nor am I doing my normal academic service load. My dean and the upper administration have been amazingly supportive of this leave, especially given that it began in the last year of my tenure clock.

This is a fairly significant detour from my academic career, and I expected that the makeup of the daily activities I’m accustomed to as a professor would change substantially. In several interesting ways, I couldn’t have been more wrong: doing a startup is remarkably similar to being a professor in a technical domain, at least with respect to the skills it requires. Here’s a list of parallels I’ve found striking as a founder of a technology company:

  • Fundraising. I spend a significant amount of my time seeking funding, carefully articulating problems with the status quo and how my ideas will solve these problems. The surface features of the work are different—in business, we pitch these ideas in slide decks and elevators, whereas in academia, we pitch them as NSF proposals and DARPA white papers—but the essence of the work is the same: it requires understanding the nature of a problem well enough that you can persuade someone to provide you resources to understand it more deeply and ultimately address it.
  • Recruiting. As a founder, I spend a lot of time recruiting talent to support my vision. As a professor, I do almost the exact same thing: I recruit undergrad RAs, Ph.D. students, and faculty members, trying to convince them that my vision, or my school or university’s vision, is compelling enough to join my team instead of someone else’s.
  • Ambiguity. In both research and startups, the single biggest cognitive challenge is dealing with ambiguity. In both, ambiguity is everywhere: you have to figure out what questions to ask, how to answer them, how to gather data that will inform these questions, how to interpret the data you get to make decisions about how to move forward. In research, we usually have more time to grapple with this ambiguity and truly understand it, but the grappling is of the same kind.
  • Experimentation. Research requires a high degree of iteration and experimentation, driven by carefully formed hypotheses. Startups are no different. We are constantly generating hypotheses about our customers, our end users, our business plan, our value, and our technology, and conducting experiments to verify whether the choice we’ve made is a positive or negative one.
  • Learning. Both academia and startups require a high degree of learning. As a professor, I’m constantly reading and learning about new discoveries and new technologies that will change the way I do my own research. As a founder, and particularly as a CTO, I find myself engaging in the same degree of constant learning, in an effort to perfect our product and our understanding of the value it provides.
  • Teaching. The teaching I do as a CTO is comparable to the teaching I do as a Ph.D. advisor in that the skills I’m teaching are less about specific technologies or processes, and more about ways of thinking about and approaching problems.
  • Service. The service that I do as a professor, which often involves reviewing articles, serving on curriculum committees, and providing feedback to students, is similar to the coffee chats I have with aspiring entrepreneurs, the feedback I provide to other startups about why I do or don’t want to adopt their technology, and the discussions I have with Seattle area angels and VCs about the type of learning that aspiring entrepreneurs need to succeed in their ventures.

Of course, there are also several sharp differences between faculty work and co-founder work:

  • The pace. In startups, time is the scarcest resource. There are always way too many things that must be done, and far too few people and hours to get things done. That makes triage and prioritization the most taxing and important parts of the work. In research, when there’s not enough time to get something done, there’s always the freedom to take an extra week to figure it out. (To my academic friends and students: it may not feel like you have extra time, but you have much more flexibility than those in business do.)
  • The outcomes. The result of the work is one of the most obvious differences. If we succeed at our startup, the result will be a slight shift in how the markets we’re targeting will work and hopefully a large profit demonstrating the value of this shift. In faculty life, the outcomes come in the form of teaching hundreds, potentially thousands, of students lifelong thinking skills, and in producing knowledge and discoveries that have lasting value to humanity for decades, or even centuries. I still personally find the latter kinds of impact much more valuable, because I think they’re more lasting than the types of ephemeral changes that most companies achieve (unless you’re a Google, Facebook, or Twitter).
  • The consequences. When I fail at research, at worst it means that a Ph.D. student doesn’t obtain the career they wanted, or taxpayers have funded some research endeavor that didn’t lead to any new knowledge or inventions. That’s actually what makes academia so unique and important: it frees scholars to focus on truth and invention without the artificial constraint of time. If I fail at this startup, investors have lost millions of dollars, several people will lose their jobs, and I’ll have nothing to show for it (other than blog posts like this!). This also means that it’s necessary to constantly make decisions on limited information with limited confidence.

Now, you might be wondering which I prefer, given how similar the skills required in the two jobs are. I think this is actually a matter of very personal taste that has largely to do with the form of impact one wants to have. You can change markets or you can change minds, but you generally can’t change both. I tend to find it much more personally rewarding to change minds through teaching and research, because the changes feel more permanent and personal to me. Changing a market is nice and can lead to astounding financial rewards and a shift in how millions of people conduct a part of their lives, but this change feels more fleeting and impersonal. I think I have a longing to bring meaning and understanding to people’s lives that’s fundamentally at odds with the profit motive.

That said, I’m truly enjoying my entrepreneurial stint. I’m learning an incredible amount about capitalism, business, behavioral economics, and the limitations of research and industry. When I return as full-time faculty, I think I’ll be in a much better position to do the things that only academics can do, and argue for why universities and research are of critical importance to civilization.

reflections on conference papers and journals

This week, for the first time in my academic career, I was working on a journal paper and a conference paper at the same time. This wasn’t entirely intentional; both of these papers were going to be CHI papers, but as the results and writing for one of them materialized, it became clear that not only was the audience not a fit, but I actually couldn’t fit all of the important results into the 10-page SIGCHI format. This realization, and the fact that I was working on both simultaneously, led to several realizations about how the two kinds of submissions differ.

First and foremost, the lack of a strict length restriction on the journal paper was surprisingly freeing. While on the CHI paper every other discussion with my student was about what to cut and what to keep, discussions about the journal paper were much more about what details and results were missing. Obviously, there are advantages to each: with the CHI paper, we were probably forced to be much more concise and selective about the most significant results; similarly, the journal paper was slightly more verbose than it needed to be, because I didn’t have the threat of desk rejection to force more careful editing. At the same time, there were many interesting things that we had to leave out of the CHI paper that could have fit into just one additional page. With the journal paper, the question was not “what’s most significant?” but “is this complete?”

The length differences also had a significant effect on how much space we gave to details necessary for reproducibility. For the journal paper, I felt like our task was to enable other researchers to understand exactly what we did and how we did it. With the CHI paper, our task was to provide enough detail for reviewers to see the rigor of what we did, but the amount of detail we ended up including really wasn’t enough to actually reproduce our study. In the long term, this is not good science.

Although the journal paper didn’t have a deadline, I did impose one on my lab in order to align with the end of summer, since the undergrad research assistants on the paper would have to resume classes (as would I). The deadline worked well enough to motivate us to finish the paper, but it also freed us to take an extra day or two to read through the manuscript a few extra times, improve some of the figures, and verify some of the results that we felt may have been done too hastily. The CHI paper, in contrast, was rushed, as most CHI submissions are. There was just enough time to edit thoroughly yesterday and submit today, but there’s an extensive list of to do’s that we have if the paper is accepted. Sure, we could do them now, but why not wait until reviewers provide more feedback? With the journal paper, we submitted when we felt it was ready.

Of course, the biggest difference between the two submissions has yet to come. In November, we’ll get CHI reviews back and likely know with some certainty whether the paper will be accepted or rejected. There will be no major revisions, no guidance from reviewers about what they’d like to see added or changed, and certainly no opportunity for major improvements if it is accepted. Instead, the reviews will focus on articulating a rationale for acceptance or rejection. With the journal paper, I’ll (hopefully) get three extensive reviews in a few months on what is missing or wrong with the paper and what the reviewers would like me to change in a revision. The process will likely take longer, but in trade, I hope the paper will be much better than the original manuscript.

One of these processes is designed for speed, the other for quality. I’ll let you guess which is which. And let me be clear: I’m a big fan of conferences. Most of my work is published at major HCI and software engineering venues rather than journals, and I truly enjoy the fact that nearly everyone in our community rallies together at the same time of year to contribute our latest and greatest for review. But as someone who has the freedom to publish in either, I’m really starting to question whether the average conference paper can actually be of comparable quality to the average journal paper. There might just be inherent limits to a review process that is optimized for selecting papers for presentation rather than improving them.

Of course, this isn’t a necessary dichotomy. I’ve talked to many people in my research community about blending the two. For example, if we simply had journals of infinite capacity and no conference papers, and instead put all of our reviewing effort into our journals, we could easily design an annual conference where people present the best work from recent journal publications (and work in progress, as we already do). In fact, CHI already lets ToCHI authors present their recently published papers, so we’re part way there. With changes like this, we might find a nice balance between a review process designed for improving papers and a conference designed for fostering discussion about them.

UW MSR Summer Institute on Crowdsourcing Personalized Online Education

For the past three days I’ve been at the 2012 UW MSR Summer Institute, which is an annual retreat on an emerging research topic. This year’s topic was “Crowdsourcing Personalized Online Education”. What this really meant in practice was two things: what is the future of education, and how can we leverage the connectedness and observability of learning online? The workshop was mainly talks, but an impressive number of great speakers and attendees kept everyone engaged.

There were a lot of important things that I observed in all of these discussions and talks:

  • The first thing that was apparent is just how different the motives and values are in the different communities that attended. The majority of the attendees were coming from a computing perspective, with primary interests in creating new, more powerful, and more effective learning technologies. There was a smaller number of learning scientists, with interests in explaining learning and devising better measurements of learning, much more rigorously than any of the computing folks had done. Two representatives from the Gates Foundation also came briefly, and it was clear that their primary interests were much less in specific technologies and much more in creating educational infrastructure and new, sustainable markets of educational technologies. There were also representatives from Khan Academy and Coursera, who were broadly interested in providing access to content, and mechanisms to enable experts to share content. My view on what’s really new behind all of this press on online learning is that computing researchers are newly interested in learning and education: almost everything else, except for the scale of access, has been done in online learning before.
  • Jennifer Widom, Andrew Ng, and Scott Klemmer (all at Stanford) talked about their experiences creating MOOCs for their courses. The key takeaway is that it is very time-consuming to create a course, with each spending countless hours recording high-quality lectures, negotiating rights for copyrighted material, and working out bugs in the platform. All of them implied that running the course the first time was more than a full-time job. On the other hand, many were confident it would take much less time for later offerings and that most aspects of the class can scale to be arbitrarily large (even design critiques, in Scott Klemmer’s case, through calibrated peer assessment). The one part that doesn’t scale is student-specific concerns (for example, students getting injured and needing an extension on an assignment). Scott also suggested that every order of magnitude increase in the number of students demands an order of magnitude increase in the perfection of the materials (because there are so many more eyes on the material), but again, this is a decaying cost, assuming the materials don’t change frequently.
  • In many of the conversations I had around how MOOCs might change education, many faculty believed that the sheer availability and accessibility of instructional content would shift the responsibilities of instructors. Today, most individual instructors are responsible for making their own materials, making them accessible, and then using them to teach. In a world where great materials are available for free, these first two responsibilities disappear. The new job of a higher ed instructor may therefore be much less about designing materials and providing access to them, and much more about correcting misconceptions, motivating students, designing good measurements, and building learning communities. One could argue that this is an overall improvement (and also that it actually mirrors the way that textbooks work, which are written by a small number of experts and used as-is by instructors).
  • Interestingly, most of the MOOC teachers reported that the social experiences of students online were critical, including forum conversations, ad hoc study groups in different cities around the world, and peer assessments. This might quell a lot of the concerns that higher ed teachers have had about the loss of interaction in online courses—it might just be that the interaction shifts from instructor/student interaction to student/student interaction and student/intelligent tutor interaction. Some of the preliminary data suggests that students actually greatly prefer this, since they don’t get that much instructor interaction already, but they’re getting much more student/student interaction than in a traditional co-located course. This might therefore be an improvement over traditional lecture-based classes, but not over classes in which teachers interact closely with students (such as small studio courses).
  • No one knows what will happen to the education market, including the people running Khan Academy and Coursera. However, there were some predictions. First, these platforms are going to make it so easy to share and access content, in the same way that the web has for everything else, that finding and choosing content is going to become a critical challenge for students. Therefore, one new role that instructors might play is in selecting and curating content in a way that is coherent and personalized for the populations that they teach.
  • Most of the interests related to crowdsourcing are in (1) enabling classes to be taught at scale (by finding ways to free instructors and TAs from having to grade and assess all of the work), (2) improving the effectiveness, efficiency, and/or engagement of learning activities, or (3) creating new opportunities through informal learning, such as through oDesk or Duolingo. Researchers are thinking about how to use data to optimize the sequence of instruction, give just the right hints to correct misconceptions, and select tasks that are challenging but not too challenging. In my view, this is leading to a renewed interest in intelligent tutoring systems.
  • As usual, most of this new research work suffers from a lack of grounding in and leveraging of prior literature in the learning sciences and intelligent tutoring systems. There is tons of research on all of these challenges that computing researchers are tackling, but I don’t see them really using any of it. This happens over and over in computing research, since the interest is often in creating new things rather than understanding the things themselves. I was impressed, however, by how much Andrew Ng had leveraged findings in the learning sciences to support certain design decisions in Coursera.
  • There was a big undercurrent of data science at the workshop. Everyone was excited about big data sets and how they might be leveraged to improve learning technologies. Most of the methods reported were fairly primitive (A/B testing, retention rates), but I’m hoping this new energy behind learning will lead to much better methods and tools for doing educational data mining.

Phew! Sorry for the lack of coherence here. We covered a lot of ground in 2.5 days and this is just a sliver of it.

ageism in academia

I have a young face, especially for a professor. Other faculty assume I’m an undergrad, Ph.D. students assume I’m an undergrad, even undergrads assume I’m an undergrad. In some ways this is nice. I can be stealth on campus, blending in with the rest of the students. When I’m teaching, I have to earn my authority rather than getting it simply because I look wise (and I like earning things). And the undergrads I teach probably relate to me differently simply because I look their age, even though I’m a decade older than most of them.

As a researcher, however, looking young can feel like a disadvantage, since the wisdom and knowledge one has typically grows with age (at least in academia). Sometimes I feel like people discount my opinions because I look young, perhaps because my face communicates inexperience. Sometimes I feel like I have to compensate by being extraordinarily articulate or insightful, just to get people to listen to me. At conferences, people always ask me what I’m studying, who my advisor is, or where I go to school. I suspect that when people who don’t know me see me at a conference, they think, “just another student” instead of “I wonder who that important researcher is” like I do when I see older researchers at conferences.

Not that this has held me back. If anything, it means that any success I’ve had has been earned, which makes it all the more rewarding. And it shows that academia is still indeed some form of meritocracy, where it is the ideas and knowledge that one produces that ultimately shapes our reputations. In fact, when I’m 60, I’ll probably look like I’m in my 40’s (as my parents do), which will help me avoid all of the ageism directed at older professors, so any disadvantage I have now might turn into an advantage later in life. That should enable me to have a nice long career into my 70’s (assuming my brain still works!).

Ultimately, I feel lucky that ageism is the only real discrimination that I face. There are faculty who face ageism, sexism, and racism, which seems like an incredible amount of bias for one person to struggle against. Facing a bit of ageism here and there makes me empathize with people who face even more discrimination and makes it easier for me to avoid assuming anything about a person until I talk with them. And it helps me respect their successes even more, because I have a tiny glimpse into what it took to earn them.