Whenever I close out a grant, I like to reflect on what I achieved with the money. Well, to be clear, NSF likes me to do that too, in the form of a final report of project outcomes. And as it should be: the average American gave me a tenth of a penny to do some research. What did it buy them?
This particular grant was my CAREER award, which I received in 2009. NSF gives this award to a select few faculty each year who “have the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization.” Really though, it’s an award given for important research by new faculty.
In my final outcomes report, I described my work like this (note that these reports are intended for the general public and aren’t supposed to require any expertise to understand):
When software companies release software, there are only a few ways for them to learn about problems that users experience. They can wait for users to report problems, which leads to large amounts of unstructured text that is difficult to aggregate and analyze. They can also automatically monitor for easily detectable problems such as crashes, errors, and performance issues. The broad set of usability and usefulness issues that arise, however, are difficult to monitor and aggregate, making it difficult for teams to improve software for users.
I then went on to summarize the discoveries and impact of the work:
Across the seven years of the project, we made numerous discoveries about this problem. We learned how developers, designers, and product managers evolve software, finding that many ignore feedback that comes through technical support channels, that feedback from users often comes from highly technical users, and that when developers do engage with user feedback, they often view it as an irrelevant minority opinion. We also found that when developers discuss these issues, they tend to ignore evidence, relying instead on anecdote, speculation, and hyperbole. We also discovered that the most expert software engineers are more rational and evidence-based in their decision making and assessment of feedback, relying on objective data sources to inform their product decisions. However, we also found that expert engineers require substantial interpersonal skills to persuade less experienced developers who rely on less objective decision-making practices.
We invented many approaches to address these problems. One was a way for users to request help while using software without having to express their problem. It dynamically creates a repository of frequently asked questions, predicts which questions a user will have based on their context, and provides structured data to software teams about which questions users have and where. This data can then be used to make more evidence-based decisions about how to improve software. In addition to this, we invented new algorithms for mining software feedback from technical support forums and for automatically detecting usability problems in software without even having to release software.
Were all of the facts above worth the $600K that I received over 7 years (including 2 years of “no cost extensions” while I was on leave)? When they’re summarized as they are above, it’s hard to judge, since facts alone probably aren’t the most valuable thing to anyone in the general public—they’re more useful to us academics trying to build larger truths about software engineering. The questionable value of intermediate scientific discoveries is why NSF also requires reports to describe “broader impacts”. I described mine like this:
We disseminated this work in diverse ways. We co-founded a software startup called AnswerDash that sells the help technology; it raised venture capital, has to date created dozens of jobs, and has increased the sales of numerous companies, indirectly creating more jobs. At the time of this grant’s expiration, over 10 million people have used the product to seek help. We also shared our discoveries through multiple articles in the popular press, through a webinar reaching over 30,000 software engineers, and through a podcast reaching over 10,000 software engineers. The PI also developed a new software engineering course and wrote a free online book to support the course, which summarizes the forty-year history of research on human aspects of software engineering. The grant also supported the professional development of the PI, directly supported the research of four doctoral students (two of whom are now faculty), and trained over a dozen undergraduates in research, several of whom pursued graduate degrees.
Tech transfer? Teaching 40,000 software engineers? A new course and a new textbook? Those are pretty good, right?
Then there are the things that the general public wouldn’t really care about at all, but that I care about as an academic:
- The grant supported the general research infrastructure at the University of Washington, including buildings, electricity, staff, and other expenses associated with the research. This is called “overhead”, and while it’s generally supposed to cover research-related expenses, it supports highly coupled resources like buildings, which inadvertently also support the educational mission of the university.
- My lab published 25 papers with the funding, spanning HCI, Software Engineering, and Computing Education venues. Four of those papers received best paper awards.
- These papers have already been cited over 350 times by other researchers in the world, impacting the ideas and directions of other researchers.
- I was invited to give my first keynote at SPLASH 2016, which challenged me to think bigger about programming languages and equity.
Most importantly, because I wasn’t spending as much time fundraising all of these years, I was able to focus on becoming a better teacher, a better researcher, a better mentor, and a better leader. Without the support of the CAREER grant, there’s no way I’d have achieved the level of success and impact that I have at this point in my career. And there’s no way I’d be in a position to resume frantic fundraising now without failing at my teaching, mentorship, and leadership duties. Because of the grant, I’m a more productive, effective, prolific, and impactful public intellectual, which ultimately helps the hundreds of students I teach every year be more productive, effective, and impactful people.
All that cost the average American a tenth of a penny (and given our current tax brackets, more like a penny for upper-middle-class Americans and basically nothing for everyone else). Is the world that much better for its investment?
In this case, clearly yes: my co-founders (my colleague Jake Wobbrock and our former Ph.D. student Parmit Chilana) and I convinced a venture capitalist to invest $2.54 million in a local U.S. company that created more than two dozen jobs instead of investing somewhere else in the world. Even if you don’t care about anything above except for direct financial returns, that’s a $1,940,000 profit on a $600,000 investment—a 323% return!
Take that Trump-kins. Research beats stock market when done right.