Snowbird trip report: automation, education, and academia

This past Sunday, Monday, and Tuesday I attended the biennial Snowbird Conference, which is sponsored by the Computing Research Association and brings together chairs, deans, directors, and other leaders of North American computer and information science units, as well as leaders of industry research labs. I was very fortunate to have been invited to speak on a panel about computing education research (which I’ll discuss shortly). This also meant that I had the privilege of interacting with hundreds of the world’s most powerful leaders in computer and information science. The trip was more than enlightening; it was pivotal: I learned just how small a world CS is in practice, especially relative to the massive impact that computing is having on society. The trip will change how I conduct my research, how I view my academic community, and how I view myself as a scholar and a leader.

Let’s dig into some of the experiences I had. First and foremost was the peculiar dynamic of being one of the few people at the conference who was not a current or former chair or dean. Imposter syndrome has been on my mind a lot lately in my teaching, so I decided before coming that I’d view myself (and present myself) as what I want to be: an eager early-career faculty member, excited to learn and contribute, and passionate about teaching computing leaders about computing education research (rather than the quiet, intimidated fish out of water I’m prone to being). By now I’m used to my baby face subverting any effort to appear authoritative on anything, but I was really pleased with how open and supportive the group was, and how receptive they were to my contributions and my expertise. After all, why would I be there if I had nothing to contribute?

Throughout the three days, I had conversations with about 40 chairs and deans, ranging from those at prestigious institutions such as Harvard, Yale, Princeton, and Northwestern, to the long tail of excellent CS workhorses such as Texas A&M, Purdue, Harvey Mudd, CU Boulder, and UC Santa Cruz. I also connected with chairs more local to the University of Washington, including Joe Sventek at the University of Oregon and James Hook at Portland State University. This was a networking fantasy for anyone interested in the infrastructure of academia and research policy, not to mention an incredible chance to have substantive research and policy conversations across computing with some of the best researchers in the world.

I learned much about the life of computing leaders. First, most chairs and deans, at the most basic level, are necessarily concerned with the slow building of academic infrastructure. The problems they solve are complex, over-constrained, and long-term, including issues of space, faculty renewal, the future of the field, and the politics of federal research funding, along with internal faculty governance and conflict resolution. Second, the majority of chairs and deans I spoke to did not describe starting their roles with a particular passion or interest in solving these problems, but many found unexpected joys in their ability to have a big impact on their campus and the broader academic community. Some described the long three-, often four-year struggle just to understand the scope of the job they’d accepted, and felt that by the time they were truly competent at the job, they were often at the end of their terms, passing the baton to the next person. These are positions that require constant learning, just like faculty positions, but with bigger, often permanent consequences.

It’s difficult to discuss the experience of chairs and deans without mentioning some of the stark differences between those at top-ranked universities and everyone else. Leaders at these premier institutions tackled big, complex, often global initiatives, whereas those at lower-ranked institutions were focused on faculty recruiting, retention, and student enrollment issues. These sounded to me like substantially different roles, one facing outward to the world and the other facing inward toward operations. I actually found it emotionally difficult to watch these differences manifest in the social context of a conference, with hallway conversations often segregated along these different responsibilities and university reputations. The irony of rankings reified spatially, just before a session on the flaws of university rankings, was too much for me.

The conference was not all networking. The content of the prepared panels and talks was explicitly focused on three big topics: 1) the future of computing in academia, 2) the political and policy factors that might affect computing in academia, and 3) disruptive perspectives on computing.

Let’s start with the future of computing. Speakers across the conference discussed two big trends: the middle class is disappearing because of automation, and the world wants to learn computing. These two factors are related to each other: as more jobs are displaced by automation, more people are eager to learn computing so that they don’t get displaced, and this is leading to massive increases in enrollment in CS programs at the undergraduate and graduate levels (not to mention bootcamps, summer camps, etc.). This has also caused many other disciplines to begin to require computer science classes, which has dramatically increased the number of non-majors in computer science classes across the curriculum. All of this upstream activity is leading to serious downstream pressure on teaching load and class sizes, which is leading to fears about lower student retention and decreased student diversity. Simultaneously, many CS departments are engaging in new initiatives with the goal of increasing enrollment further, such as UIUC’s CS+X initiative, which creates a whole collection of mixed computing degrees with other disciplines, and dozens of new professional data science programs.

At the same time, trends in federal funding are putting additional pressure on CS departments. Peter Harsha from CRA explained that federal funding is flat and likely to stay flat given the political climate in the U.S. Congress. A university relations rep from Microsoft Research read a statement that basically suggested Microsoft and the rest of industry are unlikely to compensate. There are opportunities to raise more money from NIH and foundations, but these often require much more applied, problem-driven research than the type of basic research most faculty prefer. This status quo would be survivable, except that computer science departments are growing to respond to the massive increases in enrollment, and much of this growth is in tenure-track faculty. This means there is going to be increased pressure on everyone to raise money from a pie that isn’t growing. This increased competition means that faculty will simply be raising less money, funding fewer Ph.D. students, and getting less research done, while also spending more of their time teaching and fundraising. Martha Pollack strongly recommended that chairs and deans start to carefully reconsider what constitutes a tenurable record, that they sensitize their faculty to these new, potentially lower standards, and that we all start writing tenure and promotion evaluation letters with these systemic effects in mind.

Many people at the conference believed that the engine for all of this change is actually computing itself. Tom Mitchell from Carnegie Mellon presented convincing evidence from labor economists (among others) that the stereotype about computing killing jobs is most certainly true: it does not appear that for every job automated, another is created. This is redistributing wealth more than ever to the wealthiest in the world, leaving only jobs that require extremely rarefied skills and thus pay extremely high salaries, or service jobs that are simply too difficult for computers to perform yet, but are not particularly valuable, and therefore not well paid. The middle-class jobs that we still have are all likely to be displaced by automation in the coming decades. In a circular way, this erosion of the middle class is leading to the boom in CS enrollment, which in turn fuels further erosion. The panel discussing these trends admitted that no one knows the end game. It may be that the invention of replacement jobs is laggy (as it has been with past technological change), and that it will happen eventually after a long period of social strife. To survive this social strife, we may face the necessity of raising the education bar for society again, which will involve transforming both K-12 and higher education for this new reality. Alternatively, education may only exacerbate the problem further, and we’re going to have to reinvent how we live, exploring minimum incomes and other new forms of public safety nets to compensate for the net decrease in demand for human labor. Beth Mynatt of Georgia Tech suggested a testable hypothesis: fully automated systems, rather than mixed-initiative systems with humans in the loop, may be more prone to eliminating jobs instead of augmenting the jobs that already exist.

As an explanation for these trends, Kentaro Toyama gave a rousing, witty talk on “amplification” (a theory I believe is from science and technology studies). The basic claim is that technology is rarely the cause of social change, but rather an amplifier of social change, whether that change happens to be positive or negative. For example, if a democratic uprising is fomenting in an autocratic country, Twitter is not the cause of the uprising, but rather an amplifier, augmenting human ability to communicate, organize, and effect the change people desire. According to amplification theory, all of this organizing would have happened without Twitter, but it may have happened on a smaller scale and been less visible and effective. Similarly, if educational technology like MOOCs or some of my recent research on coding tutorials advances human ability to learn, it will primarily enhance those who have the access and opportunity to learn, amplifying the structural inequities that already exist in society rather than disrupting them. Kentaro’s argument, therefore, was that computing needs to move past its belief in technology as an unqualified agent of social change, and start approaching its research with more sober, realistic expectations of possible impact. He suggested this might start with collaborations with those in other disciplines who better understand social change.

All of this discussion of social change, enrollment, and education was perfectly complemented by a broad undercurrent of discussion on computing education and computing education research (my reason for attending). I spent much of my three days at Snowbird advocating for the research area, for policy change, and explicitly for the hiring of tenure-track computing education research faculty. On Monday, I sat on a panel with Mark Guzdial (Georgia Tech), Scott Klemmer (UCSD), Ben Shapiro (CU Boulder), and Diana Franklin (Chicago). To a full room of nearly eighty chairs and deans, we made a systematic argument that computing education research is a new, exciting, fundable, fundamental, rigorous, and essential area of computer science. We focused on the more tactical reasons to hire tenure-track faculty (e.g., invest while a rising stock is cheap, claim research area territory while you can, take global leadership on an issue of significant public interest), while the rest of the themes at the conference bolstered the more fundamental concerns about social change that demand an increased focus on computing education excellence and scale. To my surprise, the audience was not skeptical, but rather curious and eager to envision what a computing education researcher might bring to their department.

Complementing our panel and my many one-on-one discussions was a reception organized by Jan Cuny of NSF to help educate chairs and deans about the White House’s CS For All budget proposal. This initiative aims to train tens of thousands of high school CS teachers, giving access to computing education to every public school in the U.S. Each table at the reception had an expert and a topic, and attendees sat and ate cookies while the expert discussed what the funding would mean for K-12 education, how this change would necessarily transform college-level computing education for the better, and what chairs and deans could do to ensure the funding and the policy are passed and successful. I had a table full of mostly skeptics, but found that a few key points about computing education research were persuasive: 1) the basic research is vastly more rigorous than it used to be, and 2) there are fundamentally interesting computer science research questions to answer. I think most left convinced that, with the right candidate, it might be a smart move for their department to hire at least one tenure-track junior professor in the next decade.

I was heartened by the reactions I received about computing education research. Most of the chairs and deans I met at top departments were surprisingly supportive, and some even said they could see hiring in the area if it starts regularly attracting and producing world class research talent. Most of the other chairs and deans I met ranged from curious to aggressively interested, seeing clear opportunities to collaborate with colleges of education to create joint positions, or even full positions in CS as a way of differentiating themselves from their peer institutions. Most were connecting the dots between the next decade of social change discussed at the conference, the impending policy changes in K-12, and the severe lack of expertise that most CS departments have in dealing with these coming changes.

These conversations led to a clear call to action for me and my computing education research peers. The conditions are right for computing education research to take its next big leap in maturity, but this only happens if there are Ph.D. students who can show that computing education research is exciting, groundbreaking, and rigorous. It’s up to me and the many computing education research faculty in the world to show what the field is capable of in the next five years, so that computer science permanently invests in the field and it gains the stable leadership needed to steer the global effort to effectively retrain the world in computing.

Stepping back, I can’t imagine a more productive three days in my professional life. I have an entirely new image of academia, I made (hopefully good) impressions on hundreds of leaders in computing, and I have much greater clarity about my research and service goals for the next decade. What a perfect way to end my exhilarating return to academic life from industry and sabbatical!

8 thoughts on “Snowbird trip report: automation, education, and academia”

  1. Thank you for this thoughtful summary. With regard to the idea of having faculty members who research computer science education, I note that Germany has been doing this for a while. They call it “informatics didactics” (Didaktik der Informatik). Here’s an example: http://ddi.cs.uni-potsdam.de/

• Yes! I had the fortune of meeting several German faculty during a Dagstuhl workshop on computing education research. It’s great to see the international community that has been forming over the past decades.

  2. Thanks Mark! (comment on Mark Guzdial’s comment, from his blog).

I didn’t buy either Randy’s or Rich’s arguments — separate *departments* within a larger “Arts and Sciences” college is likely a better idea — and the long-ago, ill-considered practice of having a separate Engineering college is just happening again with computing.

    E.g. is a “Physics College” a good idea? (I don’t think so)

Kentaro Toyama’s talk was more fun to poke through. He missed a chance to point out that the Renaissance had already started some decades before the printing press appeared in 1454 (and that one of the main histories of this has the title “The Printing Press as an Agent of Change” (Eisenstein)).

    He could have asked questions about “literacy” and “what literacy brings and doesn’t bring” (both with reference (say) to the classic studies of Scribner and Cole in Africa, or to what literacy means in a totalitarian and/or a religious fundamentalist society).

    But he should also have asked questions about exposure to “multiple perspectives” and what else is required to see distinctions between the simple “right” and “wrong” versus “more than one way to look at the world”.

Equally acute are the many forms of “scale-up failures” in education in general, and computing in particular. There are many of these: two are (a) the failure of simple computing ideas from the 50s to work after generations of Moore’s Law and the Internet, and (b) the human failure to move on from simple childhood narcissism and world view to the points of view needed for adults in the 21st century.

    There are lots of manifestations of these (and many more examples of other failures). One that is particularly troubling is to see “successful industrialists” not understand the amount and kinds of things that had to be going successfully — over historical durations — for them to become successful. That “computer people” also have a hard time with this — maybe even a harder time — means partly that they have gained almost no sense of “systems” but are still thinking on much smaller and more local — even parochial — terms.

    Seems as though this discussion should not have been relegated to a banquet talk ….

• Thanks for the questions, Alan, and the critiques. I found the prepared sessions, aside from Kentaro’s, were pretty conservative with respect to reenvisioning computing. And most of the chairs and deans I met were just keeping their heads above water trying to manage the operations of their units, with little time to develop and act upon broader external visions.

      But there were exciting exceptions. Mark mentioned in his latest post the CS+X effort at Illinois, which is trying to figure out what CS+Anthropology, CS+English, CS+Biology, etc. mean. It’s hard to say where it’s going to go, but I can imagine that students with some very exciting perspectives might emerge over time. I was a CS+Psychology major myself, and I felt like taffy, repeatedly pulled to my limits and then recombined. The same was true when I was at CMU in the HCI Institute, and I sought out the same thing in the Information School I currently reside in. There’s much to be said for just putting many people with many different perspectives in the same space and seeing what happens.

  3. Following up on Alan’s comment:

Actually, there was — I was saving it for a future blog post. One of my favorite sessions at Snowbird was on creating Colleges or Schools of Computing. Two of those talks described their visions of what CS is and what it should be. Randy Bryant talked about the Alan Perlis, Allen Newell, and Herb Simon definition of computer science, which drove the creation of the School of CS at CMU (see slides here). Rich LeBlanc gave a terrific talk about the multi-faceted definition of “computing” that was the early vision for the College of Computing at Georgia Tech (see slides here). Carla Brodley had the best zinger of the session, pointing out that her College of Computer and Information Science was created eight years before any of the others and has more interdisciplinary degrees than even Illinois.

    Kentaro Toyama’s after dinner talk “Computing Alone Doesn’t Solve Social Problems. So, What’s Next?” had a decidedly anti-corporate perspective (see slides here). His first slide, on how Facebook is like The Matrix, was worth the price of admission. He quoted Mark Zuckerberg, “The richest 500 million [people] have way more money than the next six billion combined. You solve that by getting everyone online.” And then he spent the rest of the talk showing how wrong that quote is. His talk led to the most controversy, the most angry questioners I saw at Snowbird. (Tell CS department chairs that CS may not be the solution to all of the world’s problems, and they get really angry.)

What I found most interesting about the talks I described in this post was not the deans’ problems of large enrollments. I’m excited about the opportunity of having a different kind of student interested in computer science, in sizable numbers. So much of the CS undergraduate curriculum is about preparing students to be software engineers. Now here’s evidence that almost half our students don’t want to be software engineers, don’t want to create the next start-up. They want to use computer science to make the world a better place. That gives us leave to teach something other than Eclipse, C++, code reviews, and Agile methods.

  4. Hi Andy

    You didn’t mention discussion about “improving computing itself” (especially vs. academia as more and more reflecting pulls from corporate perceptions of their needs rather than both being “keepers of the flame” and “trying to improve the flame”).

    I’m much more concerned with what is being taught and the field’s own weak perceptions of itself than “dean’s problems” of large enrollments, etc.

    I think of myself as an actual “computerist” — anyone like me at this conference? (I was certainly not invited …)
