ICER 2016 (the ACM International Computing Education Research conference) just ended a few hours ago and I’m enjoying a quiet Sunday afternoon in Melbourne, reflecting on what I learned. Since you probably didn’t get to attend, here’s my synthesis of what I found notable.
First, a meta comment. As I’ve noted in past years, ICER remains the most inclusive, participatory, rigorous, and constructive computer science research conference I attend (specifically relative to my multiple experiences at CHI, UIST, CSCW, ICSE, FSE, OOPSLA, SIGCSE, and VL/HCC). A few specific norms are, I think, responsible for this:
- Attendees sit at round tables and, after each talk, discuss the presentation with each other for five minutes before asking the questions that emerged. This filters out nasty and boring questions, and can also lead to powerful exchanges of expertise and interesting new ideas. It also forces attendees to pay attention, lest they lose social capital from having nothing to say.
- Session chairs publicly celebrate new attendees, first-time speakers, and first-time authors, creating a welcoming spirit for newcomers.
- The program committee regularly accepts excellent replications, creating a tone of scientific progress rather than a cult of identity.
- The conference gives two awards: one for rigor and one for provocative risk taking, incentivizing both kinds of research necessary for progress.
- The conference has a culture of senior faculty acting as mentors to the whole community, not just to their own students. All doctoral consortium, all the time.
- The conference ends with an open session for getting feedback on research designs for ongoing work and new ideas, creating a communal feeling of discovery.
There are always things to improve about a conference, but most of what I’d improve about ICER are things other conferences should do too: shorter talks, more networking, a move to a journal-style revise-and-resubmit model, allowing authors of journal papers to present, and finding ways to include authors of rejected work.
Now, to the content. The program was diverse and rigorous. A number of papers further reinforced that perceptions, stereotypes, and beliefs are powerful forces not only in engagement with computing education, but also in learning. Shitanshu Mishra showed that this is true in India as well, where culture creates even stronger stereotypes around the primacy of computer science as the “best” degree to pursue. Kara Behnke provided nice evidence that AP CS Principles, with its focus on the relationship between CS and the world, powerfully changed these perceptions and reshaped the conceptions of computing that students took into their post-secondary studies, while Sabastian Ericson discussed the role of one-year industry experiences during college in students’ reframing of the skills they were acquiring in school. This year’s best paper award went to Alex Lishinki et al., who provided convincing evidence that self-efficacy is reshaped by course performance, and that this reshaping occurs differently for men and women, and for exams and projects.
Many papers investigated problem solving through different lenses. Our own paper explored student self-regulation, which we found was infrequent and shallow, but predictive of problem-solving success. Efthimia Aivaloglou presented an investigation of Scratch programs, replicating prior work showing that most Scratch programs are simple, make little use of programming constructs, and almost entirely lack conditionals, suggesting that the majority of Scratch use lacks many of the features of programming found in other learning contexts. Several others presented more evidence that students don’t understand programming language semantics very well and learn programming language constructs at very different rates (and sometimes not at all), but that subgoal-labeled examples and peer assistance can be powerful methods for increasing problem-solving success and learning.
My favorite paper, and the paper that won the John Henry award for provocative risk-taking in research, was a study by Elizabeth Patitsas et al., which gathered convincing evidence from almost two decades of intro CS grades at UBC that despite broad beliefs among faculty that grades are bimodal because of the existence of a “geek gene,” there is virtually no empirical evidence of this. In fact, grades were quite normally distributed. Elizabeth also found evidence that the more teachers believed in a “geek gene,” the more likely they were to label distributions as bimodal (even when those distributions were not bimodal, statistically). This is strong evidence that not only do students’ prior beliefs powerfully shape learning, but teachers’ prior beliefs (and confirmation bias in particular) powerfully shape their attitudes toward their students.
The conference continues to raise the bar on rigor, and continues to grow in submissions and accepted papers. It’s exciting to be part of such a dedicated, talented community. If you’re thinking about doing some computing education research, or already are and are publishing it elsewhere, consider attending ICER 2017 in Tacoma next year, August 17-20. My lab is planning several submissions, and UW is likely to bring a large number of students and faculty. I hope to see you there!