I’m no fan of student evaluations. They’re fraught with gender bias, age bias, and all kinds of construct validity issues. They certainly are not good measures of learning outcomes or teaching quality. At their best, they are good indicators of an instructor’s success at creating a coherent, engaging experience, which is important to learning. And engagement is no small feat in a world that increasingly frames colleges as businesses and students as customers, compelling students to constantly question the value of what they’re learning to their career paths.
Since we nevertheless gather student evaluations every quarter at the University of Washington, I do use them to track my own progress at engaging students in learning. And I’ve usually done pretty well on whatever they’re measuring. Take, for example, my Design Methods course, which is basically an introduction to HCI and Design methods for undergraduates. Since I started teaching it about eight years ago, I’ve generally earned anywhere from a 4.0 to a 4.6, which faculty generally consider excellent. At the University of Washington, these scores are the median across students of each student’s average over four prompts, rated on a scale from very poor to excellent (how was the course, how was the content, how were the instructor’s contributions, and how effective was the instructor at teaching). So my generally high scores mean that most of my students believe I can engage them, believe I can explain things to them, and believe that I have sufficient expertise in design. None of this means I can actually do these things well, but pre-tenure, that was good enough for me.
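For the curious, that aggregation is simple to sketch. Here is a minimal illustration, assuming a numeric 0–5 scale and using entirely hypothetical ratings (the exact UW coding of “very poor” to “excellent” may differ):

```python
from statistics import mean, median

# Each row is one (hypothetical) student's ratings on the four prompts:
# course, content, instructor's contributions, teaching effectiveness.
ratings = [
    [5, 4, 5, 5],
    [4, 4, 4, 4],
    [5, 5, 4, 5],
    [3, 4, 4, 3],
    [5, 5, 5, 5],
]

# Average each student's four responses, then take the median across students.
per_student = [mean(row) for row in ratings]
score = median(per_student)
print(score)  # → 4.75
```

Because the final number is a median of averages, a handful of very unhappy (or very happy) students barely moves it, which is worth remembering when comparing scores across courses.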
On sabbatical last year, however, I began to read learning sciences and education research more deeply, partly because I’ve been doing more computing education research, and partly because I wanted to become a better teacher. What I found was that while my teaching was adequate, it was far from ideal. While reading through the book How People Learn, I found countless opportunities to produce better learning outcomes, usually without significantly more effort (and sometimes with less effort!).
The source of most of these opportunities was a simpler, but more robust theory of learning. In essence, I learned from learning sciences that effective, efficient learning requires three things:
- A clear sense of the knowledge to be taught.
- Deliberate practice of that knowledge (practice with immediate, targeted feedback).
- Motivation, and therefore attention, directed at that practice.
That’s it. I learned that the complexity isn’t so much in learning (humans seem to do that quite naturally) but in setting up conditions that predispose people to learning. Getting students motivated, and therefore attending to practice, is hard. And designing effective deliberate practice is hard, often because we don’t know exactly what we’re teaching or what’s hard about what we’re teaching. It’s also hard to scale targeted, immediate feedback to individual learners.
Given these basics, I spent part of my sabbatical redesigning my Design Methods course to achieve better learning outcomes. Here are a few of the things I did, applying the theory above.
One of the first and easiest things I did was share with students my theory of learning, to frame how I was engaging with them. I taught Carol Dweck’s work on theories of intelligence, explaining that every student holds beliefs about where ability comes from, and that those beliefs actually mediate how much people learn. I encouraged them to adopt a growth mindset, remembering that all ability comes from deliberate practice, and that the class would be structured to give them that practice. Second, I told them that as much as it was my job to structure an environment conducive to learning, it would only happen if they engaged, believed in their ability to learn, and listened closely to the feedback I provided.
Next, I tackled the problem of motivating students. I’ve always had some model in my head of what my undergraduates care about, but that model was always based on a few close relationships with undergraduate researchers, generic surveys, or student feedback in evaluations. None of these worked well at providing substantial insight into what motivates my students. To solve this, I spent the first day of class asking students to write a brief in-class essay answering the question “Why are you in college and what does design have to do with it?” Then, rather than reading them privately, I had students share them with each other in small groups, and then construct an elaborate whiteboard diagram of their life trajectories and how design fit into them. What we learned was that because my course was a required course, most had little intrinsic motivation to learn design, but they were curious about it and thought it might be useful. Most also had very concrete life goals, including specific careers, visions for where they would live and how much money they needed to make to live there, and what kinds of friends and family they wanted. For most of them, school was a tool for getting them to those futures.
I used this model of my students’ motivations to shape a third pedagogical practice: at the beginning of every class and every in-class activity, I explicitly stated how I thought the day’s activity would contribute to their life goals. Devising these links was not easy and couldn’t be done in advance; I was constantly updating my model of what was motivating my students so I could come up with a single justification that would work best with the whole class. For example, the day I taught heuristic evaluation, I said something to the effect of: “So we’ve talked a lot about UX designers in class so far and how their general responsibility is to envision seamless user interface designs. Some of you want this job; others of you will be working with UX designers to implement their visions. How will they know if their design is good? And how can they know in just a few days, which is the time scale that many designers have to work at? There’s one method invented back in the 1990s that tried to solve this problem. We’re going to talk about it today, learn its strengths and weaknesses, and discuss when it makes sense to use it.” Note that this kind of justification is essentially the same justification that Jakob Nielsen used in his book Usability Engineering. I just needed to link his motivation to students’ individual aspirations.
Another challenge was motivating students to learn the declarative knowledge of HCI and Design, such as important methods, concepts, histories, and ideas in design. How could I motivate students to read about these things? In addition to using the same strategy above (simply explaining how each reading linked to students’ own goals), I designed a series of reading exercises that aimed to be frictionless, but also engaging. Twice a week, students would read a short blog-post-length chapter that I personally wrote as an introduction to a subject in design. They were short enough that students would read them, but deeply linked, so that throughout the reading, there were multiple follow-up readings students could do to deepen their knowledge. Then, to motivate students to read them, I held a reading quiz at the beginning of class to verify that they had read it (which had the added benefit of getting them to show up to class on time). I also required a brief summary of a reading of their own choosing, selected from the readings I linked to, or from any other reading, podcast, or video on the web on the same subject. After the reading quiz, students engaged in “think-pair-share,” turning to a few of their neighbors and explaining what they read and what they found interesting about it. Then, after a few minutes of sharing, I asked for students to voluntarily share the most interesting readings they heard about from their peers. In just about 20 minutes of class, we covered a range of readings, many of which were entirely new to me. I had to be ready to rapidly synthesize and relate the topics they raised to the subject of the day, but this kept me engaged as well. It also reinforced every day that I genuinely did have the expertise to be teaching the subject.
After the reading period, we would engage in an in-class activity. I explained to students that our time together was precious, because it was the only time that we could actually do design together (as design is rarely done alone). For each topic, I carefully designed an activity with a very specific form of deliberate practice, always beginning with a justification and ending with a reflection that tied together the practice they engaged in with feedback on what they did right and wrong in their practice. My role in these activities was to facilitate and closely observe so I could provide this feedback. One example was a 90-minute usability testing activity in which teams of two designed a paper prototype alarm clock interface, designed a task to verify its usability, and conducted a series of usability tests with their classmates. The rules governing this activity were carefully designed to mimic the kind of usability tests that people run in industry, but also to reveal the fundamental scholarly question behind usability testing (namely, how reliable the knowledge it produces is). I tried to design each of these activities to feel like a game, with some clear notion of the rules and a definition of winning, while aligning these with authentic ideas in practice and making their authenticity clear to the students.
The result of these readings and activities was that every day, students got to come to class to share what they learned in their selected readings, learn from each other, and then engage with each other with my help to acquire a skill that would help them get closer to their life goals. Almost all students came on time, excited about class, and many left craving more time to go into more depth (which we never had).
I can say with some certainty (both from student evaluations and my own observations) that students were engaged: my median student evaluation score was a 4.9/5.0, the highest I’ve ever received across eight years of teaching and twenty-five courses, and the highest I’ve ever seen amongst my colleagues. Unfortunately, what I still can’t say is that they learned any better. We simply don’t know how to measure design skill with any reliability or validity. And so I take it on faith that, given what we know about learning, the students practiced the content I gave them more, and more deliberately, as a natural byproduct of deeper, more sustained engagement.
Now I just have to figure out if it’s the right content! And if students’ perceptions of my teaching skills have anything to do with the quality of their learning. And how to figure out what they’ve learned about design. And a million other unanswered questions about design education!