what’s in a frame?

A few days ago in the NY Times, there was a story reflecting on Amber Case’s idea that we are all cyborgs, using a wide range of tools for both physical and mental modification. The key idea in the story is a lament for the loss of memories that have physical embodiments, such as a photograph that carries meaning both in what it contains and in its physical container. In contrast, the digital photographs of today still have their meaning, but their container is meaningless, because it’s virtual. A digital photo could just as easily be opened in any of a hundred photo viewing applications and displayed on countless devices in countless ways.

To me, the divorce of information from embodiment is one of the most powerful but subversive aspects of software as a medium. It underlies nearly every major change in industry currently under debate, in music, print, libraries, publishing, journalism, movies, and every other kind of media. But the question I still puzzle over is whether this divorce is a necessary part of preserving the power of computing. Does the ability to change a photo’s container require that the container have no meaning? Or, put another way, do people ascribe meaning to their cell phones and digital photo frames, even though those devices can now display any photo in the world?

An interesting case of this happened a few months ago, when my iPhone’s USB port died and I could no longer charge it. The phone had a few identifiable scuffs, and I certainly had memories of all the places I’d been with it and all the photos I’d taken with it. But when I exchanged it for a nearly identical replacement, the new phone only felt foreign for a few days. In fact, I sometimes mistake it for my old phone. This special case of an identical but different container is an interesting one, because it speaks directly to the question at hand: what meaning, if any, is there in physical objects, other than our memories of them?

the semblance of objectivity in numbers

I just received my first ever first-authored conference paper rejection from FSE. The primary reasons, quoted from the reviews, include:

  • “The qualitative nature of the study … is liable to misinterpretation and bias.”
  • “I was expecting a quantitative analysis: is there any correlation between some of the characteristics and between [the results] and the time a bug takes to resolve and its resolution status?”
  • “I would have thought that what types of elements to look for in discussion should be decided before by the researchers as it should be based on the problem”
  • “I was expecting concrete advice on HOW the tools should structure the discussion.”

I was hoping the reviewers would have been more epistemologically informed. The first and second quotes are quite telling: they imply that some forms of empiricism are not subject to misinterpretation or bias. But quantitative empirical measures are just as subject to bias as any other measure. Had I counted certain kinds of data and run correlations between those counts and other outcome measures, not only would roughly one in twenty of the correlations have been “statistically significant” by chance (at the conventional p < .05 threshold), but whether there was any real meaning in the variables would depend on the construct validity of the quantitative measurements. Had I correlated hyperboles with bug resolution time, for instance, not only would the hyperbole measure have the same limitations it had as a qualitative classification, but the bug resolution time would be influenced by any number of contextual factors, obscuring the hyperbole’s true impact on consensus. Transforming empirical observations into numbers does NOT make them objective, nor does it prevent bias and misinterpretation.
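The one-in-twenty point is easy to demonstrate. Here is a minimal simulation, with entirely hypothetical data, that correlates twenty unrelated random “measures” against a random outcome (think: bug resolution time) and counts how many come out “significant” at p < .05 purely by chance; the normal approximation to the correlation test is used to keep the sketch self-contained.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 200  # pretend each row is a bug report
outcome = rng.normal(size=n)  # e.g. bug resolution time (pure noise)

def two_sided_p(r, n):
    # Normal approximation: under the null hypothesis of no correlation,
    # r * sqrt(n) is approximately standard normal for large n.
    z = abs(r) * math.sqrt(n)
    return math.erfc(z / math.sqrt(2))

significant = 0
for _ in range(20):  # twenty unrelated quantified "characteristics"
    measure = rng.normal(size=n)  # noise, by construction
    r = np.corrcoef(measure, outcome)[0, 1]
    if two_sided_p(r, n) < 0.05:
        significant += 1

# On average about 1 of the 20 correlations is "significant" by chance.
print(significant)
```

None of the twenty measures has any real relationship to the outcome, yet the expected number of “significant” correlations is 20 × 0.05 = 1.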

The third quote is ironic: this reviewer seems to believe that the only way to analyze a problem is to make some assumption about its nature upfront. The whole point of qualitative research is that the more upfront assumptions you make, the more you bias your findings. What this reviewer is proposing would have lessened the objectivity of the results and prevented us from uncovering the trends we did.

The last quote reveals the systemic bias in software engineering research (and also in some HCI venues): qualitative studies are only valuable if they explicitly inform design. What this really reduces to is a view that producing material goods is real work, but producing knowledge comes for free. Building a system or automating some activity, even if the system or automation is entirely impractical in the real world, is more valuable than understanding the real world. The comment also reveals the reviewer’s lack of understanding about design: innovations don’t come from studies, they come from people. Studies can support design decisions (and the results in our rejected submission have been quite valuable in our current design efforts), but they cannot generate ideas. People generate ideas.

Had I really wanted the paper in, I would have littered the submission with arbitrary but seemingly objective quantifications and correlations of our data (which is what most quantifications in software engineering papers are). This has worked in past papers and is a tried and true workaround for the software engineering community’s lack of experience with qualitative methods. Reviewers would have thought, “I don’t get all of this qualitative stuff, but these numbers are great.” I decided not to do this on principle, since doing so would have only made the results seem more objective without adding any real objectivity.

So much for principle. Time to start correlating things!

tough T

I just spent a day at Edward Tufte’s course on information design at the Seattle Marriott Waterfront. I’ve always known his work, I’ve talked about it in design classes, I’ve told students to read his books, but not once have I heard him speak. Now I can confidently say that his captions speak louder than words. Snicker.

That’s not to say he wasn’t insightful. The books have always been a nice translation of classic design principles into static visual information design, but most of the course was simply him parroting his own words. What made it unbearable was that he spoke them with the lifeless apathy of a statistics professor. Oh wait, he was one.

Aside from his lack of spark, there were a number of nice things about the day. I got a box full of his books; I got a refresher on visual information design; I had a chance to think more about forms of dissemination for my research (I tire of limiting my influence to academic publications). It was also a nice calm before my early May storm of deadlines.