what’s surprising?

A common complaint about research is that it’s not “surprising.” For example, a reviewer might say, “The study was well done, but the results weren’t really that surprising,” or, “I found the results a bit predictable.”

But what do these statements really mean? Do they mean, “Had you asked me the research question, I could have guessed the results with some degree of confidence”? Or, “If you asked your research question of 100 experts, 95 of their guesses would have been right”?

We might intend them to mean that, but they don’t actually capture what happens when a reviewer reads a paper. What usually happens is that the reviewer reads the research question and thinks, “Hm, I could guess, but I’m not sure.” Then, upon reading the results, the reviewer thinks, “Well of course, that’s not surprising at all.” The test executed here is not whether an expert can confidently predict the answer to a research question, but whether, in hindsight, it seems plausible that an expert could have guessed the result.

In this sense, what makes a result “surprising” has less to do with what we know as scientists and more to do with what we think we know about what other researchers know.

This social fabric that apparently underlies our judgements of what is known has other interesting effects on what is accepted as advancing knowledge. For example, the fact that some finding has not been published is rarely a satisfactory argument for why it should be. What underlies this belief is that it is not our goal as scientists to document everything that we know. Instead, it is our job to document the subset of what we know that is interesting, important, and surprising.

But aren’t most judgements of what is interesting and important grounded in the present? How are we to know what will be interesting or important in the future? Who are we to judge that the future of humanity will find no interest in the uninteresting, unimportant results of today? Take, for example, a recent review I wrote on a paper about using multitouch tabletop displays for engineering design. I argued that it was unclear what problem was being solved. But what if it solves a problem that doesn’t exist yet? Or what if it solves it in such a way that another problem I hadn’t even thought of becomes trivial? On what basis could I really judge whether the work would have future worth?

All of this makes me think I don’t give papers a fair shake. Maybe I’ll adopt a new reviewing protocol: instead of reading the paper straight through and recording my thoughts, I’ll look at the authors’ research question and try to answer it myself for five minutes. Then I’ll read the paper, and if they came up with a different solution or answer than mine (one that is, of course, reliable, sound, etc.), then whether or not I’m surprised, the authors get credit for discovering or inventing something that I didn’t know. Of course, if I guessed their results or solution in a mere five minutes, what could they possibly have contributed?

5 thoughts on “what’s surprising?”

  1. Hi Andy! This is a funny serendipity, as I was just thinking about this problem you mention: “But aren’t most judgements of what is interesting and important grounded in the present? How are we to know what will be interesting or important in the future? Who are we to judge that the future of humanity will find no interest in the uninteresting, unimportant results of today?”

    I think the reciprocal can be true of our views on the past as well. I recently saw a rather ridiculous show on the History Channel called “Ancient Aliens”, whose entire premise seemed to be founded on the assumption that all ancient knowledge must be subsumed by our present knowledge, so if the ancients appear to have known things that we cannot figure out, they must have had divine or extraterrestrial help. I wrote about this on my blog: http://www.apieceofeverything.com/blog/index.php/2010/04/27/the-wisdom-of-the-ancients/

    But I feel your pain, as most of my research work has been in relatively un-groundbreaking territory, causing me to receive a lot of reviews along the lines of your “unsurprising” ones. It seems a hazard of doing research in technology in the internet age. People in HCI-oriented conferences want to publish the next desktop metaphor replacement or realizations of sci-fi interfaces from ILM movies, not solid empirical investigations into (for example and in my case) the effects of handwriting recognition accuracy on user satisfaction for various tasks, which is seen as stodgy and boring.

    *Lisa

    • Thanks for your comment, Lisa. I haven’t thought about this much lately, but I have a bunch of VL/HCC and UIST reviews to do, so it’s coming up again. I do think we should treat the “stodgy and boring” dimension differently than the “surprising” dimension. There exist studies that find things that were never known, but about which no one cares (this makes me think of the public view of biologists cataloging every possible thing about some species of rain forest toad). There also exist studies about really important things that aren’t too surprising (headline: people still have a hard time quitting smoking). I guess most of this debate is about what we want to archive. These days, with infinite storage, why don’t we just archive everything that’s well written and well executed, and flag the things we find surprising and important?

  2. Another problem of filtering papers by the unexpectedness of their findings is that it discourages replication. If the study a paper reports is very well executed, and the paper itself is clear and well argued, and the topic under study could still use some further exploration (that is, the paper isn’t just confirming something the community already takes as a fact), then I vote for accepting the paper. I do consider the element of surprise in my reviews, though, and a surprising result that meets the rest of these criteria is certainly a strong accept to me.

    I do like your proposal to give some thought to the research questions in advance. I know of at least one person who does something similar (he writes his predicted answers alongside the questions), and this helps him focus his reading and, later on, his critique.

  3. “Instead, it is our job to document the subset of what we know that is interesting, important, and surprising.”

    Not sure I agree. Interestingness or novelty might make a useful filter for a conference committee — to ensure the program is of interest — but presumably, being able to ‘guess’ what the results are isn’t the same as empirically validating them. I mean, it isn’t science just because you have a hunch about the outcome.

    Let’s look at Fitts’s Law, which states that the bigger and closer a UI element is, the quicker it is to navigate to. Seems unsurprising to me, kinda like saying the bigger the hole, the easier it is to get a ball through it. But I would expect Fitts’s Law to be ‘proven’ valid in the positivist sense, and I think a paper showing this would be a great contribution. Of course, the fifth or sixth paper showing the same thing, but in a slightly different context, may not be as important, and may not merit publication (although this is apparently untrue, judging by the Wikipedia entry!)
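
    For concreteness, here’s the formulation I remember (the common Shannon variant; my recollection, so take the exact form with a grain of salt):

        MT = a + b * log2(D/W + 1)

    where MT is movement time, D is the distance to the target, W is its width along the axis of motion, and a and b are empirically fitted constants. The logarithm is the part a hunch wouldn’t give you: it says that halving a target’s distance speeds you up about as much as doubling its width.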

    I think the same emphasis on empiricism is valid in any research, qualitative or quantitative. One of the problems with computer science, in my view, is just this insistence on novelty and interestingness. Of course the great papers show something unexpected, but science also advances in incremental steps.

    • I completely agree with you; I was implying that, as a community, computing-related researchers have settled on these as criteria. I don’t personally think they’re always helpful criteria.

      I think that sometimes, our insistence on novelty leads us to focus on work that is narrow, but exceedingly novel. There ought to always be space for incremental work about big problems.
