Dangerous Liaisons - UW Libraries

December 15, 2017

Is CRAAP dead? An inquest.  

Chloe Horning

I quite like the ACRL monthly digital publication “Keeping Up With…”. If you don’t receive it in your inbox each month, here’s the gist: each installment of “Keeping Up With…” provides a brief rundown of a current library issue, with some citations at the end to check out. In November, the “Keeping Up With…” article, by Elizabeth Boden, was on debiasing and fake news, a topic that is probably of interest to most of us right now.

It is of interest to me, in any case, because I have recently returned to UW and am facing a hefty Information Literacy teaching load for Winter Quarter. I’ve been working at local community colleges for the last few years, where various approaches to resource evaluation are taught. The most common method, of course, is CRAAP. But the sentiment has been brewing amongst some of our community college colleagues for some time now that “CRAAP is dead,” particularly when it comes to evaluating “fake news.” Certainly, how to teach resource evaluation and the concept of bias is at the forefront of my mind as I launch into teaching UWB and Cascadia College students in 2018. So, I was immediately struck by the simple clarity of Boden’s assertion that “Popular evaluation tests like the CRAAP test can help students and patrons catch egregious examples of fake news. However, the more society becomes ensconced in social media filter bubbles, the less effective the CRAAP test becomes.”

If it’s been a while since you’ve taught CRAAP, here’s a refresher of what the acronym means:

  • Currency: The timeliness of the information.
  • Relevance: The importance of the information for your needs.
  • Authority: The source of the information.
  • Accuracy: The reliability, truthfulness, and correctness of the content.
  • Purpose: The reason the information exists.

The checklist is usually attributed to “the librarians of CSU Chico.”
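To make concrete just how mechanical the checklist model is, here is a minimal sketch, entirely my own illustration and not any official rubric, of what CRAAP looks like if you reduce it to code. The prompts and the scoring function are hypothetical stand-ins for the kind of worksheet an instructor might hand out.

```python
# A toy illustration of the CRAAP checklist as code -- hypothetical prompts and
# scoring, not an official rubric. Reducing each criterion to a yes/no question
# is exactly the mechanical quality that critics of checklist models object to.

CRAAP_PROMPTS = {
    "Currency": "Is the information recent enough (or updated recently enough) for your topic?",
    "Relevance": "Does the information relate directly to your question and audience?",
    "Authority": "Is the author or publisher qualified to write on the topic?",
    "Accuracy": "Is the content supported by evidence and free of obvious errors?",
    "Purpose": "Is the reason the information exists (inform, persuade, sell) transparent?",
}

def craap_score(answers: dict) -> int:
    """Count how many criteria a source 'passes' -- a crude tally, by design."""
    return sum(1 for criterion in CRAAP_PROMPTS if answers.get(criterion))

# Example: a well-argued blog post might fail Authority yet pass everything else.
blog_post = {"Currency": True, "Relevance": True, "Authority": False,
             "Accuracy": True, "Purpose": True}
print(craap_score(blog_post), "of", len(CRAAP_PROMPTS), "criteria met")
```

The point of the sketch is not that anyone should score sources this way; it’s that a checklist, by construction, flattens evaluation into a tally.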

But are reports of the death of CRAAP true, or greatly exaggerated? One thing is certain: they are nothing new. Marc Meola argued more than a decade ago that checklist-style evaluation doesn’t accurately reflect the way that students learn.

More recently, Lane Wilkinson, on his “Sense and Reference” blog, wrote up a nice example of how the CRAAP test can fail when it runs up against several cognitive biases.

And Mike Caulfield has a couple of great articles on Medium, where he talks about the pitfalls of CRAAP-esque checklists. Moreover, he criticizes Facebook for adopting this model in its documentation around news literacy.

Today, when the post-2016 election furor over public inability to identify fake news has raised the stakes of evaluating online information to dizzying heights, the question must be posed: have we, the information literacy educators of the world, failed?

I have personally taught CRAAP many times, but I think my reluctance to really embrace it comes down to two things: (1) I bore myself to death when I teach it, and (2) it feels dishonest, because I never apply a checklist model to my own research.

Already in this article, I have cited articles that live on WordPress and Medium. If I were following CRAAP, that would be anathema. But we all know that there is very high-quality information on blog platforms, information that can enhance our professional practice even if we don’t know for sure the credentials of the people sharing it. Does that make me a rube? Or does it mean that as a librarian I consider myself “too good for CRAAP”? Aside from the obvious opportunities for scatological humor here, I think not; rather, I think the model itself is flawed. Sure, you could point out that this blog post is not a scholarly research paper. But actually, the classes in which I am most often asked to focus on resource evaluation are ones in which the required output is not a paper, or in which the instructor has allowed the students some flexibility to use non-scholarly sources. I find that in IL instruction, the stakes for resource evaluation are highest when students are not required to use scholarly sources. Additionally, we are teaching the whole person, right? We want students to make good choices about the information they consume outside of the classroom as well.

Of course, there are alternatives to CRAAP out there. The IMVAIN mnemonic is nice because it offers a series of binary choices to help guide the evaluator. Research has been done on IMVAIN in K-12 libraries, but not, to my knowledge, in college/university settings.

  • Independent vs. Self-interested
  • Multiple vs. Lone or Sole source
  • Verifies vs. Asserts
  • Authoritative/Informed vs. Uninformed
  • Named vs. Unnamed
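Because IMVAIN frames each criterion as a binary, it lends itself to an even terser sketch. Again, this is purely illustrative; the field names and the idea of a “strength” score are my own invention, not part of the published mnemonic.

```python
# A toy illustration of IMVAIN's binary choices -- hypothetical field names and
# scoring, not an official rubric. Each attribute is True when the source lands
# on the stronger side of the pair.

from dataclasses import dataclass

@dataclass
class ImvainCheck:
    independent: bool    # Independent vs. Self-interested
    multiple: bool       # Multiple vs. Lone or Sole source
    verifies: bool       # Verifies vs. Asserts
    authoritative: bool  # Authoritative/Informed vs. Uninformed
    named: bool          # Named vs. Unnamed

    def strength(self) -> int:
        """Number of the five binary checks the source passes."""
        return sum([self.independent, self.multiple, self.verifies,
                    self.authoritative, self.named])

# Example: an unnamed but otherwise strong source scores 4 of 5.
print(ImvainCheck(True, True, True, True, False).strength(), "of 5")
```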

I have also seen some librarians implement the 5Ws (who, what, where, when, why) as an evaluation rubric.

But does either of these rubrics enhance student knowledge any more than CRAAP does? Perhaps because IMVAIN offers binary choices, and the 5Ws are familiar to many learners from K-12, one could argue that these models reduce cognitive load and get to the heart of the matter more effectively. But at the end of the day, they are still checklist models. It’s difficult to envision a student, once they leave the information literacy classroom, applying any such checklist to information that they find on their own. And even if one were inclined to apply a checklist to, say, an article that one was considering citing as part of a course assignment, one would be far less likely to apply it to news and information consumed in the course of daily events.

All of this points to the notion that librarians are unprepared to deal with the rising tide of fake news, or at least, we WERE unprepared at the time of the 2016 election. Now, though, there are dozens if not hundreds of high-quality online examples created by librarians who want us to understand how to identify fake news. For example, here’s a great list.

But this creates another conundrum… maybe a meta-conundrum: one of the reasons that people “fall for” fake news is that they are experiencing cognitive overload in an age of too much information. There seems to be an awful lot of static and noise in the conversation about libraries and fake news, and I wonder if librarians are unwittingly contributing to it with ever more LibGuides, articles, and lists. And checklist models keep cropping up, again and again. In fact, longer and more complicated checklists are often the answer to fake news proffered by entities like Facebook.

Other resources, aimed at students and consumers of news, proclaim that “You are the fact checker now,” offering tools and techniques to evaluate news the way the professionals do.

Mike Caulfield, whom I mentioned earlier, also wrote a book (available as an open-access e-book) called “Web Literacy for Student Fact Checkers,” which looks really useful and which I have already seen cited by some of my colleagues.

So, I wonder: should we be teaching students to evaluate information like fact checkers? Mike Caulfield’s three-step checklist surely looks appealing here:

  • Check for previous fact-checking work
  • Go upstream to the source
  • Read laterally

Caulfield also suggests adding a fourth step: check your emotional reaction. This helps you gauge whether an article is activating your cognitive biases. Cognitive biases themselves are multifaceted and multitudinous (see also https://betterhumans.coach.me/cognitive-bias-cheat-sheet-55a472476b18 or https://en.wikipedia.org/wiki/List_of_cognitive_biases). So, teaching about cognitive bias seems untenable outside of a credit-bearing IL course, or a course on logic and discourse. It’s simply too much. However, a librarian could try to get across what Buster Benson, in his Medium article, calls the “four giant problems our brains have evolved to deal with over the last few million years”:

  • Information overload sucks, so we aggressively filter. Noise becomes signal.
  • Lack of meaning is confusing, so we fill in the gaps. Signal becomes a story.
  • Need to act fast lest we lose our chance, so we jump to conclusions. Stories become decisions.
  • This isn’t getting easier, so we try to remember the important bits. Decisions inform our mental models of the world.

If we know that these pitfalls are being triggered, perhaps we can learn to think past them.

All of this fails where CRAAP succeeds in one area: CRAAP asks you to evaluate information for currency (when it was written) and relevance (how closely it relates to user needs). The currency piece trips students up sometimes, of course, because it is situationally dependent: the instructor may insist on sources that were written recently, or not; developments in the field may render older information unreliable, or not. It all depends. Relevance is the bit that can’t be answered by simple fact checking, because it ties the source to the user need, or intended outcome. That said, it raises the question of whether students would intentionally select irrelevant information if we didn’t direct them not to. Perhaps these issues could be addressed as part of a conversation on search terms and filtering.

So, what do y’all think? Is CRAAP dead, or is it more relevant than ever? Does it bore you to tears, or do you just LOVE making that fun toilet joke in every one-shot? Should we fix what ain’t broke, or is our failure to teach effective evaluation contributing to the downfall of our democracy?

At this point, all I can say is that further study is needed.

Well, crap.