does automation free us or enslave us?

In his new book Shop Class as Soulcraft, Matthew Crawford shares a number of fascinating insights about the nature of work, its economic history, and its role in the maintenance of our individual moral character. I found it a captivating read, one that encouraged me to think about the distant forces of tenure and reputation that shape my judgments as a teacher and researcher, and to reconsider to what extent I let them intrude upon what I know my work demands.

Buried throughout his enlightening discourse, however, is a strike at the heart of computing—and in particular, automation—as a tool for human good.

His argument is as follows:

“Representing states of the world in a merely formal way, as “information” of the sort that can be coded, allows them to be entered into a logical syllogism of the sort that computerized diagnostics can solve. But this is to treat states of the world in isolation from the context in which their meaning arises, so such representations are especially liable to nonsense.”
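To see what “liable to nonsense” means in practice, consider a toy version of such a computerized diagnostic. This is a hypothetical sketch; the facts and rules are invented for illustration, not drawn from any real system:

```python
# A toy "computerized diagnostic" in the spirit of the quote above: states
# of the world reduced to coded facts, fed through rules. The facts and
# rules here are invented for illustration.

RULES = [
    # if the battery reads low and the engine won't crank, replace the battery
    ({"battery_voltage_low": True, "engine_cranks": False}, "replace battery"),
]

def diagnose(facts: dict) -> str:
    """Return the first conclusion whose conditions all match the coded facts."""
    for conditions, conclusion in RULES:
        if all(facts.get(key) == value for key, value in conditions.items()):
            return conclusion
    return "no diagnosis"

# The syllogism is valid. But the pertinent context, say that the owner left
# the headlights on overnight and the battery is only a week old, was never
# a codable fact, so the "valid" conclusion is nonsense.
print(diagnose({"battery_voltage_low": True, "engine_cranks": False}))
# -> replace battery
```

The inference is formally sound, and that is exactly the problem: everything that would make the conclusion sensible lives outside the representation.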

This nonsense often hands authority to the machine rather than to the person:

“Consider the angry feeling that bubbles up in this person when, in a public bathroom, he finds himself waving his hands under the faucet, trying to elicit a few seconds of water from it in a futile rain dance of guessed-at mudras. This man would like to know: Why should there not be a handle? Instead he is asked to supplicate invisible powers. It’s true, some people fail to turn off a manual faucet. With its blanket presumption of irresponsibility, the infrared faucet doesn’t merely respond to this fact, it installs it, giving it the status of normalcy. There is a kind of infantilization at work, and it offends the spirited personality.”
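The faucet makes a vivid example precisely because its logic is so small. Here is a minimal sketch (hypothetical; not any real faucet’s firmware, just the shape of the rule) of how little of the user’s situation it can represent:

```python
import random
import time

BURST_SECONDS = 3  # the designer's one-size-fits-all guess at how much water anyone needs

def sensor_detects_motion() -> bool:
    # Stand-in for the infrared sensor; real firmware would read a
    # photodiode. Here we just simulate occasional hand-waving.
    return random.random() < 0.2

def faucet_controller(steps: int = 10) -> None:
    for _ in range(steps):
        if sensor_detects_motion():
            # The faucet's entire model of the user: motion means water,
            # briefly. Filling a bottle, rinsing a razor, or scrubbing in
            # for surgery are indistinguishable to this rule, and there
            # is no handle through which the user can say otherwise.
            print(f"valve open for {BURST_SECONDS}s")
            time.sleep(BURST_SECONDS)
        else:
            print("valve closed")

if __name__ == "__main__":
    faucet_controller()
```

Every judgment the handle once left to the user, such as how much water and for what purpose, is fixed here as a constant and a hard-coded rule.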

It’s not just accurate contextual information, however, that the infrared faucet lacks as it thieves our control to save water. Crawford argues that there is something unique that we do as human beings that is critical to sound judgment, yet inimitable in machines:

“… in the real world, problems don’t present themselves in this predigested way; usually there is too much information, and it is difficult to know what is pertinent and what isn’t. Knowing what kind of problem you have on hand means knowing what features of the situation can be ignored. Even the boundaries of what counts as “the situation” can be ambiguous; making discriminations of pertinence cannot be achieved by the application of rules, and requires the kind of judgment that comes with experience.”

Crawford goes on to assert that this human experience, and more specifically, human expertise, is something that must be acquired through situated engagement in work. He describes his work as a motorcycle mechanic, articulating the role of mentorship and failure in acquiring this situated experience, and argues that “the degradation of work is often based on efforts to replace the intuitive judgments of practitioners with rule following, and codify knowledge into abstract systems of symbols that then stand in for situated knowledge.”

The point I found most damning was the designer’s role in all of this:

“Those who belong to a certain order of society—people who make big decisions that affect all of us—don’t seem to have much sense of their own fallibility. Being unacquainted with failure, the kind that can’t be interpreted away, may have something to do with the lack of caution that business and political leaders often display in the actions they undertake on behalf of other people.”

Or software designers, perhaps. Because designers and policy makers are so far removed from the contexts in which their decisions will manifest, it is often impossible to know when software might fail, or even what failure might mean to the idiosyncratic concerns of the individuals who use it.

Crawford’s claim that software degrades human agency is difficult to contest, and yet it is at odds with many core endeavors in HCI. As with the faucet, deficient models of the world are often at the root of usability problems, and yet we persist in believing we can be rid of them with the right tools and methods. Context-aware computing, as much as we try, is still in its infancy, nowhere near creating systems that make even facsimiles of human judgment. Our efforts to bring machine learning into the fold may help us reason about problems that were previously unreasonable, but in doing so, will we inadvertently reduce people’s role, as Crawford puts it, “to be that of a cog … rather than a thinking person”? Even information systems, with their focus on representation rather than reasoning, frame and fix data in ways we never intended (as in Facebook’s recent release of phone numbers to marketers).

As HCI researchers, we also have some role to play in Crawford’s paradox about technology and consumerism:

“There seems to be an ideology of freedom at the heart of consumerist material culture; a promise to disburden us of mental and bodily involvement with our own stuff so we can pursue ends we have freely chosen. Yet this disburdening gives us fewer occasions for the experience of direct responsibility… It points to a paradox in our experience of agency: to be master of your own stuff entails also being mastered by it.”

Are there types of software technology that enhance human agency, rather than degrade it? And to what extent are we, as HCI researchers, furthering or fighting this trend by trying to make computing more accessible, ubiquitous, and context-aware? These are moral questions that we should all consider, as they are at the core of our community’s values and our impact on society.

5 thoughts on “does automation free us or enslave us?”

  1. Maybe I need to read Crawford’s book, but based on the quotes here I don’t find his argument about automation very compelling.

    Sure, there’s some bad design out there. It’s too easy to pick a bad example and run with it. It would have been more compelling if he’d at least used an example that is generally regarded as good design.

    His point about problems in the real world not being pre-digested and requiring experience to decide which aspects are pertinent sounds exactly like what is prescribed by most every design practice that I can think of. Mentorship of the designer is the main idea behind contextual inquiry, for example.

    I certainly agree that deficient models of the world are behind many usability problems, but again, the design processes that we teach in HCI are all geared towards building better models. I also agree that we can’t rid these models of all their deficiencies, but again, the design process is geared towards identifying the most important parts of the model and ensuring those parts are best understood. Just because the model can’t be perfect doesn’t invalidate the whole process, nor does the fact that some (many?) people don’t use good process and produce poor designs.

    Finally, this seems to conveniently overlook all of the automation that does work and saves us all plenty of time. The mathematical capabilities of computers by themselves have saved humans a ridiculous amount of time…I happen to be sitting in an airport right now, and thinking about the amount of technology that will get me from one location to another is staggering, from the simulations that were no doubt run to ensure the plane would fly (efficiently), to the auto-pilot systems that fly the plane most of the time, to the credit card payment system that means the flight attendants don’t have to manage a huge stack of cash when half the passengers order Jack and Cokes (ok, maybe just me). I suspect because we rarely have to deal with the automation that does work, we completely forget that it is even there.

    -jn

    • Thanks Jeff, I think you’re in the same place I am on the issue. At least with respect to automation, he sets up a somewhat false dichotomy between electronic machines and other kinds of automated technology. There’s plenty of room for automation that models the world well enough, saves us time, and even saves lives.

      The larger point of the book, however, is less to rail against technology and more to consider the effects of engaging in work that is subject to distant, invisible, or immovable forces. I think his concern is not with saving time and lives, but with helping people to live fulfilling and personally meaningful lives. His arguments boil down to the belief that fulfilling work comes from an ability to be fully responsible for the task at hand; automation removes some of this responsibility.
