Design and the limits of automation

One of the central themes of U.S. President Barack Obama’s final State of the Union address was the idea that wages are flat because of automation. He argued that automation, and in particular computing, is rapidly eliminating jobs, especially those that involve routine, proceduralized, deterministic tasks. And with machine learning, AI, and deep learning, many of the tasks that require judgment and decision-making are also being automated.

I was talking about this—forebodingly—with my 14-year-old daughter at dinner the other night, and she had a surprising reaction:

“That’s great! Then we can all be artists, designers, and inventors!”

I probed:

“Why won’t those be automated too?”

“Because computers aren’t creative, and even when they are, they don’t have any taste.”

What an interesting hypothesis! We spoke about this a bit longer and arrived at an interesting conclusion. Computers may be able to generate a lot of ideas (because of their speed and scalability), but when it comes time to select which of those ideas are good, they will always struggle, since notions of what makes an idea good are so subtle, multidimensional, and often subjective. This is especially true in art and design, where emotional response has primacy over functionality. For evidence, look at any review of a movie, album, or exhibit. Could a machine predict the critiques, let alone act upon them to improve the art?
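
To make this asymmetry concrete, here is a minimal sketch of the generate-and-select loop we were imagining, in Python, with a purely hypothetical scoring function: generation is cheap and mechanical, while all of the hard, subjective judgment hides inside the score.

```python
import random

# Generating candidate ideas is cheap: a computer can enumerate or
# recombine elements at enormous scale.
def generate_ideas(elements, n=10000):
    return [tuple(random.sample(elements, 3)) for _ in range(n)]

# Selection is the hard part. This stand-in scores ideas at random;
# a real scoring function would have to capture taste, which is
# subtle, multidimensional, and often subjective.
def score(idea):
    return random.random()

elements = ["melody", "rhythm", "lyric", "texture", "silence"]
best = max(generate_ideas(elements), key=score)
print(best)  # "best" only in the sense that our placeholder says so
```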

Now, even if a computer were able to leverage humanity to make these judgments (say, by posting its ideas on Mechanical Turk for feedback), and even if it were able to synthesize this feedback into new ideas, would humanity tolerate the scale of critique necessary for computers to independently arrive at good designs and good art? It’s hard to imagine. Furthermore, wouldn’t it still be humanity making the judgments of what is right? We would still need critics to offer feedback and constructive critique. Without us, computers would not know what to choose.

Perhaps the implication of this little thought experiment is that the asymptote of computational automation leads to a society of people who do not create, but do critique, constructively. In some domains, we already see this: in electronic dance music, for example, much of the sonic material comes from pre-existing recordings, and in DJing, much of the art is in selecting what to play. Algorithms may take over the task of generating new art and designs, but we will be the editors and critics.

Off the grid


My brother got married at Burning Man this last Thursday to a wonderful woman. It was a beautiful ceremony, next to a fragmented metallic heart and a 150-foot, elegantly posed, naked metallic woman, in 100°F heat, with thumping EDM pumping from a double-decker art car with a tattooed female DJ who refused to turn the volume down so that the newlyweds could say their vows. It was exactly the wedding my brother wanted: participatory, organic, and epic.

There’s a lot I could say about Burning Man as a first-timer, but that’s for another post. This is a blog about an academic perspective on software and behavior, and so I’m going to focus on the fact that I was entirely off the grid for four straight days.

There aren’t many places in the world where you can truly disconnect, with no possibility of communication through any medium other than speech and sight. There are a few: ten minutes at takeoff and landing, remote regions in third world countries, and perhaps a few harrowing places such as the tops of mountains and deep under the ocean. But Burning Man is one of the few places with no access to communication media where one can feel safe and still enjoy all of the abundance of modern society.

Burning Man is also one of the few places where there’s nothing to accomplish. There’s no work that’s really necessary, no one to call, no one to coordinate with, and no schedule, and to be truly in line with the cultural norms of a burn, one shouldn’t even seek these things. And so communication media really have no purpose during a burn. The point is simply to be, and to do so around whoever happens to be around.

I’ve never been in such a setting, especially after an incredibly intense week of 14-hour days of paper writing for conference deadlines, product development for my startup, and a seemingly infinite list of things to prep for living in the desert for four days. It taught me a few things:

  • You’ve heard this before, but social media really is pointless. We use it both to create purpose and to fulfill it, not to satisfy some essential need that couldn’t be satisfied in some other way. I didn’t miss all of the fleeting conversations I have on Twitter and Facebook while on the playa; in fact, not having them made me eagerly anticipate reconnecting with friends face to face, not through social media, to share my stories in high fidelity, in real life. It didn’t help that after leaving Burning Man and getting a signal, my phone screamed at me with 500 emails and hundreds of notifications: the rich interpersonal interactions I had in the desert made my phone feel like a facsimile of real life. Liking someone’s post on Facebook now feels dishonest in a way.
  • The intense drive that I usually have at work, the one that fuels my 10-hour work days and endless stream of email replies, was completely extinguished by four days in the desert. There’s something about the minimalism of Burning Man life—where every basic need is satisfied, but nothing more—that clarifies the fleeting nature of most of the purpose we create in our lives. My job is important, but it is just a job. The visions for the future of computing I pursue in my research are valuable, but they ultimately twiddle the less significant bits. This first day back at work is really hard: I feel like I’m having to reconstruct a motivation that took years to erect but days to demolish.
  • I’ve always believed this to some extent, but Burning Man reinforced it: computation is rarely, if ever, the important part of software; it’s the information that flows through it and the communication it enables that are significant. I saw this everywhere on the playa, as nearly everything had software in it. Art cars used software to bring automobiles to life, DJs used software to express emotion to hot and thirsty thousands, and nightriders used digital lights to say to wayfarers, “I’m here and this is who I am.” In a city stripped down, software is truly only a tool. A holistic education of computer scientists would make it clear that as much as computing is an end in itself for the computer scientist, it is almost always a means.
  • Burning Man reminded me that it’s only been about a century that humanity has measured time and dislocated communication through ICTs. It was pleasing to see that when you take both away, not only do people thrive, but they actually seem more human (just as when we do measure time and dislocate communication, we seem less so).

Of course, this was just my experience. I actually brought my daughter along too. She’s recently become addicted to texting and Instagram; as in any young middle schooler’s life, friends are everything, and so with respect to being off the grid, Ellen was miserable. She enjoyed herself in other ways, but I do think that being disconnected from her friends, whether via texting or photo sharing, was a major loss for her. Had her friends been in the desert with her, I think she would have had a very different experience.

I don’t know if this means anything in particular for the future of software. I do think, however, that as we continue to digitize every aspect of human experience, the hunger for material experience, face-to-face interaction, and off-the-grid experiences will grow. This will eventually shift culture back to a more balanced use of communication media and, in turn, create new types of software systems that accommodate being disconnected without living in the desert for a week.


a personal note on public funding for education

Yesterday, while I was walking to campus, I was listening to a Fresh Air podcast on how Congressman Paul Ryan is shaping the GOP. One of Ryan’s favorite ideas appears to come from Ayn Rand: that to be truly free, we must avoid depending on others and find a way to support ourselves. He applies these same beliefs to policy questions about precisely how big government should be.

This angered me. I came from a middle-income household; my mom was a 5th grade teacher and my dad worked in quality assurance for food and lenses, and neither was paid particularly well for their trade. The only way I was going to make it to college was to work my ass off in high school, work a part-time job to pay for AP exams, get a lot of scholarships, and borrow a lot of money. So that’s what I did. And when I made it to college, I worked part time, I accrued massive debt, and I made it into a great Ph.D. program. I was lucky enough to have chosen a field where Ph.D. students get paid out of public research funds, but I wasn’t paid much (certainly not enough to support myself, my wife, and my newborn daughter). So I borrowed more, I earned two public fellowships from the National Science Foundation and the Department of Defense, and we squeaked by for six years financially. After 23 years of public education, I’d used $91,000 in Oregon taxpayers’ money to fund my K-12 education, $36,000 of Oregon taxpayers’ money to subsidize my Oregon State tuition, $7,000 in Pell grants and interest subsidies from U.S. taxpayers, $76,000 from an NSF fellowship, and another $187,500 from an NDSEG fellowship. In all, that’s nearly $400,000 of public investment. And even with all of this help from public funding, I still had to work part time during high school and college and was still left with $50,000 of student loans to repay.

When you start to look at the cost of educating U.S. citizens—whether someone like me who goes for a terminal degree, or someone who simply wants a college degree—it becomes immediately clear that a person can work incredibly hard to become a valuable contributor to society, fully realizing Ryan and Rand’s vision, and still depend on a great deal of support from taxpayers. This idea that people are either self-supporting or dependent leeches is an entirely false dichotomy.

The real question we should be asking is whether the cost of educating our youth is worth sharing. I know that in my own case, without public funding, I simply could not have gone to college. I’m sure I would have been successful in some other way; I would have taught myself, perhaps going to a community college. Or perhaps my parents would have accrued their own massive debt to send me anyway. Either way, the 100 million taxpayers who each gave me less than half a penny of their money (nearly $400,000 split 100 million ways is about 0.4 cents apiece) probably don’t miss it. And I hope that the work I do to educate our children, advance science, and invent new technologies that make our lives easier is worth that small investment. After all, in time, the world we live in is not the one we make, but the one our children and grandchildren make for us.

does automation free us or enslave us?

In his new book Shop Class as Soulcraft, Matthew Crawford shares a number of fascinating insights about the nature of work, its economic history, and its role in the maintenance of our individual moral character. I found it a captivating read, encouraging me to think about the distant forces of tenure and reputation that impact my judgments as a teacher and researcher, and to reconsider to what extent I let them intrude upon what I know my work demands.

Buried throughout his enlightening discourse, however, is a strike at the heart of computing—and in particular, automation—as a tool for human good.

His argument is as follows:

“Representing states of the world in a merely formal way, as “information” of the sort that can be coded, allows them to be entered into a logical syllogism of the sort that computerized diagnostics can solve. But this is to treat states of the world in isolation from the context in which their meaning arises, so such representations are especially liable to nonsense.”

This nonsense often gives the machine, rather than the human, the authority:

“Consider the angry feeling that bubbles up in this person when, in a public bathroom, he finds himself waving his hands under the faucet, trying to elicit a few seconds of water from it in a futile rain dance of guessed-at mudras. This man would like to know: Why should there not be a handle? Instead he is asked to supplicate invisible powers. It’s true, some people fail to turn off a manual faucet. With its blanket presumption of irresponsibility, the infrared faucet doesn’t merely respond to this fact, it installs it, giving it the status of normalcy. There is a kind of infantilization at work, and it offends the spirited personality.”

It’s not just accurate contextual information, however, that the infrared faucet lacks as it thieves our control to save water. Crawford argues that there is something unique that we do as human beings that is critical to sound judgment, but inimitable in machines:

“… in the real world, problems don’t present themselves in this predigested way; usually there is too much information, and it is difficult to know what is pertinent and what isn’t. Knowing what kind of problem you have on hand means knowing what features of the situation can be ignored. Even the boundaries of what counts as “the situation” can be ambiguous; making discriminations of pertinence cannot be achieved by the application of rules, and requires the kind of judgment that comes with experience.”

Crawford goes on to assert that this human experience, and more specifically, human expertise, is something that must be acquired through situated engagement in work. He describes his work as a motorcycle mechanic, articulating the role of mentorship and failure in acquiring this situated experience, and argues that “the degradation of work is often based on efforts to replace the intuitive judgments of practitioners with rule following, and codify knowledge into abstract systems of symbols that then stand in for situated knowledge.”

The point I found most damning was the designer’s role in all of this:

“Those who belong to a certain order of society—people who make big decisions that affect all of us—don’t seem to have much sense of their own fallibility. Being unacquainted with failure, the kind that can’t be interpreted away, may have something to do with the lack of caution that business and political leaders often display in the actions they undertake on behalf of other people.”

Or software designers, perhaps. Because designers and policy makers are so far removed from the contexts in which their decisions will manifest, it is often impossible to know when software might fail, or even what failure might mean to the idiosyncratic concerns of the individuals who use it.

Crawford’s claim that software degrades human agency is difficult to contest, and yet it is at odds with many core endeavors in HCI. As with the faucet, deficient models of the world are often at the root of usability problems, and yet we persist in believing we can rid ourselves of them with the right tools and methods. Context-aware computing, as much as we try, is still in its infancy, far from creating systems that come remotely close to making facsimiles of human judgment. Our efforts to bring machine learning into the fold may help us reason about problems that were before unreasonable, but in doing so, will we inadvertently compel people, as Crawford puts it, “to be that of a cog … rather than a thinking person”? Even information systems, with their focus on representation rather than reasoning, frame and fix data in ways that we never intended (as in Facebook’s recent release of phone numbers to marketers).

As HCI researchers, we also have some role to play in Crawford’s paradox about technology and consumerism:

“There seems to be an ideology of freedom at the heart of consumerist material culture; a promise to disburden us of mental and bodily involvement with our own stuff so we can pursue ends we have freely chosen. Yet this disburdening gives us fewer occasions for the experience of direct responsibility… It points to a paradox in our experience of agency: to be master of your own stuff entails also being mastered by it.”

Are there types of software technology that enhance human agency, rather than degrade it? And to what extent are we, as HCI researchers, furthering or fighting this trend by trying to make computing more accessible, ubiquitous, and context-aware? These are moral questions that we should all consider, as they are at the core of our community’s values and our impact on society.

Mozilla Summit 2010 and dev culture

[Photo: the Mozilla Summit opening reception. Men, men, men.]

One thing that’s always interested me about software design is the inescapable bias of the designer. Whether we like it or not, designers’ perspectives always color what they think makes sense, what they think is useful, and what they think is good.

Never has this been more apparent to me than at the 2010 Mozilla Summit. I couldn’t help but notice that every session I visited, every reception I attended, and every conversation I had was dominated by male hacker stereotypes. The game room was full of obscure board games, first person shooters, caffeine and candy. Group conversations inevitably drifted towards the finer details of an API or a technical discussion of the merits of one platform or another. I had many short-lived and terse conversations with shy and introverted but incredibly proud geeks like myself.

It’s not that there’s anything wrong with the typical Mozillian—it’s that Mozillians are such a surprisingly typical group. It didn’t matter what country they came from, whether I was speaking to a man or a woman, or whether they were a developer, tester, localizer, or some other kind of contributor: there was a somewhat shocking homogeneity to the personalities and value systems of the people I met.

And it’s not even that these personalities or value systems are wrong: in fact, I share many traits and values with the people I met. I’m shy; I’m introverted; I believe in standards, open communication, and transparency. As an academic, I may have learned to overcome these traits for the benefit of my career and to foster other values, but at my core, I identify strongly with Mozillians in both personality and beliefs.

No, the troubling thing was the lack of opposing traits and beliefs. Where are the technically disinterested Mozillians? The gregarious? The empathetic? Where are the Mozillians who are interested in people, society, history, diversity?

The answer, of course, is probably quite obvious: they’re Mozillians because they’re interested in technology. The ones interested in people have self-selected out of this group and are contributing to society in other ways and other places.

What this means, however, is that a comparably small group of people with similar goals, similar interests, similar viewpoints, and similar skills have a disproportionate influence on how the rest of the world experiences the web. And unsurprisingly, the experiences that Mozillians create are the ones that propagate and reinforce Mozillians’ own viewpoints.

None of this is very controversial either. In fact, I spoke with many Mozilla employees who believe that Firefox and Mozilla’s other mature products are really products for power users, despite the organization’s uniquely user-facing stance relative to other open source communities. They believed that while it may be possible for the rest of the world to use Firefox as an alternative to other browsers, the Mozilla community ultimately builds for itself and its own perspectives because it knows no other way.

What is this way, then, that Mozillians view the world? Throughout my many discussions, I noticed a number of recurring beliefs (many of which are general to engineers and developers, and not just open source communities):

  • There’s always a right answer. I noticed that, unlike most professional designers, developers like to use the word “right” a lot when designing solutions. Their appreciation of tradeoffs seemed limited.
  • My answer is right. Most of the Mozillians I met like to believe they have the right answer. There appears to be a joy in defending this position as well.
  • If a rational argument can’t be made for a solution, the solution is invalid. Rational thought is the only valid means of obtaining knowledge or solving a problem.
  • Proof by existence, not by evidence. Prototype it and then I’ll believe you.
  • Ambiguity is unacceptable. Messy or noisy problems need not be solved. Solve the solvable problems.

Another recurring stance I noticed was that developers are a special, privileged class. Obviously this isn’t the first time I’ve seen this, but it did make me wonder where it comes from. So I probed. What I found was that every story of how someone learned to program and became part of the community was one of competitive selection. It’s hard to learn to program, it’s hard to get into CS, it’s hard to get a development job, and it’s hard to become a Mozilla developer. In fact, many told me that with all of these trials by fire, they learned quickly to act confident, to act certain, and to act as if one is right. One developer described this as a form of elitism, which brings with it a disdain for other viewpoints and other more easily acquired skill sets (hence the apparent lesser status of localizers, testers, and support).

What no one said, but what I gleaned, is that this culture of elitism is as much an identity thing as it is a social thing. Perhaps the competitive processes by which developers attain status create an identity that must be fed by being right. And what do we know about identities? People reinforce them, defend them, and seek experiences that keep them intact.

What is the impact of all of this on the design of software, or at least Mozilla software? For one, design culture itself appears to be in direct conflict with how developers view the world. There is an ambiguity, even a mysticism, to how designers learn to cope with ill-defined problems, and at least with respect to developers, I can see how this ambiguity is disconcerting and unconvincing. Moreover, it disempowers conceptual designers by requiring functioning prototypes as a ticket to entry.

The particular mission of Mozilla, to support the open web, also has interesting interactions with this developer culture. For example, many developers I spoke to believe that the public ought to care about their ability to control their online experience and own their data. I asked them, as devil’s advocate, why Mozillians had the right to impose these values through software, and many made a free market argument: people group together to espouse their values, and those groups that persuade best, win. I saw little room in most stances for the possibility that users might not value the freedom espoused by Mozilla, and that the very espousing of openness might in fact oppose other values, such as simplicity, humanity, and beauty.

Are these trends in developer culture inescapable, or just an ephemeral aspect of a relatively young trade? Is it possible that as more people with more diverse perspectives learn to code, this imbalance in perspective will correct itself? Or are there only certain types of people drawn to code? Perhaps the market will ultimately force developers to empathize with other viewpoints, because society will cease to tolerate the engineered design of today and demand designs that respect their own values. I do not know—but I’ll be interested to find out!

Mozilla Summit 2010, day 0

A bus, a train, and a plane later, I arrived in Vancouver, B.C., ready to depart for Whistler and look for the answer to one simple question: what does one do at a gathering of 600 people from around the world, all working towards the same vision of the web? The answer became clear as soon as I arrived at the airport. One talks, one befriends, one learns about the fellow geek’s world, and above all, one discusses common ground, whether it be city life, weather, food, or the latest point release of Android. Geeks are people too, and today proved no exception.

Of course, there are a few things that make this particular gathering unique. For example, one of my bus mates was particularly proud of her backpack designed for toting roller skates (just as I am proud of my slim wallet and matching laptop bag and case). Another was proud of weaning himself off the Mac onto more open Linux and Google platforms. There was also fervent discussion of the accessibility barriers imposed by IRC, but also of the richness of the immediacy enabled by the waves of logins, logouts, and rapid, near-instant replies. My lunch friend came all the way from India to get a master’s degree in software engineering in the Bay Area, leaving friends and family for a career in quality assurance. At the evening reception, I chatted with engineers on the JavaScript engine and HTML layout, learning about the subtle distinctions between the invariants in both and speculating about the role of C in trashing comprehensibility. These are people who love things to death, but most of all, love code and all the things around it.

The people aren’t the only thing that makes this crowd unique. The crowd itself is unique. As an academic in a field as diverse as HCI, I’m used to conferences with a fairly even balance of men and women. But this is, without a doubt, a gathering of men. The women stand out as rare breeds, something to behold. This line of thought led to discomfort as I realized how easily difference led to objectification. It was only after mentioning this to some of the women that I realized I was in the minority: this disproportion was an everyday fact for the people in the room, and not something so relevant to the topic at hand.

The days to come should prove interesting and revealing. I want to understand what this community values and how they express those values. I want to see how its culture breeds its strengths, its weaknesses, and its biases. I want to see what four days in the great northern sunshine really mean to a group of collaborators already so close in vision and values. Do they really need this to be productive, or is it just to feel human?

the semblance of objectivity in numbers

I just received my first ever first-authored conference paper rejection from FSE. The primary reasons, quoted from the reviews, include:

  • “The qualitative nature of the study … is liable to misinterpretation and bias.”
  • “I was expecting a quantitative analysis: is there any correlation between some of the characteristics and between [the results] and the time a bug takes to resolve and its resolution status?”
  • “I would have thought that what types of elements to look for in discussion should be decided before by the researchers as it should be based on the problem”
  • “I was expecting concrete advice on HOW the tools should structure the discussion.”

I was hoping the reviewers would have been more epistemologically informed. For example, the first and second quotes are quite telling: they imply that some forms of empiricism are not subject to misinterpretation or bias. But quantitative empirical measures are just as subject to bias as any other measure. For example, if I had counted certain kinds of data and run correlations between these counts and other outcome measures, not only would one in twenty of them be “statistically significant” by chance, but whether there was any real meaning in the variables depends on the construct validity of the quantitative measurements. For example, if I had correlated hyperboles with bug resolution time, not only would the hyperbole measure have the same limitations as it did as a qualitative classification, but the bug resolution time would have any number of contextual factors that could influence its true reflection of the hyperbole’s impact on consensus. Transforming empirical observations into numbers does NOT make them objective, nor does it prevent bias and misinterpretation.
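
To illustrate the multiple-comparisons point with a toy simulation (synthetic noise, not our actual data; all names here are hypothetical): correlate twenty arbitrary measures of pure noise against a noise outcome, and on average about one of them will come out “statistically significant” at p < 0.05 purely by chance.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_bugs = 200      # hypothetical number of bug reports
n_measures = 20   # twenty arbitrary quantifications of the discussions

# Pure noise: none of these measures truly relates to the outcome.
measures = rng.normal(size=(n_measures, n_bugs))
resolution_time = rng.normal(size=n_bugs)

spurious = [i for i in range(n_measures)
            if pearsonr(measures[i], resolution_time)[1] < 0.05]

# Expect about 1 of the 20 correlations to be "significant" by chance.
print(f"{len(spurious)} of {n_measures} noise correlations at p < 0.05")
```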

The third quote is ironic: this reviewer seems to believe that the only way to analyze a problem is to make some assumption about its nature upfront. The whole point of qualitative research is that the more upfront assumptions you make, the more you bias your findings. What this reviewer is proposing would have lessened the objectivity of the results and prevented us from uncovering the trends we did.

The last quote reveals the systemic bias in software engineering research (and also in some HCI venues): qualitative studies are only valuable if they explicitly inform design. What this really reduces to is a view that material goods are real work, while the production of knowledge comes for free. Building a system or automating some activity, even if the system and automation are entirely impractical in the real world, is seen as more valuable than understanding the real world. The comment also reveals the reviewer’s lack of understanding about design: innovations don’t come from studies; they come from people. Studies can support design decisions (and the results throughout our rejected submission have been quite valuable in our current design efforts), but they cannot generate ideas. People generate ideas.

Had I really wanted the paper in, I would have littered the submission with arbitrary but seemingly objective quantifications and correlations of our data (which is what most quantifications in software engineering papers are). This has worked in past papers and is a tried-and-true workaround for the software engineering community’s lack of experience with qualitative methods. Reviewers would have thought, “I don’t get all of this qualitative stuff, but these numbers are great.” I decided not to do this on principle, since doing so would have only made the results seem more objective without adding any real objectivity.

So much for principle. Time to start correlating things!