machining is now coding

Marketplace has a brief but intriguing story about how computing is transforming manufacturing in the United States. As they explain, machinists used to work with their hands, physically manipulating mechanical machines to shape and shred metal and other materials into the basic components of all kinds of engineered products, from small plastic trinkets to airplane parts.

Today, however, machining is less about operating machines and more about writing code that operates machines (CNC machines, in particular, short for computer numerically controlled). To learn the CNC programming language, workers typically take an 18-week course before they’re ready to operate CNC machines, but then they can make a reasonable manufacturing wage without getting their hands dirty or risking injury. This is a classic example of end-user programming, where someone writes code as a means to an end (in this case, a physical object).

What’s even more fascinating is the economic discussion surrounding these jobs. Apparently, the problem isn’t training the machinists, but finding people who want to be trained. The Manufacturing Institute found in a survey that as many as 600,000 manufacturing jobs are going unfilled, the majority of which require these kinds of technical computing skills. This is therefore as much a recruiting problem as it is a training problem.

the double-edged sword of efficiency

The big software defect story of the past couple of days is definitely Vassar’s accidental sending of acceptance notifications to several students. It’s a great example of one of the consequences of putting an algorithm (and, indirectly, a programmer) in charge of disseminating information. On the one hand, I’m sure this saved Vassar a lot of time and perhaps a job or two, completely eliminating their need for postage and paper. On the other hand, they’ve adopted a system that is going to fail from time to time, and not in the graceful ways that paper does, but in big, dramatic, and unpredictable ways.

The unpredictability of software defects is one of the most interesting properties of software as a medium. Its inherent complexity means that even the people who develop it are going to have a hard time knowing what part of the system will fail and how dramatically. In fact, if the developer follows best practices by modularizing the system and enabling it to scale gracefully, that practically guarantees that the failures will be more dramatic: whether it’s a list of 1, 100, or 1,000,000, I’m sure the Vassar notification algorithm will do the exact same thing.
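To make that concrete, here is a minimal, hypothetical sketch of such a notification loop (nothing to do with Vassar’s actual code; the names and message are invented): whether the list holds two applicants or a million, the code path is identical.

```python
def send_decisions(applicants, send_email):
    """Notify every applicant in the list. The loop has no notion of what a
    plausible list looks like, so a mistake in assembling the list fails at
    whatever scale the list happens to have."""
    for applicant in applicants:
        send_email(applicant["email"], "Congratulations, you have been admitted!")

if __name__ == "__main__":
    # Stand-in for a real mail function; prints instead of sending.
    send_decisions(
        [{"email": "a@example.edu"}, {"email": "b@example.edu"}],
        lambda address, message: print(f"To {address}: {message}"),
    )
```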

I wonder how software might be built to better account for the significance of the information it transmits and computes. At the moment, I suppose this is captured in the software tests that teams perform. Perhaps a better way might be to tag the data that moves through software systems and propagate things like the confidence, credibility, and integrity of data as algorithms munge and manipulate it.
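As a sketch of what that tagging might look like (entirely my own invention, not an existing system), imagine every value carrying its source and a confidence score, with transformations propagating the weakest confidence forward:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tagged:
    """A value bundled with where it came from and how much we trust it."""
    value: object
    source: str
    confidence: float  # 0.0 (untrusted) to 1.0 (verified)

def apply(fn: Callable, *inputs: Tagged) -> Tagged:
    """Transform tagged values, propagating provenance: the result is only
    as trustworthy as its least trustworthy input."""
    result = fn(*(t.value for t in inputs))
    return Tagged(
        value=result,
        source=" + ".join(t.source for t in inputs),
        confidence=min(t.confidence for t in inputs),
    )

# A decision computed from a verified score and a hand-edited flag inherits
# the low confidence of the flag, signaling that a human should review it.
score = Tagged(91, "registrar database", 0.95)
flag = Tagged(True, "hand-edited spreadsheet", 0.4)
decision = apply(lambda s, f: s > 90 and f, score, flag)
print(decision)
```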

what’s in a frame?

A few days ago in the NY Times, there was a story reflecting on Amber Case’s idea that we are all cyborgs, using a wide range of tools for both physical and mental modification. The key idea in the story is lamenting the loss of memories that have physical embodiments, such as a photograph that has meaning both in what it contains and in its physical container. In contrast, the digital photographs of today still have their meaning, but the container is meaningless, because it’s virtual: the photo could just as easily be opened in one of a hundred photo viewing applications and displayed in an infinite number of ways on an infinite number of devices.

To me, the divorce of information from embodiment is one of the most powerful but subversive aspects of software as a medium. It underlies nearly every major change in industry currently under debate, including music, print, libraries, publishing, journalism, movies, and every other kind of media. But the question that I still puzzle over is whether this divorce is a necessary part of preserving the power of computing. Does the ability to change a photo’s container require that the container doesn’t have meaning? Or, put another way, do people ascribe meaning to their cell phones and digital photo frames, even though they can now display any photo in the world?

An interesting case of this happened a few months ago when my iPhone’s USB port died and I could no longer charge it. It had a few identifiable scuffs on it, and I certainly had memories of all of the places that I’d been with it and all of the photos I’d taken with it. But when I exchanged it for a nearly identical replacement phone, the new one only felt foreign for a few days. In fact, sometimes I mistake it for my old phone. This special case of an identical but different container is an interesting one, because it speaks directly to the question at hand: what meaning, if any, is there in physical objects, other than our memories of them?

abstraction appropriation

Abstractions and the ability to create them are what make us human. Our ability to reason abstractly and symbolically, to represent what we see and do and to capture and utilize knowledge is fundamental to all forms of human progress and communication. And when we think carefully about what role abstractions have played in human society, we see that our ability to reduce the incredible complexities of the world to their essential natures is behind nearly everything we do as humans.

When looking back on recent history, however, it is possible that humanity has made a fundamental shift in its use of abstractions. We have always used abstractions to communicate and talk, to coordinate, to understand nature and build technologies (from weapons to printing presses to computers), to conceptualize the essential character of nature, and to bend it to our wills and desires. Abstractions, I would argue, have been the primary mediators between humanity and nature, besides our bodies. The axe and the hammer are not simply wood and metal; they are instantiations of abstract ideas that humanity has carried from generation to generation. It is only through the idea of a hammer that a hammer can exist.

In just the past century, however, our use of abstractions has evolved. We now regularly use abstractions not only to mediate our relationship to nature, but also to mediate relationships with ourselves and others. Take, for example, IQ tests. The use of these tests is not simply to assess: the takers of these tests consume the results and use that information to change perceptions of themselves. Or, consider any modern communication medium, such as e-mail or text messaging. These abstract forms of face-to-face communication mediate, constrain, and mold our conversations in very specific ways.

This in itself isn’t problematic. After all, abstractions, by definition, eliminate detail in order to facilitate communication and action, so there are bound to be abstraction failures and mismatches; that inherent minimalism is also what makes abstractions useful, helping us manage the complexity of the world.

But there is a more nefarious way in which our use of abstractions may change human behavior: in many situations, we view abstractions not as a means to an end, but an end in themselves. We begin to mistake the abstraction for the thing it represents.

There are several cultural memes that highlight this phenomenon. “Gaming the system,” for example, is the idea that someone will exploit properties of a system of rules or policies in order to effect results that violate the intent of the rules; Baker et al. documented this behavior in educational tutoring software, where students would learn the conditions under which the software would provide aid or answers, and do precisely the actions necessary to most quickly acquire the aid or answers.

Other examples are not about exploitation, but pragmatism. For example, students in high schools and universities want to acquire knowledge and skills, but perceive that it is scores, grades, and degrees—our abstractions of learning in modern education—that are truly important, and not the learning itself. The danger here is less at the individual level (as an individual student may overcome this through reflection), and more at a societal and cultural level: over time, it is possible that the abstractions representing knowledge become so institutionalized that society forgets what they were intended to represent.

I see this abstraction appropriation every day when I teach. Just yesterday I had an enjoyable but disheartening discussion with a couple of students near graduation who were disappointed in their final grades for a course I taught last quarter. Their concern was that the grade points they received, which were one or two tenths lower than the grade points they typically receive, would lower their GPAs by several hundredths. I assured them I understood their concern, but also pointed out that there was probably not a single person who would ever look at that grade, nor the tenths place of their grade, ever again in their lives. One of them mentioned graduate school applications, and I insisted that, if they were above a 3.7, what would really matter were their letters, publications, and experience, since the number doesn’t really mean much of anything.

This was disappointing to them, to say the least. I reassured them that it was the products of their work, and the experience they had gained in the course, that would be the truly lasting parts of their education, and that the numbers meant nothing. They thanked me for my time and walked away slightly confused, unsure about what other strange quantitative incentive structures might be in store for them after graduation.

Every educator knows what I’m talking about. Every middle manager who’s had to quantify or categorize their employees’ performance knows what I mean. And while these abstractions may help us facilitate decision making, we rarely think about their side effects on human behavior and the larger incentive structures we propagate through society.

Where else do you see this abstraction misappropriation? And what are the consequences of embedding these abstractions in the software throughout our communications and infrastructure? Is all of this just a manifestation of Campbell’s law, or does this idea go beyond social planning? And what is it about human cognition that leads to this phenomenon, if it is as widespread as it seems?

does automation free us or enslave us?

In his new book Shop Class as Soulcraft, Matthew Crawford shares a number of fascinating insights about the nature of work, its economic history, and its role in the maintenance of our individual moral character. I found it a captivating read, encouraging me to think about the distant forces of tenure and reputation that impact my judgments as a teacher and researcher, and to reconsider to what extent I let them intrude upon what I know my work demands.

Buried throughout his enlightening discourse, however, is a strike at the heart of computing—and in particular, automation—as a tool for human good.

His argument is as follows:

“Representing states of the world in a merely formal way, as “information” of the sort that can be coded, allows them to be entered into a logical syllogism of the sort that computerized diagnostics can solve. But this is to treat states of the world in isolation from the context in which their meaning arises, so such representations are especially liable to nonsense.”

This nonsense often gives machine, rather than man, the authority:

“Consider the angry feeling that bubbles up in this person when, in a public bathroom, he finds himself waving his hands under the faucet, trying to elicit a few seconds of water from it in a futile rain dance of guessed-at mudras. This man would like to know: Why should there not be a handle? Instead he is asked to supplicate invisible powers. It’s true, some people fail to turn off a manual faucet. With its blanket presumption of irresponsibility, the infrared faucet doesn’t merely respond to this fact, it installs it, giving it the status of normalcy. There is a kind of infantilization at work, and it offends the spirited personality.”

It’s not just accurate contextual information, however, that is missing from the infrared faucet, which thieves our control in order to save water. Crawford argues that there is something unique that we do as human beings that is critical to sound judgment, but inimitable in machines:

“… in the real world, problems don’t present themselves in this predigested way; usually there is too much information, and it is difficult to know what is pertinent and what isn’t. Knowing what kind of problem you have on hand means knowing what features of the situation can be ignored. Even the boundaries of what counts as “the situation” can be ambiguous; making discriminations of pertinence cannot be achieved by the application of rules, and requires the kind of judgment that comes with experience.”

Crawford goes on to assert that this human experience, and more specifically, human expertise, is something that must be acquired through situated engagement in work. He describes his work as a motorcycle mechanic, articulating the role of mentorship and failure in acquiring this situated experience, and argues that “the degradation of work is often based on efforts to replace the intuitive judgments of practitioners with rule following, and codify knowledge into abstract systems of symbols that then stand in for situated knowledge.”

The point I found most damning was the designer’s role in all of this:

“Those who belong to a certain order of society—people who make big decisions that affect all of us—don’t seem to have much sense of their own fallibility. Being unacquainted with failure, the kind that can’t be interpreted away, may have something to do with the lack of caution that business and political leaders often display in the actions they undertake on behalf of other people.”

Or software designers, perhaps. Because designers and policy makers are so far removed from the contexts in which their decisions will manifest, it is often impossible to know when software might fail, or even what failure might mean to the idiosyncratic concerns of the individuals who use it.

Crawford’s claim that software degrades human agency is difficult to contest, and yet it is at odds with many core endeavors in HCI. As with the faucet, deficient models of the world are often at the root of usability problems, and yet we persist in believing we can rid ourselves of them with the right tools and methods. Context-aware computing, as much as we try, is still in its infancy, nowhere near producing systems that make even facsimiles of human judgment. Our efforts to bring machine learning into the fold may help us reason about problems that were previously unreasonable, but in doing so, will we inadvertently compel people’s role, as Crawford puts it, “to be that of a cog … rather than a thinking person”? Even information systems, with their focus on representation rather than reasoning, frame and fix data in ways that we never intended (as in Facebook’s recent release of phone numbers to marketers).

As HCI researchers, we also have some role to play in Crawford’s paradox about technology and consumerism:

“There seems to be an ideology of freedom at the heart of consumerist material culture; a promise to disburden us of mental and bodily involvement with our own stuff so we can pursue ends we have freely chosen. Yet this disburdening gives us fewer occasions for the experience of direct responsibility… It points to a paradox in our experience of agency: to be master of your own stuff entails also being mastered by it.”

Are there types of software technology that enhance human agency, rather than degrade it? And to what extent are we, as HCI researchers, furthering or fighting this trend by trying to make computing more accessible, ubiquitous, and context-aware? These are moral questions that we should all consider, as they are at the core of our community’s values and our impact on society.

decision making in software engineering

I just finished reading Jonah Lehrer’s How We Decide, a fascinating survey of recent (and not so recent) scholarly literature on decision making, behavioral economics, and neuroscience. The central thesis, or perhaps the central extract from the large body of work on the subject, is that while we often think of the rational parts of our minds as central to effective decision making, the emotional parts of our minds are in fact often more objective. Jonah argues, through an extensive overview of several hundred studies spanning economics, psychology, marketing, and medicine, that this is because our brains actually take in much more information than the working memory our rational mind depends on could ever hope to process. Therefore, our instincts (assuming they come from practiced, expert behavior) are often the better informed of the two. There are obviously many subtleties to this point (and Lehrer does a great job explaining them), but the bottom line comes down to a rubric for decision making that is fairly straightforward to understand, though difficult to execute:

  • If the problem is novel (to you), instincts will not suffice. This is a job not only for the rational part of your mind, but also for your creativity. Your instincts can often help with the subproblems of novel problems, but the rational mind must integrate them.
  • If the problem is routine and can be fully characterized with a few well defined variables (or simplified in this way because the decision’s consequence matters little), let reason carefully assess and analyze the options.
  • If the problem is routine, but cannot be simplified to a few well defined variables, use your rational mind to identify what information is and is not available, but let the emotional part of your mind process and analyze it. The instinct resulting from this is your mind’s expert judgement.

These ideas about human decision making have many fascinating implications for software engineering. For one, the rubric above suggests that software engineers need a keen ability to know when a problem is new to them, so that they may apply different strategies to solving it. In many of the courses I’ve taught involving software engineering decisions, I’ve seen that novice engineers often view every problem as routine, assuming some prior solution is likely to solve the new one. Recognizing when this is not the case is a crucial skill. This might involve, for example, stepping away from a problem after trying to apply some known solutions, and using the rational mind to judge whether the problem has novel characteristics that deserve creative solutions.

Not only should software engineers be able to recognize a problem as novel, but they must be able to judge whether it is novel to them or novel to everyone. Novice software developers quickly realize that most problems they encounter have been solved already; the challenge thus is not to create solutions, but to find existing solutions whose assumptions fit the problem at hand. The same is true in solution spaces of a smaller scope; for example, some problems may be new to an individual, but old hat to an organization. Novice software engineers should know when to consult coworkers for this expertise.

Both of the above abilities probably come with experience; one that may not, however, is knowing what you don’t know. For example, I routinely see experts struggle with bug triage decisions, making quick emotional judgements about a bug report’s legitimacy, fixability, or impact, and then ignoring information in the report that would disconfirm their judgement. What’s missing from these decisions is a process by which software engineers use their rational minds to carefully enumerate what information they don’t have, or what information is suspect. Fighting this confirmation bias and getting comfortable with uncertainty is a fundamental part of making effective decisions about complex problems.

In his discussion of aviation, Lehrer makes an interesting point about how computers can help pilots with such biases:

The reason planes are so safe, even though both the pilot and autopilot are fallible, is that both systems are constantly working to correct each other. Mistakes are fixed before they spiral out of control.

What would bug triage look like if humans and computers collaborated to judge and analyze information about bugs? Would it help if bug tracking systems highlighted what information was missing from a report, such as details about the defect’s impact on users, details about who those users are, and information about which components the bug concerns?
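Here is a small, hypothetical sketch of what such a check could look like; the field names and prompts are invented for illustration:

```python
# Fields that matter to triage, paired with prompts shown when they are blank.
REQUIRED_FOR_TRIAGE = {
    "impact": "How does this defect affect users?",
    "affected_users": "Who are the affected users, and how many are there?",
    "component": "Which component does this bug concern?",
    "steps_to_reproduce": "How can the defect be reproduced?",
}

def missing_triage_info(report: dict) -> list[str]:
    """Return a prompt for every triage-relevant field the report leaves blank,
    so the triager sees explicitly what they don't know."""
    return [
        prompt
        for field, prompt in REQUIRED_FOR_TRIAGE.items()
        if not report.get(field)
    ]

report = {"title": "Crash on save", "component": "editor"}
for prompt in missing_triage_info(report):
    print("Missing:", prompt)
```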

Of course, bug triage is just one of many decision making processes in software engineering. Requirements engineering involves a great deal of tradeoff analysis, and knowledge about human decision making might improve designers’ confidence that what they are specifying will satisfy user needs. Debugging and other types of diagnostic activity share the same diversity of novel and routine problems and may benefit from the same kinds of metacognitive strategies. Bug fixing is also rife with choices about the scope of a change and its implications for users, interacting systems, and other components. In all of these, it may be of the utmost importance to give novice software developers practice in recognizing and characterizing the problems they encounter, so that they may choose effective strategies to solve them.

what makes code different than other media?

I was having a discussion today with one of my Ph.D. students about examples of humanities-style, conceptual, critical analysis work in the domain of software design, and couldn’t think of many examples. Certainly there’s work in HCI conceptualizing interaction, infrastructure, and design. There’s also Michael Jackson’s Problem Frames, conceptualizing the software world, the problem world, and how they relate. None of these really felt like satisfying answers to the question: what makes code different than other media? And what are the implications of these differences on society?

I’m not usually one to get mired in such debates; most of my research is fairly practical and atheoretical. But this one has always compelled me. I think it’s the question that drives many of the questions I ask as a scholar and much of what I teach.

One standard answer to this question is that code differs from other media in that it is rigid. Oftentimes, people distinguish it from the physical world by describing it as discrete or binary, where other media are fluid and continuous. The problem with these characterizations of code is that they aren’t usefully explanatory or predictive. If code is rigid, what does that say about people’s interactions with it? That they will be similarly rigid? In what way? That people will struggle to do what they need to do when they need to do it because the medium will not allow it? How is that different from not being able to write on paper because of the moisture in the air? Is paper not rigid in the environments in which it may be used?

We have to go further. Let’s go to the source: maybe it all boils down to the precision of numbers. Maybe what we mean when we say software is rigid is that it models the world through numbers, most of which fail to fully represent what they intend, or cannot fully represent what they intend because what they intend to represent is not one clearly defined thing. For example, any effort to store someone’s age is inherently wrong, because age is a construct that has no precisely defined starting point, is always being updated, and is, in many cases, determined socially. That a variable knows your date of birth isn’t really enough, because in some situations, a person may want to present their age to others in other ways (I’m below 40). What would make this rigid, then, is that the decision of how to model and communicate age is a universal one, made by a designer, independent of the context in which the age will be communicated. Any particular software’s way of dealing with age will therefore be inherently rigid, because the modeling of the world and its information is not determined by the individual situation in which it is conveyed, but long before, in a uniform way. Even attempts to have the use of information react dynamically to context will require context that models the world in similarly discrete ways, ensuring that there are at least some circumstances in which the adaptive techniques will not have enough information to make the choice a human would have made.
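As a sketch of this rigidity (my own example, not drawn from any particular system): even storing a birth date rather than an integer still forces a designer to decide, once and for all, which presentations of age are possible.

```python
from datetime import date

def age_in_years(birth_date: date, today: date) -> int:
    """One universal rule for computing age, applied regardless of context."""
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def presented_age(birth_date: date, today: date, style: str = "exact") -> str:
    """The designer enumerates the possible presentations in advance; any way a
    person might want to talk about their age that isn't listed here simply
    cannot be expressed."""
    years = age_in_years(birth_date, today)
    if style == "vague":
        return "below 40" if years < 40 else "40 or above"
    return str(years)

print(presented_age(date(1985, 6, 1), date.today(), style="vague"))
```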

But it’s not just numbers. Another fundamental part of the medium of code is branching, a dichotomous form of decision making that only leaves room for finality. The machine needs to know, unambiguously, what it should do next, because delay is the utmost failure. How does branching make machines rigid? In one sense, it’s not the branching itself that leads to rigidity, but the inability of programs to easily change their decisions. When a person begins to walk down a path and realizes it’s the wrong direction, the person can change their mind, turn around, and walk down a different path. When a program follows a branch, modifying this decision and returning to a previous state is no simple matter. The designer must have anticipated this need to return to a state and architected the software in a manner that facilitates this reversal. Just like with numbers, it is the machine’s modeling of the state of the world, and its orientation toward reaching future states rather than prior states, that makes software rigid.
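A small sketch of what anticipating reversal demands of a designer (the class and names are illustrative, not from any real system): unless the program explicitly snapshots its state before taking a branch, there is simply no prior state to return to.

```python
import copy

class ReversibleProcess:
    def __init__(self, state: dict):
        self.state = state
        self._history = []  # snapshots taken before each decision

    def take_branch(self, mutate):
        """Snapshot the current state, then let the chosen branch mutate it."""
        self._history.append(copy.deepcopy(self.state))
        mutate(self.state)

    def undo(self):
        """Return to the state before the last branch, if one was recorded."""
        if self._history:
            self.state = self._history.pop()

process = ReversibleProcess({"path": "start"})
process.take_branch(lambda s: s.update(path="wrong turn"))
process.undo()
print(process.state)  # back to {'path': 'start'}, but only because we planned for it
```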

Other media are rigid in similar ways. When you put pen to paper, the medium does little to allow you to return to prior states. In this case, software-based sketching tools are far more flexible. But perhaps this is apples and oranges, since the paper hasn’t made a decision; it’s the person who has. The same is true for some mechanical media: if you dent a piece of aluminum, it’s non-trivial to get it to return to its prior state.

Both of these examples conflate the designer expressing something in the media and the person using the expressed thing. With software, it is the consumer that feels the rigidity of the medium; the producer of software experiences rigidity in conforming to syntactic and semantic rules. Where each of these parties experiences rigidity has more to do with whether the program is being written or used.

Perhaps then there is some meaningful distinction between the medium of code and the medium of execution. To this point, I have largely been discussing the rigidity of program execution and its relationship to branching and numbers, overlooking the rigidity in the expression of code. I suppose that the rigidity in the expression of code may have some role in biasing software developers into creating program executions that are also rigid. For example, because it is difficult to teach a program to make some state changes reversible, the resulting execution of software is rigid in its ability to return to prior states.

Obviously there’s much more thinking to do here. What are the implications of these conceptualizations, as they stand now? If one chooses software over some other medium for facilitating information transfer, one should expect that the way the software models the world, and the biases and oversights inherent to that model, will leak through to people’s experiences, biasing and restricting people’s use of the software to conform to its model of the world. People will recognize these biases and go to great efforts to work around them, to overcome them, and to have them changed. No model will be perfect, because many things that are modeled are not well defined or must change to fit the situation, and therefore these reactions to software are inevitable. A science that allows software developers to analyze the limitations of the models inherent in a particular program, and predict the reactions to those limitations, would help software teams and society better anticipate software’s limited ability to facilitate and support human activity.

software quality and ideology

It’s been an interesting weekend. My post on cultural homogeneity at the Mozilla Summit ruffled some feathers and led to a flurry of fascinating responses on reddit. I’ve replied to many of the commenters who work at Mozilla, trying to understand the anger, confusion, and disbelief in their statements, with the following gist: I love Mozilla, Mozilla’s great, but there are a lot of people in the community who are elitist and condescending towards anyone without coding chops. I still stand by that observation, no matter how hard it is to hear.

One of the interesting points of contention in the discussion was the claim I made about the tradeoffs between openness and simplicity. There was a lot of push back on that one; most of the commenters believed that the two were much more compatible than I suggested. However, rather than try to defend it, I think it would be much more interesting to point out several other tradeoffs between software qualities that I’ve observed, to stir the pot a bit more:

  • security and simplicity. Almost by definition, it’s difficult to design something that is both impenetrable and accessible. This is obviously a gross oversimplification, but it’s easily demonstrated by the concept of a password. What’s simpler: going to gmail.com and reading your mail, or going to gmail.com, entering your password, and then opening your mail? The two trade off with one another in a variety of ways.
  • performance and comprehensibility. The Knuthian aphorism, “premature optimization is the root of all evil” comes to mind. By designing systems that are time and space efficient, we often make systems’ designs more difficult to understand (and I’d argue, more buggy).
  • privacy and parsimony. The most frugal and sparing of solutions, often described by developers as the most elegant and beautiful, are often in direct opposition to privacy. Take Facebook’s privacy setting schemas: the most sparing of schemas are often wholly inadequate for expressing the complexity of individuals’ expectations about who has access to what.
  • configurability and learnability. One way to think of learnability is as the difficulty of learning the mapping between a system’s inputs and outputs. The more inputs you expose, the more mappings to learn.

Of course, these are very weakly defined: it could take decades of writing, thought, and experience to really understand the nature of these tradeoffs and whether they actually exist. I raise them because I think each represents an opposition between two values, and because blind devotion to one over another often leads to problems. Consider security, for example. We have whole communities of researchers and practitioners who amass a great deal of expertise in securing systems. Many of them are my friends. But what I often find is that security is treated by developers and analysts in these communities as a cause, perhaps even an ideology, rather than just one of many possible software qualities. Case in point: I deployed an internal prototype last winter and had several students use it while I gathered usage information. After an hour, my database was full of fascinating data which would help propel my research on the prototype forward. However, one enterprising user decided that rather than use the prototype, he would test it for security flaws. He eventually found a potential SQL injection attack and decided to test it by dropping my tables—so much for my data. Rather than writing to apologize for destroying my database, he wrote proudly, declaring that he’d found a vulnerability in my code and that he’d be happy to help me look for others. Of course, security was the last thing on my mind, and his single-minded devotion to secure software systems had led to actual damage to my research.
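For readers unfamiliar with the class of flaw involved, here is an illustrative sketch (not my prototype’s actual code) of how string-built SQL lets crafted input become part of the query, while a parameterized query keeps input as mere data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE usage_log (user TEXT, action TEXT)")

def log_action_unsafe(user: str, action: str):
    # Vulnerable: the input is spliced into the SQL text, so input like
    # "x'); DROP TABLE usage_log; --" becomes an executable statement.
    conn.executescript(f"INSERT INTO usage_log VALUES ('{user}', '{action}')")

def log_action_safe(user: str, action: str):
    # Parameterized: the driver treats user and action strictly as values.
    conn.execute("INSERT INTO usage_log VALUES (?, ?)", (user, action))

log_action_safe("student1", "clicked'); DROP TABLE usage_log; --")
print(conn.execute("SELECT COUNT(*) FROM usage_log").fetchone())  # table intact
```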

Now I’m not saying that security doesn’t matter. My claim is that what matters depends on the situation. I did care about security in the story above, but not as much as I cared about getting the data and saving time. Given limited resources, all developers have to make tough choices between competing values. What I find problematic about many of today’s developer cultures is the belief that any one software quality matters more than another in all situations. It’s been true for performance in the academic computer science research community; it’s been true for parsimony and elegance; it’s increasingly true for security, privacy, and yes, even usability. The world isn’t that simple, and any belief that one value ought to always take precedence over another does a disservice to the users who operate in situations with different priorities.

I’m certainly not claiming that all developers believe this. If there’s anything I’ve observed amongst all of the developers I know, it’s that the more experience they’ve gained, the more they realize just how difficult it is to clarify the priorities for a project in a way that is acted upon consistently across a team and over time. In all of the research that I’ve done studying developer discussions around design decisions, this seems to be the central struggle: how do you clearly communicate to everyone involved how various software qualities rank with one another? This is especially difficult when we don’t even have definitions of these qualities that people agree upon, let alone an understanding of how they interact.

If there was any point to my rambling about the Mozilla culture, it was that more developers need more knowledge about the differing priorities of their users, and about how those differing priorities interact with Mozilla’s goal of espousing openness. I wish that as a researcher I had more to offer on this subject, but alas, I do not. At least for now.

Mozilla Summit 2010 and dev culture

The Mozilla Summit opening reception

men, men, men

One thing that’s always interested me about software design is the inescapable bias of the designer. Whether we like it or not, designers’ perspectives always color what they think makes sense, what they think is useful, and what they think is good.

Never has this been more apparent to me than at the 2010 Mozilla Summit. I couldn’t help but notice that every session I visited, every reception I attended, and every conversation I had was dominated by male hacker stereotypes. The game room was full of obscure board games, first person shooters, caffeine and candy. Group conversations inevitably drifted towards the finer details of an API or a technical discussion of the merits of one platform or another. I had many short-lived and terse conversations with shy and introverted but incredibly proud geeks like myself.

It’s not that there’s anything wrong with the typical Mozillian—it’s that Mozillians are such a surprisingly typical group. It didn’t matter what country the person came from, whether I was speaking to a man or a woman, or whether they were a developer, tester, localizer, or other form of contributor: there was a somewhat shocking homogeneity to the personalities and value systems of the people I met.

And it’s not even that these personalities or value systems are wrong: in fact, I share many traits and values with the people I met. I’m shy; I’m introverted; I believe in standards, open communication, transparency. As an academic, I may have learned to overcome these traits for the benefit of my career and to foster other values, but at my core, I identify strongly with Mozillians in both personality and beliefs.

No, the troubling thing was the lack of opposing traits and beliefs. Where are the technically disinterested Mozillians? The gregarious? The empathizing? Where are the Mozillians who are interested in people, society, history, diversity?

The answer, of course, is probably quite obvious: they’re Mozillians because they’re interested in technology. The ones interested in people have self-selected out of this group and are contributing to society in other ways and other places.

What this means, however, is that a comparably small group of people with similar goals, similar interests, similar viewpoints, and similar skills have a disproportionate influence on how the rest of the world experiences the web. And unsurprisingly, the experiences that Mozillians create are the ones that propagate and reinforce Mozillians’ own viewpoints.

None of this is very controversial either. In fact, I spoke with many Mozilla employees who believe that Firefox and Mozilla’s other mature products are really products for power users, despite the organization’s unique user-facing stance relative to other open source communities. They believed that while it may be possible for the rest of the world to use Firefox as an alternative to other browsers, the Mozilla community ultimately builds for itself and its own perspectives, because it knows no other way.

What is this way, then, that Mozillians view the world? Throughout my many discussions, I noticed a number of recurring beliefs (many of which are general to engineers and developers, and not just open source communities):

  • There’s always a right answer. Unlike most professional designers, the developers I met like to use the word “right” a lot when designing solutions. Understanding of tradeoffs seems to be limited.
  • My answer is right. Most of the Mozillians I met like to believe they have the right answer. There appears to be a joy in defending this position as well.
  • If a rational argument can’t be made for a solution, the solution is invalid. Rational thought is the only valid means of obtaining knowledge or solving a problem.
  • Proof by existence, not by evidence. Prototype it and then I’ll believe you.
  • Ambiguity is unacceptable. Messy or noisy problems need not be solved. Solve the solvable problems.

Another recurring stance I noticed was that developers are a special, privileged class. Obviously this isn’t the first time I’ve seen this, but it did make me wonder where it comes from. So I probed. What I found was that every story of how someone learned to program and became part of the community was one of competitive selection. It’s hard to learn to program, it’s hard to get into CS, it’s hard to get a development job, and it’s hard to become a Mozilla developer. In fact, many told me that with all of these trials by fire, they learned quickly to act confident, to act certain, and to act as if one is right. One developer described this as a form of elitism, which brings with it a disdain for other viewpoints and for other, more easily acquired skill sets (hence the apparent lesser status of localizers, testers, and support).

What no one said, but what I gleaned, is that this culture of elitism is as much an identity thing as it is a social thing. Perhaps the competitive processes by which developers attain status create an identity that must be fed by being right. And what do we know about identities? People reinforce them, defend them, and seek experiences that keep them intact.

What is the impact of all of this on the design of software, or at least Mozilla software? For one, design culture itself appears to be in direct conflict with how developers view the world. There is often an ambiguity, even a mysticism, to how designers learn to cope with ill-defined problems, and at least with respect to developers, I can see how this ambiguity is disconcerting and unconvincing. Moreover, it disempowers conceptual designers by requiring functioning prototypes as a ticket to entry.

The particular mission of Mozilla, to support the open web, also has interesting interactions with this developer culture. For example, many developers I spoke to believe that the public ought to care about their ability to control their online experience and own their data. I asked them, as devil’s advocate, why Mozillians had the right to impose these values through software, and many made a free market argument: people group together to espouse their values, and those groups that persuade best, win. I saw little room in most stances for the possibility that users might not value the freedom espoused by Mozilla, and that the very espousing of openness might in fact oppose other values, such as simplicity, humanity, and beauty.

Are these trends in developer culture inescapable, or just an ephemeral aspect of a relatively young trade? Is it possible that as more people with more diverse perspectives learn to code, this imbalance in perspective will correct itself? Or are there only certain types of people drawn to code? Perhaps the market will ultimately force developers to empathize with other viewpoints, because society will cease to tolerate the engineered design of today and demand designs that respect their own values. I do not know—but I’ll be interested to find out!

Mozilla Summit 2010, day 0

A bus, a train, and a plane later, I arrived in Vancouver, B.C., ready to depart for Whistler and look for the answer to one simple question: what does one do at a gathering of 600 people from around the world, all working towards the same vision of the web? The answer became clear as soon as I arrived at the airport. One talks, one befriends, one learns about the fellow geek’s world, and above all, one discusses common ground, whether it be city life, weather, food, or the latest point release of Android. Geeks are people too, and today proved no exception.

Of course, there are a few things that make this particular gathering unique. For example, one of my bus mates was particularly proud of her backpack designed for toting roller skates (just as I am proud of my slim wallet and matching laptop bag and case). Another was proud of weaning himself off the Mac onto more open Linux and Google platforms. There was fervent discussion of the accessibility barriers imposed by IRC, but also of the richness of the immediacy enabled by the waves of logins, logouts, and rapid, near-instant replies. My lunch friend came all the way from India to get a master’s degree in software engineering in the bay area, leaving friends and family for a career in quality assurance. At the evening reception, I chatted with engineers working on the JavaScript engine and HTML layout, learning about the subtle distinctions between the invariants in both and speculating about the role of C in trashing comprehensibility. These are people that love things to death, but most of all, love code and all the things around it.

The people aren’t the only thing that makes this crowd unique. The crowd itself is unique. As an academic in a field as diverse as HCI, I’m used to conferences with a fairly even balance of men and women. But this is, without a doubt, a gathering of men. The women stand out as rare breeds, something to behold. This line of thought led to discomfort as I realized how easily difference led to objectification. It was only after mentioning this to some of the women that I realized I was in the minority: this disproportion was an everyday fact for the people in the room, and not something so relevant to the topic at hand.

The days to come should prove interesting and revealing. I want to understand what this community values and how they express those values. I want to see how its culture breeds its strengths, its weaknesses, and its biases. I want to see what use 4 days in the great northern sunshine really is to a group of collaborators already so close in vision and values. Do they really need this to be productive, or is this just to feel human?