Genie: Input Retargeting on the Web through Command Reverse Engineering

Amanda Swearngin presenting Genie

Amanda Swearngin, one of my newer Ph.D. students, just presented her work on Genie at CHI 2017, in collaboration with me and her co-advisor James Fogarty. Genie is a clever technique that applies program analysis to reverse engineer a model of any interactive website’s commands, then uses that model to create alternative interfaces for accessing those commands via other input modalities such as keyboard, mouse, or speech. The really cool thing about this work is that web sites don’t have to be built for Genie: all of this works without modification, without the coordination of website developers, and continues to work as web sites evolve.
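To give a flavor of the idea, here's a minimal sketch (entirely illustrative; the names and structures are my own, not Genie's actual implementation): analysis discovers elements with event handlers, those handlers become a command model, and the model is retargeted to another modality such as the keyboard.

```javascript
// A simplified "page": elements whose handlers were discovered by analysis.
// (Hypothetical data; real pages register handlers via the DOM.)
const elements = [
  { label: "Play", events: { click: () => "playing" } },
  { label: "Next", events: { click: () => "next track" } },
];

// Reverse engineer a command model: one command per discovered handler.
function buildCommandModel(elements) {
  const commands = [];
  for (const el of elements) {
    for (const [event, handler] of Object.entries(el.events)) {
      commands.push({ name: el.label, sourceEvent: event, invoke: handler });
    }
  }
  return commands;
}

// Retarget: expose every command through numeric keyboard shortcuts,
// without the website ever having been designed for keyboard access.
function keyboardInterface(commands) {
  const bindings = {};
  commands.forEach((cmd, i) => { bindings[String(i + 1)] = cmd; });
  return (key) => (bindings[key] ? bindings[key].invoke() : undefined);
}

const pressKey = keyboardInterface(buildCommandModel(elements));
console.log(pressKey("1")); // → "playing"
console.log(pressKey("2")); // → "next track"
```

The same command model could just as easily back a speech interface; the point is that the commands are recovered from the page, not declared by its developer.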

Here’s a demo:

What’s exciting about this work is that it could allow the entire interactive web to be accessible to people with diverse abilities without requiring web developers to design for diverse abilities. And when developers do design for diverse abilities, Genie works even better, extracting even more meaningful command metadata.

Read our publication for more details.

Accessibility, diversity, and software teams

For all of yesterday and half of today, about 25 industry leaders from Google, Microsoft, Salesforce, Yahoo, and a few dozen smaller companies and non-profits gathered with the University of Washington’s AccessComputing team to discuss whether and how to increase the number of people with disabilities in the software industry. We called it a “Capacity Building Institute”, but really it was basically a group of people with a deep interest (organizational or personal) in lowering barriers to participation by people who are blind, deaf, motor impaired, autistic, or have other less visible learning or reading disabilities, and in leveraging each other’s knowledge to do so. I’m an expert in people’s interactions with code, but new to accessibility, and so I was eager to learn and connect with experts. But I had no idea just how much latent expertise is out there in industry just waiting to be gathered, aggregated, and shared with the rest of the world.

For me, there were two big takeaways that I think are relevant to not just software companies, product designers, and software developers, but really anyone in the world who knows little about disability.

First, why hire people with disabilities at all? Aren’t they just more work for an organization, no matter how skilled they might be at their core job functions? The group speculated about a lot of the benefits they’ve seen in practice, but this was the most compelling one to me: hiring people with disabilities improves productivity, effectiveness, and creativity via psychological safety. The argument goes like this. Recent work has found that a key to effective, functional software teams is something called psychological safety, which is the feeling that one can openly share ideas and critiques without being punished, criticized, or shamed. It turns out that psychological safety helps teams problem solve because they’re free to find the best ideas and discard the bad ones, leading to better, smarter work and more innovative solutions to product and process problems. What does all of this have to do with disability? People with atypical abilities who can be open about those different abilities in an organization can be catalysts for establishing psychological safety. Their colleagues are not only prompted to think about accommodations, but also become freer to share their own struggles, which promotes a psychologically safe team environment. In a way, diversity of any kind, but especially the more overtly visible and stigmatized diversity in ability, is a constant embodied reminder of the need to create space for vulnerable conversations about not only ability, but the intersection between ability and work. This creates safety, which promotes sharing, which promotes business outcomes. Now, that’s a pretty extended argument, and not one that’s easily grasped or shared in a quick hallway conversation, but if you’re in a hiring position, you might start thinking of diversity in ability (all other skills being equal) as a benefit to your team rather than a liability or cost.

The second big question we grappled with was why prioritize accessibility in products at all? There were many arguments here (e.g., 1 in 7 potential customers and employees has a disability, so they’re hard to ignore). But the most convincing argument to me was that building universal accessibility into products helps all customers and all employees, not just ones with disabilities. The argument is twofold: 1) focusing design on people with a narrower set of input and output channels is a forcing function for simplicity and clarity in design, as people with disabilities have a heightened need for reduced ambiguity in UI design; and 2) products enhanced with multiple redundant channels of information input and output can be creatively appropriated by the whole population of potential customers. The examples of 2) abound. It turns out that text messaging was originally a form of personal messaging for deaf communities, later appropriated as a general text-based communication medium now used by billions. Annotating the web with the metadata used to support screen readers helps not only blind people, but also people with dyslexia, and even provides semantic data for tools that enhance websites with faster and more efficient web automation, including automated software testing. Captioning videos not only gives deaf people access to audio, but also indirectly creates a scannable transcript that lets everyone else quickly skim time-based media and jump into a video by topic. Ensuring that every user interface can be operated by keyboard not only helps blind users, but also keyboard-addicted power users who prefer the quick action of a keystroke over the clumsy movement of a mouse. Pick pretty much any access technology, and it will have some other use for typically-abled people. Think of accessibility like a stress test for your products: if they can work for someone with different abilities, they can work for anyone.
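The keyboard point is easy to make concrete. Here’s a small sketch (illustrative only, using a toy control object rather than any real framework or the DOM): a control wired only for clicks excludes keyboard users, while mirroring the same action on key events serves blind users and power users alike through redundant channels.

```javascript
// Wrap an action in a control that exposes it through two redundant
// channels: mouse clicks and keyboard activation.
function makeControl(action) {
  const control = { handlers: {} };
  // Mouse path: in many real UIs, this is the only channel.
  control.handlers.click = action;
  // Redundant keyboard path: Enter and Space trigger the same action,
  // mirroring the convention for buttons.
  control.handlers.keydown = (key) =>
    key === "Enter" || key === " " ? action() : undefined;
  return control;
}

const save = makeControl(() => "saved");
save.handlers.click();          // mouse user → "saved"
save.handlers.keydown("Enter"); // keyboard user → "saved"
```

One channel was designed for accessibility; both populations benefit, which is the two-fold argument in miniature.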

Both of these arguments are related to the idea of universal design, which is a design philosophy grounded in the idea that we can design products in ways that everyone can access. This is a powerful idea, because it shifts the problem of designing specific accommodations for specific disabilities, to a problem of finding a design that can be accessed unambiguously through a range of input and output media. That means you don’t need to necessarily advocate for a specific population, but you can advocate for serving a whole marketplace of diverse customers through the same design.

How feasible this is will become clear in time, especially as companies like those present in our meeting, and non-profit groups like AccessComputing at the University of Washington, make inroads into sharing these ideas with companies eager to compete in an inclusive way. I’m excited to watch this unfold over the next decade!

Fifty privileges

I was on a flight this past December on the “nerd bird” from Seattle to San Jose. All around me were white and asian men, every one with a laptop, and almost every one writing code. I was just about to pull out my laptop when I realized just how privileged each and every one of us was to be on this plane, with these devices, working these jobs, likely all with an immense amount of comfort, stability, and success.

How did we all get there? What had to be true about our parents, our country, our schools, our cities, and our genetics that allowed such a homogenous group of people to all be going to the same place, doing the same tasks, and all for so much money?

Instead of pulling out my laptop, I pulled out my notebook and began to brainstorm each and every privilege that led to that moment. When I got off the plane, I resolved to briefly elaborate on one each day, sharing each with my Facebook friends, to see if I could understand more deeply which parts of my personal and professional success were due to me and which were due to my environment.

Today was my last post. I’ve had many friends ask if I’d share the posts more publicly, hence this blog post. Below you’ll find all fifty posts, unedited, in the order that I posted them.

You might notice that some of the early posts are sparse. I learned over time that the posts were more interesting to me and my friends if I remarked on the privilege and linked to more information.

#1: I’ve never heard gunfire from my bedroom

#2: I’ve always had access to food when I wanted it

#3: Police protect me rather than suspect me

#4: I never had to learn to protect myself on the street (or the playground)

#5: I had two parents raising me


39% of U.S. children have only one or even zero parents actively engaged in their lives.

#6: My public schools were able to attract nationally recognized teachers by paying professional grade salaries

Until Oregon’s Measure 5 gutted public education funding in 1996.

#7: Most of my peers were college-bound, and 98% of them graduated from high school

I knew college was not only an option, but expected of me. In many U.S. public schools, the graduation rates are as low as 60-70%, with less than 5% college-bound. For students in those schools, college is a fantasy.

#8: When people see me on the street, most don’t have an immediate fear response

Many of my fellow Americans with darker skin don’t have this luxury, and move through the world knowing that everyone around them is scared by their presence. This includes police, who, seeing a phone, sometimes assume it is a gun or knife, and shoot.

#9: I was lucky enough to grow up in an era and in a state where smart poor kids like me could get grants, scholarships, and part time jobs to cover most of my in-state tuition and fees, and leave with less than $10,000 in student loans.

Most kids that are in my position now look at the price tag and don’t even consider college, or if they do, spend four years stressing about the next thirty years of massive debt. The result is that most of the kids I teach at UW are from the middle class or higher socioeconomically, or the best and the brightest from other countries.

#10: For some reason, my high school allowed a community college student to come to our high school at 7 am to teach about ten students (including me) computer science

He gave us his homework assignments and asked us to complete them, then told us he’d grade them once he got them back from his teacher. He also gave us Ska mixed tapes.

#11: I could play on the street in my neighborhood(s) without feeling in danger of cars, kids, gangs, or police

For many American kids, being on the street is about protecting your body, not play.

#12: In high school, I didn’t have to work to help my family pay the bills

In many families, teens need to work part time jobs to keep the lights on and food on the table, meaning less time for homework, play, and personal development.

#13: Because I was a boy, my teachers gave me more frequent and more constructive feedback

They also encouraged me to contribute, whereas they expected the girls in class to be more compliant, only speaking when spoken to. All of this probably gave me more confidence and aptitude than some of my equally prepared female peers.

Frawley, T. (2005). Gender bias in the classroom: Current controversies and implications for teachers. Childhood Education, 81(4), 221-227.

#14: All of my family and my friends’ families had jobs if they wanted them, setting the expectation that having a job was feasible, expected, necessary, and supported

Many American children, particularly those in poverty, have parents who are chronically un- or underemployed, creating an environment of worthlessness, survival, and hopelessness. These are not the conditions in which children thrive.

#15: My skin is white(ish) and my nose is thin(ish), so white people don’t other me (much)

I did nothing to accomplish this, and yet benefit from it greatly.

#16: Because my voice is louder and deeper than most women’s, I have a biological advantage in obtaining positions of power.


I didn’t earn this, but benefit from it greatly.

#17: When I was a boy, people treated me as a future professional problem solver, rather than a future parent, spouse, or friend

How did that shape my identity? How did it shape the skills I chose to develop?

#18: Because I’m taller than the average male and female, I was much more likely in life to receive social esteem, develop leadership skills, and earn a higher income


It’s also probably no coincidence that I’m short among my high-performing male colleagues.

Judge, T. A., & Cable, D. M. (2004). The effect of physical height on workplace success and income: preliminary test of a theoretical model. Journal of Applied Psychology, 89(3), 428.

#19: My parents took me outside my home town, state, and country, and so I was able to see (even if only a small glimpse) the vast diversity of how people behave, live, and relate


Most Americans have never traveled outside of America, and a quarter have never even left their state, usually because they don’t have the money, time, or desire.

#20: I am a native English speaker


#21: I was born in America

Yeah, I know, that can sound somewhat nationalistic and exceptionalist, but I don’t mean it that way. I mean it in relation to the human and civil rights we take for granted. We have so many freedoms, so much wealth, so many safety nets, and so much equity relative to so many other countries in the world. It’s easy to forget just how much our laws and birthright create our conditions for successful, fulfilling lives.

Take, for example, people with motor impairments who must use wheelchairs to get around. Buildings in the US have to be accessible by wheelchair. This is amazing! Compare this to a journey I made to the Jade Buddhist temple in Shanghai back in 2006, where many tourists come to pray. Across the street from the temple were hundreds of homeless physically disabled Chinese. They were there to beg for food. Most did not have wheelchairs. Some were missing as many as three or four limbs. It was clear that most of them lived on that street, surviving only because of food and water offered by able-bodied tourists as they exited the temple. In some cases, I couldn’t tell if they were sleeping or dead.

This privilege we have in America, this right to survival, is precious and not to be taken for granted. This could have been me living on the street without mobility, food, or shelter, had my Chinese grandparents not moved to America, and had I been born without all four limbs. But I wasn’t.

#22: I can breathe the air around me without getting sick, short of breath, or poisoned


It’s a weird thing to say, but hundreds of millions of people in China and India are greatly disadvantaged by having to stay indoors. It’s one of many things that take attention away from growing, learning, and thriving, and redirect it to staying alive.

#23: My city contained both wealth and poverty

Most people in America live in more class segregated settings.

What this class diversity gave me was perspective. The most visceral example of the visible class divide was the parking lots in my high school. The rich kids took the upper lot, where they parked their Mercedes and BMWs and hung out before school. The kids who bussed, and the kids who saved up for a junker and parked on the street, were down the hill, walking further to school.

It was very clear that some kids could access whatever they wanted, and others were in families just trying to survive, but these differences in wealth were not correlated at all with intelligence, worth, or identity. This helped me understand that ability was not about money.

These class divides were, however, strongly correlated with access. I remember, for example, being surprised to find out too late that there was an honors and AP English class, and that was a thing one should take to improve one’s chances of getting into college. When I asked my peers how they found out about it, they all said it was their parents. The upper class parents knew that these things were necessary and helped their kids and each other access those resources. The lower class parents barely knew anything about college, let alone the choices a student has to make to access it. Experiences like these helped me see that knowledge is power, and that wealth and stability provides access to knowledge.

#24: My parents could afford to send me to pre-school, which provided me broad and substantial benefits in life


Most families with net incomes below $60K can’t afford it, which causes lifelong inequities.

#25: My parents talked to me a lot when I was a baby


The disparities in this are huge, with many poorer families talking to their infants a quarter as much. These differences have dramatic effects on language development and academic success.

#26: I have full use of my limbs, voice, eyes, and ears


About 20% of Americans don’t, most with mobility, sight, or hearing loss. And yet, the 80% of us without disabilities tend to act, vote, and design as if every American has the same abilities, which creates a vast range of inequities in access to information and infrastructure.

Fortunately, there is some fantastic research in the world that designs systems based on individual ability, rather than fixed assumptions of ability. For all you taxpayers out there who give a few pennies a year to the National Science Foundation, you’ll thank us when you join the 20% later in life.

#27: I have access to the Internet


And this is a privilege that grows larger and larger every day, as many critical services move from brick and mortar, cash-based commerce to digital, primarily web-based services. Take, for example, Seattle’s public transit access card, the ORCA card. In 2009, the cards were rolled out, allowing customers to carry a balance, use it on any of the Puget Sound transit services, and have it auto-reload when the balance depleted. No more worries about exact change, faster onboarding, and automatic transfers. This is great stuff, right?

Unless you don’t have access to the internet. The easiest way to get a card is to buy online. If you can’t do that, you can have it mailed to you. Homeless? In downtown Seattle, you have one option: go to the Bartell Drugs on 3rd Ave. Except they recently stopped selling new cards and now only refill cards. And this is just for now: the list of participating retailers keeps growing shorter and shorter, as the vast majority of users purchase and refill online.

Once retailers stop selling new cards, the only choice is to get a reduced fare card directly from King County. Except the only way to find out how is to—you guessed it—go to their website.

Increasingly, access to the Internet is required for access to anything. Amazingly, we’ve flatlined at 84% of Americans having access, since there’s very little market incentive to offer access to rural, poor, or homeless citizens.

At what point does access become a right instead of a privilege? And how do we operationalize that right, when access requires a device and electricity?

#28: I was taught to read


An astounding 1 in 7 high school graduates in the U.S. can’t read, and this hasn’t improved in a decade.

My own experiences learning to read demonstrate just how privileged I was. I remember one night in particular from 1st grade. We had just been given a reading book. I remember this book, because it had a really cute bunny on the front, which reminded me of all of the books I’d loved as a toddler. I wanted so much to understand what was inside, and to show my teacher that I did. The first night I received the book, I remember laying in my bed with the light on, working my way through the sentences, “This is a bunny.”, “This is grass”, “The bunny is fast”. Eventually, I stumbled upon a section of the book with the sentence, “He was laughing”. Lag-hing? What’s lag-hing? I ran downstairs to ask my mom, “Mom, what is lag-hing?”

She could have yelled at me to go back to bed. She could have said, “That’s not a word, what are you doing up?” She could have discouraged me, signalling that reading was not important, that not bothering her was important, that curiosity was irrelevant. But, being the 2nd and 5th grade teacher that she was, instead she said something incredibly powerful: “That’s a really good question! Ah, yes, that’s the word ‘laughing’. It’s confusing, because ‘gh’ doesn’t look like it would make an ‘f’ sound. But you’re going to see those letters together in a lot of words, and they’ll always make a ‘ffff’ sound. English has many strange rules that there’s no way to guess, so ask me any time you get stuck.” She sent me to bed, not only supporting my eagerness to learn, but actually providing me English reading knowledge that enhanced my ability to learn.

Most readers in the world don’t have a primary school teacher as a mother. I did, and I had a wonderful 1st grade teacher, and I had a whole community of people helping me to enter the world of knowledge through literacy. The rest of the world should have these very same privileges, and even in developed countries like our own, we’re nowhere close.

#29: I am healthy


50% of American adults have one or more chronic health conditions (e.g., heart disease, cancer, diabetes, obesity, arthritis) and 25% have two or more, many of these eventually leading to physical disabilities. If you’ve ever been sick, you know what it’s like: less energy, less motivation to work, less hope. Can you imagine being sick all the time? What effect would that have on your daily life? Each of these diseases has its own experience; I can’t imagine any of them. That is privilege.

I have my health. I try to exercise (and mostly fail). I don’t smoke. I try (and also fail) to eat fruits and vegetables. I’ll try to avoid chronic conditions for as long as I can, but with my family history, I’ll have at least one chronic disease eventually.

(I edited the above to be less patronizing. It’s probably still patronizing. These things are really hard to write about!).

#30: As a teen, I had the unconditional love and support of my family


For many teens, the love of their parents is highly conditional. This is especially true of LGBT-identified teens. One study estimates that up to 40 percent of LGBT homeless youth leave home because their families rejected them (either kicking them out of the house or creating such an unwelcoming environment that the teen leaves). Worse yet, once homeless, many shelters reject them because of the shelters’ religious ideologies, leaving them on the street.

This unconditional love from my parents was a huge part of the stability in my life as a young adult. It made me feel safe to explore myself, my interests, my dreams. It ensured I had food, shelter, safety, and access to information. All of these things are so easy to take for granted, it’s easy to forget that many Americans start their lives with nothing, not even the love and support of their parents.

#31: Because I’m male, throughout my life I’ve been judged more by my accomplishments than my appearance


This shaped my self-concept in very specific ways, meaning I focused more on learning, skills, and ability than what I was wearing or how I did my hair.

It’s hard watching the opposite happen to my daughter. We talk about this a lot. She says that because she’s still searching for her identity, and doesn’t have a lot of self-confidence, her appearance (her makeup, her clothes, her hair) fills that void. And it fills that void easily, because she gets immediate positive feedback for how she looks. She says she gets hardly any positive feedback about her intelligence or her personality (other than from her family, which we all know for a teen doesn’t count). She’s hopeful that as she gains more confidence in other aspects of herself this will change, but for now, her self-worth is tightly bound to how other people feel about her appearance.

#32: Because of my income, I can mostly live where I want


In a time of rapid gentrification in cities, and rising costs of living in even suburban environments, most people have much less control over where they live, when they move, where their kids go to school, and what types of services are available to them.

When my parents divorced when I was in grade school, I almost lost this privilege, since my parents couldn’t afford to keep our house. Both of them thought that where we lived was important enough to maintain that they made big sacrifices: my mom commuted an hour to work for a decade, saved like mad, and bought a small house in the lower income neighborhood in our town. My dad took on a lot of debt to pay rent and bills and eventually went bankrupt. But my brother and I were able to stay at our schools, keep our friends, and have a sense of stability. Most of the time, this level of sacrifice wouldn’t have been enough.

#33: I have leisure time


I know, I know: I’m a professor, and my professor friends are thinking, “What??? How in the world do you have free time? I’m working 80 hours a week and can barely get everything done!” And my non-professor friends are thinking, “Typical lazy academic, teaching three hours a week. This is what’s wrong with America!” Of course, the truth is in the middle: in my job, I have the luxury to decide what constitutes full-time effort. Long ago, I decided this was 45-50 hours per week, leaving enough time for my family, my friends, and my hobbies. And Facebook posts like these. That I have this choice is a privilege.

For most people worldwide, how long they must work to make a livable wage is not up to them. If they’re a salaried professional, they have a boss that has expectations. For hourly workers, every minute they clock in means less debt, more savings. About 5% of America works two jobs, some by choice, some by necessity. In many developing countries, people work seven days a week. People in Mexico have longer work weeks than Americans and make a fifth of the money. Time to live, to enjoy life, and to make it what you want is partly our choice, but partly circumstance.

#34: I have access to clean water


And it’s more than just clean water, I have an abundance of clean, cheap water, especially here in the Pacific Northwest, allowing me to take long showers, flush the toilet indiscriminately, and procrastinate far too long about leaky faucets.

About 1 in 10 people in the world lack access to water they can safely drink. Much of their day is spent boiling water, traveling to get water, saving up for clean water, testing water, and worrying about whether their water is clean. Flint, MI is an American tragedy that will have lasting consequences for hundreds of thousands of people, but it will be resolved swiftly. The rest of the developing world will be waiting and working a lot longer before basic access to clean water is a right and not a privilege.

#35: I have the time and income to vote


An increasing number of Americans can’t afford the time off, can’t afford the required identification, or can’t make it to a place to vote. In one of the world’s oldest democracies, this inequity and its downstream consequences are ridiculous and un-American.

#36: I live near a grocery store.


Many people in America live in what the USDA calls a “food desert”, where affordable, nutritious food is more than a mile away with no public transit in cities, or ten miles away in rural areas. About 4% of Americans live in one of these food deserts, including about 125,000 people in the Puget Sound region.

Why does it matter? Distance to a grocery store with fresh food is an independent predictor of BMI. The further away someone lives from fresh food, the less of it they eat and the more they eat fast food. And there’s not much one can do about it: to increase access, there either need to be more grocery stores (why would a store open in a poor neighborhood over a rich neighborhood?) or better public transit to the existing grocery stores.

#37: I had access to computing education in high school.


The vast majority of Americans still don’t, with only 1 in 10 U.S. high schools offering a computer science class and only 22 states allowing computer science to count toward high school graduation. This is despite the fact that a full half of STEM jobs are expected to involve computing in the next few years. Not only is computing education scarce in the United States, but it’s also extremely privileged: the vast majority of schools that offer it are wealthy and white. Of AP CS exam takers, only 22% are girls and only 13% are black or latino.

My own experience with computing education in 1994 was unique. My high school didn’t offer a course, but an enterprising community college student decided to offer a zero-period computer science course starting at 7:30 am. There were about 10 of us that enrolled. There was no curriculum; our teacher-student just brought in the assignments from his community college programming courses and challenged us to complete them. (I think we were doing his homework!). By the end of the year, there were only a few of us left, but I was excited enough by the topic that I decided to take the AP CS exam. I was the only one in my high school who ever had. I remember sitting in a janitor’s closet for three hours writing Pascal on paper, still clueless about what computer science was, but eager to find out.

In case you hadn’t heard, President Obama just announced CS For All, an initiative bringing together over twenty years of computing education research, policy, and teaching efforts. The initiative is huge: $4 billion for states to train and fund K-12 CS teachers, $135 million in NSF funding for research and program development, policy efforts to allow CS to count for high school graduation, and broad participation by industry to facilitate training. Many of my colleagues’ research, along with some of my own, will be at the foundation of these efforts.

As this policy effort unfolds, support it with everything you can. Make sure your state lets CS count for high school graduation! Elect politicians that bring computing to K-12! And if you decide you want to help, be sure to read the research: there are many, many unhelpful and even harmful ways to teach computing that can leave learners with a lifelong distaste for it. Ask me for pointers if you want advice on doing it well.

#38. I’m not depressed


But I have been. And so have 15% of people in the US at one point in their lives, and some chronically, making almost everything about life feel difficult, pointless, and hopeless. It is not a state of mind that generally leads to prosperity, growth, or self-improvement. More often it leads to the loss of friendships, jobs, and sometimes lives, through suicide.

My own experience with depression was acute and was probably induced by my separation and divorce back in 2007. It was unfortunately timed with the middle of my dissertation writing and my academic job search, which was just about the most difficult time to be depressed, given the immense pressure to write and travel to a dozen universities and research labs to be an exciting, interesting, high-energy public intellectual. For two years, on the advice of my therapist, I put on an elaborate act, creating a persona that would allow me to finish my Ph.D. and land a job. I was a high-functioning, highly depressed person.

Meanwhile, on the inside, I was rapidly decaying. I let friendships lapse. I stopped talking to my family. I was a fragile, broken father, too often leaning on my poor toddler for comfort. Most of the days I wasn’t interviewing, I only managed to write a sentence or two of my dissertation, and spent the rest of the day staring at a wall, ideating suicide. The only thing keeping me going was my daughter: she deserved a father and a financially secure future. What little motivation I had I aimed at getting my mood to a place where I could promise her that.

I survived because I had community. Grad students I barely knew reached out. My therapist was a constant source of empathy. And my daughter made me laugh. In a way, interviewing for jobs even helped, because it thrust me into the world to meet hundreds of fascinating faculty across the United States, full of new ideas, new possibilities, and new futures. I faked it and I made it.

Many people with chronic depression don’t have the privilege of community. They may be jobless. They may lack friendships. They may have real, substantive reasons to not have hope, such as poverty, discrimination, or a lack of daily personal safety. That I have all of these things helped me survive depression, but also helps me prevent it every day.

#39: I believe my intelligence can be developed through practice


This belief, which researcher Carol Dweck calls “growth mindset”, is a powerful one. Students that believe this do better in school. Adults that believe this do better at work. People who believe that their capacity for skills is not fixed, but changeable, through deliberate practice, are much more likely to develop the skills they want to develop.

Part of why this is a privilege is that I grew up in a community of teachers that knew to foster my growth mindset. They didn’t say I was smart, they reinforced the process I used to arrive at correct answers. They didn’t tell me to work hard, they told me to work smart, always reflecting on how I solved problems to find a better, but never perfect, way. And they didn’t just tell me these things, they modeled them for me. My mother would say, “I’m going to learn how to do this right now, do you want to learn with me?” When my dad shifted from food science to optics, he dove in head first, and talked every day during his training and education about what was hard to learn, how he was learning smarter, and showed us the progress he was making.

Not every child is fortunate enough to grow up surrounded by developmental theories of intelligence. Many kids learn early on from their parents and teachers that there are “smart” kids and “dumb” kids, and quickly learn which one others believe they are. This stops them from learning new skills, which only reinforces their fixed-intelligence self-concept. Probably the worst example of this is in the widely circulated achievement gap metrics: when we show these to Black and Latino kids without explaining what’s behind them, we only confirm their fixed theory of intelligence.

#40. I grew up near a high-density city


I know, cities aren’t all that. They have more crime, more people, more pollution, more noise, and more stress. But recent evidence shows that they are also associated with an order of magnitude higher upward mobility than low-density rural or high-sprawl areas. Economists speculate about many reasons for this huge difference; the most likely appears to be greater access to information about opportunities. The theory goes that the denser a child’s “opportunity exposure” (knowledge of jobs, knowledge of the skills necessary for jobs, etc.), the more likely they are to accumulate the benefits of those opportunities over time. All of this is despite class segregation: lower-income groups see the same benefits, even though they face other barriers.

Growing up near Portland, the opportunities were abundant and apparent. My parents took me to OMSI (our science museum), showing me a broad world of opportunities in science and learning. I learned of a wide range of internships at local businesses. And even when my parents and I weren’t deeply connected to the opportunities in the region, my peers were: I heard about their parents’ jobs, their career ideas, and all of the different places they were applying to college. This doesn’t compare at all to some of the stories I’ve heard from friends who grew up in Chicago, LA, San Francisco, or New York, but it was a stark difference from my cousins who grew up in rural areas.

#41: I can access any business’s services without discrimination


I remember when I was eight or nine going to the gas station with my dad in our small town of West Linn. In Oregon, gas station attendants have to pump your gas, so we had the attendant fill it up.

When my dad reached for his wallet, he realized he’d forgotten it at home. He asked the attendant if he could just run home to get it, then come back and pay, which seemed reasonable. We lived in town; we’d come to the gas station hundreds of times and had our gas pumped by the very same guy. We lived 5 minutes away.

The attendant said no. My dad, a bit flustered, asked politely, “What do you want me to do?” The attendant asked for some collateral, and pointed to me. I looked at my dad; he looked at me and said, “I’ll be right back. Just 10 minutes.” I sat in a chair inside the attendant’s booth, sitting silently, awkwardly, while I waited for my dad to come back. When he returned, he paid the attendant, and we went about our business.

My dad didn’t know what to make of it, but he talked to me about it. Was it because he was Chinese? Would he have done the same thing to a white man with a white kid? My dad didn’t know. I didn’t know. I don’t even know if the attendant would have known. But something about it just didn’t feel right.

#42: I feel safe walking home at night alone


I might feel unsafe occasionally, but I’m sure it’s nothing like what women (especially trans women) feel, or even young boys in rough neighborhoods. When I first learned in college what it was like for women and the real danger that they were in walking home alone, I became much more aware of my own impact on their feeling of safety. And monitoring my impact on their safety is hard: I’m a fast walker, so I regularly come up behind people quickly. When I see someone up ahead, I usually cross the street or take an alley just to avoid having to pass them. If there’s nowhere else to go, it’s even creepier to slow down and stay behind them, because then it sounds like I’m following them. Instead, I’ll often just sit on a bench and wait it out.

I don’t know if any of this makes a difference. But I know I didn’t earn the safety I feel, and so it’s my job to use my privilege to help others feel safe too.

#43: I have high job security


In the past ten years, the rate of job hopping has increased to once every 1 or 2 years. And this isn’t always to find a better job; sometimes it’s because the last job is gone. For younger people, this instability might be viewed as opportunity, but for people with families it can introduce a lot of financial uncertainty, anxiety, and stress.

I didn’t pursue academia for job security; tenure was a perk, and something I still view as protecting me from the politics and ideologies of the day. It’s something that frees me to pursue truth wherever it takes me and however unpopular it is to my colleagues, my students, my board of regents, or my country. That it also happens to provide a massive form of stability to my personal life is a huge bonus. But it also removes me from the world in a way that makes it harder to empathize with everyone else, who must constantly be on the hunt for the next position, especially those in groups that face discrimination.

#44: The world designs for me

It designs for people of my (average) height, for people with ten fingers, for people who can see, for people who have money. It designs for people who have leisure time. It designs for people who speak English. It designs for the middle aged adult with a full time job and kids. It designs for the dominant attributes of society, because that’s where most of the money is.

When I teach design I try to fight this inequity. That means, in part, that I try to help students understand that they need to understand the needs, desires, and lives of who they’re designing for, especially when those groups are underserved. And it means that when I counsel students on startup or project ideas they have, I remind them that not everyone in the world is a 21 year old college student.

Even when I succeed in this, there are other market forces at work that incentivize them against this. Venture capitalists don’t want to invest in small markets or poor markets. Their obligation is to provide returns to those middle aged adults with full time jobs, kids, and retirement accounts. Product designers don’t want to design for populations that are hard to reach: it’s a lot harder to find blind users to test with than it is to find college students with smartphones. Sales and marketing folk don’t want to join companies where they can’t show huge progress; that harms their ability to land the next job.

The result of all of this is that the products and services in the world work better for me than they do most other groups.

#45: I have access to paid paternity leave


(NOTE: as my colleagues have informed me, I actually DON’T have access to paid leave)

No, I won’t be using it any time soon. But as paltry as my benefit is, both for mothers and fathers at UW, it’s luxurious compared to the 21% of organizations in the U.S. that offer it to mothers and the 17% that offer it to fathers. America, we are an embarrassing pro-birth, anti-quality of life nation.

When I became a father, I was lucky enough to be a poor college student. We didn’t have a lot of money, and the only responsibilities we had were finishing our final year of college. In many ways, this made it easy. We needed to pass a few courses. We were already basically broke, so there was no quality of life to maintain. We had the stress of applying to graduate school, but that was more work than passing East Asian history. And after Ellen was three months old, we both enrolled in highly informative Psychology classes such as Language Acquisition, Developmental Psychology, and Abnormal Child Psychology. It was like one long year of parent training and graduate school applications, subsidized by federal student loans (which, in effect, was paid maternity and paternity leave). As hard as it might sound to have been a 21 year old father finishing school, I actually don’t remember a less stressful, more peaceful time of life. (Kit Ko might disagree).

#46. I was born with outsized ability to concentrate


I can focus on a task for hours. I can take a list of things to do and burn through it without a moment’s distraction. This power has always allowed me to be productive at nearly any time of day.

But there is nothing “normal” about this ability. It’s just one of many diverse forms of human attention, and it happens to be one that society values greatly at this point in history. Decomposing and accomplishing tasks is only one particular type of work, and it happens to be the one that is rewarded most. I did not earn these rewards; they were a genetic birthright.

My attentional surplus also has its downsides. When I’m engaged in work, I forget to eat, drink, and relieve myself. I don’t notice when I get injured. I have to see blood to know that I’m hurt, and even then, I have to tell myself, “you’re hurt, you should do something about it.” I have trouble enjoying the present moment because I’m always thinking about what’s next. I see the world as a big to do list instead of a rich ecology of opportunities. I struggle hopelessly with open-ended time, especially on vacation. I miss the beauty in the world.

As much as my attentional abilities are valued, I envy the people in my life with attentional “deficits”. They see things I don’t see. They savor the present in ways I can’t. They make connections between ideas I’d never make in my productive tunnel vision. They literally protect me from harm while I’m off in my head, working through a problem, structuring an argument, planning whatever’s next. I need them as much as they need me.

There’s nowhere this privilege is more apparent than in schools. Our dominant educational paradigms (lectures, classrooms, exams, etc.) were designed for people with minds like mine. That’s why I thrived in school. There’s very little room for other types of minds, and very little recognition of the value and necessity of distraction in learning and enrichment.

The same value system pervades capitalism, with its focus on efficiency and productivity, rather than beauty, connection, and meaning. So many Americans look at our society and wonder why they don’t seem to fit in. It’s not that there’s something wrong with their mind; it’s that society wasn’t designed for their mind.

#47: I am never asked to speak for everyone of my gender or race


Many people, just because of the color of their skin or their gender identity, are asked to represent women, Black America, Hispanic America, and so on. And this is, of course, ridiculous. People aren’t their race and they aren’t their gender. They’re much, much more. Why aren’t White people asked to represent all of White America? Or men asked to represent all men? Because putting everyone in an identity group into a single box is reductive and ridiculous.

One of the odd reasons why I’m rarely asked to represent my race is that no one really knows what race I am by looking at me. It’s only once people know that I’m Danish and Chinese that they start asking me to represent. And it’s never, “As an Asian American faculty member, how do you think our Asian students view this?”, because they would never feel comfortable calling me Asian American with my obvious whiteness. But there’s definitely a steady stream of references to my Chinese lineage, and a probing curiosity about some special understanding encoded in my DNA.

Note that I’m never asked to represent my Danish ethnicity. No one ever asks me how I feel about Danish racism, the Danish cartoons, or vikings. Because Danish is white, and white isn’t a race. After all, it would be ridiculous to ask a human being to represent the perspectives of a whole ethnic group, no?

#48: Students think I’m a better teacher because I’m male


I hate that this is true. I hate it not only because asking for students’ opinions of our teaching is fraught with problems, but also because it’s not really fair to anyone of any gender. Female teachers are at a disadvantage, especially when these scores are used in merit reviews, and male teachers miss out on meaningful opportunities for feedback. Who knows what kinds of biases exist for gender queer instructors.

Worst of all, the bias that’s embedded in these assessments may stem from a very real difference in who students actually listen to. My instruction may get more attention just because I’m a man, and that attention may lead to better learning. I want instruction to be valued on its merits, not on its messenger, but it appears that gender (and probably race) is a huge part of it.

Authority emerges from strange elements bound up in history, culture, reputation, height, speech and a whole host of other factors that have nothing to do with actual expertise.

#49: I’m living in the 21st century


In my last 48 privilege posts, I’ve discussed a lot of problems in the world. Race, gender, education, health, access, literacy: there are so many inequities, it can sometimes feel like there’s no hope.

In this second-to-last post, I’d like to focus on how lucky I am (and we, humanity, are) to live in the time that we do. In the past hundred years, women around the world gained the right to vote. The United States passed the Civil Rights Act of 1964, outlawing discrimination based on race, color, religion, sex, or national origin. We developed cures or vaccines for syphilis, diphtheria, measles, polio, typhoid fever, yellow fever, and smallpox. We invented radios, air conditioning, airplanes, cars, film, insulin, photocopiers, televisions, oral contraceptives, computers, pacemakers, video games, MRI, cell phones, and the internet. Life expectancy went from an average of 50 to 80 years. We learned to treat mental health in addition to physical health. We’ve begun to map the origins of the universe and the human genome. And more people live in democracy, and therefore freedom, than ever.

It’s easy to forget just how much easier we have it than our parents, our grandparents, and the rest of our ancestors. More than ever, we have food, we have shelter, we have safety. We have never been more free to discuss ideas about our future as a species.

#50. I am human

As much as I love science, technology, engineering, and math, I am a humanist at heart. I believe in our capacity to build, to create, to love, to share, and ultimately to discover truth. When I’m lucky enough to meet a new person, especially one that doesn’t have the same privileges as me, I try to remember this shared reality, searching for the humanity in them that’s also in me. And when I do this, they usually reciprocate, because they too are human. These moments, where we find common ground, come together, and reshape and understand our world together, are what I believe is the purest expression of humanity.

Because these moments are fleeting, I believe it is possible and even common that we lose our humanity for long stretches of time. We forget that we have the power to change things and give up. We forget that we have the capacity to understand each other and begin to distrust. We forget that we can create and begin to destroy.

When we lose our humanity, it’s rarely our fault. In fact, it’s often because we lack sufficient privileges for expressing our humanity fully. When you don’t yourself have food, shelter, safety, hope, trust, and love, it’s difficult to provide food, shelter, safety, hope, trust, and love to others.

This is why it’s so critical for those of us that have privileges to use them to be as human as we can be. Take that free time that you have because of your wealth and find a way to bring someone stability. Use the safety you feel on the street to make someone else feel safe. Use the abundance of social support you have to connect with someone isolated. Being human is one of the few privileges we can earn outright, with hard work over a lifetime. It’s also one of the few privileges that can’t be given or granted: it’s solely up to you to find it in yourself, despite all of your stress, fear, and instability. And when you do, I think you’ll find that others will be attracted to that humanity, and help grow and reinforce it.

(And thank you for tolerating this series of posts. I know Facebook isn’t really the place for such serious talk, but if not Facebook, where? I’ll be posting a blog post with all of my privilege posts soon, to make it easier to share outside of this walled garden.)

Making money versus making knowledge

I’ve spent the past three years doing two very different things. As CTO of AnswerDash, my goal was to make money. As an Associate Professor at the University of Washington, my goal was to make knowledge. What’s the difference?

In my experience, making money is fundamentally about relating to people. To convince anyone to give you money, you have to understand their needs, their desires, their fears, and their anxieties. If you’re marketing to them, you have to find words, images, and experiences that provoke these emotions and stir them to action. If you’re selling to them, you have to understand them interpersonally and find a way to influence their behavior through your words and actions. And if you’re designing product for them, you have to envision experiences that change their life in a dramatic enough way that they’re willing to give you time or money. Making money is fundamentally about changing people’s behavior by understanding their emotions.

In contrast, making knowledge is fundamentally about understanding things that are much more abstract: nature, truth, reality, humanity. To make knowledge, you have to understand how the world works, how people work, how society works. And attaining this understanding doesn’t involve having to know any people in particular or how their emotions work. Instead, you need to understand ideas: what makes good ones, how to come up with them, evaluate them, critique them, explain them. Making knowledge feels like walking around in the dark, searching for the light switch, until you stumble upon it, fumble to flip it, and suddenly everything is clear. Making knowledge is fundamentally about bringing clarity to chaos.

Now, some people who make money might argue that they’re making knowledge too. Certainly anyone in a large, well-resourced company, investigating future products is creating new know-how. And I actually believe that many people in companies are making the same discoveries that researchers are. What’s different, however, is that those discoveries are not expressed and they are not shared as often. By not expressing them, there’s not an opportunity to evaluate how clearly the ideas are understood, which leaves the ideas weak, tacit, and fragmented. By not sharing them, there’s no way for these discoveries to impact the things that people create and the choices that people make. This is changing, as more people in industry blog and share their ideas online, but the clarity of the ideas is lacking, because the people sharing are often not trained to bring clarity, and because they don’t have as much incentive to share clear ideas.

Of course, people who make knowledge make money too. I get paid to share the knowledge that I and others have made through teaching. Sometimes I get paid to share my knowledge with companies or juries. Sometimes, I don’t understand the enterprise of knowledge: how is it rational for someone to pay me money for knowledge they don’t have, they can’t describe, and they can’t imagine, on the promise that it will bring clarity to the chaos they see in the world? And why is that clarity worth so much to them? If there weren’t jobs on the other end, would they pay as much to have that clarity? Sometimes, I think that professors forget that part of their job is to bring clarity to students, and not just to themselves.

I prefer to make knowledge. I find it more personally interesting, more intellectually challenging, and more meaningful. That doesn’t mean that I think making money is bad, it’s just not something I enjoy as much. That’s partly because I don’t enjoy the puzzle of understanding someone’s emotions. I find that I can see the structure behind an idea more easily than I can see the heartbeat behind someone’s behavior.

In a way, programmers are also people who deal with ideas more than they deal with people. That’s because code is a form of knowledge: it’s an expression of know-how that embodies beliefs about the world. In some ways, that’s why it’s hard for so many people who enjoy writing code to understand who they’re writing it for and why: that requires understanding people’s feelings. It’s strange that something so logical and so formal as code is still fundamentally about feelings.

Startup good and evil

Pioneer square

“Software is eating the world,” said Marc Andreessen. This is truer every day, especially here in Seattle, as I watch the job postings pile up, the resumes pour in, and the old apartments being torn down to be replaced with expensive condos for the engineering elite. I go to parties and everyone talks about software, what they want to do with it, or if they aren’t in software, how they feel about their world being eaten by it.

As I sit here in this posh Pioneer Square coffee shop, surrounded by both poverty and a hundred and one startups full of wealthy engineers, I can’t help but wonder: what should developers and software companies in this neighborhood be doing with all of this power? Do they have any responsibility to use it for good? Or is profit motive enough?

One conclusion I’ve come to after founding a software startup of my own is that it’s certainly hard enough to profit that factoring in any other consideration is nearly impossible. There’s just not time once a business already exists.

There is a magical point at which there is time to consider good and evil, and that’s before someone has chosen an opportunity to pursue. We don’t have to choose the “evil” opportunities. We can choose the “good” ones. (We’ll save definitions of good and evil for another post.)

Yes, knowing which opportunities are good and which are bad is hard. Unintended side effects abound. Finding opportunities that are both good and profitable is often an over-constrained problem. And what’s good to one person isn’t always good to another.

But we software people solve hard problems all the time, don’t we? Why can’t we solve the problem of finding profitable, “good” businesses?

One reason is that the software industry doesn’t like to talk about the opportunities it’s pursuing. We operate in stealth, fearing theft of our precious opportunities, and wait until they’re fully formed and ready to share before we get feedback from the world.

But is there really all that much risk of someone capitalizing on an idea before us? There are tons of profitable ideas, but only so many people with the timing, resources, and risk profile to pursue them, and few actually do.

Take the energy industry. Saving energy is “good” by some definitions: for us, for the planet, and for business. There are lots of ideas about how to do this, many of them shared publicly in the academic archive; many are even disclosed in patents. That industry seems to be doing just fine. Growing many businesses, with lots of competition, seems to help everyone, even if most businesses go under.

So the next time you’re thinking about starting a business, consider talking to people about your opportunity. Yes, learn about its profit potential, but also learn about its other potential. What harm will it do? What jobs will it kill? What joy will it bring? What is the full spectrum of side effects it will have, good or bad? Are there ways to tweak the opportunity to bend the curve for the better, while also encouraging profitability?

And don’t just ask the software people in your software bubble. Ask an ethicist. Solicit a sociologist. Argue with an anthropologist. Inquire with an information scientist. Persuade a political scientist. Try getting the perspective of every discipline rather than just your own. Does everyone see a win? Go for it. Does someone see an evil? At least you’ll know it, going in.

The black hole of software engineering research

Over the last couple of years as a startup CTO, I’ve made a point of regularly bringing software engineering research into practice. Whether it’s been bleeding edge tools, or major discoveries about process, expertise, or bug triage, it’s been an exciting chance to show professional engineers a glimpse of what academics can bring to practice.

The results have been mixed. While we’ve managed to incorporate much of the best evidence into our tools and practices, most of what I present just isn’t relevant, isn’t believed, or isn’t ready. I’ve demoed exciting tools from research, but my team has found them mostly useless, since they aren’t production ready. I’ve referred to empirical studies that strongly suggest the adoption of particular practices, but experience, anecdote, and context have usually won out over evidence. And honestly, many of our engineering problems simply aren’t the problems that software engineering researchers are investigating.

Why is this?

I think the issue is more than just improving the quality and relevance of research. In fact, I think it’s a system-level issue in the interaction between academia and industry. Here’s my argument:

  • Developers aren’t aware of software engineering research.
  • Why aren’t they aware? Most explicit awareness of research findings comes through coursework, and most computer science students take very little coursework in software engineering.
  • Why don’t they take a lot of software engineering? Software engineering is usually a single required course, or even just an elective. There also aren’t a large number of software engineering masters programs through which much of the research might be disseminated.
  • Why are there so few courses? Developers don’t need a professional masters degree in order to get high paying engineering jobs (unlike other fields, like HCI, where professional masters programs are a dominant way to teach practitioners the latest research and engage them in the academic community). This means less need for software engineering faculty, and fewer software engineering Ph.D. students.
  • Why don’t students need coursework to get jobs? There’s huge demand for engineers, even complete novice ones, and many of them know enough about software engineering practice through open source and self-guided projects to quickly learn software engineering skills on the job.
  • Why is it sufficient to learn on the job? Most software engineering research focuses on advanced automated tools for testing and verification. While this is part of software engineering practice, there are many other aspects of software engineering that researchers don’t investigate, limiting the relevance of the research.
  • Why don’t software engineering researchers investigate more relevant things? Many of the problems in software engineering aren’t technical problems, but people problems. There aren’t a lot of faculty or Ph.D. students with the expertise to study these people problems, and many CS departments don’t view social science on software engineering as computer science research.
  • Why don’t faculty and Ph.D. students have the expertise to study the people problems? Faculty and Ph.D. students ultimately come from undergraduate programs that inspire students to pursue a research area. Because there aren’t that many opportunities to learn about software engineering research, there aren’t that many Ph.D. students who pursue software engineering research.

The effect of this vicious cycle? There are few venues for disseminating software engineering research discoveries, and few reasons for engineers to study the research themselves.

How do we break the cycle? Here are a few ideas:

  1. Software engineering courses need to present more research. Show off the cool things we invent and discover!
  2. Present more relevant research. Show the work that changes how engineers do their job.
  3. Present and offer opportunities to engage in research. We need more REU students!

This obviously won’t solve all of the problems above, but it’s a start. At the University of Washington, I think we do pretty well with 1) and 3). I occasionally teach a course in our Informatics program on software engineering that does a good job with 2). But there’s so much more we could be doing in terms of dissemination and impact.

Programming languages are the least usable, but most powerful human-computer interfaces ever invented

Really? I often think this when I’m in the middle of deciphering some cryptic error message, debugging some silent failure, or figuring out the right parameter to send to some poorly documented function. I tweeted this exact phrase last week while banging my head against a completely inscrutable error message in PHP. But is it really the case that programming languages aren’t usable?

Yes and no (as with all declarative statements in tweets). But I do think that only some of these flaws are fundamental. Take, for example, Jakob Nielsen’s classic usability heuristics, one rough characterization of usability. One of the most prominent problems in user interfaces is a lack of visibility of system status: usable interfaces should provide clear, timely feedback about how user input is interpreted, so that users know what state a system is in and can decide what to do next. When you write a program, there is often a massive gulf between the instructions one writes and the later effects of those instructions on program output and the world. In fact, even a simple program can branch in so many ways that some execution paths are never even observed by the programmer who wrote the instructions, but only by whoever later executes it. Talk about delayed feedback! There are whole bodies of literature on reproducibility, testing, and debugging that try to bridge this disconnect between command and action, better exposing exactly how a program will and will not behave when executed. At best, these tools provide information that guides programmers toward understanding, but this understanding will always require substantial effort, because of the inherent complexity in program execution that a person must comprehend to take action.
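To make that gulf concrete, here’s a toy sketch in Python (the function and its bug are invented for illustration): a branch the author never exercised can hide an error until someone else executes it, long after the code was written.

```python
def classify(n):
    """Label an integer. One branch hides a bug that only surfaces
    for inputs the author never happened to try."""
    if n >= 0:
        return "non-negative"
    else:
        # This path is never executed by the quick checks below, so
        # the type error (concatenating str and int) lurks until some
        # later user passes a negative number.
        return "negative: " + n  # bug: should be "negative: " + str(n)

# The author only ever observes the happy path:
print(classify(3))  # prints "non-negative"
# classify(-1) would raise a TypeError at run time -- exactly the
# kind of delayed feedback the visibility heuristic warns about.
```

Testing tools that measure branch coverage exist largely to surface these unobserved paths before someone else finds them the hard way.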

Another popular heuristic is Nielsen’s “match between system and the real world”: the system should use concepts, phrases, and metaphors that are familiar to the user. There’s really nothing more in opposition to this design principle than requiring a programmer to speak only in terms that a computer can reliably and predictably interpret. But the need to express ideas in computational terms is really inherent to what programming languages are. There are some ways that this can be improved through good naming of identifiers, choice of language paradigm, and a selection of language constructs that reflect the domain that someone is trying to code against. In fact, you might consider the evolution of programming languages to be a slow but deliberate effort to define semantics that better model the abstractions found in the world. We’ll always, however, be expressing things in computational terms and not the messy, ambiguous terms of human thought.

Programming languages fail to satisfy many other heuristics, but can be made significantly more usable with tools. For example, error prevention and error actionability can often be met through careful language and API design. In fact, some might argue that what programming languages researchers are actually doing when they contribute new abstractions, semantics, and formalisms is trying to minimize errors and maximize error comprehensibility. Static type checking, for example, is fundamentally about providing concrete, actionable feedback sooner rather than later. This is very much a usability goal. Similarly, Nielsen’s “recognition rather than recall” heuristic has been met not through language design, but carefully designed and evolved features like autocomplete, syntax highlighting, source file outlines, class hierarchy views, links to callers and callees in documentation, and so on.
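The claim that error prevention can be met through careful API design can be sketched with a small, hypothetical example: a string-typed parameter invites typos that surface only at runtime, while an enum makes the invalid state unrepresentable and lets tools flag the mistake immediately (the `Direction` API here is invented for illustration):

```python
from enum import Enum

class Direction(Enum):
    SEND = "send"
    RECEIVE = "receive"

def open_channel(direction: Direction) -> str:
    # Callers pick Direction.SEND or Direction.RECEIVE; a misspelling
    # like Direction.RECIEVE fails loudly and immediately (and a static
    # checker flags it before the program ever runs), whereas a raw
    # string "recieve" would silently flow through until much later.
    return f"channel opened for {direction.value}"

print(open_channel(Direction.SEND))  # prints "channel opened for send"
```

This is the same usability goal as static type checking in the paragraph above: move the feedback about a mistake as close as possible to the moment the mistake is made.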

There are other usability heuristics for which programming languages might even surpass the usability of their graphical user interfaces. For example, what user interface better supports undo, redo, and cancel than programming languages? With modern text editors and version control, what change can’t be undone, redone, or canceled, at least during design time? Our best programming languages are also perhaps the most consistent, flexible, minimalist, and beautiful user interfaces that exist. These are design principles that most graphical user interfaces struggle to even approach, as they often have to make sacrifices in these dimensions to achieve a better fit with the messy reality of the world.

So while programming languages might lack usability along some dimensions, with considerable effort in careful tool design, they can approach the usability of graphical interfaces (partly through the use of graphical user interfaces themselves). In fact, there’s been a resurgence of research inventing precisely these kinds of tools (some by me). These usability improvements can greatly increase the accessibility, learnability, and user efficiency of languages (even if they only ever approach the usability of graphical user interfaces).

Now to the second part of my claim: are programming languages really the most “powerful” user interfaces ever invented? This of course depends on what we mean by power. They are certainly the most expressive user interfaces we have, in that we can create more with them than we can with any other user interface (Photoshop is expressive, but we can’t make Photoshop with Photoshop). They might also be the most powerful interfaces in a political sense: the infrastructure we can create with them can shape the very structure of human communication, government, commerce, and cultural production.

But if by power we mean the ability to directly facilitate a particular human goal, there are many tasks for which programming languages are a terrible tool. Sending emails, having a video chat, playing a game, reading the news, managing a to-do list, etc. are activities best supported by applications explicitly designed around these activities and their related concepts, not the lower level abstractions of programming languages (or even APIs). In this way, they are probably the least powerful kind of user interface, since they only support human goals indirectly, by facilitating the creation of directly useful tools.

If there’s any truth to the title of this post, it’s the implied idea that programming languages are just another type of human-computer interface in the rich and varied design space of user interface paradigms. This has some fun implications. For example, programmers are users too, and they deserve all of the same careful consideration that we give non-programmers using non-programming interfaces. This also means that programming languages researchers are really studying user interface design, like HCI researchers do. We might not find two fields more dissimilar in method or culture, but their questions and the phenomena they concern are actually remarkably aligned.

For those of you who know me and my work, none of these should be surprising claims. All of my research begins with the premise that programming languages are user interfaces. It’s why, despite the fact that I principally study code, coding, and coders, I pursued a Ph.D. in HCI and not PL, software engineering, or some other discipline concerned with these phenomena. Programming languages are, and will forever be to me, the most fascinating kind of user interface to study and reinvent.

Startup life versus faculty life

As some of you might have heard, this past summer I co-founded a company based on my former Ph.D. student Parmit Chilana‘s research on LemonAid along with her co-advisor Jake Wobbrock. I’m still part time faculty, advising Ph.D. students, co-authoring papers, and chairing a faculty search committee, but I’m not teaching, nor am I doing my normal academic service load. My dean and the upper administration have been amazingly supportive of this leave, especially given that it began in the last year of my tenure clock.

This is a fairly significant detour from my academic career, and I expected the makeup of daily activities that I’m accustomed to as a professor would change substantially. In several interesting ways, I couldn’t have been more wrong: doing a startup is remarkably similar to being a professor in a technical domain, at least with respect to the skills it requires. Here’s a list of parallels I’ve found striking as a founder of a technology company:

  • Fundraising. I spend a significant amount of my time seeking funding, carefully articulating problems with the status quo and how my ideas will solve these problems. The surface features of the work are different—in business, we pitch these ideas in slide decks and elevators, whereas in academia, we pitch them as NSF proposals and DARPA white papers—but the essence of the work is the same: it requires understanding the nature of a problem well enough that you can persuade someone to provide you resources to understand it more deeply and ultimately address it.
  • Recruiting. As a founder, I spend a lot of time recruiting talent to support my vision. As a professor, I do almost the exact same thing: I recruit undergrad RAs, Ph.D. students, and faculty members, trying to convince them that my vision, or my school or university’s vision, is compelling enough to join my team instead of someone else’s.
  • Ambiguity. In both research and startups, the single biggest cognitive challenge is dealing with ambiguity. In both, ambiguity is everywhere: you have to figure out what questions to ask, how to answer them, how to gather data that will inform these questions, how to interpret the data you get to make decisions about how to move forward. In research, we usually have more time to grapple with this ambiguity and truly understand it, but the grappling is of the same kind.
  • Experimentation. Research requires a high degree of iteration and experimentation, driven by carefully formed hypotheses. Startups are no different. We are constantly generating hypotheses about our customers, our end users, our business plan, our value, and our technology, and conducting experiments to verify whether the choice we’ve made is a positive or negative one.
  • Learning. Both academia and startups require a high degree of learning. As a professor, I’m constantly reading and learning about new discoveries and new technologies that will change the way I do my own research. As a founder, and particularly as a CTO, I find myself engaging in the same degree of constant learning, in an effort to perfect our product and our understanding of the value it provides.
  • Teaching. The teaching I do as a CTO is comparable to the teaching I do as a Ph.D. advisor in that the skills I’m teaching are less about specific technologies or processes, and more about ways of thinking about and approaching problems.
  • Service. The service that I do as a professor, which often involves reviewing articles, serving on curriculum committees, and providing feedback to students, is similar to the coffee chats I have with aspiring entrepreneurs, the feedback I provide to other startups about why I do or don’t want to adopt their technology, and the discussions I have with Seattle area angels and VCs about the type of learning that aspiring entrepreneurs need to succeed in their ventures.

Of course, there are also several sharp differences between faculty work and co-founder work:

  • The pace. In startups, time is the scarcest resource. There are always way too many things that must be done, and far too few people and hours to get things done. That makes triage and prioritization the most taxing and important parts of the work. In research, when there’s not enough time to get something done, there’s always the freedom to take an extra week to figure it out. (To my academic friends and students, it may not feel like you have extra time, but you have much more flexibility than those in business do).
  • The outcomes. The result of the work is one of the most obvious differences. If we succeed at our startup, the result will be a slight shift in how the markets we’re targeting will work and hopefully a large profit demonstrating the value of this shift. In faculty life, the outcomes come in the form of teaching hundreds, potentially thousands, of students lifelong thinking skills, and in producing knowledge and discoveries that have lasting value to humanity for decades, or even centuries. I still personally find the latter kinds of impact much more valuable, because I think they’re more lasting than the types of ephemeral changes that most companies achieve (unless you’re a Google, Facebook, or Twitter).
  • The consequences. When I fail at research, at worst it means that a Ph.D. student doesn’t obtain the career they wanted, or taxpayers have funded some research endeavor that didn’t lead to any new knowledge or inventions. That’s actually what makes academia so unique and important: it frees scholars to focus on truth and invention without the artificial constraint of time. If I fail at this startup, investors have lost millions of dollars, several people will lose their jobs, and I’ll have nothing to show for it (other than blog posts like this!). This also means that it’s necessary to constantly make decisions on limited information with limited confidence.

Now, you might be wondering which I prefer, given how similar the skills required in the two jobs are. I think this is actually a matter of very personal taste that has largely to do with the form of impact one wants to have. You can change markets or you can change minds, but you generally can’t change both. I tend to find it much more personally rewarding to change minds through teaching and research, because the changes feel more permanent and personal to me. Changing a market is nice and can lead to astounding financial rewards and a shift in how millions of people conduct a part of their lives, but this change feels more fleeting and impersonal. I think I have a longing to bring meaning and understanding to people’s lives that’s fundamentally at odds with the profit motive.

That said, I’m truly enjoying my entrepreneurial stint. I’m learning an incredible amount about capitalism, business, behavioral economics, and the limitations of research and industry. When I return as full time faculty, I think I’ll be in a much better position to do the things that only academics can do, and to argue for why universities and research are of critical importance to civilization.

John Carmack discusses the art and science of software engineering

I’m not really a hard core gamer anymore, but my fascination with programming did begin with video games (and specifically, rendering algorithms). So when I saw John Carmack’s 2012 QuakeCon keynote show up in my feed, I thought I’d listen to a bit of it and learn a bit about the state of game design and development.

What I heard instead was a hacker’s hacker talk about his recent realization that software engineering is actually a social science. Across 10 minutes, he covers many human aspects of developer mistakes, programming language design, static analysis, code reviews, developer training, and cost/benefit analyses. The emphasis throughout is mine (and I also transcribed this, so I apologize for any mistakes).

(Thanks to Vlad for the Russian translation of this post: Наука программирования via Android Recovery).

In trying to make the games faster, which has to be our priority going forward, we’ve made a lot of mistakes already with Doom 4, a lot of it is water under the bridge, but prioritizing that can help us get the games done faster, just has to be where we go. Because we just can’t do this going, you know, six more years, whatever, between games.

On the software development side, you know there was an interesting thing at E3, one of the interviews I gave, I had mentioned something about how, you I’ve been learning a whole lot, and I’m a better programmer now than I was a year ago and the interviewer expressed a lot of surprise at that, you know after 20 years and going through all of this that you’d have it all figured out by now, but I actually have been learning quite a bit about software development, both on the personal craftsman level but also paying more attention by what it means on the team dynamics side of things. And this is something I probably avoided looking at squarely for years because, it’s nice to think of myself as a scientist engineer sort, dealing in these things that are abstract or provable or objective on there and there.

In reality in computer science, just about the only thing that’s really science is when you’re talking about algorithms. And optimization is an engineering. But those don’t actually occupy that much of the total time spent programming. You know, we have a few programmers that spend a lot of time on optimizing and some of the selecting of algorithms on there, but 90% of the programmers are doing programming work to make things happen. And when I start to look at what’s really happening in all of these, there really is no science and engineering and objectivity to most of these tasks. You know, one of the programmers actually says that he does a lot of monkey programming—you know beating on things and making stuff happen. And I, you know we like to think that we can be smart engineers about this, that there are objective ways to make good software, but as I’ve been looking at this more and more, it’s been striking to me how much that really isn’t the case.

Aside from these that we can measure, that we can measure and reproduce, which is the essence of science to be able to measure something, reproduce it, make an estimation and test that, and we get that on optimization and algorithms there, but everything else that we do, really has nothing to do with that. It’s about social interactions between the programmers or even between yourself spread over time. And it’s nice to think where, you know we talk about functional programming and lambda calculus and monads and this sounds all nice and sciency, but it really doesn’t affect what you do in software engineering there, these are all best practices, and these are things that have shown to be helpful in the past, but really are only helpful when people are making certain classes of mistakes. Anything that I can do in a pure functional language, you know you take your most restrictive scientific oriented code base on there, in the end of course it all comes down to assembly language, but you could do exactly the same thing in BASIC or any other language that you wanted to.

One of the things that’s also fed into that is my older son’s starting to learn how to program now. I actually tossed around the thought of should I maybe have him try to learn Haskell as a 7 year old or something and I decided not to, that I, you know, I don’t think that I’m a good enough Haskell programmer to want to instruct anybody in anything, but as I start thinking about how somebody learns programming from really ground zero, it was opening my eyes a little bit to how much we take for granted in the software engineering community, really is just layers of artifice upon top a core fundamental thing. Even when you go back to structured programming, whether it’s while loops and for loops and stuff, at the bottom when I’m sitting thinking how do you explain programming, what does a computer do, it’s really all the way back to flow charts. You do this, if this you do that, if not you do that. And, even trying to explain why do you do a for loop or what’s this while loop on here, these are all conventions that help software engineering in the large when you’re dealing with mistakes that people make. But they’re not fundamental about what the computer’s doing. All of these are things that are just trying to help people not make mistakes that they’re commonly making.

One of the things that’s been driven home extremely hard is that programmers are making mistakes all the time and constantly. I talked a lot last year about the work that we’ve done with static analysis and trying to run all of our code through static analysis and get it to run squeaky clean through all of these things and it turns up hundreds and hundreds, even thousands of issues. Now its great when you wind up with something that says, now clearly this is a bug, you made a mistake here, this is a bug, and you can point that out to everyone. And everyone will agree, okay, I won’t do that next time. But the problem is that the best of intentions really don’t matter. If something can syntactically be entered incorrectly, it eventually will be. And that’s one of the reasons why I’ve gotten very big on the static analysis, I would like to be able to enable even more restrictive subsets of languages and restrict programmers even more because we make mistakes constantly.

One of the things that I started doing relatively recently is actually doing a daily code review where I look through the checkins and just try to find something educational to talk about to the team. And I annotate a little bit of code and say, well actually this is a bug discovered from code review, but a lot of it is just, favor doing it this way because it’s going to be clearer, it will cause less problems in other cases, and it ruffled, there were a few people that got ruffled feathers early on about that with the kind of broadcast nature of it, but I think that everybody is appreciating the process on that now. That’s one of those scalability issues where there’s clearly no way I can do individual code reviews with everyone all the time, it takes a lot of time to even just scan through what everyone is doing. Being able to point out something that somebody else did and say well, everybody should pay attention to this, that has some real value in it. And as long as the team is agreeable to that, I think that’s been a very positive thing.

But what happens in some cases, where you’re arguing a point where let’s say we should put const on your function parameters or something, that’s hard to make an objective call on, where lots of stuff we can say, this indirection is a cache miss, that’s going to cost us, it’s objective, you can measure it, there’s really no arguing with it, but so many of these other things are sort of style issues, where I can say, you know, over the years, I’ve seen this cause a lot problems, but a lot of people will just say, I’ve never seen that problem. That’s not a problem for me, or I don’t make those mistakes. So it has been really good to be able to point out commonly on here, this is the mistake caused by this.

But as I’ve been doing this more and more and thinking about it, that sense that this isn’t science, this is just trying to deal with all of our human frailties on it, and I wish there were better ways to do this. You know we all want to become better developers and it will help us make better products, do a better job with whatever we’re doing, but the fact that it’s coming down to training dozens of people to do things in a consistent way, knowing that we have programmer turnover as people come and go, new people coming and looking at the code base and not understanding the conventions, and there are clearly better and worse ways of doing things but it’s frustratingly difficult to quantify.

That’s something that I’m spending more and more time looking at. I read NASA’s software engineering laboratory reports and I can’t seem to get any real value out of a lot of those things. The things that have been valuable have been automated things, things that don’t require a human to have some analysis, have some evaluation of it, but just say, enforced or not enforced. And I think that that’s where really where things need to go as larger and larger software gets developed. And it is striking the scale of what we’re doing now. If you look back at the NASA reports and the scale of things and they considered large code bases to be things with three or four hundred thousand lines of code. And we have far more than that in our game engines now. It’s kind of fun to think that the game engines, things that we’re playing games on, have more sophisticated software than certainly the things that launch people to the moon and back and flew the shuttle, ran Skylab, run the space station, all of these massive projects on there are really outdone in complexity by any number of major game engine projects.

And the answer is as far as I can tell really isn’t out there. With the NASA style development process, they can deliver very very low bug rates, but it’s at a very very low productivity rate. And one of the things that you wind up doing in so many cases is cost benefit analyses, where you have to say, well we could be perfect, but then we’ll have the wrong product and it will be too late. Or we can be really fast and loose, we can go ahead and just be sloppy but we’ll get something really cool happening soon. And this is one of those areas where there’s clearly right tools for the right job, but what happens is you make something really cool really fast and then you live with it for years and you suffer over and over with that. And that’s something that I still don’t think that we do the best job at.

We know our code is living for, realistically, we’re looking at a decade. I tell people that there’s a good chance that whatever you’re writing here, if it’s not extremely game specific, may well exist a decade from now and it will have hundreds of programmers, looking at the code, using it, interacting with it in some way, and that’s quite a burden. I do think that it’s just and right to impose pretty severe restrictions on what we’ll let past analysis and what we’ll let into it, but there are large scale issues at the software API design levels and figuring out things there, that are artistic, that are craftsman like on there. And I wish that there were more quantifiable things to say about that. And I am spending a lot of time on this as we go forward.

computing, jobs, and lumps of labor

For a while now, there have been two competing narratives around jobs and computing. One is that computing will bring an amazing influx of new jobs by creating new opportunities, new markets, and new ideas. The other is that computing, far from being a job source, is actually a job sink, replacing manufacturing and information services jobs with machines. Thomas Edsall discusses these two narratives in a recent opinion piece in the NY Times, bringing together several essays and blog posts on the subject.

The most compelling idea from this post (and most of it was compelling), was the “lump of labor” fallacy, which I hadn’t heard of before. This is the idea that there is a fixed amount of work available in the world and it just gets shifted around between cities, companies, and countries. Economists apparently show little support for this idea, as history has repeatedly shown that innovations typically create more work, rather than less.

Andrew McAfee argues that information technology is different. All past innovations, he argues, automated things that humans primarily could not do (for example, lifting things we could not lift, or transporting us to places we could not reach). In contrast, information technology is beginning to be capable of doing many information related things that humans can do, in addition to all of the information related things we can’t do. Therefore, the only rational thing for employers to do as software becomes functional and cheap enough is to replace people with machines.

Is he right? It’s certainly compatible with a rejection of the lump of labor fallacy. Computing can create more work just like any other innovation, but McAfee might argue that the new work can also be done by computers. For example, the fact that I buy a new iPhone every two years means that there do need to be people to manufacture it and fix it when it breaks. But the very technologies embedded in the device are the same ones that enable its manufacturing to be almost completely automated and allow me to get a substantial amount of support from Q&A forums archived on the web, rather than using human technical support. On the other hand, that automation and information access requires a lot of energy, a lot more manufacturing, and a great deal of human time to maintain the Internet. It seems possible that much of this could be taken over by machines eventually.

Can all of the work really be shifted to non-humans? Let’s do a thought experiment to see. Consider a small remote village of 100 humans run entirely by robots and powered by an effectively infinite supply of solar energy. One of the humans at any given time is an expert roboticist who can maintain and repair all of the robots independently. This roboticist trains one of the village children to replace her, so that when that roboticist dies, there is another to take over. The roboticist has immense power because the other 99 people depend on her to keep the robotic work force functional (including the robotic work force that keeps the rest of the robotic workforce functional). The result is that the 99 people don’t work (because there’s no work to do), and live a life of leisure. The only reason that everyone survives is because of the roboticist’s knowledge and benevolence, and because nearly all of the work has been shifted to the robots. In fact, the robots may even become intelligent enough to fix and maintain themselves one day, making even the roboticist obsolete.

There are most certainly things missing from this little story that make it implausible. For example, the population wouldn’t stay at 100 people, especially with everyone living such a life of leisure. Assuming the robots could reproduce themselves and gather their own natural resources, the population would continue to grow until Earth was out of resources, as it does now.

The more significant missing element, I think, is boredom. In such a life of leisure, wouldn’t people create work for themselves, just to be entertained or to find meaning? I could imagine, for example, one particularly inquisitive villager deciding to write a book on the meaning of life in a world where there is no human work. She might outsource the editing, printing, and binding of her book to the robots, but would she outsource the audience? The critical reflections? The impassioned rebuttals? Surely the villagers would create work for themselves, if only to create meaningful social bonds and avoid listlessness.

Perhaps the reason that “lump of labor” is a fallacy, even for computing, is that work isn’t a separate entity from humanity that can be shifted to and from humanity. Humanity is the source of work. Computing may eliminate forms of work that we are used to in present day society, but we will inevitably find ways to occupy ourselves otherwise. Perhaps it’s just the disruptive transitions that are painful, where the middle class struggles, starves, and loses, only to be motivated by their hunger to create new work with which to fill their bellies.