I am tenured

I am now a tenured professor.

After working towards this for 15 years, it’s surreal to type that simple sentence. When I first applied to Ph.D. programs in 2001, it felt like a massive, nearly unattainable goal, with thousands of little challenges and an equal number of opinions about how I should approach them. Tenured professors offered conflicting advice about how to survive the slog. I watched untenured professors crawl towards the goal, working constant nights and weekends, only to have a grant proposal declined or a paper rejected, or worst of all, a tenure case denied. I had conversations with countless burned-out students, turned off by the relentless, punishing meritocracy, regretting the time they had put into a system that rewards not people, but ideas.

Post-tenure, what was a back-breaking objective has quickly become a hard-earned state of mind. Tenure is the freedom to follow my intellectual curiosity without consequence. It is the liberty to find the right answers to questions rather than the quick ones. It’s that first step out of the car, after a long drive towards the ocean, beholding the grand expanse of unknown possibilities knowable only with time and ingenuity. It is a kind of liberty that exists in no other profession, and now that I have it and feel it, it seems an unquestionably necessary part of being an effective scientist and scholar.

I’ve talked frequently with my colleagues about the costs and benefits of tenuring researchers. Before having tenure, it always seemed unnecessary. Give people ten-year contracts, providing enough stability to allow for exploration, but reserving the right to boot unproductive scholars. Or perhaps do away with it altogether, requiring researchers to continually prove their value, as untenured professors must. A true meritocracy requires continued merit, does it not?

These ideas seem naive now. If I were to lose this intellectual freedom, it would constrain my creativity, politicize my pursuits, and, in a strange way, depersonalize my scholarship, requiring it to be acceptable to my colleagues, in all the ways that it threatened to, and sometimes did, before tenure. Fear warps knowledge. Tenure is freedom from fear.

For my non-academic audi­ence, this reflec­tion must seem awfully priv­i­leged. With or with­out tenure, pro­fes­sors have vastly more free­dom than really any other pro­fes­sion. But hav­ing more free­dom isn’t the same as hav­ing enough. Near absolute free­dom is an essen­tial ingre­di­ent of the work of dis­cov­ery, much like a teacher must have prep time, a ser­vice worker must have lunch breaks, an engi­neer must have instru­ments, and a doc­tor must have knowledge.

And finally, one caveat: tenure for researchers is not the same as tenure for teachers. Freedom may also be an ingredient of successful teaching, in that it allows teachers to discuss unpopular ideas, and even opinions, without retribution. But it may be necessary for different reasons: whereas fear of retribution warps researchers’ creation of knowledge, it warps teachers’ dissemination of that knowledge.

Gidget, a 21st century approach to programming literacy

Over the past two years, my Ph.D. student Mike Lee has been working on Gidget, a new way to give teens self-efficacy in programming. I’m proud to announce that Gidget is now available for anyone to play at www.helpgidget.com. Give it a try!

The game takes a very different approach than existing learning technologies for programming. Rather than trying to motivate kids through creativity (as in Scratch and Alice), provide instruction through tutorials (like Khan Academy and Codecademy), or inject programming into traditional game mechanics (as in CodeCombat or LightBot), Gidget attempts to translate programming itself into a game, providing a sequence of debugging puzzles for learners to solve. It does this, however, with a particular learning objective in mind: teach players that computers are not omniscient, flawless, and intelligent machines, but rather fast, reliable, and mostly ignorant machines that are incredibly powerful problem-solving tools. The game’s goal is not necessarily for players to learn to code (though this does happen in spades), but to teach players that programmers are the ones who give software its magic, and that they, too, could be a code magician.
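To give a tiny flavor of the misconception the game targets, here is a sketch in plain Python (not Gidget’s actual language; the function and data are made up): the computer is fast and reliable, but it knows nothing beyond the literal instructions it is given.

    def count_baskets(items):
        # Count how many items are exactly the string "basket".
        baskets = 0
        for item in items:
            if item == "basket":
                baskets += 1
        return baskets

    # The machine isn't being clever about near-misses; it faithfully ignores
    # "Basket" and "baskets" because nothing told it to treat them as matches.
    print(count_baskets(["basket", "Basket", "baskets"]))  # prints 1

Realizing that the machine is this literal, and that the programmer supplies all of the apparent intelligence, is exactly the shift in mindset the game is after.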

Our efforts are part of a much larger national con­ver­sa­tion about pro­gram­ming and dig­i­tal lit­er­acy. The basic obser­va­tion, which many have noted over the past two decades, is that pro­fes­sional pro­gram­mers aren’t the only peo­ple who pro­gram. Any­one who has to manip­u­late large vol­umes of infor­ma­tion is at some point going to write a pro­gram. Gid­get is explic­itly designed to give chil­dren, teens, and really any­one with an inter­est in know­ing more about pro­gram­ming, the con­fi­dence they need to learn more.

Try the game your­self. Share it with your kids. If you teach a CS1 class, try giv­ing it to your stu­dents as their first assign­ment. Send us feed­back in the game directly or write me with ideas.

Computer science, information science, and the TI-82


I first learned to code in 7th grade. Our math teacher required a graph­ing cal­cu­la­tor and in the first few weeks of class, he showed us briefly how it could solve math prob­lems if we gave it the right set of instruc­tions. Awe­some, right? Not really. What could pos­si­bly be more bor­ing than learn­ing a cryp­tic, unfor­giv­ing set of machine instruc­tions to sim­ply do the math I could already do by hand?

That all changed one day when a classmate showed me how to play Tetris on his TI-82. When I asked him how this was possible, he said his brother had made it using that “Program” button that seemed so useless. Suddenly, the calculator became much more to me. It wasn’t a device for doing calculations, it was a Game Boy. I found my owner’s manual, cut out and mailed the order form for a LINK cable so that I could transfer the program to my calculator (my classmate had transcribed the program by hand from his brother’s calculator!). Four weeks later the cable arrived in the mail, and I was in business, ready to play my very own poor man’s version of a Game Boy in math class!

Unfor­tu­nately, the game was abysmally slow. I could watch the pieces erase and redraw them­selves, pixel by pixel. The glacial pace of the ren­der­ing made the game impos­si­ble to play. Some­thing in me found this unac­cept­able and I spent the next sev­eral weeks try­ing to fig­ure out pre­cisely how the game worked, hop­ing that I might find some way to make it faster and more playable.

I learned about variables, which were for storing information about the state of the game. I learned about control statements, which allowed the game to change its response based on that state. I learned about user interfaces, which govern how information from the player could be provided, structured, and re-presented. I learned about data structures, which helped organize information about the shape of Tetris pieces and the game grid. Most of all, I learned about software architecture, which not only helped keep the monstrous 5,000 lines of TI-BASIC, viewed through the 8-line display, organized and understandable, but also determined how information flowed from the player, to the game, and back to the player.
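To make those ideas concrete, here is a minimal sketch of that kind of game kernel, in Python rather than TI-BASIC and with made-up names; it is an illustration of the concepts, not the original program:

    # Variables and data structures hold the game's state.
    GRID_WIDTH, GRID_HEIGHT = 10, 8
    grid = [[0] * GRID_WIDTH for _ in range(GRID_HEIGHT)]  # the settled blocks
    piece = {"row": 0, "col": 4, "shape": [(0, 0), (0, 1), (1, 0), (1, 1)]}  # a square piece

    def collides(piece, grid):
        # Control logic: would the piece overlap settled blocks or the floor?
        for dr, dc in piece["shape"]:
            r, c = piece["row"] + dr, piece["col"] + dc
            if r >= GRID_HEIGHT or grid[r][c]:
                return True
        return False

    def step(piece, grid):
        # One tick of the game loop: move the piece down, or lock it in place.
        piece["row"] += 1
        if collides(piece, grid):
            piece["row"] -= 1
            for dr, dc in piece["shape"]:
                grid[piece["row"] + dr][piece["col"] + dc] = 1  # information flows back into the grid

    for _ in range(10):  # a few ticks of the "game"
        step(piece, grid)

The architecture question is simply where state like this lives and how information moves between the player’s input, functions like these, and whatever gets drawn back to the display.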

I emerged from those ardu­ous weeks not only with a much faster ver­sion of the game (using the text con­sole instead of the graph to reach inter­ac­tive speeds), but also a real­iza­tion that has shaped my life for over two decades: infor­ma­tion is a thing, and algo­rithms and data struc­tures are how we cre­ate, struc­ture, process, search and rea­son about it.

But what is the relationship between computing and information? For instance, is all programming really about information, or are there some kinds of programming that are about something else? How much overlap is there between computer science and information science as academic disciplines? Should departments of computer science and information science really be separate? To my 12-year-old self, the relationship between these two perspectives was powerful and exciting, because it represented untapped potential for creativity. Twenty-one years later, as a professor of information science fundamentally concerned with computer science, the relationship between computing and information remains deeply fascinating as an intellectual question.

Let’s con­sider the dis­ci­pli­nary rela­tion­ship first. Both com­puter sci­ence and infor­ma­tion sci­ence think about search, both study how to orga­nize infor­ma­tion, both are con­cerned with cre­at­ing sys­tems that help peo­ple access, use, and cre­ate infor­ma­tion. Many of the fun­da­men­tal ques­tions in com­puter sci­ence are fun­da­men­tally about infor­ma­tion. What infor­ma­tion is com­putable? How can infor­ma­tion be stored and retrieved? How can infor­ma­tion be processed? If com­puter sci­ence is about more than infor­ma­tion, what else is it about?

Perhaps the difference isn’t the phenomena they investigate, but the media and methods through which the phenomena flow. Computer scientists are strictly concerned with information as it is represented and processed in Turing machine equivalents, whereas information scientists are more concerned with how information is represented in minds, on paper, through metadata, and in other, older media. Computer scientists also often view information as context free, whereas information scientists are often more interested in the context of the information than the information itself. Because of this different emphasis, the disciplines’ methods are also quite different, with computer scientists reasoning about information through mathematical abstractions, and information scientists reasoning about it through the empirical methods of social science. It’s not that the fields study something different; they just study it differently.

And what of the role of information in programming and software engineering? I’ve been writing software for twenty years now, and the more code I write, the more it becomes clear that great software ultimately arises from a deep understanding of the information that will flow through it and the contexts in which that information is produced and consumed. The rest of the details in programming—language, architecture, paradigm—are all conveniences that facilitate the management of this information flow. The right programming language, framework, or library is the one that best represents the information being processed and the types of reasoning that must be done about that information. Perhaps it’s not surprising that the biggest, most impactful software organizations in the world, such as Google, Facebook, and Baidu, are explicitly interested in organizing the world’s information, factual, social, or otherwise.

I don’t know yet whether intel­lec­tual ques­tions like these mat­ter so much to the world. Some­thing inside me, how­ever, tells me that they do, and that under­stand­ing the nature of com­put­ing, infor­ma­tion, and their over­lap might be key to under­stand­ing how and why these two ideas are hav­ing such a dra­matic impact on our world. Some­how, it also feels like these ideas aren’t sim­ply human tools for thought, but some­thing much more fun­da­men­tal, some­thing more nat­ural. Give me twenty more years and maybe I’ll have the words for it.

Programming languages are the least usable, but most powerful human-computer interfaces ever invented

Really? I often think this when I’m in the middle of deciphering some cryptic error message, debugging some silent failure, or figuring out the right parameter to send to some poorly documented function. I tweeted this exact phrase last week while banging my head against a completely inscrutable error message in PHP. But is it really the case that programming languages aren’t usable?

Yes and no (as with all declarative statements in tweets). But I do think that only some of these flaws are fundamental. Take, for example, Jakob Nielsen’s classic usability heuristics, one rough characterization of usability. One of the most prominent problems in user interfaces is a lack of visibility of system status: usable interfaces should provide clear, timely feedback about how user input is interpreted, so that users know what state a system is in and can decide what to do next. When you write a program, there is often a massive gulf between the instructions one writes and the later effects of those instructions on program output and the world. In fact, even a simple program can branch in so many ways that some execution paths are never even observed by the programmer who wrote the instructions, but only by whoever later executes it. Talk about delayed feedback! There are whole bodies of literature on reproducibility, testing, and debugging that try to bridge this disconnect between command and action, better exposing exactly how a program will and will not behave when executed. At best, these tools provide information that guides programmers toward understanding, but this understanding will always require substantial effort, because of the inherent complexity in program execution that a person must comprehend to take action.
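As a toy illustration of that gulf (the function and names are hypothetical, in Python), consider a branch that its author may never see run:

    def shipping_cost(weight_kg, destination):
        # The author exercises this branch constantly during development...
        if destination == "domestic":
            return 5.00 + 0.50 * weight_kg
        # ...but may never observe this one at all. The first feedback about a
        # mistake here could arrive months later, from someone else's invoice.
        return 20.00 + 1.25 * weight_kg

    print(shipping_cost(2.0, "domestic"))       # the path the author always sees
    print(shipping_cost(2.0, "international"))  # the path a user eventually takes

The “system status” a graphical interface would surface immediately is, for a program like this, scattered across every future execution.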

Another popular heuristic is Nielsen’s “match between system and the real world”: the system should use concepts, phrases, and metaphors that are familiar to the user. There’s really nothing more in opposition to this design principle than requiring a programmer to speak only in terms that a computer can reliably and predictably interpret. But the need to express ideas in computational terms is inherent to what programming languages are. There are some ways that this can be improved through good naming of identifiers, choice of language paradigm, and a selection of language constructs that reflect the domain that someone is trying to code against. In fact, you might consider the evolution of programming languages to be a slow but deliberate effort to define semantics that better model the abstractions found in the world. We’ll always, however, be expressing things in computational terms and not the messy, ambiguous terms of human thought.
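A small, hypothetical example of what that naming and domain matching buys (Python, with invented payroll names): both functions compute the same thing, but only the second borrows its vocabulary from the world the code is about.

    # The same computation, twice. Neither escapes computational terms, but the
    # second at least speaks in the domain's vocabulary.
    def f(a, b, c):
        return a * b * (1 - c)

    def net_pay(hours_worked, hourly_rate, tax_rate):
        gross_pay = hours_worked * hourly_rate
        return gross_pay * (1 - tax_rate)

The semantics are identical; the match with the real world lives entirely in the names and structure we choose.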

Programming languages fail to satisfy many other heuristics, but they can be made significantly more usable with tools. For example, error prevention and error actionability can often be achieved through careful language and API design. In fact, some might argue that what programming languages researchers are actually doing when they contribute new abstractions, semantics, and formalisms is trying to minimize errors and maximize error comprehensibility. Static type checking, for example, is fundamentally about providing concrete, actionable feedback sooner rather than later. This is very much a usability goal. Similarly, Nielsen’s “recognition rather than recall” heuristic has been met not through language design, but through carefully designed and evolved features like autocomplete, syntax highlighting, source file outlines, class hierarchy views, links to callers and callees in documentation, and so on.
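Here is a minimal sketch of that “sooner rather than later” idea, assuming Python with type annotations and a checker such as mypy; the function and the mistaken call are made up for illustration:

    def average(values: list[float]) -> float:
        return sum(values) / len(values)

    # average("12, 15, 18")
    # A checker like mypy flags this call before the program ever runs, roughly:
    #   error: Argument 1 to "average" has incompatible type "str"; expected "list[float]"
    # Without the annotation, the same mistake would surface only when this line executes.

The language feature (annotations) and the tool (the checker) together move the feedback from some distant runtime failure to the moment the mistake is written.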

There are other usability heuristics for which programming languages might even surpass the usability of graphical user interfaces. For example, what user interface better supports undo, redo, and cancel than a programming language? With modern text editors and version control, what change can’t be undone, redone, or canceled, at least at design time? Our best programming languages are also perhaps the most consistent, flexible, minimalist, and beautiful user interfaces that exist. These are design principles that most graphical user interfaces struggle to even approach, as they often have to make sacrifices in these dimensions to achieve a better fit with the messy reality of the world.

So while programming languages might lack usability along some dimensions, with considerable effort in careful tool design they can approach the usability of graphical interfaces (partly through the use of graphical user interfaces themselves). In fact, there’s been a resurgence of research inventing precisely these kinds of tools (some by me). These usability improvements can greatly increase the accessibility, learnability, and user efficiency of languages (even if they only ever approach the usability of graphical user interfaces).

Now to the sec­ond part of my claim: are pro­gram­ming lan­guages really the most “pow­er­ful” user inter­faces ever invented? This of course depends on what we mean by power. They are cer­tainly the most expres­sive user inter­faces we have, in that we can cre­ate more with them than we can with any other user inter­face (Pho­to­shop is expres­sive, but we can’t make Pho­to­shop with Pho­to­shop). They might also be the most pow­er­ful inter­faces in a polit­i­cal sense: the infra­struc­ture we can cre­ate with them can shape the very struc­ture of human com­mu­ni­ca­tion, gov­ern­ment, com­merce, and cul­tural production.

But if by power we mean the ability to directly facilitate a particular human goal, there are many tasks for which programming languages are a terrible tool. Sending emails, having a video chat, playing a game, reading the news, managing a to-do list, and so on are activities best supported by applications explicitly designed around those activities and their related concepts, not the lower-level abstractions of programming languages (or even APIs). In this way, they are probably the least powerful kind of user interface, since they only facilitate the creation of directly useful human tools rather than directly serving those goals themselves.

If there’s any truth to the title of this post, it’s the implied idea that programming languages are just another type of human-computer interface within the rich and varied design space of user interface paradigms. This has some fun implications. For example, programmers are users too, and they deserve all of the same careful consideration that we give non-programmers using non-programming interfaces. This also means that programming languages researchers are really studying user interface design, just as HCI researchers do. There may be no two fields more dissimilar in method or culture, but their questions and the phenomena they concern are actually remarkably aligned.

For those of you who know me and my work, none of these should be surprising claims. All of my research begins with the premise that programming languages are user interfaces. It’s why, despite the fact that I principally study code, coding, and coders, I pursued a Ph.D. in HCI and not in PL, software engineering, or some other discipline concerned with these phenomena. Programming languages are, and will forever be to me, the most fascinating kind of user interface to study and reinvent.

Startup life versus faculty life

As some of you might have heard, this past sum­mer I co-founded a com­pany based on my for­mer Ph.D. stu­dent Par­mit Chi­lana’s research on Lemon­Aid along with her co-advisor Jake Wob­brock. I’m still part time fac­ulty, advis­ing Ph.D. stu­dents, co-authoring papers, and chair­ing a fac­ulty search com­mit­tee, but I’m not teach­ing, nor am I doing my nor­mal aca­d­e­mic ser­vice load. My dean and the upper admin­is­tra­tion have been amaz­ingly sup­port­ive of this leave, espe­cially given that it began in the last year of my tenure clock.

This is a fairly sig­nif­i­cant detour from my aca­d­e­mic career, and I expected the makeup of daily activ­i­ties that I’m accus­tomed to as a pro­fes­sor would change sub­stan­tially. In sev­eral inter­est­ing ways, I couldn’t have been more wrong: doing a startup is remark­ably sim­i­lar to being a pro­fes­sor in a tech­ni­cal domain, at least with respect to the skills it requires. Here’s a list of par­al­lels I’ve found strik­ing as a founder of a tech­nol­ogy company:

  • Fundraising. I spend a significant amount of my time seeking funding, carefully articulating problems with the status quo and how my ideas will solve those problems. The surface features of the work are different—in business, we pitch these ideas in slide decks and elevators, whereas in academia, we pitch them as NSF proposals and DARPA white papers—but the essence of the work is the same: it requires understanding the nature of a problem well enough that you can persuade someone to provide you resources to understand it more deeply and ultimately address it.
  • Recruiting. As a founder, I spend a lot of time recruiting talent to support my vision. As a professor, I do almost exactly the same thing: I recruit undergrad RAs, Ph.D. students, and faculty members, trying to convince them that my vision, or my school or university’s vision, is compelling enough to join my team instead of someone else’s.
  • Ambi­gu­ity. In both research and star­tups, the sin­gle biggest cog­ni­tive chal­lenge is deal­ing with ambi­gu­ity. In both, ambi­gu­ity is every­where: you have to fig­ure out what ques­tions to ask, how to answer them, how to gather data that will inform these ques­tions, how to inter­pret the data you get to make deci­sions about how to move for­ward. In research, we usu­ally have more time to grap­ple with this ambi­gu­ity and truly under­stand it, but the grap­pling is of the same kind.
  • Exper­i­men­ta­tion. Research requires a high degree of iter­a­tion and exper­i­men­ta­tion, dri­ven by care­fully formed hypothe­ses. Star­tups are no dif­fer­ent. We are con­stantly gen­er­at­ing hypothe­ses about our cus­tomers, our end users, our busi­ness plan, our value, and our tech­nol­ogy, and con­duct­ing exper­i­ments to ver­ify whether the choice we’ve made is a pos­i­tive or neg­a­tive one.
  • Learn­ing. Both acad­e­mia and star­tups require a high degree of learn­ing. As a pro­fes­sor, I’m con­stantly read­ing and learn­ing about new dis­cov­er­ies and new tech­nolo­gies that will change the way I do my own research. As a founder, and par­tic­u­larly as a CTO, I find myself engag­ing in the same degree of con­stant learn­ing, in an effort to per­fect our prod­uct and our under­stand­ing of the value it provides.
  • Teach­ing. The teach­ing I do as a CTO is com­pa­ra­ble to the teach­ing I do as a Ph.D. advi­sor in that the skills I’m teach­ing are less about spe­cific tech­nolo­gies or processes, and more about ways of think­ing about and approach­ing problems.
  • Ser­vice. The ser­vice that I do as a pro­fes­sor, which often involves review­ing arti­cles, serv­ing on cur­ricu­lum com­mit­tees, and pro­vid­ing feed­back to stu­dents, is sim­i­lar to the cof­fee chats I have with aspir­ing entre­pre­neurs, the feed­back I pro­vide to other star­tups about why I do or don’t want to adopt their tech­nol­ogy, and the dis­cus­sions I have with Seat­tle area angels and VCs about the type of learn­ing that aspir­ing entre­pre­neurs need to suc­ceed in their ventures.

Of course, there are also sev­eral sharp dif­fer­ences between fac­ulty work and co-founder work:

  • The pace. In startups, time is the scarcest resource. There are always way too many things that must be done, and far too few people and hours to get them done. That makes triage and prioritization the most taxing and important parts of the work. In research, when there’s not enough time to get something done, there’s always the freedom to take an extra week to figure it out. (To my academic friends and students: it may not feel like you have extra time, but you have much more flexibility than those in business do.)
  • The outcomes. The result of the work is one of the most obvious differences. If we succeed at our startup, the result will be a slight shift in how the markets we’re targeting work, and hopefully a large profit demonstrating the value of this shift. In faculty life, the outcomes come in the form of teaching hundreds, potentially thousands, of students lifelong thinking skills, and in producing knowledge and discoveries that have lasting value to humanity for decades, or even centuries. I still personally find the latter kinds of impact much more valuable, because I think they’re more lasting than the types of ephemeral changes that most companies achieve (unless you’re a Google, Facebook, or Twitter).
  • The con­se­quences. When I fail at research, at worst it means that a Ph.D. stu­dent doesn’t obtain the career they wanted, or tax­pay­ers have funded some research endeavor that didn’t lead to any new knowl­edge or inven­tions. That’s actu­ally what makes acad­e­mia so unique and impor­tant: it frees schol­ars to focus on truth and inven­tion with­out the arti­fi­cial con­straint of time. If I fail at this startup, investors have lost mil­lions of dol­lars, sev­eral peo­ple will lose their jobs, and I’ll have noth­ing to show for it (other than blog posts like this!). This also means that it’s nec­es­sary to con­stantly make deci­sions on lim­ited infor­ma­tion with lim­ited confidence.

Now, you might be wondering which I prefer, given how similar the skills required by the two jobs are. I think this is actually a matter of very personal taste that has largely to do with the form of impact one wants to have. You can change markets or you can change minds, but you generally can’t change both. I tend to find it much more personally rewarding to change minds through teaching and research, because the changes feel more permanent and personal to me. Changing a market is nice and can lead to astounding financial rewards and a shift in how millions of people conduct a part of their lives, but this change feels more fleeting and impersonal. I think I have a longing to bring meaning and understanding to people’s lives that’s fundamentally at odds with the profit motive.

That said, I’m truly enjoy­ing my entre­pre­neur­ial stint. I’m learn­ing an incred­i­ble amount about cap­i­tal­ism, about busi­ness, behav­ioral eco­nom­ics, and the lim­i­ta­tions of research and indus­try. When I return as full time fac­ulty, I think I’ll be in a much bet­ter posi­tion to do the things that only aca­d­e­mics can do, and argue for why uni­ver­si­ties and research are of crit­i­cal impor­tance to civilization.

Off the grid


My brother got married at Burning Man this last Thursday to a wonderful woman. It was a beautiful ceremony, next to a fragmented metallic heart and a 150-foot, elegantly posed naked metallic woman, in 100°F heat, with thumping EDM pumping from a double-decker art car whose tattooed female DJ refused to turn the volume down so that the newlyweds could say their vows. It was exactly the wedding my brother wanted: participatory, organic, and epic.

There’s a lot I could say about Burn­ing Man as a first timer, but that’s for another blog. This is a blog about an aca­d­e­mic per­spec­tive on soft­ware and behav­ior, and so I’m going to focus on the fact that I was entirely off the grid for four straight days.

There aren’t many places in the world that you can truly dis­con­nect, with no pos­si­bil­ity of com­mu­ni­ca­tion through any medium other than speech and sight. There are a few: ten min­utes on take­off and land­ing, remote regions in third world coun­tries, and per­haps a few har­row­ing places such as the top of moun­tains and deep under the ocean. But Burn­ing Man is one of the few places with no access to com­mu­ni­ca­tion media where one can feel safe and still have access to all of the abun­dance of mod­ern society.

Burn­ing Man is also one of the few places where there’s also noth­ing to accom­plish. There’s no work that’s really nec­es­sary, no one to call, no one to coor­di­nate with, and no sched­ule, and to be truly in line with the cul­tural norms of a burn, one shouldn’t even seek these things. And so com­mu­ni­ca­tion media really have no pur­pose dur­ing a burn. The point is to sim­ply be, and do so around who­ever hap­pens to be around.

I’ve never been in such a set­ting. Espe­cially after an incred­i­bly intense week of 14 hour days of paper writ­ing for con­fer­ence dead­lines, prod­uct devel­op­ment for my startup, and a seem­ingly infi­nite list of things to prep for liv­ing in the desert for four days. It taught me a few things:

  • You’ve heard this before, but social media really is pointless. We use it both to create purpose and to fulfill it, but not to satisfy some essential need that couldn’t be satisfied in some other way. I didn’t miss all of the fleeting conversations I have on Twitter and Facebook while on the playa; in fact, not having them made me eagerly anticipate reconnecting with friends face to face, not through social media, to share my stories in high fidelity, in real life. It didn’t help that after leaving Burning Man and getting a signal, my phone screamed at me with 500 emails and hundreds of notifications: the rich interpersonal interactions I had in the desert made my phone feel like a facsimile of real life. Liking someone’s post on Facebook now feels dishonest in a way.
  • The intense drive that I usually have at work, the one that fuels my 10-hour work days and endless stream of email replies, was completely extinguished by four days in the desert. There’s something about the minimalism of Burning Man life—where every basic need is satisfied, but nothing more—that clarifies the fleeting nature of most of the purpose we create in our lives. My job is important, but it is just a job. The visions for the future of computing I pursue in my research are valuable, but they ultimately twiddle the less significant bits. This first day back at work is really hard: I feel like I’m having to reconstruct a motivation that took years to erect but days to demolish.
  • I’ve always believed this to some extent, but Burning Man reinforced it: computation is rarely, if ever, the important part of software; it’s the information that flows through it and the communication it enables that are significant. I saw this everywhere on the playa, as nearly everything had software in it. Art cars used software to bring automobiles to life, DJs used software to express emotion to hot and thirsty thousands, and nightriders used digital lights to say to wayfarers, “I’m here and this is who I am.” In a city stripped down, software is truly only a tool. A holistic education of computer scientists would make it clear that as much as computing is an end in itself for the computer scientist, it is almost always a means.
  • Burn­ing Man reminded me that it’s only been about a cen­tury since human­ity has mea­sured time and dis­lo­cated com­mu­ni­ca­tion through ICTs. It was pleas­ing to see that when you take them both away, not only do peo­ple thrive, but they actu­ally seem more human (just as when we do mea­sure time and dis­lo­cate com­mu­ni­ca­tion, we seem less so).

Of course, this was just my experience. I actually brought my daughter along too. She’s recently become addicted to texting and Instagram; as in any young middle schooler’s life, friends are everything, and so with respect to being off the grid, Ellen was miserable. She enjoyed herself in other ways, but I do think that being disconnected from her friends, whether through face-to-face encounters or photo sharing, was a major loss for her. Had her friends been in the desert with her, I think she would have had a very different experience.

I don’t know if this means any­thing in par­tic­u­lar for the future of soft­ware. I do think that as we con­tinue to dig­i­tize every aspect of human expe­ri­ence, how­ever, the hunger for mate­r­ial expe­ri­ence, face to face inter­ac­tion, and off the grid expe­ri­ences will grow, which will even­tu­ally shift cul­ture back to a more bal­anced use of com­mu­ni­ca­tion media, and in turn, cre­ate new types of soft­ware sys­tems that accom­mo­date being dis­con­nected with­out liv­ing in the desert for a week.

 

The economics of computing for all

Code.org has been get­ting some great press, and right­fully so: it’s full of great videos, great stats, and great resources. I also think it has a great mis­sion: there are hun­dreds of thou­sands of busi­nesses who need tal­ented soft­ware devel­op­ers in order to grow and pro­vide value, but these busi­nesses can’t find the engi­neers they need. More­over, peo­ple need jobs and soft­ware devel­op­ment jobs are abun­dant and high qual­ity. Hence the need for more stu­dents, more teach­ers, and more classes in com­put­ing. Win, win, right?

I don’t think so. I do believe in this mission. I do research on this mission. I feel strongly that if we don’t massively increase the number of teachers in computing, we’ll get nowhere. But I don’t think that simply increasing the number of people who can code will address this gap. This is because the problem, as code.org frames it, is one of quantity, whereas the problem is actually one of quality.

To put it simply, companies don’t need more developers; they need better developers. The Googles, Facebooks, Apples, and Microsofts of the world get plenty of applicants for jobs; they just don’t get applicants who are good enough. And the rest of the companies in the world, while they can hire, are forced to hire developers who often lack the talent to create great software, leading to a world of poor-quality, broken software. Sure, just training more developers might increase the tiny fraction who are great, but that seems like a terribly inefficient way of training more great developers.

This brings us back to teaching. We absolutely need more teachers, but more importantly we need more excellent teachers and excellent learning opportunities. We need the kind of learning infrastructure that empowers every 15-year-old who’s never seen a line of code to become as good as your typical CMU, Stanford, Berkeley, or UW CS grad, without necessarily having to go to those specific schools. (They don’t have the capacity for that kind of growth, nor should they.) We need to understand what excellent software development is, so we can discover ways to help developers achieve it.

This infrastructure is going to be difficult to create. For one, only a tiny fraction of excellent developers will choose to take a 50% pay cut to teach in a high school or university, and yet we need those engineers to impart their expertise somehow. We need to understand how to create excellent computing teachers and how to empower them to create excellent developers. We need to learn how to make computing education efficient, so that graduates in computing and information sciences have 4 years of actual practice, rather than 4 years of ineffective lectures. We need an academic climate that recognizes current modes of computing education as largely broken and ineffective for all but the best and brightest self-taught learners.

Unfortunately, all of this is going to take significant investment. The public and the most profitable of our technology companies must reach deep into their pockets to fund this research, this training, and this growth that they and our world so desperately need. And so kudos to code.org and every other bottom-up effort to democratize computing, but it’s not enough: we need real resources from the top to create real change.

No, the new iOS 6 Maps is not as good as Google Maps in sev­eral ways. There’s no end of miss­ing data, mis­placed land­marks, poorly con­structed 3D mod­els, miss­ing tran­sit infor­ma­tion, and because of the sig­nif­i­cant down­grade in infor­ma­tion qual­ity, there’s no end of hate for what many describe as a mas­sive mis­step by Apple. Some even describe it as the begin­ning of the end for the com­pany.

Of course, all of this is a bit overblown. The maps application’s user inter­face itself is much more usable than the pre­vi­ous ver­sion and in many ways, the maps them­selves are more read­able. The tran­sit plug-in fea­ture, while com­pletely use­less at the moment, might actu­ally pro­vide a bet­ter expe­ri­ence in the long term, as local apps might be bet­ter able to account for sub­tle dif­fer­ences in tran­sit infor­ma­tion accu­racy and avail­abil­ity. And while Apple is cer­tainly sev­eral years behind in devel­op­ing com­pre­hen­sive and accu­rate map infor­ma­tion, its com­plete­ness and accu­racy will inevitably improve.

The real story here is how Apple com­mu­ni­cated the change, and how soft­ware com­pa­nies com­mu­ni­cate change more gen­er­ally. If you looked only at Apple’s com­mu­ni­ca­tion, you’d think that the new maps was supe­rior in every way, rather than supe­rior in some ways and tem­porar­ily flawed in oth­ers. But most users prob­a­bly didn’t read any­thing about the change at all. They sim­ply pressed “okay” when their phone asked if they wanted to update and sud­denly their whole map­ping expe­ri­ence was different.

My girl­friend had this exact expe­ri­ence, even though I’d told her it had changed. She didn’t rec­og­nize it as the maps app at all; she thought it was a dif­fer­ent app alto­gether and won­dered where Google Maps had gone. For an exist­ing user, there are dozens of new things to learn to do even basic things and Apple pro­vided vir­tu­ally no guid­ance on what these changes were.

The larger ques­tion here is what soft­ware com­pa­nies should com­mu­ni­cate to avoid dra­matic out­bursts of vit­ri­olic hate every time they make a major change. Are release notes enough? Do appli­ca­tions need a stan­dard model for intro­duc­ing and explain­ing changes to users? To what extent should com­pa­nies be respon­si­ble for com­mu­ni­cat­ing neg­a­tive changes, such as aban­doned fea­tures and poorer accu­racy, and the ratio­nale for them? As soft­ware change becomes more inevitable and more rapid, so will the need for more care­fully explained tran­si­tions to new plat­forms, apps, and functionality.

(On a per­sonal note, I’ve found the new Maps to be quite good around Seat­tle. Yes­ter­day I asked Siri for direc­tions to my daughter’s friend’s house and not only did she find her name in the notes in the con­tact for the friend’s par­ents, but Siri found direc­tions that not only routed me around the SR-520 week­end clo­sure, but explained to me that the bridge was closed. The turn-by-turn direc­tions were fast, clear, and accu­rate and the con­tin­u­ously updat­ing ETA was quite help­ful in decid­ing whether to run the errand we’d planned on doing before we knew about the bridge clo­sure. Over­all, a vast improve­ment over the old maps.)

Reflections on conference papers and journals

For the first time in my academic career this week, I was working on a journal paper and a conference paper at the same time. This wasn’t entirely intentional; both of these papers were going to be CHI papers, but as the results and writing for one of them materialized, it became clear that not only was the audience not a fit, but I actually couldn’t fit all of the important results into the 10-page SIGCHI format. This realization, and the fact that I was working on both simultaneously, led to several realizations about how the two kinds of submissions differ.

First and foremost, the lack of a strict length restriction on the journal paper was surprisingly freeing. While on the CHI paper every other discussion with my student was about what to cut and what to keep, discussions about the journal paper were much more about what details and results were missing. Obviously, there are advantages to each: with the CHI paper we were probably forced to be much more concise and selective about the most significant results; conversely, the journal paper was slightly more verbose than it needed to be, because I didn’t have the threat of desk rejection to force more careful editing. At the same time, there were many interesting things that we had to leave out of the CHI paper that could have fit into just one additional page. With the journal paper, the question was not “what’s most significant?” but “is this complete?”

The length differences also had a significant effect on how much space we gave to details necessary for reproducibility. For the journal paper, I felt like our task was to enable other researchers to understand exactly what we did and how we did it. With the CHI paper, our task was to provide enough detail for reviewers to see the rigor of what we did, but the amount of detail we ended up including really wasn’t enough to actually reproduce our study. In the long term, this is not good science.

Although the jour­nal paper didn’t have a dead­line, I did impose one on my lab in order to align with the end of sum­mer, since the under­grad research assis­tants on the paper would have to resume classes (as would I). The dead­line worked well enough to moti­vate us to fin­ish the paper, but it also freed us to take an extra day or two to read through the man­u­script a few extra times, improve some of the fig­ures, and ver­ify some of the results that we felt may have been done too hastily. The CHI paper, in con­trast, was rushed, as most CHI sub­mis­sions are. There was just enough time to edit thor­oughly yes­ter­day and sub­mit today, but there’s an exten­sive list of to do’s that we have if the paper is accepted. Sure, we could do them now, but why not wait until review­ers pro­vide more feed­back? With the jour­nal paper, we sub­mit­ted when we felt it was ready.

Of course, the biggest difference between the two submissions has yet to come. In November, we’ll get CHI reviews back and likely know with some certainty whether the paper will be accepted or rejected. There will be no major revisions, no guidance from reviewers about what they’d like to see added or changed, and certainly no opportunity for major improvements if it is accepted. Instead, the reviews will focus on articulating a rationale for acceptance or rejection. With the journal paper, I’ll (hopefully) get three extensive reviews in a few months describing what is missing or wrong with the paper and what the reviewers would like me to change in a revision. The process will likely take longer, but in trade, I hope the paper will be much better than the original manuscript.

One of these processes is designed for speed, the other is designed for qual­ity. I’ll let you guess which is which. And let me be clear: I’m a big fan of con­fer­ences. Most of my work is pub­lished at major HCI and soft­ware engi­neer­ing venues and not jour­nals and I truly enjoy the fact that nearly every­one in our com­mu­nity ral­lies together at the same time of year to con­tribute our lat­est and great­est for review. But as some­one who has the free­dom to really pub­lish in either, I’m really start­ing to ques­tion whether the aver­age con­fer­ence paper can actu­ally be of com­pa­ra­ble qual­ity to the aver­age jour­nal paper. There might just be inher­ent lim­its to a review process that is opti­mized for select­ing papers for pre­sen­ta­tion rather than improv­ing them.

Of course, this isn’t a nec­es­sary dichotomy. I’ve talked to many peo­ple in my research com­mu­nity about blend­ing the two. For exam­ple, if we sim­ply had jour­nals of infi­nite capac­ity and no con­fer­ence papers, and instead put all of our review­ing effort into our jour­nals, we could eas­ily design an annual con­fer­ence where peo­ple present the best work from recent jour­nal pub­li­ca­tions (and work in progress, as we already do). In fact, CHI already lets ToCHI authors present their recently pub­lished papers, so we’re part way there. With changes like this, we might find a nice bal­ance between a review process designed for improv­ing papers and a con­fer­ence designed for fos­ter­ing dis­cus­sion about them.

John Carmack discusses the art and science of software engineering

I’m not really a hard core gamer any­more, but my fas­ci­na­tion with pro­gram­ming did begin with video games (and specif­i­cally, ren­der­ing algo­rithms). So when I saw John Carmack’s 2012 Quake­Con keynote show up in my feed, I thought I’d lis­ten to a bit of it and learn a bit about the state of game design and development.

What I heard instead was a hacker’s hacker talk about his recent real­iza­tion that soft­ware engi­neer­ing is actu­ally a social sci­ence. Across 10 min­utes, he cov­ers many human aspects of devel­oper mis­takes, pro­gram­ming lan­guage design, sta­tic analy­sis, code reviews, devel­oper train­ing, and cost/benefit analy­ses. The empha­sis through­out is mine (and I also tran­scribed this, so I apol­o­gize for any mistakes).

In try­ing to make the games faster, which has to be our pri­or­ity going for­ward, we’ve made a lot of mis­takes already with Doom 4, a lot of it is water under the bridge, but pri­or­i­tiz­ing that can help us get the games done faster, just has to be where we go. Because we just can’t do this going, you know, six more years, what­ever, between games.

On the soft­ware devel­op­ment side, you know there was an inter­est­ing thing at E3, one of the inter­views I gave, I had men­tioned some­thing about how, you I’ve been learn­ing a whole lot, and I’m a bet­ter pro­gram­mer now than I was a year ago and the inter­viewer expressed a lot of sur­prise at that, you know after 20 years and going through all of this that you’d have it all fig­ured out by now, but I actu­ally have been learn­ing quite a bit about soft­ware devel­op­ment, both on the per­sonal crafts­man level but also pay­ing more atten­tion by what it means on the team dynam­ics side of things. And this is some­thing I prob­a­bly avoided look­ing at squarely for years because, it’s nice to think of myself as a sci­en­tist engi­neer sort, deal­ing in these things that are abstract or prov­able or objec­tive on there and there.

In real­ity in com­puter sci­ence, just about the only thing that’s really sci­ence is when you’re talk­ing about algo­rithms. And opti­miza­tion is an engi­neer­ing. But those don’t actu­ally occupy that much of the total time spent pro­gram­ming. You know, we have a few pro­gram­mers that spend a lot of time on opti­miz­ing and some of the select­ing of algo­rithms on there, but 90% of the pro­gram­mers are doing pro­gram­ming work to make things hap­pen. And when I start to look at what’s really hap­pen­ing in all of these, there really is no sci­ence and engi­neer­ing and objec­tiv­ity to most of these tasks. You know, one of the pro­gram­mers actu­ally says that he does a lot of mon­key programming—you know beat­ing on things and mak­ing stuff hap­pen. And I, you know we like to think that we can be smart engi­neers about this, that there are objec­tive ways to make good soft­ware, but as I’ve been look­ing at this more and more, it’s been strik­ing to me how much that really isn’t the case.

Aside from these that we can measure, that we can measure and reproduce, which is the essence of science to be able to measure something, reproduce it, make an estimation and test that, and we get that on optimization and algorithms there, but everything else that we do, really has nothing to do with that. It’s about social interactions between the programmers or even between yourself spread over time. And it’s nice to think where, you know we talk about functional programming and lambda calculus and monads and this sounds all nice and sciency, but it really doesn’t affect what you do in software engineering there, these are all best practices, and these are things that have shown to be helpful in the past, but really are only helpful when people are making certain classes of mistakes. Anything that I can do in a pure functional language, you know you take your most restrictive scientific oriented code base on there, in the end of course it all comes down to assembly language, but you could do exactly the same thing in BASIC or any other language that you wanted to.

One of the things that’s also fed into that is my older son’s start­ing to learn how to pro­gram now. I actu­ally tossed around the thought of should I maybe have him try to learn Haskell as a 7 year old or some­thing and I decided not to, that I, you know, I don’t think that I’m a good enough Haskell pro­gram­mer to want to instruct any­body in any­thing, but as I start think­ing about how some­body learns pro­gram­ming from really ground zero, it was open­ing my eyes a lit­tle bit to how much we take for granted in the soft­ware engi­neer­ing com­mu­nity, really is just lay­ers of arti­fice upon top a core fun­da­men­tal thing. Even when you go back to struc­tured pro­gram­ming, whether it’s while loops and for loops and stuff, at the bot­tom when I’m sit­ting think­ing how do you explain pro­gram­ming, what does a com­puter do, it’s really all the way back to flow charts. You do this, if this you do that, if not you do that. And, even try­ing to explain why do you do a for loop or what’s this while loop on here, these are all con­ven­tions that help soft­ware engi­neer­ing in the large when you’re deal­ing with mis­takes that peo­ple make. But they’re not fun­da­men­tal about what the computer’s doing. All of these are things that are just try­ing to help peo­ple not make mis­takes that they’re com­monly making.

One of the things that’s been driven home extremely hard is that programmers are making mistakes all the time and constantly. I talked a lot last year about the work that we’ve done with static analysis and trying to run all of our code through static analysis and get it to run squeaky clean through all of these things and it turns up hundreds and hundreds, even thousands of issues. Now it’s great when you wind up with something that says, now clearly this is a bug, you made a mistake here, this is a bug, and you can point that out to everyone. And everyone will agree, okay, I won’t do that next time. But the problem is that the best of intentions really don’t matter. If something can syntactically be entered incorrectly, it eventually will be. And that’s one of the reasons why I’ve gotten very big on the static analysis, I would like to be able to enable even more restrictive subsets of languages and restrict programmers even more because we make mistakes constantly.

One of the things that I started doing rel­a­tively recently is actu­ally doing a daily code review where I look through the check­ins and just try to find some­thing edu­ca­tional to talk about to the team. And I anno­tate a lit­tle bit of code and say, well actu­ally this is a bug dis­cov­ered from code review, but a lot of it is just, favor doing it this way because it’s going to be clearer, it will cause less prob­lems in other cases, and it ruf­fled, there were a few peo­ple that got ruf­fled feath­ers early on about that with the kind of broad­cast nature of it, but I think that every­body is appre­ci­at­ing the process on that now. That’s one of those scal­a­bil­ity issues where there’s clearly no way I can do indi­vid­ual code reviews with every­one all the time, it takes a lot of time to even just scan through what every­one is doing. Being able to point out some­thing that some­body else did and say well, every­body should pay atten­tion to this, that has some real value in it. And as long as the team is agree­able to that, I think that’s been a very pos­i­tive thing.

But what happens in some cases, where you’re arguing a point where let’s say we should put const on your function parameters or something, that’s hard to make an objective call on, where lots of stuff we can say, this indirection is a cache miss, that’s going to cost us, it’s objective, you can measure it, there’s really no arguing with it, but so many of these other things are sort of style issues, where I can say, you know, over the years, I’ve seen this cause a lot of problems, but a lot of people will just say, I’ve never seen that problem. That’s not a problem for me, or I don’t make those mistakes. So it has been really good to be able to point out commonly on here, this is the mistake caused by this.

But as I’ve been doing this more and more and think­ing about it, that sense that this isn’t sci­ence, this is just try­ing to deal with all of our human frail­ties on it, and I wish there were bet­ter ways to do this. You know we all want to become bet­ter devel­op­ers and it will help us make bet­ter prod­ucts, do a bet­ter job with what­ever we’re doing, but the fact that it’s com­ing down to train­ing dozens of peo­ple to do things in a con­sis­tent way, know­ing that we have pro­gram­mer turnover as peo­ple come and go, new peo­ple com­ing and look­ing at the code base and not under­stand­ing the con­ven­tions, and there are clearly bet­ter and worse ways of doing things but it’s frus­trat­ingly dif­fi­cult to quan­tify.

That’s some­thing that I’m spend­ing more and more time look­ing at. I read NASA’s soft­ware engi­neer­ing lab­o­ra­tory reports and I can’t seem to get any real value out of a lot of those things. The things that have been valu­able have been auto­mated things, things that don’t require a human to have some analy­sis, have some eval­u­a­tion of it, but just say, enforced or not enforced. And I think that that’s where really where things need to go as larger and larger soft­ware gets devel­oped. And it is strik­ing the scale of what we’re doing now. If you look back at the NASA reports and the scale of things and they con­sid­ered large code bases to be things with three or four hun­dred thou­sand lines of code. And we have far more than that in our game engines now. It’s kind of fun to think that the game engines, things that we’re play­ing games on, have more sophis­ti­cated soft­ware than cer­tainly the things that launch peo­ple to the moon and back and flew the shut­tle, ran Sky­lab, run the space sta­tion, all of these mas­sive projects on there are really out­done in com­plex­ity by any num­ber of major game engine projects.

And the answer is as far as I can tell really isn’t out there. With the NASA style devel­op­ment process, they can deliver very very low bug rates, but it’s at a very very low pro­duc­tiv­ity rate. And one of the things that you wind up doing in so many cases is cost ben­e­fit analy­ses, where you have to say, well we could be per­fect, but then we’ll have the wrong prod­uct and it will be too late. Or we can be really fast and loose, we can go ahead and just be sloppy but we’ll get some­thing really cool hap­pen­ing soon. And this is one of those areas where there’s clearly right tools for the right job, but what hap­pens is you make some­thing really cool really fast and then you live with it for years and you suf­fer over and over with that. And that’s some­thing that I still don’t think that we do the best job at.

We know our code is liv­ing for, real­is­ti­cally, we’re look­ing at a decade. I tell peo­ple that there’s a good chance that what­ever you’re writ­ing here, if it’s not extremely game spe­cific, may well exist a decade from now and it will have hun­dreds of pro­gram­mers, look­ing at the code, using it, inter­act­ing with it in some way, and that’s quite a bur­den. I do think that it’s just and right to impose pretty severe restric­tions on what we’ll let past analy­sis and what we’ll let into it, but there are large scale issues at the soft­ware API design lev­els and fig­ur­ing out things there, that are artis­tic, that are crafts­man like on there. And I wish that there were more quan­tifi­able things to say about that. And I am spend­ing a lot of time on this as we go forward.