Mind / Matter
things that count
Commentary: Paul Graham on `Hackers and Painters'

This is another of Graham's papers, and my parallelogue of remarks about it.

A demur Graham, for reasons that I find both unexplained and unsupportable, insists on using `makers' and `hackers' as, I believe, synonyms for `programmers'. If he wants these terms to mean something else, then it should be explained more clearly. If not, then I find that the use of the unusual words is an unnecessary distraction from understanding the content of the discussion.
Opening (This essay is derived from a guest lecture at Harvard, which incorporated an earlier talk at Northeastern.)
Introduction When I finished grad school in computer science I went to art school to study painting. A lot of people seemed surprised that someone interested in computers would also be interested in painting. They seemed to think that hacking and painting were very different kinds of work---that hacking was cold, precise, and methodical, and that painting was the frenzied expression of some primal urge. My friends and family who are in the music business would have been, I guess, a good deal less surprised. Computer programming and music share a rather obvious notion of precision. I don't want to claim that the same is necessarily true of painting, but it doesn't surprise me much.

However, what does surprise me is Graham's choice to use the word `hacking' when most of this paper---at least the parts of it I understand---is about computer programming. `Hacking' is rather something else. Ask any MIT freshman. Does Graham really mean `hacker' as it is used in common speech, or is he using it as some odd sort of (self-deprecating, perhaps) synonym for `programmer'? If he means something other than `programmer' it isn't exactly clear what---to me at least. And if he doesn't, then perhaps he should use the word programmer instead to avoid the potential confusion.

Hacking and Painting Both of these images are wrong. Hacking and painting have a lot in common. In fact, of all the different types of people I've known, hackers and painters are among the most alike. I find the phrasing odd. `most alike' is a phrase I have difficulty with. Are football players and basketball players `among the most alike'? Or are college football players and professional football players? Or perhaps backfield players? Or quarterbacks? Whatever. I'll concede that hackers and painters are more alike than cats and elephants, but I'm not sure I'd go all that much further.
Common Elements What hackers and painters have in common is that they're both makers. Along with composers, architects, and writers, what hackers and painters are trying to do is make good things. They're not doing research per se, though if in the course of trying to make good things they discover some new technique, so much the better. I'm unclear about who isn't a `maker' in Graham's usage. We commonly say that a businessman `makes' a product, even if his hands never actually touch the material from which the product is made. We also say that lawyers `make' arguments on behalf of their clients. The number of people doing what Graham calls `research' is, I should think, very small.
Computer Science I've never liked the term "computer science." The main reason I don't like it is that there's no such thing. Computer science is a grab bag of tenuously related areas thrown together by an accident of history, like Yugoslavia. At one end you have people who are really mathematicians, but call what they're doing computer science so they can get DARPA grants. In the middle you have people working on something like the natural history of computers---studying the behavior of algorithms for routing data through networks, for example. And then at the other extreme you have the hackers, who are trying to write interesting software, and for whom computers are just a medium of expression, as concrete is for architects or paint for painters. It's as if mathematicians, physicists, and architects all had to be in the same department. Again, `engineering' is a term that applies to a large number of `tenuously related areas'. Indeed, to pose a perhaps ridiculous example, `fishing' is also such a word---covering activities that range from a quiet soul out standing in a creek with a fly rod all the way to someone who pulls nets of huge tuna onto the decks of large boats. Just because a wide range of different activities is covered is no reason to suggest that a particular name is inappropriate or free of content.
Software Engineering Sometimes what the hackers do is called "software engineering," but this term is just as misleading. Good software designers are no more engineers than architects are. The border between architecture and engineering is not sharply defined, but it's there. It falls between what and how: architects decide what to do, and engineers figure out how to do it. The designer / engineer distinction seems to me to be quite reasonable. In my experience most of the good designers I happen to know are very good engineers, but the reverse is certainly not the case. I know lots of passable to good engineers who are perfectly hopeless at design.
The Spec What and how should not be kept too separate. You're asking for trouble if you try to decide what to do without understanding how to do it. But hacking can certainly be more than just deciding how to implement some spec. At its best, it's creating the spec---though it turns out the best way to do that is to implement it. I disagree with this. Many of the people that I know who are very good at deciding what to do are perfectly hopeless at knowing how to do it. And the reverse case is even stronger. Many of the people I know who are extremely good at doing things are hopeless at having a clue about what is really worth doing. Of course, carried to extremes, design that is not at all grounded in reality generally leads to very poor results. But that is just `bad design'. It is not so much a necessary consequence of the structure used in the design process as a comment on the competence of the people who are involved in that process.
Splitting into Parts Perhaps one day "computer science" will, like Yugoslavia, get broken up into its component parts. That might be a good thing. Especially if it meant independence for my native land, hacking. I'm afraid I haven't got a clue what this might mean. Some clue about what the `component parts' might be would probably help. But even more important it would be useful to have some clue about what `independence' is needed. Of course, this might all just be an unsuccessful attempt at a bon mot.
Similar / Different Bundling all these different types of work together in one department may be convenient administratively, but it's confusing intellectually. That's the other reason I don't like the name "computer science." Arguably the people in the middle are doing something like an experimental science. But the people at either end, the hackers and the mathematicians, are not actually doing science. I suppose we have now shifted to an `academic environment' and the `department' mentioned is a name for a collection of academics. Departments of History often cover times from Egyptian through Modern. Departments of Economics cover material from economic history to game theory. Even seemingly more circumscribed areas (music might be an example) cover material from the earliest of instruments and chants to modern computer-based tonal schemes. In my experience at academic institutions, everyone would love to have a department of their own---just for their subject---so long as someone else would handle the teaching and the administrative work.
OK with Mathematicians The mathematicians don't seem bothered by this. They happily set to work proving theorems like the other mathematicians over in the math department, and probably soon stop noticing that the building they work in says ``computer science'' on the outside. But for the hackers this label is a problem. If what they're doing is called science, it makes them feel they ought to be acting scientific. So instead of doing what they really want to do, which is to design beautiful software, hackers in universities and research labs feel they ought to be writing research papers. Surely this point is misdirected. The penchant for `proofs' is not a result of some labelling exercise, but rather a natural consequence of a `law' suggested by Herb Simon: structured activity drives out non-structured activity. If there were time, here, we could go into a long discourse that would explain how computing, an intellectual subject in its very infancy, might have found a temporary home in the university. But the discussion is too long for this parallelogue.
Papers In the best case, the papers are just a formality. Hackers write cool software, and then write a paper about it, and the paper becomes a proxy for the achievement represented by the software. But often this mismatch causes problems. It's easy to drift away from building beautiful things toward building ugly things that make more suitable subjects for research papers. I guess this is a statement about academic software writing. While that may be of some all-consuming interest to the author of the paper, it doesn't strike me as a very important segment of the software production process. It is, at best, of modest significance. While I disagree with the point anyway, I don't think it describes a domain of sufficient importance to waste much time and energy over it.
Beautiful Things: Unfortunately, beautiful things don't always make the best subjects for papers. Number one, research must be original---and as anyone who has written a PhD dissertation knows, the way to be sure that you're exploring virgin territory is to stake out a piece of ground that no one wants. Number two, research must be substantial---and awkward systems yield meatier papers, because you can write about the obstacles you have to overcome in order to get things done. Nothing yields meaty problems like starting with the wrong assumptions. Most of AI is an example of this rule; if you assume that knowledge can be represented as a list of predicate logic expressions whose arguments represent abstract concepts, you'll have a lot of papers to write about how to make this work. As Ricky Ricardo used to say, "Lucy, you got a lot of explaining to do." Again, this is a lot of `ado' about very little. While academic papers may be of little importance, and have, at best, distractive significance, most programming work is done in circumstances where no such papers and no such explanations are ever even bothered with.
Subtle Tweaks: The way to create something beautiful is often to make subtle tweaks to something that already exists, or to combine existing ideas in a slightly new way. This kind of work is hard to convey in a research paper. There are surely lots of different ways to make beautiful things. The strategy of Subtle Tweaks is only one of them, and---in my experience at least---rarely very successful. Perhaps Graham finds `beauty' in the `subtle tweaks' that make Heathrow such a great airport. Or perhaps he finds Howard Roark's `architecture by dynamite' to be `subtle tweaks'; I don't know.
Judgement by Publication So why do universities and research labs continue to judge hackers by publications? For the same reason that "scholastic aptitude" gets measured by simple-minded standardized tests, or the productivity of programmers gets measured in lines of code. These tests are easy to apply, and there is nothing so tempting as an easy test that kind of works. This seems to me to be a rather complete, and highly confounded, way of looking at the academic world. Writing academic papers would hardly be suggested by anyone as a likely measurement of programmer effectiveness or productivity.
What Hackers Do Measuring what hackers are actually trying to do, designing beautiful software, would be much more difficult. You need a good sense of design to judge good design. And there is no correlation, except possibly a negative one, between people's ability to recognize good design and their confidence that they can. `You need a good sense of design to judge good design' sounds tautologous to me. I'm not sure that I have any grip on what it means to have a good sense of design without any ability to judge it, or, vice versa, to `judge good design' without having a sense of what it is. So this mostly seems like nonsense to me.
Test over Time The only external test is time. Over time, beautiful things tend to thrive, and ugly things tend to get discarded. Unfortunately, the amounts of time involved can be longer than human lifetimes. Samuel Johnson said it took a hundred years for a writer's reputation to converge. You have to wait for the writer's influential friends to die, and then for all their followers to die. Wrong on at least two counts---as was Samuel Johnson. As far as I can see, there isn't much useful work that suggests that opinions or reputations converge over time. Reputations rise and fall. Van Gogh wasn't popular during his lifetime, but now is certainly among the elect few of `greatest painters'. But, will he always be there? I doubt it.

As to the argument that beautiful things tend to thrive while ugly things tend to get discarded, I wonder where this comes from? I don't find much evidence, other than in the well of wishful thinking, that it is true. Lots of lousy looking junk survives. And where are the Hanging Gardens of Babylon, or the Mausoleum, or the Temple of Diana?

Random Reputation I think hackers just have to resign themselves to having a large random component in their reputations. In this they are no different from other makers. In fact, they're lucky by comparison. The influence of fashion is not nearly so great in hacking as it is in painting. I don't know why this point is being made, but it isn't a very profound point. In the history of man, hackers---to use Graham's inappropriate term---have been around for the blink of an eye. Not much time for the `reputations' to have been formed, much less gone through any substantial form of re-evaluation. An interesting case could be made that all reputations are principally composed of luck---there's often a Leibniz around while Newton gets all the credit---but that is not a case that is either made or researched here.
Misunderstanding Work There are worse things than having people misunderstand your work. A worse danger is that you will yourself misunderstand your work. Related fields are where you go looking for ideas. If you find yourself in the computer science department, there is a natural temptation to believe, for example, that hacking is the applied version of what theoretical computer science is the theory of. All the time I was in graduate school I had an uncomfortable feeling in the back of my mind that I ought to know more theory, and that it was very remiss of me to have forgotten all that stuff within three weeks of the final exam. Again I don't see much evidence that authors of great works understand them in any very effective way. The quality of work seems to me to be rather separable from the amount of understanding that the author evidences. I think, for example, of some of Durrell's descriptions of the `significance' of the Alexandria Quartet. I loved the books and thought them to be a near work of genius, but most of what Durrell himself said about them seemed to lack insight or comprehensibility. The work was wonderful. His (after the fact) rationalizations of it, weren't.
A Mistake Now I realize I was mistaken. Hackers need to understand the theory of computation about as much as painters need to understand paint chemistry. You need to know how to calculate time and space complexity and about Turing completeness. You might also want to remember at least the concept of a state machine, in case you have to write a parser or a regular expression library. Painters in fact have to remember a good deal more about paint chemistry than that. If this is a lot more than saying that it's important to understand a vocabulary before you attempt to write using it, then I am missing the point. But at least this far, I'll agree with it.
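Since the state machine is the one concretely technical item in this passage, a small sketch may help readers who have not met the idea. The Python fragment below is my own illustration---the pattern recognized, the state names, and the function are assumptions, not anything drawn from Graham's essay or from this commentary---showing the kind of finite state machine one would reach for when writing a parser or a tiny regular expression engine.

```python
# A minimal finite state machine that recognizes simple numeric literals
# such as "42", "-3", or "3.14". States, transitions, and the accepting
# set are illustrative assumptions chosen only for this sketch.

def accepts_number(text: str) -> bool:
    """Return True if `text` is an optionally signed integer or decimal."""
    state = "start"
    transitions = {
        ("start", "sign"): "signed",
        ("start", "digit"): "int",
        ("signed", "digit"): "int",
        ("int", "digit"): "int",
        ("int", "dot"): "dot",
        ("dot", "digit"): "frac",
        ("frac", "digit"): "frac",
    }
    accepting = {"int", "frac"}

    for ch in text:
        if ch in "+-":
            kind = "sign"
        elif ch.isdigit():
            kind = "digit"
        elif ch == ".":
            kind = "dot"
        else:
            return False          # character the machine knows nothing about
        state = transitions.get((state, kind))
        if state is None:
            return False          # no legal transition: reject
    return state in accepting


if __name__ == "__main__":
    for s in ["42", "-3.14", "3.", "abc", "+7"]:
        print(s, accepts_number(s))
```

The sketch is deliberately small; a real parser or regular expression library would generate such tables rather than write them by hand, but the underlying concept is the same.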
Non-Computer Sources I've found that the best sources of ideas are not the other fields that have the word "computer" in their names, but the other fields inhabited by makers. Painting has been a much richer source of ideas than the theory of computation. I don't find anything wrong with this point, so long as Graham offers it only as an example of his feelings and state of being. I am, of course, quite willing to accept that he finds `the best sources of ideas are not the other fields that have "computer" in their names'. But no one should suggest that this applies to much more than Graham.
Work on Paper? For example, I was taught in college that one ought to figure out a program completely on paper before even going near a computer. I found that I did not program this way. I found that I liked to program sitting in front of a computer, not a piece of paper. Worse still, instead of patiently writing out a complete program and assuring myself it was correct, I tended to just spew out code that was hopelessly broken, and gradually beat it into shape. Debugging, I was taught, was a kind of final pass where you caught typos and oversights. The way I worked, it seemed like programming consisted of debugging. A continuation of a self-revelatory exposition on how Graham likes to program. Nothing wrong (or right) about it. Just a statement of fact as it applies to him. I accept him at his word.
Bad Teaching For a long time I felt bad about this, just as I once felt bad that I didn't hold my pencil the way they taught me to in elementary school. If I had only looked over at the other makers, the painters or the architects, I would have realized that there was a name for what I was doing: sketching. As far as I can tell, the way they taught me to program in college was all wrong. You should figure out programs as you're writing them, just as writers and painters and architects do. I don't doubt that it was wrong for Graham. My college physics book happened to be `right' for me, but `wrong' for just about everyone else in my class. This didn't make me guilty of anything other than having `good luck'. Since Graham's teachers taught in a way that was ineffective for him, I'd call this `bad luck'. Neither the students nor the teachers need be wrong. All it takes is that the wind may be blowing in the wrong direction during the time he passed by.
Implications Realizing this has real implications for software design. It means that a programming language should, above all, be malleable. A programming language is for thinking of programs, not for expressing programs you've already thought of. It should be a pencil, not a pen. Static typing would be a fine idea if people actually did write programs the way they taught me to in college. But that's not how any of the hackers I know write programs. We need a language that lets us scribble and smudge and smear, not a language where you have to sit with a teacup of types balanced on your knee and make polite conversation with a strict old aunt of a compiler. I'm tempted to weigh in with Alice at my side and suggest that a `programming language' is for exactly what I want it to be for, no more, no less. Graham may want his programming languages to help him think about programs, but I don't. Some sculptors like to carve in clay and others prefer stone. What works for clay-carvers may be ineffective for stone-cutters, and yet there's nothing wrong with either approach. They're just different.

Even if we want our programming language to be helpful in guiding our thinking, it is not at all clear that malleability is the right way to accomplish this. Knuth, for example, goes about it a rather different (and for me, more effective) way.

Static Type While we're on the subject of static typing, identifying with the makers will save us from another problem that afflicts the sciences: math envy. Everyone in the sciences secretly believes that mathematicians are smarter than they are. I think mathematicians also believe this. At any rate, the result is that scientists tend to make their work look as mathematical as possible. In a field like physics this probably doesn't do much harm, but the further you get from the natural sciences, the more of a problem it becomes. I guess I wasn't clear that malleability and static typing were so inextricably linked, so this seems something of a non sequitur to me. And then we jump, wholesale, into math envy. This is a topic that, having been educated at a technological institute, has been with me since late adolescence. And I think it's overrated. Math envy may be fun to talk about, but in my experience it never proves to be much of a real motivation to actually do anything.
Impressions Count A page of formulas just looks so impressive. (Tip: for extra impressiveness, use Greek variables.) And so there is a great temptation to work on problems you can treat formally, rather than problems that are, say, important. A page of formulas may look impressive to some, but so does a display of good typesetting. And so do nice sketches, and pictures taken from astronomical telescopes. Anyway, I'm not sure what I am supposed to conclude from this observation, even if it were right. And it isn't.
Makers If hackers identified with other makers, like writers and painters, they wouldn't feel tempted to do this. Writers and painters don't suffer from math envy. They feel as if they're doing something completely unrelated. So are hackers, I think. While I am happy to concede that I have never met anyone who suffered from COBOL envy, I don't think many of my programming colleagues in the last 50 years have suffered from math envy either. Perhaps there have been occasional cases of APL or K envy, but even in these cases it was mostly `envy within the group' and not very observable across any wider population. `My APL is littler than yours' may be a resounding shout among the few people who attend APL meetings anymore, but it isn't a matter of much consequence to the rest of the world.
Universities and Companies If universities and research labs keep hackers from doing the kind of work they want to do, perhaps the place for them is in companies. Unfortunately, most companies won't let hackers do what they want either. Universities and research labs force hackers to be scientists, and companies force them to be engineers. Another odd non sequitur. But now that we're on the point, what's so special about hackers anyway? Most companies don't let writers or painters do what they want either. Most universities reserve only a few slots for `artist in residence' programs. And even when they do, only a very few of the great artists would consider them. Most people have to `work' for a living. As my father said, `The reason they call it work is because it is work. If it wasn't, they'd call it play.'
What is Hacking? I only discovered this myself quite recently. When Yahoo bought Viaweb, they asked me what I wanted to do. I had never liked the business side very much, and said that I just wanted to hack. When I got to Yahoo, I found that what hacking meant to them was implementing software, not designing it. Programmers were seen as technicians who translated the visions (if that is the word) of product managers into code. If this is a literal version of the conversation, then I am not surprised---and Graham is, at a minimum, guilty of poor communication. The way he has used `hack' so far in this paper doesn't conform to any broad usage that I know of. So I wouldn't be surprised if the Yahoo people didn't quite get it either. And Yahoo's genius---if there was one---was that they recognized that the vision of a good product manager might well be much more significant in terms of impact on their company than the views of programmer/hackers about what they would like to code.
Big Companies This seems to be the default plan in big companies. They do it because it decreases the standard deviation of the outcome. Only a small percentage of hackers can actually design software, and it's hard for the people running a company to pick these out. So instead of entrusting the future of the software to one brilliant hacker, most companies set things up so that it is designed by committee, and the hackers merely implement the design. I think I have heard this argument at least hundreds of times over the past 50 years. But here it takes an unclear form anyway. First, I don't see what the `default plan' is. Does he mean `have product managers design'? If so, then I can virtually guarantee that this path is not chosen because it `decreases the standard deviation of the outcome'. Second, entrusting the future of an organization to `one brilliant hacker' may not be a very good strategy. One of the most brilliant I ever knew decided---in the middle of a project---to become a rabbi instead. Kiss the project goodbye. The `let's hand the project over to a brilliant programmer' strategy looks fine to the brilliant programmer (for sure), but it may not look half as attractive to others, particularly if they feel they have no way of divining the separation of the brilliant programmers from the crazies who just happen to behave the way they think brilliant programmers behave.
Making Money If you want to make money at some point, remember this, because this is one of the reasons startups win. Big companies want to decrease the standard deviation of design outcomes because they want to avoid disasters. But when you damp oscillations, you lose the high points as well as the low. This is not a problem for big companies, because they don't win by making great products. Big companies win by sucking less than other big companies. This is an economic view which surely wouldn't be worth much more than a C- in any decent introductory economics class. Startups win precisely because there are so many of them that lose, and those that lose manage to lose so disastrously that they completely disappear. Take a field, look at the startups. The names of the great railroad companies of England are enshrined in the granite cornices of their grand London hotels. None of them exist any longer. And GM, put together and run by a committee, is the largest survivor in a field where hundreds of startups all failed---often quite quickly.
Fighting the War So if you can figure out a way to get in a design war with a company big enough that its software is designed by product managers, they'll never be able to keep up with you. These opportunities are not easy to find, though. It's hard to engage a big company in a design war, just as it's hard to engage an opponent inside a castle in hand to hand combat. It would be pretty easy to write a better word processor than Microsoft Word, for example, but Microsoft, within the castle of their operating system monopoly, probably wouldn't even notice if you did. Anyone who thinks that it is structurally impossible for a large company to be an effective fighter in a design war must not have much experience with the many cases where skunk works have proven to be effective. It may be difficult for Microsoft to change its core product---for example, Word---to compete effectively with your product. But they surely have the resources to create a small group with a charge to `beat you up'; it won't cost them much, and it may deal very effectively with the task of preventing you from achieving any substantial growth. This is all predicated on the notion that they actually care about you, of course. In most cases you are likely to have such an inconsequential impact that they are not likely to care.
In New Markets The place to fight design wars is in new markets, where no one has yet managed to establish any fortifications. That's where you can win big by taking the bold approach to design, and having the same people both design and implement the product. Microsoft themselves did this at the start. So did Apple. And Hewlett-Packard. I suspect almost every successful startup has. I'm not sure about this proposition. On the surface it seems true, but then so does the time-honored adage that pioneers get arrows in their backs. Surely CP/M `owned' the OS market at the start, so the notion that `Microsoft themselves did this at the start' seems just plain wrong. And Apple's design was almost completely pirated from the design work done at Xerox PARC. And much of HP's `pioneering' work followed along well after TI had already occupied the field. So the notion that `new markets' are the place to begin design efforts seems questionable at the very best.
A Startup So one way to build great software is to start your own startup. There are two problems with this, though. One is that in a startup you have to do so much besides write software. At Viaweb I considered myself lucky if I got to hack a quarter of the time. And the things I had to do the other three quarters of the time ranged from tedious to terrifying. I have a benchmark for this, because I once had to leave a board meeting to have some cavities filled. I remember sitting back in the dentist's chair, waiting for the drill, and feeling like I was on vacation. There's no such thing as a free lunch, remember? Either you do the work, and thus have the freedom, or someone else does the work and you have to answer to them. No mysteries in that.
Money vs. Interest The other problem with startups is that there is not much overlap between the kind of software that makes money and the kind that's interesting to write. Programming languages are interesting to write, and Microsoft's first product was one, in fact, but no one will pay for programming languages now. If you want to make money, you tend to be forced to work on problems that are too nasty for anyone to solve for free. It's a little hard to regard this as a deep or profound observation of any sort. At the `beginning' anyone is happy to have anything done. Then as more alternatives become available people can get downright picky. And the author is careless about whether `programming languages are interesting to write' refers to language design, compiler implementation or language use. The point is too jumbled to understand. And the statement that `if you want to make money you tend to be forced to work on problems that are too nasty for anyone to solve for free' is surely a rather vacuous tautology.
Supply and Demand All makers face this problem. Prices are determined by supply and demand, and there is just not as much demand for things that are fun to work on as there is for things that solve the mundane problems of individual customers. Acting in off-Broadway plays just doesn't pay as well as wearing a gorilla suit in someone's booth at a trade show. Writing novels doesn't pay as well as writing ad copy for garbage disposals. And hacking programming languages doesn't pay as well as figuring out how to connect some company's legacy database to their Web server. This seems pretty much true. Simplistic, but true. Probably a B- answer, and at least that's better than the earlier C-.
Don't give up Your Day Job I think the answer to this problem, in the case of software, is a concept known to nearly all makers: the day job. This phrase began with musicians, who perform at night. More generally, it means that you have one kind of work you do for money, and another for love. An alternative is to find a way to let yourself love your day work. This is psychological and may well be within the capabilities of many. And all-in-all it provides a nicer, neater, solution.
Kind of Day Job Nearly all makers have day jobs early in their careers. Painters and writers notoriously do. If you're lucky you can get a day job that's closely related to your real work. Musicians often seem to work in record stores. A hacker working on some programming language or operating system might likewise be able to get a day job using it. [1] It strikes me that a programmer exercising his craft is a rather different prospect than a musician taking a day job selling records. However, since this general point seems to be unimportant to the general discussion, it doesn't appear to matter much how it is resolved with respect to this question.
Open-Source Software When I say that the answer is for hackers to have day jobs, and work on beautiful software on the side, I'm not proposing this as a new idea. This is what open-source hacking is all about. What I'm saying is that open-source is probably the right model, because it has been independently confirmed by all the other makers. I'm not quite sure about the segue to `open source programming' or how that relates to a musician getting a day job selling records. And the proposition that `open-source is probably the right model' seems arguable at best. Here all we have is another case of argument by assertion, which seems to add as little weight to the issue as it sheds light upon it.
Viaweb Experience It seems surprising to me that any employer would be reluctant to let hackers work on open-source projects. At Viaweb, we would have been reluctant to hire anyone who didn't. When we interviewed programmers, the main thing we cared about was what kind of software they wrote in their spare time. You can't do anything really well unless you love it, and if you love to hack you'll inevitably be working on projects of your own. [2] Really? A whole bunch of dubious propositions mixed with some home-spun philosophy that is clearly more a projection of wishes than any generalizable facts. No doubt some people believe it. And sometimes it works. But there isn't much evidence, certainly here, that this is the case. Lots of people---are bartenders too mundane an example?---really have trouble if they like their jobs too much. What professionalism is all about is being able to focus and care about client problems at a proper level without becoming entangled in them, thus allowing us to keep a decent life. So don't believe this one unless you get some better evidence than is presented here.
Looking for Metaphor Because hackers are makers rather than scientists, the right place to look for metaphors is not in the sciences, but among other kinds of makers. What else can painting teach us about hacking? Why? The source of metaphor seems to me to be quite idiosyncratic. Some people might find painting to be a good metaphor for computer programming, but I happen to generally prefer architecture. And while musical metaphor is of no use to me personally because I have a very unsophisticated ear, I can easily imagine it being quite useful to someone else. There is no `right' place to look for metaphor for everyone. It is quite plausible to me that others will find good metaphor in all kinds of places.
Learn by Doing One thing we can learn, or at least confirm, from the example of painting is how to learn to hack. You learn to paint mostly by doing it. Ditto for hacking. Most hackers don't learn to hack by taking college courses in programming. They learn to hack by writing programs of their own at age thirteen. Even in college classes, you learn to hack mostly by hacking. [3] Is this `practice makes perfect'? If so it would seem plainer just to say that.
Watch the Process Because painters leave a trail of work behind them, you can watch them learn by doing. If you look at the work of a painter in chronological order, you'll find that each painting builds on things that have been learned in previous ones. When there's something in a painting that works very well, you can usually find version 1 of it in a smaller form in some earlier painting. While this point seems to be true, it also seems rather universal. Where doesn't it apply? The last wall laid by a bricklayer is likely to express the lessons of building all of the walls that have gone before. Of course we need the `vocabulary' to be able to recognize it, but in general this strikes me as true of just about everything in life, and as a result not much is learned by asserting it in this particular circumstance.
More Like Painters I think most makers work this way. Writers and architects seem to as well. Maybe it would be good for hackers to act more like painters, and regularly start over from scratch, instead of continuing to work for years on one project, and trying to incorporate all their later ideas as revisions. Huh? In my 50 year programming career, few `jobs' (from a programming standpoint, not from an organizational standpoint) lasted more than a year. But I have no clue what this paragraph is supposed to mean, so I guess I better leave it there.
Learning in the Sciences The fact that hackers learn to hack by doing it is another sign of how different hacking is from the sciences. Scientists don't learn science by doing it, but by doing labs and problem sets. Scientists start out doing work that's perfect, in the sense that they're just trying to reproduce work someone else has already done for them. Eventually, they get to the point where they can do original work. Whereas hackers, from the start, are doing original work; it's just very bad. So hackers start original, and get good, and scientists start good, and get original. This strikes me as completely wrong, at least given my experience with computers, science and engineering. First, the view of science expressed in the paragraph is naive. Second, theoretical science looks a whole lot to me like a good parallel to theoretical computer science, and experimental science looks a lot like experimental computer science. I don't see any broad scheme of differences that appear obvious. Third, only a very small part of science is devoted to `trying to reproduce work someone else has already done'.
Learn from Example The other way makers learn is from examples. For a painter, a museum is a reference library of techniques. For hundreds of years it has been part of the traditional education of painters to copy the works of the great masters, because copying forces you to look closely at the way a painting is made. Learning from example is a time-honored practice in virtually every field. Not only do painters copy, but writers and musicians may well too.
Writers Also Writers do this too. Benjamin Franklin learned to write by summarizing the points in the essays of Addison and Steele and then trying to reproduce them. Raymond Chandler did the same thing with detective stories. It's one way, of course. But it is only the best way for some small subset of the population. There are many other alternatives, and some of them will fit with particular people better than will others. Chandler may have written this way (I don't know) but lots of others (Churchill, Eliot, Hemingway, Faulkner, Goethe, ...) didn't. So, while I'll accept the testimony of anyone who says they find this approach effective, I would be cautious about recommending it to others.
Good Examples Hackers, likewise, can learn to program by looking at good programs--not just at what they do, but the source code too. One of the less publicized benefits of the open-source movement is that it has made it easier to learn to program. When I learned to program, we had to rely mostly on examples in books. The one big chunk of code available then was Unix, but even this was not open source. Most of the people who read the source read it in illicit photocopies of John Lions' book, which though written in 1977 was not allowed to be published until 1996. Most of the programmers I know regard `reading source' as a poor way to learn design. Once in a while, reading someone else's code can be useful in gaining a particular technique, but generally I find the set-up costs of understanding another programmer's style so substantial that it is an obstacle rather than an enhancement to the learning process. Perhaps Graham has forgotten that in the early days of computing (the 1950s and 1960s) we generally had rather complete access to source, and it didn't prove to be a particularly effective tool for learning. In addition, in the 50s and 60s we at least had the advantage that most code was small, very small by today's standards. Today reading the source for Knuth's typesetting system (open from the first day of its development) still proves to be a daunting and time-consuming task---and good as the code might be, I am dubious that more than a few, indeed a very few, will find the experience worth the investment.
Gradual Refinement Another example we can take from painting is the way that paintings are created by gradual refinement. Paintings usually begin with a sketch. Gradually the details get filled in. But it is not merely a process of filling in. Sometimes the original plans turn out to be mistaken. Countless paintings, when you look at them in xrays, turn out to have limbs that have been moved or facial features that have been readjusted. I'm not sure that the refinement process used in painting is of a similar type to that used in programming, at least the way I do that process. When I refine a program it starts out working; it may reject complex cases and not yet cover the complete desired domain of inputs, but it works in a crude way from the beginning. If I were to develop a painting by refinement I would start with a light sketch (I think, as I am not an artist and haven't actually done this) and then paint over it, perhaps refining detail along the way.
Imperfect Spec Here's a case where we can learn from painting. I think hacking should work this way too. It's unrealistic to expect that the specifications for a program will be perfect. You're better off if you admit this up front, and write programs in a way that allows specifications to change on the fly. I'm getting a little confused. Later in this paper Graham will present the notion that it is important `to divide projects into sharply defined modules'. Yet here we seem to have the suggestion that it is unrealistic to expect that this will be possible. It doesn't seem sensible to base a strategy on the notion that it will be possible to split things clearly and then suggest that such splits are necessarily fuzzy. It doesn't make sense to argue on both sides of this issue.
Large Companies Interfere (The structure of large companies makes this hard for them to do, so here is another place where startups have an advantage.) What is the relevance of this and why is it here? And whatever it is supposed to mean I completely miss the difference between large companies and small companies with respect to this particular point.
Premature Optimization Everyone by now presumably knows about the danger of premature optimization. I think we should be just as worried about premature design---deciding too early what a program should do. I'm unclear about where we are going with this, but I think that this point also contradicts the later assumption about sharply defined modules.
Need Right Tools The right tools can help us avoid this danger. A good programming language should, like oil paint, make it easy to change your mind. Dynamic typing is a win here because you don't have to commit to specific data representations up front. But the key to flexibility, I think, is to make the language very abstract. The easiest program to change is one that's very short. If this case is right, the argument isn't very well or very clearly made. I think the point about oil paint is mis-directed but probably unimportant. As to the importance of dynamic typing, no case is presented that it satisfies as a way of handling this particular problem. Such a case, if it can be made, would be much aided by some form of examples.
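Whatever the merits of the argument, a short sketch may at least make the `don't commit to specific data representations up front' claim concrete. The Python fragment below is my own illustration, with a hypothetical order-record scenario; it is a sketch of the flexibility being claimed, not a demonstration that dynamic typing is the only way to get it.

```python
# My own illustration (not from Graham's essay): the order records and the
# `total` helper are hypothetical, chosen only to show deferring a choice
# of data representation.

def total(orders, amount_of):
    """Sum the amounts in `orders`, however an order happens to be stored."""
    return sum(amount_of(o) for o in orders)

# Early sketch: an order is just a (name, amount) tuple.
v1 = [("widget", 3.0), ("gadget", 4.5)]
print(total(v1, lambda o: o[1]))            # 7.5

# Later the representation grows into a dict with more fields; the
# totaling code itself does not need to change.
v2 = [{"name": "widget", "amount": 3.0, "sku": "W-1"},
      {"name": "gadget", "amount": 4.5, "sku": "G-9"}]
print(total(v2, lambda o: o["amount"]))     # 7.5
```

A statically typed language with generics could, of course, express much the same thing; the sketch only illustrates the kind of deferred commitment Graham has in mind, not a verdict on typing disciplines.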
A Paradox? This sounds like a paradox, but a great painting has to be better than it has to be. For example, when Leonardo painted the portrait of Ginevra de Benci in the National Gallery, he put a juniper bush behind her head. In it he carefully painted each individual leaf. Many painters might have thought, this is just something to put in the background to frame her head. No one will look that closely at it. It doesn't sound like a paradox, but it does sound like a mis-use of words. As to the substance of the point, though, it is difficult to predict in advance the impact of particular aspects of a painting. Some painters paint better `fast' and some paint better `slow'. I'm not sure any particular generalization in this case is helpful.
Relentlessness Not Leonardo. How hard he worked on part of a painting didn't depend at all on how closely he expected anyone to look at it. He was like Michael Jordan. Relentless. Some people get off on the adulation of the crowd. Others don't care about it. I fail to find any particular strong correlations between interest in close scrutiny and quality of work.
Detail Becomes Visible Relentlessness wins because, in the aggregate, unseen details become visible. When people walk by the portrait of Ginevra de Benci, their attention is often immediately arrested by it, even before they look at the label and notice that it says Leonardo da Vinci. All those unseen details combine to produce something that's just stunning, like a thousand barely audible voices all singing in tune. I don't buy the argument at all. The relationship between attention to particular kinds of details and relentlessness seems loose at best. I don't doubt that in Leonardo's painting the details combine to produce something wonderful, but then so does the grand sweep of most of his work. And there are surely a lot of relentless painters who pay close attention to detail who also just happen to be lousy and largely forgotten.
Devotion to Beauty Great software, likewise, requires a fanatical devotion to beauty. If you look inside good software, you find that parts no one is ever supposed to see are beautiful too. I'm not claiming I write great software, but I know that when it comes to code I behave in a way that would make me eligible for prescription drugs if I approached everyday life the same way. It drives me crazy to see code that's badly indented, or that uses ugly variable names. Some of the `greats' are fanatics, many are not. Some are disciplined, others aren't. I've known some great software that was produced by programmers who retired regularly at 5pm to practice their piano or play the organ.
Inspiration If a hacker were a mere implementor, turning a spec into code, then he could just work his way through it from one end to the other like someone digging a ditch. But if the hacker is a creator, we have to take inspiration into account. Well ok, but how? And why do we care about this now?
Cycles In hacking, like painting, work comes in cycles. Sometimes you get excited about some new project and you want to work sixteen hours a day on it. Other times nothing seems interesting. Like more than just painting. For this to be interesting to me I'd just as soon have a picture of what aspects of life don't behave this way.
Avoiding the Stall To do good work you have to take these cycles into account, because they're affected by how you react to them. When you're driving a car with a manual transmission on a hill, you have to back off the clutch sometimes to avoid stalling. Backing off can likewise prevent ambition from stalling. In both painting and hacking there are some tasks that are terrifyingly ambitious, and others that are comfortingly routine. It's a good idea to save some easy tasks for moments when you would otherwise stall. Good advice, I suppose. General advice about life, actually, hardly specific to this area. And while I find that this strategy is comfortable for me, I am a little more reticent about advising it to others. But if the shoe fits comfortably then you might as well wear it.
Saving up Bugs In hacking, this can literally mean saving up bugs. I like debugging: it's the one time that hacking is as straightforward as people think it is. You have a totally constrained problem, and all you have to do is solve it. Your program is supposed to do x. Instead it does y. Where does it go wrong? You know you're going to win in the end. It's as relaxing as painting a wall. Structured work drives out unstructured. It's not a measure of importance.
Working Together The example of painting can teach us not only how to manage our own work, but how to work together. A lot of the great art of the past is the work of multiple hands, though there may only be one name on the wall next to it in the museum. Leonardo was an apprentice in the workshop of Verrocchio and painted one of the angels in his Baptism of Christ. This sort of thing was the rule, not the exception. Michelangelo was considered especially dedicated for insisting on painting all the figures on the ceiling of the Sistine Chapel himself. While I have no reason to doubt Graham's recounting of these facts, it does seem to me that `painting' is one of the last things that would occur to me to use for examples of `working together'. I meet with some local artists on a semi-regular basis, and I can't think of any group---perhaps my academic colleagues excepted---that is more fiercely independent. And while they are quite collegial, they show no evidence of much interest in `working together'. While I don't doubt that a huge effort like painting the Sistine ceiling called for many hands, that hardly makes painting a natural model of collaboration.
Collaboration As far as I know, when painters worked together on a painting, they never worked on the same parts. It was common for the master to paint the principal figures and for assistants to paint the others and the background. But you never had one guy painting over the work of another. Possible, I suppose, but I lose the relevance to the issues being discussed here. If the point is that higher-level programmers are more likely to deal with the complex central core of the problems and leave junior programmers to clean up some of the details, then I guess I'd agree. But I'm unclear why it matters here.
And in Software I think this is the right model for collaboration in software too. Don't push it too far. When a piece of code is being hacked by three or four different people, no one of whom really owns it, it will end up being like a common-room. It will tend to feel bleak and abandoned, and accumulate cruft. The right way to collaborate, I think, is to divide projects into sharply defined modules, each with a definite owner, and with interfaces between them that are as carefully designed and, if possible, as articulated as programming languages. Perhaps. But then any strategy, done intelligently, may well work. The problem with the `sharply defined modules' strategy, for example, is that unless someone very good defines the modules, they will turn out to lack sharp definition; this will lead to problems at the interfaces between the modules, and the project may well sink into a sea of despair. Graham may like this way, and it may work well for him, but I would be loath to recommend it as `the right way'.
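For readers who want the `sharply defined modules' proposal made concrete, here is a minimal sketch in Python. The Storage interface, its in-memory implementation, and the archive function are all hypothetical, invented only to illustrate the collaboration model under discussion, not anything proposed by Graham or by this commentary.

```python
# A minimal sketch of "sharply defined modules with owned interfaces":
# one person owns the interface, and separately owned modules code to it.

from abc import ABC, abstractmethod


class Storage(ABC):
    """The agreed-upon interface: its owner designs it; everyone else codes to it."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class MemoryStorage(Storage):
    """One team's module: a trivial in-memory implementation of the interface."""

    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> bytes:
        return self._data[key]


def archive(report: str, store: Storage) -> None:
    """Another team's module: it depends only on the interface, never on the
    other team's internals."""
    store.put("latest-report", report.encode("utf-8"))


if __name__ == "__main__":
    store = MemoryStorage()
    archive("quarterly numbers", store)
    print(store.get("latest-report").decode("utf-8"))
```

Note that the sketch quietly assumes exactly what the commentary questions: that someone competent defined Storage sharply in the first place.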
Empathy Like painting, most software is intended for a human audience. And so hackers, like painters, must have empathy to do really great work. You have to be able to see things from the user's point of view. I'm not at all sure that this is true. I would have thought that most software is intended to run huge transaction systems and to control real-time processes.
What someone else wants When I was a kid I was always being told to look at things from someone else's point of view. What this always meant in practice was to do what someone else wanted, instead of what I wanted. This of course gave empathy a bad name, and I made a point of not cultivating it. Some people are contrarians. Some aren't. And sometimes you learn from people and other times they learn from you. Good people sometimes do what they are told, and other times refuse to do it. We expect soldiers to follow orders, but not if the orders are `illegitimate'.
The Secret of Success Boy, was I wrong. It turns out that looking at things from other people's point of view is practically the secret of success. It doesn't necessarily mean being self-sacrificing. Far from it. Understanding how someone else sees things doesn't imply that you'll act in his interest; in some situations---in war, for example---you want to do exactly the opposite. [4] Some people are good at deriving useful information by `getting under the skin' of their potential users. Others aren't. For them this kind of attempt is a real waste of time. This emphasizes the difficulties with this whole set of pronouncements by Graham. I have no reason to doubt that these opinions represent his feelings and that he is being completely forthcoming in exposing them. However, this does not mean that the particulars of the advice given apply to any other human---and there is no useful case made here that this approach would be useful even for a majority of the potential practitioners.
Human Audiences Most makers make things for a human audience. And to engage an audience you have to understand what they need. Nearly all the greatest paintings are paintings of people, for example, because people are what people are interested in. This seems highly questionable to me. First, many ancient pictures are representations of some form of the divinity. Whether these are `people' in any normal sense is up to your choice of definition. Second, many paintings represent what we might loosely call `allegorical news'. They are sort of news photographs of a pre-photographic world. Third, many great paintings directly represent non-human forms: industrial scenes, landscapes and cityscapes, non-representational art, etc. Finally, there is a question of how this issue relates to other forms of art---not just painting.
When Empathy Counts Empathy is probably the single most important difference between a good hacker and a great one. Some hackers are quite smart, but when it comes to empathy are practically solipsists. It's hard for such people to design great software [5], because they can't see things from the user's point of view. Some good programmers have a good sense for what their potential users want. Other good programmers have no capacity for this, but seem to have an intuitive sense about how their user communities can be led to make use of things that they haven't thought about needing. Some systems lead their users, other systems follow them. Both are potentially successful paradigms, and I confess---at least at this moment---to not having yet developed a sense of which of these strategies is most likely to lead to success in the largest number of cases.
Power of Explanation One way to tell how good people are at empathy is to watch them explain a technical question to someone without a technical background. We probably all know people who, though otherwise smart, are just comically bad at this. If someone asks them at a dinner party what a programming language is, they'll say something like ``Oh, a high-level language is what the compiler uses as input to generate object code.'' High-level language? Compiler? Object code? Someone who doesn't know what a programming language is obviously doesn't know what these things are, either. Finding bad teachers isn't a very interesting sport. It is more difficult and challenging to find good ones. And often, good teachers are not profoundly well versed in the depth of their subjects. Many people who know a lot of stuff have a hard time talking to kids in kindergarten. So there's not much of interest in discovering that some people can't explain things to people of vastly different levels of sophistication.
Self-Explanation Part of what software has to do is explain itself. So to write good software you have to understand how little users understand. They're going to walk up to the software with no preparation, and it had better do what they guess it will, because they're not going to read the manual. The best system I've ever seen in this respect was the original Macintosh, in 1985. It did what software almost never does: it just worked. [6] This discussion is odd. First, it treats `users' as being at a single level of sophistication. Perhaps one of the important reasons that RTFM isn't an adequate practice is a simple recognition of this fact. No manual can likely be appropriate to all of the levels of reader sophistication. Second, while the particular relevance of the Macintosh example here escapes me, I could easily say the same thing about all of the CP/M and DOS systems that I worked on over the years, as well as a large number of machines that predate any such material by a couple of decades.
Source Explanations Source code, too, should explain itself. If I could get people to remember just one quote about programming, it would be the one at the beginning of Structure and Interpretation of Computer Programs. Abelson and Sussman is a good book. But of all the quotes possible, this isn't the one that I would want people to remember. Particularly since it only expresses a half-truth.
Reading Programs Programs should be written for people to read, and only incidentally for machines to execute. Or perhaps both. Knuth would argue, I think, that both people and machines are important. If he wouldn't make that argument, I'd make it anyway. The purpose of Knuth's `Web' language (Knuth's name predates the World-Wide-Web, and has nothing particular to do with it) is to make this explicit and allow us to express communication to other people as well as to machines, as an important part of the programming process. In any case, the statement made here expresses a philosophy, and it is one that I regard as wrong.
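Knuth's point can be illustrated even without his Web machinery. The following is my own small sketch (the function is hypothetical, taken from neither Knuth nor Graham): a doctest-style example speaks to both audiences at once, since the very same text documents the behavior for a human reader and is run by the machine as a check.

    # A sketch of writing for people and machines at once (hypothetical example).
    def running_mean(values):
        """Return the running (cumulative) mean of a sequence of numbers.

        >>> running_mean([2, 4, 6])
        [2.0, 3.0, 4.0]
        """
        means = []
        total = 0.0
        for count, value in enumerate(values, start=1):
            total += value
            means.append(total / count)
        return means


    if __name__ == "__main__":
        import doctest
        doctest.testmod()  # the worked example above doubles as an executable test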
Empathy for Users You need to have empathy not just for your users, but for your readers. It's in your interest, because you'll be one of them. Many a hacker has written a program only to find on returning to it six months later that he has no idea how it works. I know several people who've sworn off Perl after such experiences. [7] Really? And I've known several who said that perl `saved their life' in just such circumstances. I've used perl, among many other languages, for years and have in that time returned to code that is often years old. It doesn't strike me as any worse---or any better---than many of the other languages I regularly use. Perhaps this is just an idiosyncratic characteristic of Graham's relationship with perl. He may well find other languages more comfortable. There's nothing wrong with that, but it doesn't provide any evidence for much generalization either.
Lack of Empathy Lack of empathy is associated with intelligence, to the point that there is even something of a fashion for it in some places. But I don't think there's any correlation. You can do well in math and the natural sciences without having to learn empathy, and people in these fields tend to be smart, so the two qualities have come to be associated. But there are plenty of dumb people who are bad at empathy too. Just listen to the people who call in with questions on talk shows. They ask whatever it is they're asking in such a roundabout way that the hosts often have to rephrase the question for them. I'm lost. If I read this correctly it basically says `here's a theory' and that is quickly followed by `and there's nothing in it'. So I'm not quite sure why this is being discussed, but ...
Working on Something Great So, if hacking works like painting and writing, is it as cool? After all, you only get one life. You might as well spend it working on something great. Yes, I suppose. But only if you're great at it. And few of us are. Telling the `great' that they should devote their lives to expressing their greatness makes at least superficial sense. However, telling it to the not so great is not necessarily much of a service. `Stick to it' is great advice to give Edison, who will eventually succeed with lots of things. But it's lousy advice for the 5'10" kid who wants to become Michael Jordan.
Prestige Unfortunately, the question is hard to answer. There is always a big time lag in prestige. It's like light from a distant star. Painting has prestige now because of great work people did five hundred years ago. At the time, no one thought these paintings were as important as we do today. It would have seemed very odd to people at the time that Federico da Montefeltro, the Duke of Urbino, would one day be known mostly as the guy with the strange nose in a painting by Piero della Francesca. But then, very, very few people know anything else about Montefeltro even though the painting of him has survived. Would the world be any less or any more if the painting of the now equally obscure Duke or Baron just down the road had survived instead of the one of the Duke of Urbino? Of the countless millennia of experience that took place over all of man's history in the LA basin, only the remains of a woman from about 7,000 years ago that were found in the tar pits have so far appeared to survive. Did she have a name? Does it matter much what it was? Prestige is inherently ephemeral, and to my mind not likely to be much worth seeking. But, of course, YMMV.
Painting's Glory Days So while I admit that hacking doesn't seem as cool as painting now, we should remember that painting itself didn't seem as cool in its glory days as it does now. Oh, I don't know. Programming had a brief flurry of `cool' in 1999-2000 when programmers were not only `cool' they were `rich'. Now that fewer of them are rich, they seem to be in general less cool. Is there a lesson here about coolness?
The Glory Days What we can say with some confidence is that these are the glory days of hacking. In most fields the great work is done early on. The paintings made between 1430 and 1500 are still unsurpassed. Shakespeare appeared just as professional theater was being born, and pushed the medium so far that every playwright since has had to live in his shadow. Albrecht Durer did the same thing with engraving, and Jane Austen with the novel. Perhaps it is just a perverse notion about what constitutes `great'. Surely it is no surprise that the change implicit in something happening for the first time is more likely to be noticed than changes that occur where there is more accumulated bulk. As to the specific points, they make me a bit uncomfortable. And to focus on 1430 to 1500 strikes me as awfully Euro-centric. Chinese painting---admittedly quite a different, but no less `valid', style---had flourished well ahead of that time, as had some other European styles.
A New Medium Over and over we see the same pattern. A new medium appears, and people are so excited about it that they explore most of its possibilities in the first couple generations. Hacking seems to be in this phase now. In broad strokes, I guess this might be true. But I'm unclear about just how much time a `generation' is. And, looking at history in broad strokes, it seems to me that we often discover something, lose it, and then come back to it again decades or centuries later.
How `cool' is Hacking? Painting was not, in Leonardo's time, as cool as his work helped make it. How cool hacking turns out to be will depend on what we can do with this new medium. In some ways, the time lag of coolness is an advantage. When you meet someone now who is writing a compiler or hacking a Unix kernel, at least you know they're not just doing it to pick up chicks. How irrelevant is this question? And perhaps it is my age showing, but when I think of `cool' I am more likely to conjure up a picture of a rock star than a classical painter.
Notes Notes for the Paper The brackets refer back to the points in the paper.
Photography [1] The greatest damage that photography has done to painting may be the fact that it killed the best day job. Most of the great painters in history supported themselves by painting portraits.
Microsoft [2] I've been told that Microsoft discourages employees from contributing to open-source projects, even in their spare time. But so many of the best hackers work on open-source projects now that the main effect of this policy may be to ensure that they won't be able to hire any first-rate programmers. I'd like to have a little more useful product from the open-source community before I am willing to go very far out on a limb declaring its success. At the moment, Microsoft doesn't seem to have much trouble hiring first-rate programmers, but perhaps that's just because there are 10-20 programmers available for each opening. We'll see how well this point fares when we start to have `hacker soup kitchens' in the now empty re-decorated office space of SoMa.
Programming in College [3] What you learn about programming in college is much like what you learn about books or clothes or dating: what bad taste you had in high school. Perhaps. But, since the phenomenon is well observed outside the domain of colleges as well, perhaps it is just the passage of time.
Applied Empathy [4] Here's an example of applied empathy. At Viaweb, if we couldn't decide between two alternatives, we'd ask, what would our competitors hate most? At one point a competitor added a feature to their software that was basically useless, but since it was one of few they had that we didn't, they made much of it in the trade press. We could have tried to explain that the feature was useless, but we decided it would annoy our competitor more if we just implemented it ourselves, so we hacked together our own version that afternoon. Lucky the competitors weren't very savvy. In today's judicial climate any such revelation might well be the basis of a lawsuit for copyright infringement that could have cost Viaweb a lot more than they bargained for. And what happened to the competitor anyway? It's reasonably likely that they closed their doors before any software changes could have had any effect. One of the great mistakes often made in systems is to think that features sell the system. But that's a story for another place and time.
Except for ... [5] Except text editors and compilers. Hackers don't need empathy to design these, because they are themselves typical users. The role of empathy in design is argued elsewhere. But in any case editors and compilers are hardly the only pieces of software that programmers regularly use. They interact with windows and forms and all kinds of things which might give them empathy, but often seem to fail to do so. Perhaps some people get it and some don't. It might be just that simple.
Almost [6] Well, almost. They overshot the available RAM somewhat, causing much inconvenient disk swapping, but this could be fixed within a few months by buying an additional disk drive. I'm not sure what this story has to do with the subject of the paper.
Not Stuffed [7] The way to make programs easy to read is not to stuff them with comments. I would take Abelson and Sussman's quote a step further. Programming languages should be designed to express algorithms, and only incidentally to tell computers how to execute them. A good programming language ought to be better for explaining software than English. You should only need comments when there is some kind of kludge you need to warn readers about, just as on a road there are only arrows on parts with unexpectedly sharp curves. Well the prejudice on the issue is pretty clear by the choice of the word `stuff'. Knuth heavily comments his code, but he does it in a way that I find non-distracting. And he also writes (English) very well. So I find his comments rewarding and even occasionally charming. Graham seems to me to have not thought this out well at all, or, alternatively, to have been too strongly guided by the idiosyncratic way that he does things.
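To be fair to Graham, here is my own sketch of what his `arrows only on sharp curves' style presumably means in practice; the scenario (a legacy price feed that reports cents as strings) is hypothetical. The names are meant to carry most of the explanation, and the single comment marks the unexpected curve. Whether this beats Knuth's generous, well-written commentary seems to me more a matter of taste than of doctrine.

    # A sketch of the `comments as warning arrows' style (hypothetical scenario).
    def total_price_cents(line_items):
        """Sum an order's line-item prices, in cents."""
        total = 0
        for item in line_items:
            # Kludge: the legacy feed reports prices as strings such as "1299",
            # occasionally with a stray trailing space, so strip before parsing.
            total += int(item["price_cents"].strip())
        return total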
Thanks Thanks to Trevor Blackwell, Robert Morris, Dan Giffin, and Lisa Randall for reading drafts of this, and to Henry Leitner and Larry Finkelstein for inviting me to speak.
Conclusions This paper is either badly mis-titled or else it is a very poor paper. It spends a lot of time and energy beating on a straw-man---the author's view of what academic computing is trying and failing to do. Since, in my opinion at least, this is not at all what most academic computing is about, the paper fails to deliver on that. It also fails, at a minimum by lack of trying, to deliver on any useful commentary about how design is accomplished. It is all a bit like titling a paper `hackers and painters' and then ranting about why University schools of Engineering fail to turn out decent loaves of bread.

It seems to me that this paper ends up by suggesting that in situations such as the one presented here the world needs at most one `hacker' of Graham-like quality. Two or more would be hard to manage and producing coordinated product might prove to be biting off more than we can really chew.




© Copyright 2003 David Ness.
Last update: 2003-07-14 11:02:44 EDT