Mind / Matter
things that count
Commentary: Paul Graham on `Design and Research'

Paul Graham's `Design and Research' piece, along with some mixed commentary produced by DN. The article is derived from a keynote talk given at the fall 2002 meeting of NEPLS.

Introduction Visitors to this country are often surprised to find that Americans like to begin a conversation by asking "what do you do?" I've never liked this question. I've rarely had a neat answer to it. But I think I have finally solved the problem. Now, when someone asks me what I do, I look them straight in the eye and say "I'm designing a new dialect of Lisp." I recommend this answer to anyone who doesn't like being asked what they do. The conversation will turn immediately to other topics. I guess it's an icebreaker. Maybe you had to be there. But it may also be a harbinger of a problem to come for the paper. The discussion is a little cavalier about whether we are talking only about
  1. programming language design;
  2. program design; or
  3. design in general.
If it is only programming language design, then this is a very narrow field. A field of importance, of course, but not one of very broad interest to most of us, as only a few of us will be called on to design languages. If it is a discussion of design in general, on the other hand, then it is a broad topic of considerable interest, but probably much too broad to be treated in a short discussion such as the one here. This suggests that we should assume that the topic of discussion is program design in general.
Programming Language Design Research I don't consider myself to be doing research on programming languages. I'm just designing one, in the same way that someone might design a building or a chair or a new typeface. I'm not trying to discover anything new. I just want to make a language that will be good to program in. In some ways, this assumption makes life a lot easier. This is maybe a bit sloppy, but not profoundly so. Graham is trying to discover something new, by his own admission: a language that will be good to program in that is related to, but not exactly the same as, LISP. So there is a sense, at least, in which his work is trying to discover something new. At some point the `new' distinction seems to blur, so I don't think it is useful for distinguishing design activity from something else that involves creating something `new'.
New vs. Good The difference between design and research seems to be a question of new versus good. Design doesn't have to be new, but it has to be good. Research doesn't have to be good, but it has to be new. I think these two paths converge at the top: the best design surpasses its predecessors by using new ideas, and the best research solves problems that are not only new, but actually worth solving. So ultimately we're aiming for the same destination, just approaching it from different directions. At best this is an odd phrasing of the distinction between design and research, and I'm missing why he doesn't go for the much more obvious one: research is a paradigm that generally involves an investigation of `the truth'---and thus requires that there is some `truth' which can be discovered. Design, on the other hand, generally presupposes no such `truth', and is more directly concerned with an aesthetic instead. Research, oversimplifying a little, deals with questions of how the world is. Design, on the other hand, deals with how we want some part of the world to be.
Design / Research Differences What I'm going to talk about today is what your target looks like from the back. What do you do differently when you treat programming languages as a design problem instead of a research topic? Hasn't the `classical' treatment of programming languages always been focused on design rather than research? Perhaps there is some current vogue that looks at the problem as (pseudo-)research, but for most of the history of the profession there was generally a focus on language design, not language `research'. As far as I remember, FORTRAN wasn't a design based on any `research', but rather was more focused on satisfying the need to accomplish some particular set of assumed objectives.
Architecture Comparison The biggest difference is that you focus more on the user. Design begins by asking, who is this for and what do they need from it? A good architect, for example, does not begin by creating a design that he then imposes on the users, but by studying the intended users and figuring out what they need. Here there are some arguments that amount to a great deal more than a quibble. While Graham may be describing a `good architect' (I'd still argue he is wrong, but I'll let that go for now), he certainly isn't describing any of the `great' architects, most of whom were noted---or, indeed, notorious---for imposing their designs on their users, often with no study whatsoever of what the `users' may `need'. Most of Wright's architecture may be an example of this: both Fallingwater and the Johnson Wax building spring immediately to mind.
Short Order Cook Notice I said "what they need," not "what they want." I don't mean to give the impression that working as a designer means working as a sort of short-order cook, making whatever the client tells you to. This varies from field to field in the arts, but I don't think there is any field in which the best work is done by the people who just make exactly what the customers tell them to. Undoubtedly this is a point worth making. Only very few people have thought enough about their `needs' to be able to articulate them with any clarity whatsoever. And `wants' are even further removed. What I `want' often changes between the time I enter the door of a restaurant and the time the waiter asks for my order. It has often changed again by the time the dinner actually shows up, and occasionally even once more later on (`Oh, I shouldn't have ...'). And unless we are particularly schooled and practiced in the art of expressing ourselves, it is generally very difficult to describe what we really want in any clear way.
As to whether there is any field where the `best work' is done by those who make exactly what the customer wants, we are probably trapped in some philosophical limbo about the nature of `best work'.
Customer is Right The customer is always right in the sense that the measure of good design is how well it works for the user. If you make a novel that bores everyone, or a chair that's horribly uncomfortable to sit in, then you've done a bad job, period. It's no defense to say that the novel or the chair is designed according to the most advanced theoretical principles. This point is fundamentally flawed, and I am really surprised that someone as smart as Graham doesn't see the trap. The statement is based on a `static' model of a user, as though there is a well-defined notion of how well something works for a user. This completely misses the onion-skin nature of lots of our life. Things which work well for us at first glance often soon bore and pale. Other things we don't like at first grow on us. And a stove which allows you to cook delicious food may be a very good design on one level, but if it entices you to cook food that kills you, then it proves to be very bad design (for you as a user, at least) on a deeper level. While I agree that we might reasonably spot some designs that virtually everyone would agree are `bad', the question of what constitutes `good' design is much more difficult to cope with.
Not All Knowing And yet, making what works for the user doesn't mean simply making what the user tells you to. Users don't know what all the choices are, and are often mistaken about what they really want. True enough in most circumstances, but even this requires a caveat. Some users you listen to, some you don't. Knowing which is which is the real art. Another way of saying this: don't second-guess Picasso. If you're a brushmaker, you probably ought to do what he suggests unless you are as good at making brushes as he was at delivering art.
Needs not Wants The answer to the paradox, I think, is that you have to design for the user, but you have to design what the user needs, not simply what he says he wants. It's much like being a doctor. You can't just treat a patient's symptoms. When a patient tells you his symptoms, you have to figure out what's actually wrong with him, and treat that. Graham is, IMO, too `hooked' on the user. This is one theory of design: user-centric design. But it isn't the only possibility, and for problems as complex as the ones we are considering here we really should at least pass over some other possibilities. For example, Saarinen has a well-expressed design philosophy that says that design should flow from an understanding of the `next wider context'. This pretty much precludes the notion that one can focus on `a user'. Indeed, there is---or should be---considerable doubt that any software is really designed for `a user'. Most software helps us deal with the problems of some `community' and it may differentially impact various members within that community.

As a `gedanken experiment' think about designing a human language for use by a community. It is certain that no particular design is the `best for everyone'. Any realization of a design will advantage some and will disadvantage others.

Focus on User This focus on the user is a kind of axiom from which most of the practice of good design can be derived, and around which most design issues center. Of course, given much of what is said above, this is an `axiom' that I find quite dubious. And I demur, at least for now, on the issue of whether `most design issues' center on the user. I think that may be more misleading than helpful.
Who is the User? If good design must do what the user needs, who is the user? When I say that design must be for users, I don't mean to imply that good design aims at some kind of lowest common denominator. You can pick any group of users you want. If you're designing a tool, for example, you can design it for anyone from beginners to experts, and what's good design for one group might be bad for another. The point is, you have to pick some group of users. I don't think you can even talk about good or bad design except with reference to some intended user. It seems to me that this is a jumbled point. For example, it doesn't seem to properly consider the prospect that someone might design without much of a particular user community in mind, hoping that the user community will coalesce itself around the availability of what has been designed. Many successful pieces of software have developed communities that were far from those that were in the initial world-view of the designers at the time of their creation. Graham's own interest in LISP might serve as a reasonable example. When LISP was being designed at MIT in the late 50's, I doubt if there was much thought that it would ever end up being used for many of the tasks that Graham, and others, now suggest it is particularly adept at.
NMKOPD You're most likely to get good design if the intended users include the designer himself. When you design something for a group that doesn't include you, it tends to be for people you consider to be less sophisticated than you, not more sophisticated. This is an interesting point, and it will require some consideration. Certainly my early experience in the computer business would suggest that the notion is patently untrue, but that may just have been the special circumstances of early development.
Looking Down That's a problem, because looking down on the user, however benevolently, seems inevitably to corrupt the designer. I suspect that very few housing projects in the US were designed by architects who expected to live in them. You can see the same thing in programming languages. C, Lisp, and Smalltalk were created for their own designers to use. Cobol, Ada, and Java were created for other people to use. I'm not sure about this point either. The accounting seems a little questionable to me. There are too many junk languages built by people for their own use to be mentioned. So `having the problem' is no guarantee that you will be able to deal with it particularly effectively. And there are lots of languages (perhaps most of them are `application' languages, but that's material for another discussion) that have been created for other people to use that have proven to have some dramatic staying power. (Did Gates really `have' the `windows problem'? What about Larry Ellison?) Who would Graham have design housing projects? Would he really expect them to be better if they were designed by the guy who happens to live on the 11th floor?
For Idiots If you think you're designing something for idiots, the odds are that you're not designing something good, even for idiots. If the point is that working for idiots is unlikely to be productive, then I guess I buy it. If he really intends to suggest something else then it's more debatable.
Ergonomics in Design? Even if you're designing something for the most sophisticated users, though, you're still designing for humans. It's different in research. In math you don't choose abstractions because they're easy for humans to understand; you choose whichever make the proof shorter. I think this is true for the sciences generally. Scientific ideas are not meant to be ergonomic. This is wrong, at least given my education. First, the abstractions chosen in mathematics often depend on the purpose of the abstraction. Sometimes it is simply the notion of `abstraction' itself that is being illustrated. Other times you choose a particular abstraction because it is particularly `computable'. Still other times you may be trying to give the student a `model' and in such circumstances how easy the abstraction is to understand is of critical importance. `Shortness of proof' as an objective, if it exists at all, is most likely to be of importance in narrow `academic' publication.

Second, how broadly this applies to science is something that needs to be considered further. I am puzzled by the `Scientific ideas are not meant to be ergonomic' though. I would have thought that in general most of the ideas in science are quite ergonomic. But I need to think about it more.

Comparing to The Arts Over in the arts, things are very different. Design is all about people. The human body is a strange thing, but when you're designing a chair, that's what you're designing for, and there's no way around it. All the arts have to pander to the interests and limitations of humans. In painting, for example, all other things being equal a painting with people in it will be more interesting than one without. It is not merely an accident of history that the great paintings of the Renaissance are all full of people. If they hadn't been, painting as a medium wouldn't have the prestige that it does. This is plain wrong. Chairs do a lot more than just seat people. Indeed, for most of their `lives', most chairs are empty. They are only very occasionally occupied by someone simply sitting in them. They usually fill an important part of our visual space, in the rooms that we occupy, but often they are empty. It is easy to imagine a `good' chair designer understanding this point, and thinking about how chairs `look' as much as, or more than, how they `feel'.

This bad point is made worse by the rather extraordinary observation that `a painting with people in it will be more interesting than one without'. This is completely untrue for at least one person (me). And the notion that it is some sort of a `truth' is artistically extremely naive, particularly for someone who demonstrates such sophistication on other issues. And a large number of the `great paintings of the Renaissance' are pastorals, landscapes, seascapes, etc., where the `people', if any, are treated largely as ornamental objects in much the same way as the other elements that adorn the canvas. I have no idea where this point was supposed to go, but since it is completely wrong it is not a surprise that it actually doesn't manage to go anyplace.

Programming for People Like it or not, programming languages are also for people, and I suspect the human brain is just as lumpy and idiosyncratic as the human body. Some ideas are easy for people to grasp and some aren't. For example, we seem to have a very limited capacity for dealing with detail. It's this fact that makes programming languages a good idea in the first place; if we could handle the detail, we could just program in machine language. And we did for the better part of 20 years. And I find little indication that there has been any particular spurt in intellectual growth associated with the fact that we now have largely stopped doing so. While I find LISP an interesting and very useful language, another very favorite language, where I do a lot of work, is `K', and this---in many ways---reminds me more of my machine languages of the 1950s and 1960s than of later developments. In any case this begs the issue of why we care about our ability to handle detail. There are some kinds of `detail' that humans seem to be extraordinarily good at processing---for example, some places where human visual acuity still seems to be very hard to outperform. It has proven to be extraordinarily difficult to get a computer program to `understand' some of the simple kinds of `detail' that are quite immediately obvious to the human eye.
Form and Medium Remember, too, that languages are not primarily a form for finished programs, but something that programs have to be developed in. Anyone in the arts could tell you that you might want different mediums for the two situations. Marble, for example, is a nice, durable medium for finished ideas, but a hopelessly inflexible one for developing new ideas. I like this notion. Perhaps it is the reason that I find that I often write my initial sketch of a program in one language and then, after my ideas have cleared up, proceed to write the later versions in a completely different language.
The Path to the Program A program, like a proof, is a pruned version of a tree that in the past has had false starts branching off all over it. So the test of a language is not simply how clean the finished program looks in it, but how clean the path to the finished program was. A design choice that gives you elegant finished programs may not give you an elegant design process. For example, I've written a few macro-defining macros full of nested backquotes that look now like little gems, but writing them took hours of the ugliest trial and error, and frankly, I'm still not entirely sure they're correct. And the point is? First, this is one view of a program. It is an interesting view, but by no means the `only' one that might prove to be productive. Instead of Graham's approach, I see all of the possible programs which might solve a particular problem as a nebulous mass, and for me the programming problem is navigating my way to some particular realization, recognizing that all of the realizations have some side-effects which may ultimately prove to be of real importance in the problem domain under current consideration.
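For readers who haven't met one, a macro-defining macro of the kind Graham mentions looks roughly like the following. This is a minimal sketch in Common Lisp, in the spirit of the `abbrev' utility from his book On Lisp; it is an illustration of the nested-backquote idiom, not the particular gems he refers to.

    ;; ABBREV defines a new macro SHORT that expands into a call to LONG.
    ;; The nested backquote, and especially the ,', idiom, is exactly the
    ;; sort of thing that looks like a little gem when finished but takes
    ;; ugly trial and error to arrive at.
    (defmacro abbrev (short long)
      `(defmacro ,short (&rest args)
         `(,',long ,@args)))

    ;; Example use: DBIND becomes an abbreviation for DESTRUCTURING-BIND.
    (abbrev dbind destructuring-bind)

The finished form is tidy; the path to it, as Graham says, usually isn't.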

In addition, a lot about the program / path problem depends on the amount of time that is going to be spent in the process of developing the program vs the amount of time and energy that will be devoted, over its lifetime, to the process of using it. We may be willing to afford an expensive search of the paths if the code is going to have a long life.

Judging the Look We often act as if the test of a language were how good finished programs look in it. It seems so convincing when you see the same program written in two languages, and one version is much shorter. When you approach the problem from the direction of the arts, you're less likely to depend on this sort of test. You don't want to end up with a programming language like marble. Unless you want it to last for 1,000 years, that is. Most programmers have a terribly short horizon. The whole field has only been around for a bit over half a century. In the early days of programming virtually every program was rewritten for each new cycle of machine, so programs had lives that were measured in (very) small numbers of years. The look of a program, for me at least, is like the look of food. Good looking food won't necessarily taste good, and one certainly can't eat (or smell or feel) the look, but my experience with restaurants indicates that good food is generally pretty good looking. A chef who is in control of his medium is likely to be able to produce something that is pleasing to both the palate and the eye.
Interactive Top Level For example, it is a huge win in developing software to have an interactive toplevel, what in Lisp is called a read-eval-print loop. And when you have one this has real effects on the design of the language. It would not work well for a language where you have to declare variables before using them, for example. When you're just typing expressions into the toplevel, you want to be able to set x to some value and then start doing things to x. You don't want to have to declare the type of x first. You may dispute either of the premises, but if a language has to have a toplevel to be convenient, and mandatory type declarations are incompatible with a toplevel, then no language that makes type declarations mandatory could be convenient to program in. While I am with Graham on this point, I do rather think he is making a great deal too much out of what is in reality very little. I happen to like interactive top-level debugging too, and find it convenient in a broad class of problems, in my case those which relate to APL, J, and K. But, although available, it isn't of much use to me the way I happen to use Perl. So it is language, not problem, sensitive. And both kinds of solutions are tolerable.

Also, while I am no particular fan of pre-declaration, it isn't much of a burden. Whether you precede a first use by a declaration, or instead are asked, upon first use, to make such a declaration isn't much of a matter for profound concern.
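To make the toplevel point concrete, a hypothetical session might look like this (a sketch in Common Lisp rather than Graham's own dialect): you bind x and immediately start doing things to it, with no declaration step in sight.

    * (setf x (list 1 2 3))   ; bind x at the toplevel; no prior declaration
    (1 2 3)
    * (mapcar #'1+ x)         ; immediately start doing things to x
    (2 3 4)

In a language with mandatory type declarations, even this much exploration would require committing to a type for x before the first expression could be evaluated.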

How to Write a Novel In practice, to get good design you have to get close, and stay close, to your users. You have to calibrate your ideas on actual users constantly, especially in the beginning. One of the reasons Jane Austen's novels are so good is that she read them out loud to her family. That's why she never sinks into self-indulgently arty descriptions of landscapes, or pretentious philosophizing. (The philosophy's there, but it's woven into the story instead of being pasted onto it like a label.) If you open an average "literary" novel and imagine reading it out loud to your friends as something you'd written, you'll feel all too keenly what an imposition that kind of thing is upon the reader. Not so fast. While some `good design' continues to evolve, some of the best of it doesn't. Problems were so well anticipated by the design that evolution proved to be unnecessary. Some good novels `sound' good. Other good novels don't. It certainly isn't a necessary characteristic of quality, but rather seems to be related in some complex way to the kind of writing the author happens to choose for expression.
Worse is Better In the software world, this idea is known as Worse is Better. Actually, there are several ideas mixed together in the concept of Worse is Better, which is why people are still arguing about whether worse is actually better or not. But one of the main ideas in that mix is that if you're building something new, you should get a prototype in front of users as soon as possible. Not exactly a model of explanation. I still don't have much of an idea what `Worse is Better' is all about. However I do understand the notion of getting prototypes in front of users as soon as possible. We have been doing that with systems since the middle 1960s.
The `Hail Mary' The alternative approach might be called the Hail Mary strategy. Instead of getting a prototype out quickly and gradually refining it, you try to create the complete, finished, product in one long touchdown pass. As far as I know, this is a recipe for disaster. Countless startups destroyed themselves this way during the Internet bubble. I've never heard of a case where it worked. I'm not sure that I'd place `bad design' at the core of the problems of most of the `countless startups' to which the author refers. Whether `Hail Mary' passes are more successful in systems than they are in football (where, it would seem, they are also only rarely effective) remains to be seen.
Gradual Refinement What people outside the software world may not realize is that Worse is Better is found throughout the arts. In drawing, for example, the idea was discovered during the Renaissance. Now almost every drawing teacher will tell you that the right way to get an accurate drawing is not to work your way slowly around the contour of an object, because errors will accumulate and you'll find at the end that the lines don't meet. Instead you should draw a few quick lines in roughly the right place, and then gradually refine this initial sketch. An interesting point. And, since I have no detectable drawing skill, not something I knew before. Whether the kind of refinement in systems contexts parallels the kind of refinement possible in drawing sketches remains to be seen.
Prototypes In most fields, prototypes have traditionally been made out of different materials. Typefaces to be cut in metal were initially designed with a brush on paper. Statues to be cast in bronze were modelled in wax. Patterns to be embroidered on tapestries were drawn on paper with ink wash. Buildings to be constructed from stone were tested on a smaller scale in wood. Prototypes have been a key element in many of the design practices that we have used, and documented at both MIT and Wharton in the 1960s and 1970s. Back in those times we talked a lot about `middle-out' design (as opposed to both `top-down' and `bottom-up' design), and prototypes were a very important part of the processes we advocated. Prototypes were effective for many reasons of which the malleability discussed here is only one. However, a more thorough discussion of these points is probably appropriate in a separate context.
Oil Paint as a Medium What made oil paint so exciting, when it first became popular in the fifteenth century, was that you could actually make the finished work from the prototype. You could make a preliminary drawing if you wanted to, but you weren't held to it; you could work out all the details, and even make major changes, as you finished the painting. This is a bit of history that I didn't know.
Programming Refinement You can do this in software too. A prototype doesn't have to be just a model; you can refine it into the finished product. I think you should always do this when you can. It lets you take advantage of new insights you have along the way. But perhaps even more important, it's good for morale. Again a `perhaps' is in order. It probably should be noted that it seems to me to run directly contrary to an alternative design theory that says that a good thing to do with an early design is to learn from it and then throw it away and start again with a `clean sheet' of paper to avoid many of the psychological problems that result from such things as problem `set'.
Morale's Effect Morale is key in design. I'm surprised people don't talk more about it. One of my first drawing teachers told me: if you're bored when you're drawing something, the drawing will look boring. For example, suppose you have to draw a building, and you decide to draw each brick individually. You can do this if you want, but if you get bored halfway through and start making the bricks mechanically instead of observing each one, the drawing will look worse than if you had merely suggested the bricks. Morale, viewed from one side, may appear to be attractive. But if we are touting it as an important element in the success of design projects we should at least deal with the problems caused by considering the role of morale in the `countless' design projects so many of which led to the failure of the dot com `revolution'. If any of the press is to be believed, the one thing that many of these sad companies had going for them was that they had tremendously high `morale'. So if `morale' is important there must be `good morale' (which leads to good designs) and `bad morale' (the dot com kind). The problem is that many of the people who actually have bad morale think that they have good morale. And it doesn't really seem worthwhile to spend a lot of time and energy figuring out which we have in some particular situation. So I'm afraid that considering it doesn't really add much.

The Bad News Bears had better morale than the Yankees on a bad day, but all in all, I'd go with the Yankees any time.

Always have Working Code Building something by gradually refining a prototype is good for morale because it keeps you engaged. In software, my rule is: always have working code. If you're writing something that you'll be able to test in an hour, then you have the prospect of an immediate reward to motivate you. The same is true in the arts, and particularly in oil painting. Most painters start with a blurry sketch and gradually refine it. If you work this way, then in principle you never have to end the day with something that actually looks unfinished. Indeed, there is even a saying among painters: "A painting is never finished, you just stop working on it." This idea will be familiar to anyone who has worked on software. This is all fairly confused. There are two separate issues which get blurred together. One issue is the `planning' of the overall program / painting / whatever. The other is the activity you engage in after the object in question actually exists. The notion of a `plan' or `design' seems, at least superficially, to have much more to do with a classical painting than with some of the more modern forms of expression. What this suggests to me is that Graham's picture of the relationship between systems and art may well be dominated by a classical model. There's nothing in principle wrong with that, of course, but it is a rather more limited perspective than it might appear to be at the outset.
Naive Users Morale is another reason that it's hard to design something for an unsophisticated user. It's hard to stay interested in something you don't like yourself. To make something good, you have to be thinking, "wow, this is really great," not "what a piece of shit; those fools will love it." But that isn't necessarily so if you're not designing for yourself. Even the tastes of a relatively unsophisticated user can be interesting to someone who is learning something from them.
Both User and Designer Humans Design means making things for humans. But it's not just the user who's human. The designer is human too. I'd cheerfully concede that this particular point is true. But the consequences of doing so are not anywhere so clear to me.
Single vs Group Notice all this time I've been talking about "the designer." Design usually has to be under the control of a single person to be any good. And yet it seems to be possible for several people to collaborate on a research project. This seems to me one of the most interesting differences between research and design. I'm not so sure that the argument from common language use really works. While `The Designer' works for me, so does `The Design Team'. And I guess that I'd disagree with the assertion that `design usually has to be under the control of a single person to be any good'. Some of the efforts of single designers are appalling while others are wonderful. The same can be said of design teams. I haven't really seen any data that convinces me there's much of a difference.
Control in Design There have been famous instances of collaboration in the arts, but most of them seem to have been cases of molecular bonding rather than nuclear fusion. In an opera it's common for one person to write the libretto and another to write the music. In painting too, different parts of a painting might be made by different specialists. During the Renaissance, journeymen from northern Europe were often employed to do the landscapes in the backgrounds of Italian paintings. But these aren't true collaborations. They're more like examples of Robert Frost's "good fences make good neighbors." You can stick instances of good design together, but within each individual project, one person has to be in control. This is an argument by assertion that I don't buy as a necessary consequence. While I think that it is most commonly true that design proceeds best with very small crews (often one), I would be more comfortable with this if it were supported by research rather than by pure assertion.
Decisions in Design I'm not saying that good design requires that one person think of everything. There's nothing more valuable than the criticism of someone whose judgement you trust. But after the talking is done, the decision about what to do has to rest with one person. Yes, but in a good design team the team may well have a `sense' of who should be followed when such conflicts arise. I have participated in such groups many times, and lots of these groups were remarkably effective.
Working Alone Why is it that research can be done by collaborators and design can't? This is an interesting question. I don't know the answer. Perhaps, if design and research converge, the best research is also good design, and in fact can't be done by collaborators. A lot of the most famous scientists seem to have worked alone. But I don't know enough to say whether there is a pattern here. It could be simply that many famous scientists worked when collaboration was less common. I'm afraid I don't find it a very interesting question. First, we have to be clear that we are talking about research and not research design. Research design is probably just another design problem, but I haven't had time to think this through clearly yet. Research itself isn't the kind of thing that is very sensitive to the number of people who undertake it. Many research tasks are quite naturally divisible into a number of separate, often parallelizable tasks. For example, data can often be collected either serially or in parallel depending on how quickly we need it and on what human resources happen to be available.
Collaboration Whatever the story is in the sciences, true collaboration seems to be vanishingly rare in the arts. Design by committee is a synonym for bad design. Why is that so? Is there some way to beat this limitation? Perhaps collaboration can properly be described as rare in the fine arts, but it isn't so rare in the `intellectual arts', where collaborations are quite common (think Russell and Whitehead, Von Neumann and Morgenstern as examples). Perhaps Graham's own example of the composer / librettist deserves more careful consideration. Musical composition and the phrasing of words are quite different skills, and it is no surprise to find them in different people.
Dictatorship I'm inclined to think there isn't---that good design requires a dictator. One reason is that good design has to be all of a piece. Design is not just for humans, but for individual humans. If a design represents an idea that fits in one person's head, then the idea will fit in the user's head too. While I probably pretty much agree with this, I am nowhere near as sure as Graham is. If my experience is shared by more than just a few, then there are at least some situations where small numbers of designers (most often two, but I could conceive that it might be more) bring enough complementary intelligence into a problem that they can complement one another in design tasks. Perhaps Graham has an experience dominated by the scale of problem where one designer is appropriate.
Postlogue Related: Taste for Makers It might be worth writing a similar parallel-ogue for Graham's `Taste' Paper.



© Copyright 2003 David Ness.
Last update: 2003-03-09 21:10:13 EST