Mind / Matter
things that count

Commentary: Paul Graham on `Succinctness is Power'

This note is a commentary on one of Paul Graham's papers that is available at his site. It presents another `parallelogue' where I comment on the piece pretty much paragraph by paragraph.

Title / Graham / David Ness
Opening "The quantity of meaning compressed into a small space by algebraic signs, is another circumstance that facilitates the reasonings we are accustomed to carry on by their aid." - Charles Babbage, quoted in Iverson's Turing Award Lecture This raises the issue of `compression' and effectiveness, but it doesn't---by any means---dispose of that issue. One really has to be quite careful in drawing conclusions about such relationships, particularly when they are raised not out of any indicated research, but rather just because `we know that they're true'.
Remark I rather agree with this paper more than I do with Graham's other design papers. However, even on this particular issue there are still some problems that need to be considered. I share with Graham a real working relationship with lots of terse languages. However, I have also watched a large number of students and professionals perfectly capable of doing high quality work who seemed to be remarkably incapable of using many of these languages to do anything useful. So I am somewhat more tempted to qualify my opinions than Graham seems to be willing to do, at least in the presentation he makes here.
What's the Problem? In the discussion about issues raised by Revenge of the Nerds on the LL1 mailing list, Paul Prescod wrote something that stuck in my mind.

Python's goal is regularity and readability, not succinctness.

On the face of it, this seems a rather damning thing to claim about a programming language. As far as I can tell, succinctness = power. If so, then substituting, we get Python's goal is regularity and readability, not power. and this doesn't seem a tradeoff (if it is a tradeoff) that you'd want to make. It's not far from saying that Python's goal is not to be effective as a programming language.

One proposition, not considered in this paper or in much of the literature of other efforts like it, is that it just might be the case that no universal rule applies, and that while succinctness might be powerful for some, it might be quite counter-effective for others, who happen to think some different way.
Succinctness and Power: Does succinctness = power? This seems to me an important question, maybe the most important question for anyone interested in language design, and one that it would be useful to confront directly. I don't feel sure yet that the answer is a simple yes, but it seems a good hypothesis to begin with. For some, certainly. For all, probably not. Just how many---what proportion of potential users---remains to be seen and discussed.
Hypothesis My hypothesis is that succinctness is power, or is close enough that except in pathological examples you can treat them as identical. I think this is wrong, but that it is likely to be an `instructive' error anyway.
Source Code: It seems to me that succinctness is what programming languages are for. Computers would be just as happy to be told what to do directly in machine language. I think that the main reason we take the trouble to develop high-level languages is to get leverage, so that we can say (and more importantly, think) in 10 lines of a high-level language what would require 1000 lines of machine language. In other words, the main point of high-level languages is to make source code smaller. For me this is too narrow a view of programming languages. Succinctness is helpful for many different aspects of the practice of programming, but for me it is only rarely the goal. One could easily imagine a very succinct language that was perfectly useless for most aspects of the programming task. Think, as an absurd example, of a language where the `code' was the page number of a program that performed the task in some comprehensive text. A whole `program' might consist of just a string of numbers, but this would not likely be very useful.
Size as Measure: If smaller source code is the purpose of high-level languages, and the power of something is how well it achieves its purpose, then the measure of the power of a programming language is how small it makes your programs. A very terse program to do a job might be DOJOB#123123543. It is certainly terse, but we have replaced `the guts' of the task to be performed by simply pushing off the problem of saying what really needs to be done elsewhere in the documentation, wherever we specify just exactly what job #123123543 is, and how it is to be accomplished. Smaller source code is surely not our (sole) purpose.
A Bad Job: Conversely, a language that doesn't make your programs small is doing a bad job of what programming languages are supposed to do, like a knife that doesn't cut well, or printing that is illegible. This strikes me as a really bad analogy. None of the characteristics of dull knives or smeared printing seem to be relevant to the case at hand. And unless they are relevant to the case at hand, they are an unnecessary distraction.
Metrics Small in what sense though? The most common measure of code size is lines of code. But I think that this metric is the most common because it is the easiest to measure. I don't think anyone really believes it is the true test of the length of a program. Different languages have different conventions for how much you should put on a line; in C a lot of lines have nothing on them but a delimiter or two. Arguing about issues at this level is like discussing the `size' of the deck chairs on the Titanic. With the ship `going down', issues about whether the chairs were `extra-wide' or merely `wide' seem likely to be a distraction.
Physical Size: Another easy test is the number of characters in a program, but this is not very good either; some languages (Perl, for example) just use shorter identifiers than others. This point might, I suppose, be relevant, but doesn't it matter whether the fact that identifiers are `shorter' somehow relates to effectiveness? There seems to be an implicit assumption, but as far as I can see it is quite post hoc. I'm not sure that the point is really true anyway (shorter relative to what?) but that is not worth arguing if the whole point doesn't wash anyway.
Number of Elements: I think a better measure of the size of a program would be the number of elements, where an element is anything that would be a distinct node if you drew a tree representing the source code. The name of a variable or function is an element; an integer or a floating-point number is an element; a segment of literal text is an element; an element of a pattern, or a format directive, is an element; a new block is an element. There are borderline cases (is -5 two elements or one?) but I think most of them are the same for every language, so they don't affect comparisons much. OK, you can have your elements. Since we don't have any evidence so far (and at least in a cursory look-ahead I don't see any coming up soon either), arguing about this would truly seem to be pointless.
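Graham leaves `element' loosely defined. As a rough sketch of what such a metric might look like in practice, one could count parse-tree nodes; the following minimal Python approximation uses the standard ast module. The node-counting rule here is my own assumption for illustration, not Graham's definition:

```python
import ast

def count_elements(source: str) -> int:
    """Rough proxy for Graham's 'number of elements' metric:
    count every node in the parse tree of a piece of source code."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

# A dense one-liner versus a spelled-out loop doing the same job.
dense = "print(sum(x * x for x in range(10)))"
spelled_out = """
total = 0
for x in range(10):
    total = total + x * x
print(total)
"""

print(count_elements(dense), count_elements(spelled_out))
```

The dense version comes out smaller by this count even though both are one "statement" of work; that is the kind of comparison Graham's metric is meant to support.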
Work Proportionate to Size: This metric needs fleshing out, and it could require interpretation in the case of specific languages, but I think it tries to measure the right thing, which is the number of parts a program has. I think the tree you'd draw in this exercise is what you have to make in your head in order to conceive of the program, and so its size is proportionate to the amount of work you have to do to write or read it. Measures of `work' relative to active (writing) or passive (reading) use are a dangerous matter. People meet the problems of active and passive use with differential impact anyway, so the same problem may be more `work' for some than it is for others. It's a bad problem formulation that mixes these.
Design This kind of metric would allow us to compare different languages, but that is not, at least for me, its main value. The main value of the succinctness test is as a guide in designing languages. The most useful comparison between languages is between two potential variants of the same language. What can I do in the language to make programs shorter? This is a particular problem formulation, but I can find nothing intrinsically right about it. I can understand it as a formulation that would appeal to some, but it seems incorrect, to me, to suggest that it would appeal to all, or even most. `Brevity' may be the soul of wit, but not all funny jokes are short, nor are shorter jokes always better.
Design of Language: If the conceptual load of a program is proportionate to its complexity, and a given programmer can tolerate a fixed conceptual load, then this is the same as asking, what can I do to enable programmers to get the most done? And that seems to me identical to asking, how can I design a good language?

(Incidentally, nothing makes it more patently obvious that the old chestnut "all languages are equivalent" is false than designing languages. When you are designing a new language, you're constantly comparing two languages---the language if I did x, and if I didn't---to decide which is better. If this were really a meaningless question, you might as well flip a coin.)

`Conceptual load' of a program? What is that? It pops up here pretty suddenly and without enough explanation for me to have any clue about what it is meant to refer to. And this is all a pretty odd discussion. Surely no one ever says `all languages are equivalent' other than in reference to some Turing/computational issue. Human languages are all meant to allow us to talk about human experience, but Navajo is probably not `equivalent' to Eskimo. And the points about language design are of only a pretty narrow interest.
Finding New Ideas: Aiming for succinctness seems a good way to find new ideas. If you can do something that makes many different programs shorter, it is probably not a coincidence: you have probably discovered a useful new abstraction. You might even be able to write a program to help by searching source code for repeated patterns. Among other languages, those with a reputation for succinctness would be the ones to look to for new ideas: Forth, Joy, Icon. I'm not sure I'd put Forth in this list, but the others surely do belong. And there is much to learn from studying them. However, there's lots to learn from studying many of the languages that are nowhere near as succinct (in Graham's sense). As to whether this is universally a good way to `find new ideas', it seems to me that this is a vague assumption at best. Some people will undoubtedly find it so. But I would not be surprised to find lots of people who might say things like `a good way to find new ideas' is to listen to a Bach Cantata. Or a Mozart aria. How to find new ideas has been the subject of considerable study for a good while now, and I think any of these methods is likely to be about as good as any other. If someone testifies that a particular method `works for them' I have no reason to doubt it. But I also have equally little reason to believe that their testimony would be more or less likely to work for others, any more than a particular priest or pastor's way of finding God is likely to help some particular heathen.
Comparison The first person to write about these issues, as far as I know, was Fred Brooks in the Mythical Man Month. He wrote that programmers seemed to generate about the same amount of code per day regardless of the language. When I first read this in my early twenties, it was a big surprise to me and seemed to have huge implications. It meant that (a) the only way to get software written faster was to use a more succinct language, and (b) someone who took the trouble to do this could leave competitors who didn't in the dust. I guess I'll have to go back and reread Brooks. I only vaguely remember reading his book when it came out, and all-in-all while it has had some durability, it doesn't seem to me that it has actually had much impact on the practice of our profession, for good or for ill. If it had huge implications, they seem to have been largely neglected.

In any case, leaping to strong conclusions on the basis of this work seems to be a dangerous proposition at best. For example, how fast software gets written may depend more on the `settling time' that it takes for ideas and approaches to coalesce in our heads than it does on the language that we happen to use to record such thoughts. It's not a well-researched point. And the interaction between personal problem solving style and programming language may be subject to such strong forces that it is easy to mis-draw the lines of cause and effect. If it were the case that tall people were, by measure, healthier, it still does not follow that being stretched on a rack to add an inch to our stature will necessarily have a salubrious impact on our health.

Brooks' Hypothesis: Brooks' hypothesis, if it's true, seems to be at the very heart of hacking. In the years since, I've paid close attention to any evidence I could get on the question, from formal studies to anecdotes about individual projects. I have seen nothing to contradict him. I'm afraid that I don't immediately see a relationship between Brooks' ideas and hacking. I've worked with lots of hackers since TX-0 days at MIT in the 1950s, and I don't particularly associate their expressive style with any particular terseness.
No Conclusive Evidence: I have not yet seen evidence that seemed to me conclusive, and I don't expect to. Studies like Lutz Prechelt's comparison of programming languages, while generating the kind of results I expected, tend to use problems that are too short to be meaningful tests. A better test of a language is what happens in programs that take a month to write. And the only real test, if you believe as I do that the main purpose of a language is to be good to think in (rather than just to tell a computer what to do once you've thought of it) is what new things you can write in it. So any language comparison where you have to meet a predefined spec is testing slightly the wrong thing. And, I'd think, any comparison ought to also study impact across a wide variety of different people with different problem solving styles. Otherwise you run the risk of missing the important interactions between language, person and problem, and it is this interaction that is at the real heart of the matter.
Discover New Problems: The true test of a language is how well you can discover and solve new problems, not how well you can use it to solve a problem someone else has already formulated. These two are quite different criteria. In art, mediums like embroidery and mosaic work well if you know beforehand what you want to make, but are absolutely lousy if you don't. When you want to discover the image as you make it---as you have to do with anything as complex as an image of a person, for example---you need to use a more fluid medium like pencil or ink wash or oil paint. And indeed, the way tapestries and mosaics are made in practice is to make a painting first, then copy it. (The word "cartoon" was originally used to describe a painting intended for this purpose). I think this is wrong, perhaps profoundly wrong. In order to make a real judgement about this we would really have to have some data about how much energy is expended in the `solution of new problems' stage as opposed to the `solve a problem someone else has formulated' stage. Without this evidence, Graham's point does not seem to be well based. The analogy with art seems stretched---most `problems' in art are of rather a different kind than the problems that we face when we program. And we would need to understand that difference before we can accept any analogy between the two. Analogies that don't apply are just a cover for sloppy thinking---and unless we understand that relationship this may be just what is going on here.
Accurate Comparisons?: What this means is that we are never likely to have accurate comparisons of the relative power of programming languages. We'll have precise comparisons, but not accurate ones. In particular, explicit studies for the purpose of comparing languages, because they will probably use small problems, and will necessarily use predefined problems, will tend to underestimate the power of the more powerful languages. Worse, I can't find much evidence that would suggest that there isn't a very strong interaction factor between programmer and programming language. It isn't much of a stretch, is it, to suggest that some people might find one language effective while other people might find a different language effective? And if there is such an interaction effect then it seems to me to bring the whole notion of `relative power of language' into question unless we factor characteristics of the programmers into the picture. This may not be good news for language designers, but it does seem to represent reality.
Precision?: Reports from the field, though they will necessarily be less precise than "scientific" studies, are likely to be more meaningful. For example, Ulf Wiger of Ericsson did a study that concluded that Erlang was 4-10x more succinct than C++, and proportionately faster to develop software in: In my experience most reports from the field turn out to be quite suspicious under anything more than casual investigation. In most cases the people making the reports have some axe to grind. While this doesn't mean that any data (and it's usually pretty lightweight data) presented is necessarily wrong, it does mean that we should take reports with more than just a small grain of salt.
Ericsson Example: Comparisons between Ericsson-internal development projects indicate similar line/hour productivity, including all phases of software development, rather independently of which language (Erlang, PLEX, C, C++, or Java) was used. What differentiates the different languages then becomes source code volume. One also has to be careful with causal direction. Often organizations, having at least casually glanced at Brooks' or other equivalents, establish standards that create expectations that turn out to be self-validating. If I expect similar line/hour productivity, and measure and reward according to such a standard, then perhaps I shouldn't be too surprised to end up observing data that squares with my expectations.
The Study: The study also deals explicitly with a point that was only implicit in Brooks' book (since he measured lines of debugged code): programs written in more powerful languages tend to have fewer bugs. That becomes an end in itself, possibly more important than programmer productivity, in applications like network switches. This doesn't necessarily account for the interaction effect. Perhaps the better programmers both tend to make fewer mistakes and tend to use `more powerful' languages. In such a case we'd expect those using the more powerful languages to produce fewer bugs. But that shouldn't suggest that there is any causal relationship, in and of itself. The discussion here doesn't make clear if there is data present in `The Study' that speaks to this issue. But without such data the point doesn't really mean much.
The Taste Test Ultimately, I think you have to go with your gut. What does it feel like to program in the language? I think the way to find (or design) the best language is to become hypersensitive to how well a language lets you think, then choose/design the language that feels best. If some language feature is awkward or restricting, don't worry, you'll know about it. I think this is a very good point, but it doesn't seem to me to support the fundamental thesis of Graham's paper. The world has pretty much `voted with its feet' over the past thirty or forty years, and it seems safe to suggest that there hasn't exactly been any sort of a stampede in the direction of the more terse languages. The APL community, for example, seems to have reached a high-water-mark in the mid 1970s, and has been in a slow decline ever since. Similarly, FORTH has always had a cadre of adherents, as have REXX and other languages, but the number of people using them seems to be growing excruciatingly slowly, if at all. And that number is virtually inconsequential when one compares it to other languages, many of more recent creation, that have a huge user base.
Hypersensitivity: Such hypersensitivity will come at a cost. You'll find that you can't stand programming in clumsy languages. I find it unbearably restrictive to program in languages without macros, just as someone used to dynamic typing finds it unbearably restrictive to have to go back to programming in a language where you have to declare the type of every variable, and can't make a list of objects of different types. Yes. To each his own. There's nothing wrong in that. But there's also no indication that any particular individual's disposition with regard to these issues yields any particular leaning in the direction of terse languages.
Lisp Hackers: I'm not the only one. I know many Lisp hackers that this has happened to. In fact, the most accurate measure of the relative power of programming languages might be the percentage of people who know the language who will take any job where they get to use that language, regardless of the application domain. This is an odd judgement, isn't it? We have lived through an era where programming talent was in short supply relative to the problems that it could profitably be applied to. And some programmers have had the good luck to have enough work opportunities to be able to choose what kind of a language they would consider programming in. I'd suspect that some of these programmers chose jobs because of location, or salary or other job aspects as well. And there generally is a tradeoff between such factors. My guess is that the number of people who fall in the category that this defines is small, but I'd be more comfortable in any judgement about this if there was some research data to back up such judgements.
Restrictiveness I think most hackers know what it means for a language to feel restrictive. What's happening when you feel that? I think it's the same feeling you get when the street you want to take is blocked off, and you have to take a long detour to get where you wanted to go. There is something you want to say, and the language won't let you. True, perhaps, but I don't associate the phenomenon with succinctness in any way. I have often felt constrained by the structure of different computer languages, but it never has had anything particular to do with the succinctness of the language.
Restrictive Language: What's really going on here, I think, is that a restrictive language is one that isn't succinct enough. The problem is not simply that you can't say what you planned to. It's that the detour the language makes you take is longer. Try this thought experiment. Suppose there were some program you wanted to write, and the language wouldn't let you express it the way you planned to, but instead forced you to write the program in some other way that was shorter. For me at least, that wouldn't feel very restrictive. It would be like the street you wanted to take being blocked off, and the policeman at the intersection directing you to a shortcut instead of a detour. Great! I don't buy this. `Restrictive' to me doesn't have much to do with succinctness. Something can be terse and restrictive, and something else can be elaborate and still restrictive. And if Graham can say `forced you to write' followed by `that wouldn't feel very restrictive', then I simply have no clue what he is talking about. It seems very confused to me.
Succinct: I think most (ninety percent?) of the feeling of restrictiveness comes from being forced to make the program you write in the language longer than one you have in your head. Restrictiveness is mostly lack of succinctness. So when a language feels restrictive, what that (mostly) means is that it isn't succinct enough, and when a language isn't succinct, it will feel restrictive. This seems to me to be a confusion, at least in the way that I would use the words. `Restrictive' is not the word I'd use to describe the concept that is being presented.
Regularity The quote I began with mentions two other qualities, regularity and readability. I'm not sure what regularity is, or what advantage, if any, code that is regular and readable has over code that is merely readable. But I think I know what is meant by readability, and I think it is also related to succinctness. I'm confused, too. But my guess is that `regularity' has to do with our ability to gulp in whole large pieces of some expression in whole swallows. If we encounter a regularity in code, for example, then we probably don't need to figure out and focus on each occurrence in complete detail.
Line vs. Whole: We have to be careful here to distinguish between the readability of an individual line of code and the readability of the whole program. It's the second that matters. I agree that a line of Basic is likely to be more readable than a line of Lisp. But a program written in Basic is going to have more lines than the same program written in Lisp (especially once you cross over into Greenspunland). The total effort of reading the Basic program will surely be greater. total effort = effort per line x number of lines If I understand this correctly it says `lines are (somewhat) arbitrary'. And in that I would certainly agree. Many languages allow multiple statements to appear on each line, and it would clearly be quite arbitrary to count such a composed line as a single line.
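Graham's equation can be made concrete with invented numbers (purely illustrative; no real measurements are implied):

```python
def total_effort(effort_per_line: float, lines: int) -> float:
    # Graham's equation: total effort = effort per line x number of lines.
    return effort_per_line * lines

# Hypothetical figures: easy lines but many of them, versus
# denser lines but far fewer of them for the same program.
basic = total_effort(0.5, 100)
lisp = total_effort(1.0, 30)
print(basic, lisp)
```

With these made-up figures the denser program costs less total reading effort even though each of its lines costs twice as much, which is the tradeoff Graham is pointing at.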
As a Factor: I'm not as sure that readability is directly proportionate to succinctness as I am that power is, but certainly succinctness is a factor (in the mathematical sense; see equation above) in readability. So it may not even be meaningful to say that the goal of a language is readability, not succinctness; it could be like saying the goal was readability, not readability. Well, I'm confused. A jumble of succinctness, power and readability. And what it all means is not at all clear to me. But I have problems on several levels, not the least of which is a problem with the whole concept of readability of a `language'. Particular documents can be more or less readable, of course. But I think of `readability' as a property of a language as it manifests itself in the eyes of the reader. For me, for example, things written in an ideographic language are essentially unreadable; I don't even know which of the strokes that make up a particular glyph `count' in the process of delivering meaning. But within a language there is still a wide range of `readability', and how it relates to succinctness and power needs to be thought about very carefully before coming to any useful conclusion.
Per Line: What readability-per-line does mean, to the user encountering the language for the first time, is that source code will look unthreatening. So readability-per-line could be a good marketing decision, even if it is a bad design decision. It's isomorphic to the very successful technique of letting people pay in installments: instead of frightening them with a high upfront price, you tell them the low monthly payment. Installment plans are a net lose for the buyer, though, as mere readability-per-line probably is for the programmer. The buyer is going to make a lot of those low, low payments; and the programmer is going to read a lot of those individually readable lines. This expresses a specific, and I might say peculiar, philosophy. The idea that installment payments are a `net lose for the buyer' seems belied by the fact that without such payments very few people would ever be able to afford housing. That Graham makes such an absurd observation not only suggests a poor model of some of the important aspects of economics, but it also has an analogous impact in the programming domain. Readability-per-line may not be useful for some programmers, but that doesn't mean that it is only a `marketing' thing.
Predates Programming: This tradeoff predates programming languages. If you're used to reading novels and newspaper articles, your first experience of reading a math paper can be dismaying. It could take half an hour to read a single page. And yet, I am pretty sure that the notation is not the problem, even though it may feel like it is. The math paper is hard to read because the ideas are hard. If you expressed the same ideas in prose (as mathematicians had to do before they evolved succinct notations), they wouldn't be any easier to read, because the paper would grow to the size of a book. And it can take a difficult hour or two to get past a single page of Kierkegaard. Graham is right when he notes that some `ideas are hard'. This has little, if anything, to do with succinctness, however. Hard ideas are hard, pretty much independently of whether they are expressed in lots of words or in just a few.
To What Extent? A number of people have rejected the idea that succinctness = power. I think it would be more useful, instead of simply arguing that they are the same or aren't, to ask: to what extent does succinctness = power? Because clearly succinctness is a large part of what higher-level languages are for. If it is not all they're for, then what else are they for, and how important, relatively, are these other functions? Succinctness, it seems to me, is a part of what some higher-level languages are for. For other higher-level languages it may well not be succinctness. Take a program like Knuth's monumental TeX or MetaFont for example. Here the language spends considerable effort making it possible to mix coding and documentation in a way which allows this huge programming complex to be effectively managed, modified and used.
Civilized Debate?: I'm not proposing this just to make the debate more civilized. I really want to know the answer. When, if ever, is a language too succinct for its own good? Does this question mean anything? I can imagine a question about a language possibly being too succinct for some user's good, but I have no idea what `good' is served by the language itself in `its own good'. Either this is a slip of the pen or a glitch in the thought, but it doesn't mean much to me in either case.
Not Pathological: The hypothesis I began with was that, except in pathological examples, I thought succinctness could be considered identical with power. What I meant was that in any language anyone would design, they would be identical, but that if someone wanted to design a language explicitly to disprove this hypothesis, they could probably do it. I'm not even sure of that, actually. I'd like to have a discussion of TeX and MetaFont with respect to these questions. I haven't yet had time to really sort out my own thoughts about how they would apply, but Knuth is not only a brilliant programmer, he has been one for decades, and yet he places very scant emphasis on terseness. He does love macros, of course, and these help to expose structure in his code, but even a quick glance at any of his major efforts seems to suggest little emphasis on succinctness.
Languages, not Programs We should be clear that we are talking about the succinctness of languages, not of individual programs. It certainly is possible for individual programs to be written too densely. I keep getting confused by the discussion of the `succinctness of the language' vs. the `succinctness of a program realized in that language'. While I am reasonably clear about what the latter means, I am not at all clear about the former.
Complex Macros?: I wrote about this in On Lisp. A complex macro may have to save many times its own length to be justifiable. If writing some hairy macro could save you ten lines of code every time you use it, and the macro is itself ten lines of code, then you get a net saving in lines if you use it more than twice. But that could still be a bad move, because macro definitions are harder to read than ordinary code. You might have to use the macro ten or twenty times before it yielded a net improvement in readability. Or, more likely, the `readability' is not much affected by the repetition of usage. I've seen situations where one use, while costing `net lines' of extra code, added enough clarity, and enough prospect for future use, to be quite well worth it. And I've seen cases on the other side, as well, where although many net lines might have been saved, particular macros added to the burden of managing the code by making it harder to understand and manage.
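The line-count bookkeeping in Graham's example is easy to sketch (this captures only program length, not the readability cost that both sides of the discussion emphasize; where the break-even point falls depends on whether the call sites themselves are counted):

```python
def net_lines_saved(macro_len: int, saved_per_use: int, uses: int) -> int:
    """Net change in program length from introducing a macro:
    lines saved across all call sites, minus the cost of the
    macro definition itself."""
    return uses * saved_per_use - macro_len

# Graham's numbers: a ten-line macro that saves ten lines per use.
for n in (1, 2, 3, 10):
    print(n, net_lines_saved(10, 10, n))
```

By this accounting the definition pays for itself quickly in lines, which is exactly why Graham has to appeal to readability, not length, to explain when the macro is still a bad move.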
Using Tricks: I'm sure every language has such tradeoffs (though I suspect the stakes get higher as the language gets more powerful). Every programmer must have seen code that some clever person has made marginally shorter by using dubious programming tricks. Yes. And I've seen code that some clever person has made dramatically shorter by using such tricks. I've even seen situations where the cost savings from such a difference quite literally made the difference between success and failure in the early life of a company. So such `tricks' shouldn't be taken too lightly.
Individual Programs: So there is no argument about that---at least, not from me. Individual programs can certainly be too succinct for their own good. The question is, can a language be? Can a language compel programmers to write code that's short (in elements) at the expense of overall readability? Can a language be too succinct for its own good? I'm not sure the question itself makes any sense. Succinctness seems to me to be a property of the way someone uses a language more than an intrinsic property of the language itself.
Arc Example: One reason it's hard to imagine a language being too succinct is that if there were some excessively compact way to phrase something, there would probably also be a longer way. For example, if you felt Lisp programs using a lot of macros or higher-order functions were too dense, you could, if you preferred, write code that was isomorphic to Pascal. If you don't want to express factorial in Arc as a call to a higher-order function
(rec zero 1 * 1-)
you can also write out a recursive definition:
(rfn fact (x) (if (zero x) 1 (* x (fact (1- x)))))
This strikes me as falling into the `true but so what' category; I'm afraid I miss the point.
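For readers who don't follow the Arc, here is a rough Python analogue of the two styles. The `rec` combinator below is my own sketch of what Graham's operator appears to do, not part of any library:

```python
from operator import mul

def rec(base_test, base_val, combine, step):
    # Sketch of a recursion combinator in the spirit of Arc's `rec':
    # returns f where f(x) = base_val if base_test(x),
    # otherwise combine(x, f(step(x))).
    def f(x):
        return base_val if base_test(x) else combine(x, f(step(x)))
    return f

# Dense, higher-order style, analogous to (rec zero 1 * 1-):
fact = rec(lambda x: x == 0, 1, mul, lambda x: x - 1)

# Longer, explicitly recursive style, analogous to the rfn version:
def fact2(x):
    return 1 if x == 0 else x * fact2(x - 1)

print(fact(5), fact2(5))  # → 120 120
```

Both compute the same function; Graham's point is that the longer form remains available to anyone who finds the dense one too compact.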
Too Succinct?: Though I can't off the top of my head think of any examples, I am interested in the question of whether a language could be too succinct. Are there languages that force you to write code in a way that is crabbed and incomprehensible? If anyone has examples, I would be very interested to see them. I'm not sure that the concept of a language being `too succinct' makes much sense. The notion that a language can `force you to write' in any particular fashion doesn't square with my use of the concepts. English doesn't `force me' to write in any particular way. How my writing works out, and whether it does its job or not, is a matter of the interaction between my expressive abilities and the language in which I express them.
Density: (Reminder: What I'm looking for are programs that are very dense according to the metric of "elements" sketched above, not merely programs that are short because delimiters can be omitted and everything has a one-character name.) Yes, but in my experience it is rare to see `terse languages' filled with long and elaborate names. Generally terseness in one dimension is accompanied by terseness in other dimensions as well.
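Graham's `elements' metric can be approximated mechanically. As a rough sketch (my own, using Python's tokenizer as a stand-in for his notion of an element), counting tokens rather than characters separates genuinely dense code from code that is merely short:

```python
import io
import tokenize

def count_elements(src):
    # Crude proxy for Graham's "elements": count substantive tokens,
    # ignoring comments and layout bookkeeping (newlines, indentation).
    skip = {tokenize.COMMENT, tokenize.NL, tokenize.NEWLINE,
            tokenize.INDENT, tokenize.DEDENT, tokenize.ENDMARKER}
    tokens = tokenize.generate_tokens(io.StringIO(src).readline)
    return sum(1 for tok in tokens if tok.type not in skip)

dense = "f=lambda x:1 if x==0 else x*f(x-1)"
spread = "def f(x):\n    if x == 0:\n        return 1\n    return x * f(x - 1)\n"

# The dense form wins by a wide margin in characters, but the
# gap in elements is much smaller: both say nearly the same things.
print(len(dense), count_elements(dense))
print(len(spread), count_elements(spread))
```

On this metric, renaming everything to one character shortens a program without making it any denser, which is exactly the distinction Graham is after.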
Related Links:
  • Japanese Translation
  • Lutz Prechelt: Comparison of Seven Languages
  • Erann Gat: Lisp vs. Java
  • Peter Norvig Tries Prechelt's Test
  • Matthias Felleisen: Expressive Power of Languages
  • Kragen Sitaker: Redundancy and Power
  • Forth
  • Joy
  • Icon
  • J
  • K
This is a pretty good list. The languages listed here are among those that have contributed a good deal to one side of current programming efforts.
Summary This paper is oddly narrow. It is interesting and raises some important questions, but only about one side of programming practice. The `terse' languages are an interesting subset of current computer languages, but they represent only part of the picture. A different aspect might be emphasised by Python, while still others might be better discussed with regard to TeX and Knuth's Web. Then there are things suggested by languages such as Oberon, Haskell, Alice, Mozart, Lua, Plan 9 and Inferno. The fact that there are all these other languages worthy of consideration does not detract from the discussion that takes place here, but their existence does suggest that the discussion is much narrower than might be understood at first consideration.
Afterword Perhaps the point can be made this way. Mathematics is an example of a language which is enormously productive for some people and some problems. However, it would be naive to assume that it is a good language for all people and all problems. If you can use it, and if the problem you are attempting to solve yields nicely to a mathematical formulation, then it can lead to an enormously insightful and effective solution.

However, there is no evidence, that I know of anyway, that suggests that everyone can make effective use of the models that mathematics provides. And certainly there is a lot of reason to believe that there are many real-world situations that do not very conveniently yield to mathematical formulations.

I think the same is true of terse languages. There are some people who can use them effectively, and there are some problems where they lead to highly effective formulations. To suggest that they are `better' however, and in particular to make such a suggestion with no evidence other than `feeling it's true' is simply naive.

Onward There are some aspects of this paper that I find very attractive. And while I may or may not get what Graham is specifically going for, I feel some empathy for certain aspects of succinctness. When I program I do notice that as my ideas get clearer, my code gets shorter. The very activity of getting `clearer' about some problem generally means that I can find expressions of it which are more directly to the point, and thus which take fewer distracting detours on the way to my final goals.

But I must say that I think it is somewhat muddled to go after this point by emphasising a rather vague concept like the `succinctness of a language'. There is something attractive in getting representations of programs which are more central and less distracting, but I'm not at all sure that this is directly related to the succinctness of the language. And even if it were, there is a whole interaction effect with how people look at and solve problems that needs to be taken into account before Graham is entitled to his conclusions.

© Copyright 2003 David Ness.
Last update: 2003-05-04 23:49:32 EDT