The biggest hit for me while at SAIL in late ’69 was to really understand LISP. Of course, every student knew about car, cdr, and cons, but Utah was impoverished in that no one there used LISP and hence no one had penetrated the mysteries of eval and apply. I could hardly believe how beautiful and wonderful the idea of LISP was [McCarthy 1960]. I say it this way because LISP had not only been around long enough to get some honest barnacles, but worse, there were deep flaws in its logical foundations. By this, I mean that the pure language was supposed to be based on functions, but its most important components – such as lambda expressions, quotes, and conds – were not functions at all, and instead were called special forms. Landin and others had been able to get quotes and conds in terms of lambda by tricks that were variously clever and useful, but the flaw remained in the jewel. In the practical language things were better. There were not just EXPRs (which evaluated their arguments), but also FEXPRs (which did not). My next question was, why on earth call it a functional language? Why not just base everything on FEXPRs and force evaluation on the receiving side when needed? I could never get a good answer, but the question was very helpful when it came time to invent Smalltalk, because this started a line of thought that said “take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it”. That was the promise of LISP and the lure of lambda – needed was a better “hardest and most profound” thing. Objects should be it.
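Kay’s FEXPR question can be made concrete with a toy evaluator. Here is a minimal sketch in Python (my own illustration, not any historical LISP; the names `fexpr_eval` and `OPERATORS` are invented): every operator receives its argument forms unevaluated and forces evaluation itself, so `quote` and `if` are ordinary operators rather than special forms.

```python
# FEXPR-style evaluation sketch: operators get their arguments UNevaluated
# and decide for themselves which ones to evaluate.

def fexpr_eval(form, env):
    """Evaluate a form: symbols look themselves up, lists dispatch on the head."""
    if isinstance(form, str):          # a symbol: look it up in the environment
        return env[form]
    if not isinstance(form, list):     # a literal (e.g. a number)
        return form
    op, *args = form
    return OPERATORS[op](args, env)    # args are passed unevaluated

# Because evaluation is forced on the receiving side, 'quote' and 'if'
# need no special machinery: they just choose what (not) to evaluate.
OPERATORS = {
    "quote": lambda args, env: args[0],                      # never evaluates
    "if":    lambda args, env: fexpr_eval(args[1], env)
                 if fexpr_eval(args[0], env)
                 else fexpr_eval(args[2], env),
    "+":     lambda args, env: sum(fexpr_eval(a, env) for a in args),
}

env = {"x": 10}
print(fexpr_eval(["if", ["+", "x", -10],
                  ["quote", "then"], ["quote", "else"]], env))  # → else
```

Only the branch that is needed ever gets evaluated, which is exactly the behaviour an EXPR-based “functional” core cannot express without special forms.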
Now, it’s true that we don’t know how to program (although developments like seL4, coqasm, Excel, NaCl, and PHP seem to be making some progress on that front). But the “software crisis” isn’t about that. Rather, it’s about how Moore’s Law has reduced the cost of computers so much that programming is suddenly the limiting reagent in nearly everything in the economy. You can get a 48MHz ARM processor with 64kB of program Flash for US$1.76 and burn ten thousand lines of C into it, then use it to run a string of Christmas lights. For US$6.46 you can get a 48MHz ARM processor with 1MB of program Flash and burn a hundred thousand lines of C into it. Or you can put a Lua interpreter into 20% of that memory and fill the other 80% with eighty thousand lines of Lua. The “crisis” is that it costs literally a million times more to write the code than it does to make the processor. (Prices from Digi-Key, unit price for quantity 1000.) The “crisis” started in 1968 when the price of hardware at last fell below the cost of writing the software to take advantage of it, and it’s been “worsening” ever since. And that’s the origin of “software engineering”.
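The “million times” claim is easy to sanity-check. In this back-of-the-envelope sketch, the chip price and line count come from the paragraph above, but the cost per delivered line is my assumption (rule-of-thumb software-cost estimates are often quoted in the tens of dollars per line, not a figure from the text):

```python
# Rough check of the cost ratio between writing the code and buying the chip.
chip_price = 6.46            # US$, 48 MHz ARM with 1 MB Flash (from the text)
lines_of_code = 100_000      # from the text
cost_per_line = 65.0         # US$/line -- my assumption, not from the text

software_cost = lines_of_code * cost_per_line
print(f"software: ${software_cost:,.0f}")
print(f"ratio:    {software_cost / chip_price:,.0f}x")
```

At that (assumed) per-line cost the software comes to $6.5 million against a $6.46 part, i.e. the ratio is on the order of a million to one, which is the point of the quote.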
In no particular order:
- My wife gifted me tickets to the opening day show of the new Star Wars movie (trailer here if you care).
- I revisited the Dune soundtrack while browsing YouTube (yes, this still happens, though very infrequently these days), and noticed that while most of the music was done by what was popular at the time (Toto — I’d never heard of them), one track, titled “Prophecy”, was contributed by Brian Eno! Check it out at the top of this post.
- Finished listening to World Without End (no, not the Ken Follett one). Also, stockpiled six books and cancelled my Audible subscription temporarily (otherwise I’m just wasting credits).
- A nice commercial (from a few months ago), the making of the commercial, the original song it’s based on, and a nice cover I found (if you’re in a rush, skip the rest and just listen to the recorded-at-home cover (by Larkin Poe); it’s wonderful).
(It’s so hard to remember stuff from just a month ago — no wonder the years gone by seem so hazy in terms of “what did I read?”, “what did I watch?”, etc!)
To me, one of the nice things about the semantics of real objects is that they are “real computers all the way down (RCATWD)” – this always retains the full ability to represent anything. The old way quickly gets to two things that aren’t computers – data and procedures – and all of a sudden the ability to defer optimizations and particular decisions in favour of behaviours has been lost. In other words, always having real objects always retains the ability to simulate anything you want, and to send it around the planet. … And RCATWD also provides perfect protection in both directions. We can see this in the hardware model of the Internet (possibly the only real object-oriented system in working order). You get language extensibility almost for free by simply agreeing on conventions for the message forms. My thought in the 70s was that the Internet we were all working on alongside personal computing was a really good scalable design, and that we should make a virtual internet of virtual machines that could be cached by the hardware machines. It’s really too bad that this didn’t happen.
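The “agree on conventions for the message forms” idea can be sketched in a few lines. This is my own illustration, not Kay’s or Smalltalk’s code: two unrelated objects speak one invented convention, `receive(selector, *args)`, so one can transparently wrap the other, and neither exposes its insides (the “protection in both directions”).

```python
# Everything is reached only through messages; the (selector, args)
# convention is the whole interface.

class Counter:
    def __init__(self):
        self._n = 0                                  # state is never touched directly
    def receive(self, selector, *args):
        if selector == "incr":
            self._n += args[0] if args else 1
            return self._n
        if selector == "value":
            return self._n
        return ("does-not-understand", selector)     # graceful refusal, not a crash

class Logger:
    """A different kind of object that speaks the same message convention."""
    def __init__(self, target):
        self.target, self.log = target, []
    def receive(self, selector, *args):
        self.log.append((selector, args))            # observe the message stream
        return self.target.receive(selector, *args)  # forward transparently

c = Logger(Counter())
c.receive("incr")
c.receive("incr", 5)
print(c.receive("value"))  # → 6
```

Because the sender only knows the message form, the `Logger` can be inserted (or a network hop, or a cache) without the `Counter` or its clients changing at all, which is the extensibility-for-free that the quote describes.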
More than once in recent years, others have reached the same conclusion as Brooks. Some have tried to impose a kind of sanity, or even to lay down the law formally in the form of technical standards, hoping to bring order and structure to the bazaar. So far they have all failed spectacularly, because the generation of lost dot-com wunderkinder in the bazaar has never seen a cathedral and therefore cannot even imagine why you would want one in the first place, much less what it should look like. It is a sad irony, indeed, that those who most need to read it may find The Design of Design entirely incomprehensible. But to anyone who has ever wondered whether using m4 macros to configure autoconf to write a shell script to look for 26 Fortran compilers in order to build a Web browser was a bit of a detour, Brooks offers well-reasoned hope that there can be a better way.