Calculating, Scheming, and Paul Graham

I came across [this paper] recently, and it challenged some of the thoughts/assumptions that had been building in my mind for a while (it discusses Scheme vs Miranda, but you can imagine Lisp vs Haskell instead).

It also mirrors a short email exchange I had with someone who I expected to be a Scheme/Lisp “champion”, but who gently led me down from my gas balloon of hype.

Yes, S-expressions are great, and one can fall in love with them and with the notion of a “building material” from which to shape a language for building an application or solving a problem, but there’s no point being dogmatic about them.

Another, tangential, perspective: people consider Paul Graham a big advocate of Lisp, but in my opinion no one has harmed the cause of Common Lisp more than Paul Graham.

Instead of praising the language that allowed him to build his own “killer app”, or teaching the specific details of his implementation, or his own work with the language, what did he do? He asked everyone to wait for his “perfect language” (i.e. to stop using Common Lisp!!), and wrote inflated, abstract articles attracting only language lawyers and the my-language-is-longer-than-yours crowd. Sheesh.

If you want to learn Common Lisp, read Land of Lisp or PAIP or PCL instead, or browse the sources of Hunchentoot or Gendl (or some of the other well-used open-source libraries out there).

Is JavaScript the new Lisp?

I seem to have been blissfully unaware of just how far JavaScript has come over the last decade or so.

I wanted to make a foray into mobile app programming (ah ok, nothing serious! Just a toy game or two) — and when I looked around, it looked like I had to deeply immerse myself into either the entire iOS ecosystem or the entire Android ecosystem.

Well, in fact, it’s worse — because I first have to make that choice!

There are platform-independent alternatives, of course: Xamarin (C#? no thanks), Titanium (maybe), and PhoneGap (I’d heard good things!). Eventually, though, I came across [this nifty open-source framework], which seemed like it would fit my use case of “a toy game project” just fine.

It was super easy to get started (hey! a simulator in the browser! — contrast that with a completely non-working emulator for Android). But very soon I ran into (what seemed like) a huge problem — how the $#% was I supposed to debug anything here?

The running server (context: the app development environment is basically a node.js environment) just printed nonsensical error messages about “native view: undefined” or some such. This was horrible! How did anyone ever use this?

And then the obvious solution dawned on me: the tooling for JavaScript is just really, really good, right out of the box. The simulator mentioned earlier is just another object in my web page, so I can drill into it, print out the values of any running objects, and (isn’t this live-debugging nirvana?) modify them in place to fix any error and resume.
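As a sketch of what that workflow feels like (the `simulator` object and its properties here are hypothetical stand-ins for whatever the framework actually exposes on the page):

```javascript
// Hypothetical stand-in for the framework's simulator object;
// in the real page you would find it as a global on the window.
const simulator = {
  view: { width: 320, height: 480 },
  render() { return `rendering ${this.view.width}x${this.view.height}`; }
};

// At the console you can drill into any nested state...
console.log(simulator.view);      // { width: 320, height: 480 }

// ...and patch it live, with no restart:
simulator.view.width = 640;
console.log(simulator.render());  // rendering 640x480
```

The point is not this particular object, but that every piece of running state is reachable and editable from the console while the app keeps running.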

Put that in your REPL and smoke it

The JavaScript console (sorry, my experience so far has only been with Chrome) is phenomenal. I can evaluate anything I like, getting not just text, but entire arbitrarily-nested hierarchical objects dumped out for my inspection.

Yes, there is the whole “dynamic types => errors from typos” problem, and I ran into it pretty early on when a missing comma gave me a lot of grief. But this is somewhat made up for by the source-level debugging at the console, where I can see the problem and fix it right away!
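As an illustration of the general hazard (the missing-comma incident aside, a typo’d property name is the classic case; the `player` object below is made up):

```javascript
// The classic dynamic-typing hazard: a typo silently creates a new
// property instead of raising an error.  (Names here are made up.)
const player = { score: 0, lives: 3 };
player.scroe = 10;                 // typo -- no error, just a stray property

console.log(player.score);         // 0 -- the bug only shows up later

// Dumping the whole object at the console exposes the stray key,
// and it can be fixed in place without restarting:
player.score = player.scroe;
delete player.scroe;
console.log(player.score);         // 10
```

The typo costs you nothing at the point where it happens; the console is what lets you see the whole object at once and spot it.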

WTF? Everything just works? And there are tons of libraries too!

And here I was, thinking that the solution to the Lisp GUI problem was to tie in Webkit bindings to an MVC framework, to create a modern version of CLIM — but there’s already a (non-lispy) version of that out there!

(I’m confused)

Alan Kay: The Computer Revolution Hasn’t Happened Yet

Watching this amazing video of Alan Kay’s keynote at OOPSLA 1997.

It starts off fairly normally, with some reminiscing about Smalltalk, but then slowly spirals onward and upward, generating one BIG insight after another. Here are just a few extracts; think of them as a trailer, no substitute for watching the talk itself.

The realization here—and it’s not possible to assign this realization to any particular person because it was in the seeds of Sketchpad, and in the seeds of [the] air training command file system, and in the seeds of Simula. That is, that once you have encapsulated, in such a way that there is an interface between the inside and the outside, it is possible to make an object act like anything.


The reason is simply this, that what you have encapsulated is a computer. You have done a powerful thing in computer science, which is to take the powerful thing you’re working on, and not lose it by partitioning up your design space. This is the bug in data and procedure languages. I think this is the most pernicious thing about languages a lot like C++ and Java, is that they think they’re helping the programmer by looking as much like the old thing as possible, but in fact they are hurting the programmer terribly by making it difficult for the programmer to understand what’s really powerful about this new metaphor. People who were doing time-sharing systems had already figured this out as well. Butler Lampson’s thesis in 1965 was about what you want to give a person on a time-sharing system, is something that is now called a virtual machine, which is not the same as what the Java VM is, but something that is as much like the physical computer as possible, but give one separately to everybody.


UNIX had that sense about it. The biggest problem with that scheme is that a UNIX process had an overhead of about two thousand bytes just to have a process, and so it was going to be very difficult in UNIX to let a UNIX process just be the number three. You would be going from three bits to a couple of thousand bytes, and you have this problem with scaling. A lot of the problem here is both deciding that the biological metaphor is the one that is going to win out over the next twenty-five years or so, and then committing to it enough to get it so it can be practical at all the levels of scale we actually need.


Here’s one we haven’t faced up to much yet, that, now we have to construct this stuff and soon we’ll be required to grow it. It’s very easy, for instance, to grow a baby six inches. They do it about ten times in their life and you never have to take it down for maintenance. But if you try and grow a 747, you are faced with an unbelievable problem, because it’s in this simple-minded mechanical world, in which the only object has been to make the artifact in the first place. Not to fix it. Not to change it. Not to let it live for a hundred years.


There is not one line of code in the Internet today that was in the original ARPANET. Of course, if we had IBM main frames in the original ARPANET, that wouldn’t have been true. This is a system that has expanded by a hundred million, has changed every atom and every bit, and has never had to stop. That is the metaphor we absolutely must apply to what we think are smaller things.


This notion of meta-programming. Lots of different ways of looking at it. One of them is that, any particular implementation is making pragmatic choices, and these pragmatic choices are likely not to be able to cover all of the cases, at the level of efficiency, and even at the level of richness required. Of course, this is standard OOP lore. This is why we encapsulate. We need to hide our messes. We need to have different ways of dealing with the same concepts in a way that does not distract the programmer. But in fact, it is also applicable, as the LISP people found, and we at Xerox PARC found; you can also apply it to the building of the language itself. The more the language can see its own structures, the more liberated you can be from the tyranny of a single implementation. I think this is one of the most critical things that very few people are worrying about in a practical form.


What they missed was, to me, the deepest thing I would like to communicate with you today, and that is we don’t know how to design systems yet. Let’s not make what we don’t know into a religion, for God’s sake. What we need to do is to constantly think and think and think about what’s important. We have to have our systems let us get to the next levels of abstraction as we come to them.

… and my favorite one:

When we think programming is small, that’s why your programs are so big. That’s why they become pyramids instead of gothic cathedrals.

Fixing a SLIME/Swank version problem

Installing a bunch of packages willy-nilly left me with a configuration problem: starting slime produced this frustrating error message: Versions differ (2014…) slime vs (2009…) swank

It asks you whether to continue. If you say ‘n’, sorry, no slime for you! If you say ‘y’, no fancy slime for you either; you have to be content with the Comint buffer mode.

If you want to fix this, you need to track down where the two conflicting versions are coming from.

In my case, I ran locate swank-loader.lisp and found two instances: one under /usr/share/common-lisp/... and the other under /usr/share/emacs24/site-lisp/... (your paths will probably differ, but will conflict similarly).

Running dpkg -S on the first, I found it came from the cl-swank package, so I uninstalled it.

Now, slime refused to start because it was trying to load the missing swank-loader.lisp file.

Luckily, this was simply a matter of running M-x customize-variable swank-backend and setting it to the remaining path (in my case, "/usr/share/emacs24/site-lisp/google/slime/swank-loader.lisp").
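If you’d rather not go through the customize interface, a roughly equivalent setting can live in your init file (the variable name and path are the ones from my setup; adjust the path to match whatever locate swank-loader.lisp left behind on your system):

```elisp
;; Point the variable at the surviving swank-loader.lisp
;; (path from my machine -- substitute your own locate output):
(setq swank-backend
      "/usr/share/emacs24/site-lisp/google/slime/swank-loader.lisp")
```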

After this, M-x slime worked as usual!

… the programming environment model in Unix is an imitation of the Lisp machine, and the way it is implemented is through processes and inter-process communication instead of function calls.

Functions in the Lisp heap are programs on disk in the Unix model. The optimizations that made Unix able to survive this incredible inefficiency are bad for Lisp, which would have done it a lot better had it been in control …