Summing up

I came across this post talking about numerical speed in Clojure, so I thought I would try out the equivalent in Common Lisp (Clozure CL) on my MacBook:

CL-USER> (let* ((arr-size 3000000)
                (double-arr (make-array arr-size :element-type 'single-float)))
           (dotimes (i arr-size)
             (setf (aref double-arr i) (random 1.0)))
           (time (loop for i from 0 below arr-size
                       summing (aref double-arr i))))
(LOOP FOR I FROM 0 BELOW ARR-SIZE SUMMING (AREF DOUBLE-ARR I))
took 45,649 microseconds (0.045649 seconds) to run.
During that period, and with 4 available CPU cores,
     45,558 microseconds (0.045558 seconds) were spent in user mode
         57 microseconds (0.000057 seconds) were spent in system mode
1500183.5

So — 45 milliseconds, not bad.
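
An aside of mine, not part of the original timing: most Common Lisp compilers, Clozure CL and SBCL included, can generate noticeably tighter float code once the array and the loop variables carry type declarations. A minimal sketch along those lines, with no timing claims attached:

(let* ((arr-size 3000000)
       (arr (make-array arr-size :element-type 'single-float)))
  (declare (type (simple-array single-float (*)) arr)
           (fixnum arr-size)
           (optimize (speed 3)))
  (dotimes (i arr-size)
    (setf (aref arr i) (random 1.0)))
  ;; of-type tells LOOP to use a single-float accumulator
  (time (loop for i of-type fixnum from 0 below arr-size
              summing (aref arr i) of-type single-float)))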

Lispium?

What if you did the following:

  • Take a Chromebook
  • Modify the Chromium build to run SBCL within it.
  • Create Lisp bindings to the internal surface, so that all UI elements can be created and manipulated from within the Lisp image (a rough sketch of what that might look like follows below).
  • Allow downloading, compiling and running arbitrary Lisp code.
  • One of the tabs is always a REPL.
  • Add a caching filesystem that would persist part or all of the image.

… might this create a modern-day Lisp machine? Maybe.

Did I miss anything obvious here? If not, this sounds doable in a few years.
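
Purely to make the bindings bullet concrete, a hypothetical sketch; nothing here exists, and every name in it (LISPIUM-UI, LISPIUM-FS and their functions) is invented for illustration only:

;; Hypothetical: LISPIUM-UI / LISPIUM-FS stand in for whatever bindings
;; such a build would actually expose to the running SBCL image.
(defvar *tab* (lispium-ui:make-tab :title "Scratch"))   ; a tab drawn from Lisp

(lispium-ui:add-element *tab*
  (lispium-ui:make-button
   :label "Snapshot image"
   ;; the handler is an ordinary closure, redefinable from the REPL tab
   :on-click (lambda (event)
               (declare (ignore event))
               (lispium-fs:save-image-snapshot "scratch.core"))))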

I’m lazy, so if you like this idea (I’m sure there’s a guaranteed niche market for these machines), go ahead and throw it onto Kickstarter. Or something.

Macros are a simple mechanism for generating code, in other words, automating programming. Unless your system includes a better mechanism for automating programming (so far, I have not seen any such mechanisms), not having macros means that you basically don’t understand why you are writing code. This is why it is not surprising that most software sucks – a lot of programmers only have a very shallow understanding of why they are programming.

Even many hackers just hack because it’s fun. So is masturbation.

This is also the reason why functional programming languages ignore macros. The people behind them are not interested in programming automation.
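
A minimal illustration of my own (not the quoted author's) of what "automating programming" means in practice: standard Common Lisp has no WHILE construct, but a three-line macro writes the boilerplate for you.

;; WHILE is not in standard CL; the macro generates the LOOP boilerplate,
;; i.e. code that writes code.
(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))

;; Usage: prints 3, 2, 1 without anyone hand-expanding the loop.
(let ((n 3))
  (while (plusp n)
    (print n)
    (decf n)))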

Most computers today, for all of their potential speed, are largely a mistake, based on the provenly unscalable Von Neumann architecture, controlled with one of the most shortsighted languages of all time, x86 assembly. They are almost unfathomably inefficient. Their processors have close to a billion transistors, most of which sit idle while a tiny fraction of a fraction of them perform some operation. Three quarters of a processor may be devoted to the quagmire of cache memory and its demands.

All of this brute-force horsepower gets stacked into an ever higher Tower of Babel in the relentless race to perform more sequential calculations per second. If people only knew what engineering was required to implement branch prediction and 20-stage-deep pipelines… It’s like seeing behind the walls of a meat packing plant. You just don’t want to know.

If you knew that your computer burned through two or three hundred empty cycles waiting for some piece of data to be fetched from main memory on a cache miss, or that when you see the little spinny thing, you are actually waiting for your hard drive to track down dozens of fragments of a file scattered across the disk because it got too full that one time, or that your web browser locked up on you because some novice programmer wrote some portion of it in blocking network code that is waiting for the last byte to arrive from the web server, and that the web server is sending that byte over and over again because a router is temporarily overloaded and is dropping packets like crazy so your neighbor can download a YouTube clip of a cat hitting a ball into its owner’s crotch, you might throw up in your mouth a little bit.
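
The "two or three hundred empty cycles" figure is easy to sanity-check. Assuming a roughly 3 GHz clock and roughly 100 ns to main memory (ballpark numbers of mine, not measurements of any particular machine):

CL-USER> (* 3000000000 (/ 100 1000000000))   ; 3e9 cycles/s times 100 ns, as an exact ratio
300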

Sure, your computer can perform 10 billion floating point operations per second. But most of the time it’s not doing anything at all. Just like you.

… an important point here about what a program is. Does it cause action by subjecting a static machine that otherwise does nothing to a list of instructions, or is it static data that is accepted by a “live” machine that acts on it?
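
The second view is easy to see from a Lisp prompt (my illustration, not part of the quoted text): the running image is the “live” machine, and the program is just data handed to it.

CL-USER> (defvar *program* '(+ 1 2))   ; the program, as static data
*PROGRAM*
CL-USER> (eval *program*)              ; the live machine acts on it
3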

Above all the wonders of Lisp’s pantheon stand its metalinguistic tools; by their grace have Lisp’s acolytes been liberated from the rigid asceticism of lesser faiths. Thanks to Macro and kin, the jolly, complacent Lisp hacker can gaze through a fragrant cloud of setfs and defstructs at the emaciated unfortunates below, scraping out their meager code in inflexible notation, and sneer superciliously. It’s a good feeling.

But all’s not joy in Consville. For—I beg your pardon, but—there really is no good way to iterate in Lisp. Now, some are happy to map their way about, whether for real with mapcar and friends, or with the make-believe of Series; others are so satisfied with do it’s a wonder they’re not C hackers. Still others have gotten by with loop, but are getting tired of looking up the syntax in the manual over and over again. And in the elegant schemes of some, only tail recursion and lambdas figure. But that still leaves a sizeable majority of folk—well, me, at least—who would simply like to iterate, thank you, but in a way that provides nice abstractions, is extensible, and looks like honest-to-God Lisp.
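
For reference, my addition: the sum from the first section, written in the styles the quote mentions, all standard Common Lisp and all assuming DOUBLE-ARR from the first listing is in scope.

(reduce #'+ double-arr)                        ; map/reduce style

(do ((i 0 (1+ i))
     (sum 0.0 (+ sum (aref double-arr i))))
    ((= i (length double-arr)) sum))           ; DO

(loop for x across double-arr summing x)       ; LOOP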

What has speed brought?

Many good things, to be sure, but even more has been left on the table.

Perhaps Kent Pitman expressed it best:

My only problem with this is that things are ALREADY sped up. What’s the point of running a zillion times faster than the machines of yesteryear, yet still not be willing to sacrifice a dime of it to anything other than doing the same kinds of boring computations that you did before? I want speedups not just to make my same old boring life faster, but to buy me the flexibility to do something I wasn’t willing to do at slower speeds.