Propagating, artfully

Just came across [this paper] today, and it was super-interesting. (I’m assuming the title is a reference to [an earlier one] by Steele and Sussman).

You can read the abstract, or [watch Sussman talk about it], so I won’t waste your time summarizing it here. Instead, I’ll skip all the way ahead to the [thesis version] of the paper, past the “Conclusions” and on to the section titled Philosophical Insights. This part was quite … enlightening.

First off, the “concurrency problem” really isn’t one:

… the problem comes from trying to maintain the illusion that the events of a concurrent system occur in a particular chronological order.

But this artificiality is about concurrency in programs, not in the natural world, since

… concurrency is the natural state of affairs, and synchronicity is what’s difficult. The physical world in which we live is perfectly concurrent: every little patch of universe evolves on its own, according to local rules, and physical effects travel from one patch to another at a finite speed …

More importantly,

Our experience of time appears as a linear stream of events only because our memory imposes an order on them, which is the order in which events are remembered to have occurred … the illusion of synchronous time is sustainable for our society, however, because the patch of universe we occupy synchronizes far faster than the temporal resolution of “event” our society expects.

So, then, why is this a problem? Because we expend a great deal of engineering effort to suppress the concurrency of the “analog world” in order to create the synchronous “digital world”.

A great deal of engineering effort is expended to build from this substrate computational devices that provide the illusion of synchronous time.

This would be fine, except that these synchronous devices, the many cores of a modern machine, share memory. Memory is what creates this problem.

This large memory implicitly imposes a linear order on the reads or writes that it experiences. Since synchronizing concurrent things well is hard, the implicit synchronization done by such a memory is terrible.
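
A tiny illustration of that terrible implicit synchronization, in Common Lisp (my own example, not from the thesis; it assumes the bordeaux-threads portability library, which the post never mentions):

;; (ql:quickload :bordeaux-threads) first.  Two threads each increment a
;; shared counter 100000 times; INCF is a read-modify-write, so the
;; memory's implicit linear ordering interleaves the threads and updates
;; get lost: the result is usually less than 200000.
(defparameter *counter* 0)

(defun lost-update-race ()
  (let ((threads (loop repeat 2
                       collect (bt:make-thread
                                (lambda ()
                                  (dotimes (i 100000)
                                    (incf *counter*)))))))
    (mapc #'bt:join-thread threads)
    *counter*))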

And this is what makes concurrent programming so hard. This is exactly where the propagator network steps in.

… the structure of a propagator network is analogous to space, and the speed at which information propagates through it is analogous to the speed of light … the idea of partial information lets us build a propagation infrastructure that does not force the notion of synchronous time.
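
To make this concrete, here is a minimal cell-and-propagator sketch in Common Lisp. Every name in it is mine (the thesis’s code is Scheme), and the “merge” here is a plain overwrite rather than a true partial-information merge, but the shape of the machinery is the point: cells hold what is known so far, and propagators re-run whenever a neighboring cell learns something, with no global clock anywhere.

(defstruct cell
  (content nil)     ; NIL means "nothing known yet"
  (neighbors '()))  ; propagators to re-run when the content changes

(defun add-content (cell value)
  "Merge VALUE into CELL (here: overwrite) and alert the neighbors."
  (unless (equal (cell-content cell) value)
    (setf (cell-content cell) value)
    (mapc #'funcall (cell-neighbors cell))))

(defun wire-propagator (inputs output fn)
  "When any input cell changes, recompute OUTPUT from all the inputs."
  (flet ((run ()
           (let ((args (mapcar #'cell-content inputs)))
             (unless (member nil args)  ; still waiting on some input
               (add-content output (apply fn args))))))
    (dolist (in inputs)
      (push #'run (cell-neighbors in)))
    (run)))

;; Usage: the order of ADD-CONTENT calls doesn't matter.
(defparameter *a* (make-cell))
(defparameter *b* (make-cell))
(defparameter *sum* (make-cell))
(wire-propagator (list *a* *b*) *sum* #'+)
(add-content *b* 2)
(add-content *a* 1)
(cell-content *sum*)  ; => 3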

Let’s take a step back now and consider the bigger picture.

An electronic computer is a fixed electrical circuit that performs a universal computation, that is, it implements a layer of indirection which allows it to mimic any computation that is described to it … we understand the meaning of machine instructions as sequences of things for the computer to do, with a very powerful notion of the flow of time

Seen in this way, we can derive many concepts in Computer Science as notions introduced to rearrange or describe parts of these sequences, such as imperative programming …

… pure linear sequences, of course, are insufficient for describing the computations we want to describe, so we add means of naming portions of such sequences and referring to them …

… or virtual memory …

… virtual memory introduces a notion of space. The memories of separate programs running on the same computer are kept separate … each program, therefore, has its own flow of time …

… or lexical scope …

… lexical scope introduces a finer-grained notion of space … each lexical scope is therefore a sort of point in space; and their interconnections give space a topology.
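
In code, the “points” and their “interconnections” are just nested scopes; a throwaway example:

;; Two points in "space": the inner LAMBDA's scope is connected to the
;; outer LET's scope, which is why X is visible inside.  The links
;; between scopes are what give this space its topology.
(let ((x 1))                ; outer scope
  (lambda (y) (+ x y)))     ; inner scope, connected to the outer one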

This leads us, inexorably, to thinking of the fundamental structure of computations as trees rather than lines.

Tree-shaped computing systems can of course be made out of line-shaped ones with some additional mechanism, such as a recursive interpreter of the tree structure of a program.

… and I love this next bit …

Conceptually, the space of such a system is topologically a tree, and time is inherited from the underlying linear interpreter, and therefore constitutes some traversal of the tree that is space. The use of names to refer to reusable partial results of a tree computation turns it into a directed graph computation. The sequential time of the underlying computer constitutes a topological sort of the computation, forcing the graph to be acyclic.
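
That last move is easy to see in miniature. In the first form below, the computation is a tree, with `(f x)` evaluated once per branch; naming the partial result with `let` shares it between both uses, turning the tree into a directed acyclic graph:

(g (f x) (f x))       ; a tree: two identical branches, two evaluations

(let ((y (f x)))      ; naming the reusable partial result...
  (g y y))            ; ...shares it: the computation is now a DAG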

Whew. Ok! So now we know! With this insight, let’s tackle the next bit, the bane of all FP advocates: side effects.

(Note: I would like the paper to end right here, at this high point. I could split this article into two halves, but if I stop here I probably won’t get around to writing the next one.)

The essential reason why side effects tend to mess up models of computation is that side effects inescapably introduce time.

So the fundamental tension can be expressed as …

… we want to be able to build programs whose observable actions are what we want them to be; for that we need the observable actions of our programs to be predictable from the programs themselves.

… or equivalently, as …

… between giving our computers the freedom to operate as they will on their internal structures, where we are not concerned with the details of progress but only with its result; and constraining our computers to act the way we wish externally …

Ok, we know this, we’ve seen this before. What exactly is new about this?

The same tension exists in the architecture of human minds. Indeed, people too can both think and act … we have a word for the transition humans make between thought and action: “decision”.

This is something I found quite novel. Thinking doesn’t respect the flow of time at all, but action is constrained entirely by it. So we are faced with an (apparent?) Cartesian-style duality of thought and action, and we have to choose how to deal with it.

So the result is either that the primitives for action amount to always doing everything whenever one thinks about doing it, or that the story about thinking offers no primitives for action at all.

So either thoughts are constrained in a way to make actions predictable, or else “bridges” between these two worlds of thought and action are needed. Again, the propagator paradigm can step in and resolve this.

We can have three worlds, of thoughts, actions, and decisions, each being some sort of propagator. To put it more concretely (a toy illustration follows this list):

… when a network has many fan-ins and cycles, and operates on interesting partial information structures, propagators can run in a chaotic order, and be run many times, and let the merging of the partial information ensure that they produce a coherent result at the end; this is thought.
… when a network is acyclic and the “partial” information structure is the all-or-nothing structure, then propagation mimics conventional evaluation, and the order of events is perfectly predictable; this is action.
… propagators that will accept partial inputs, that can perhaps be refined with the passage of time, and will take upon themselves the responsibility of picking a moment, making a choice, and committing to an all-or-nothing answer; this is decision.
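
Here is the toy illustration promised above (mine, not the thesis’s code). The difference between “thought” and “action” shows up in how a cell merges new information with what it already holds: intervals merge commutatively and monotonically, so chaotic re-running is harmless, while all-or-nothing values admit only agreement or contradiction.

;; "Thought": partial information, here an interval (LO . HI), merges
;; by intersection.  Merging commutes, so propagators may fire in any
;; order, any number of times, and the network still converges.
(defun merge-intervals (a b)
  (cons (max (car a) (car b))
        (min (cdr a) (cdr b))))

;; "Action": the all-or-nothing structure.  A value is either absent or
;; complete, and two complete values must agree.
(defun merge-all-or-nothing (a b)
  (cond ((null a) b)
        ((null b) a)
        ((equal a b) a)
        (t (error "Contradiction: ~s vs ~s" a b))))

;; (merge-intervals '(1 . 10) '(3 . 7))  => (3 . 7)

A “decision” propagator is then one that watches an interval shrink and, at a moment of its own choosing, commits to a single all-or-nothing value.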

That’s it! Read it once, read it twice; I can guarantee you won’t think about computation the same way again!

A common analogy for a resistor is a pipe that carries water. The water itself is analogous to electrical charge, the pressure at the input of the pipe is similar to voltage, and the rate of flow of the water through the pipe is like electrical current. Just as with an electrical resistor, the flow of water through the pipe is faster if the pipe is shorter and/or has a larger diameter.

An analogy for a memristor is an interesting kind of pipe that expands or shrinks when water flows through it. If water flows through the pipe in one direction, the diameter of the pipe increases, thus enabling the water to flow faster. If water flows through the pipe in the opposite direction, the diameter of the pipe decreases, thus slowing down the flow of water. If the water pressure is turned off, the pipe will retain its most recent diameter until the water is turned back on. Thus, the pipe does not store water like a bucket (or a capacitor): it remembers how much water flowed through it.
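
The analogy translates directly into a toy state-update rule (my own sketch, not a real memristor model): the diameter is the remembered state, each tick of flow nudges it one way or the other, and turning the water off changes nothing.

(defstruct pipe
  (diameter 1.0))  ; the remembered state

(defun tick (pipe direction &optional (rate 0.1))
  "One tick of water flow: forward flow widens the pipe, backward flow
narrows it, and when no water flows the diameter simply persists."
  (ecase direction
    (:forward  (incf (pipe-diameter pipe) rate))
    (:backward (setf (pipe-diameter pipe)
                     (max rate (- (pipe-diameter pipe) rate)))))
  (pipe-diameter pipe))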

Baktoo: Taking baby steps with generative art

Continuing (or stumbling) along a path to using Common Lisp for stuff I consider fun, I came up with this.

As mentioned in the `README`, I need to work on making this more efficient.

Stuff I learned along the way:

  • I love how I can focus on “making it work” _before_ worrying about “making it fast”
  • Utilities exist for a reason. _Use them_ (I’ve decided to stick with `:rutils`)
  • Don’t be afraid of using libraries. I found `:cl-log`, which is amazingly well written; I’ll never use all of it, but the basic use case of toggling levels of verbosity is good enough for me, and it’s something that would otherwise be annoying to hand-roll (a sketch follows this list).
  • Building up the system incrementally works really well. This is the first time I’ve had the experience of writing code that sort of _just worked_; it felt like I was **directly translating my thoughts into code**.
  • It’s easy to rapidly create and modify functions, stubs, and whatnot, while keeping them all in view in the same 100 or so lines of vertical space (contrast with switching between, say, multiple `.java` files)
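
For the curious, here is the `:cl-log` sketch mentioned above. I am writing this from memory of the library’s documentation, so treat the exact symbols and arguments as assumptions to verify rather than gospel:

;; Categories form a hierarchy: :debug implies :info implies :error.
(cl-log:defcategory :error)
(cl-log:defcategory :info  (or :info :error))
(cl-log:defcategory :debug (or :debug :info))

;; One log manager producing human-readable messages...
(setf (cl-log:log-manager)
      (make-instance 'cl-log:log-manager
                     :message-class 'cl-log:formatted-message))

;; ...and one messenger.  Swapping the :filter between :info and :debug
;; is the whole "toggle verbosity" use case.
(cl-log:start-messenger 'cl-log:text-stream-messenger
                        :filter :info
                        :stream *standard-output*)

(cl-log:log-message :debug "per-cell noise: ~a" 0.37)  ; filtered out
(cl-log:log-message :info "rendered ~d cells" 1024)    ; printed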

Anyway, here are a couple of samples I made with this (click to see detailed image):
Baktoo - 2

Baktoo - 1

Programming is like writing

From a comment on Slashdot:

I switched jobs from being a computer programmer to being an ESL teacher in Japan. Japan is somewhat famous for churning out students who know a lot about English, but can’t order a drink at McDonald’s. We used to have a name for those kinds of people with regard to programming languages: language lawyers. They can answer any question you put to them about a programming language, but couldn’t program to save their life. These people often make it past job interviews easily, but then turn out to be huge disappointments when they actually get down to work. I’ve read a lot about this problem, but the more I look at it, the more I realise that these disabled programmers are just like my students. They have a vocabulary of 5000 words, know every grammar rule in the book, but just can’t speak.

My current theory is that programming is quite literally writing. The vast majority of programming is not conceptually difficult (contrary to what a lot of people would have you believe). We only make it difficult because we suck at writing. The vast majority of programmers aren’t fluent, and don’t even have a desire to be fluent. They don’t read other people’s code. They don’t recognise or use idioms. They don’t think in the programming language. Most code sucks because we have the fluency equivalent of 3 year olds trying to write a novel. And so our programs are needlessly complex.

Those programmers with a “spark” are programmers who have an innate talent for the language. Or they are people who have read and read and read code. Or both. We teach programming wrong. We teach it the way Japanese teachers have been teaching English. We teach about programming and expect that students will spontaneously learn to write from this collection of facts.

In language acquisition there is a hypothesis called the “Input Hypothesis”. It states that all language acquisition comes from “comprehensible input”. That is, if you hear or read language that you can understand based on what you already know and from context, you will acquire it. Explanation does not help you acquire language. I believe the same is true of programming. We should be immersing students in good code. We should be burying them in idiom after idiom after idiom, allowing them to acquire the ability to program without explanation.

SDL on OSX

`(ql:quickload 'lispbuilder-sdl)` worked right away on an Ubuntu box, but failed (couldn’t find `cocoahelper`) when I tried the same thing on my MacBook.

The fix is to install SDL, then go to `~/quicklisp/dists/quicklisp/software/lispbuilder-20140113-svn/lispbuilder-sdl/cocoahelper` and type `make`. After this, the `ql:quickload` form succeeds.

However, the same dummy blank-screen test program that worked fine on the Linux box doesn’t work here:

(defpackage :simple-sdl
  (:use :common-lisp
        :lispbuilder-sdl))

(in-package :simple-sdl)

;; Open a 300x300 window and idle; any key press posts a quit event.
(defun main-screen ()
  (sdl:with-init ()
    (sdl:window 300 300
                :title-caption "Simple SDL"
                :icon-caption "Simple SDL")
    (sdl:with-events ()
      (:quit-event () t)
      (:key-down-event ()
        (sdl:push-quit-event)))))

It fails with this backtrace:

Backtrace:
  0: ("bogus stack frame")
  1: ("foreign function: CGSScanconverterRenderRGBMask")
  2: ("foreign function: create_rgb_bitmap")
  3: ("foreign function: CGGlyphBitmapCreateWithPath_32")
  4: ("foreign function: CGFontCreateGlyphBitmap")
  5: ("foreign function: _ZN14CGGlyphBuilder22create_missing_bitmapsEPK17CGGlyphIdentifiermPPK13CGGlyphBitmap")
  6: ("foreign function: render_glyphs")
  7: ("foreign function: draw_glyph_bitmaps")
  8: ("foreign function: ripc_DrawGlyphs")
  9: ("foreign function: draw_glyphs")
 10: ("foreign function: CTFontDrawGlyphsAtPositions")
 11: ("foreign function: -[CUITextEffectStack drawGlyphs:inContext:usingFont:atPositions:count:lineHeight:inBounds:atScale:]")
 12: ("foreign function: -[CUICatalog drawGlyphs:atPositions:inContext:withFont:count:stylePresetName:styleConfiguration:foregroundColor:]")
 13: ("foreign function: -[NSLineFragmentRenderingContext drawAtPoint:inContext:]")
 14: ("foreign function: _NSStringDrawingCore")
 15: ("foreign function: _NSDrawTextCell")
 16: ("foreign function: -[NSTitledFrame _drawTitleStringIn:withColor:]")
 17: ("foreign function: -[NSThemeFrame _drawTitleStringInClip:]")
 18: ("foreign function: -[NSThemeFrame drawFrame:]")

Just when I was getting excited about it … so what other platform-independent Lisp GUI should I use? 😦

BTW, while we’re knocking this (unfairly, of course), I should point out that it doesn’t support SDL 2.x.

I then stumbled on this library, which seemed better maintained, and modified my example:

;; The same test, rewritten against the sdl2 wrapper.
(defun main-screen ()
  (sdl2:with-init (:everything)
    (sdl2:with-window (win :w 300 :h 300
                           :title "Simple SDL")
      (sdl2:with-event-loop (:method :poll)
        (:quit () t)
        (:keydown ()
          (sdl2:push-quit-event))))))

Now a window did get created, but it didn’t play well with OSX (it wasn’t a “window” so much as a blank square of real estate on the screen, with a spinning beachball and no response).

So … the search for a basic simple drawing facility continues 😦