- Applying the lessons of the Golang scheduler to the Tokio scheduler in Rust
- Speeding up Nixpkgs by avoiding subshells
- Pessimism about engineering software
- The Racket programming language is heading in a new direction
- Ben Lynn’s “most functional compiler”
- Someone tries TLA+ and comes away impressed
- Something recommended by a co-worker: CMU 15-721, a course on Advanced Database Systems
- Matt Godbolt (of Compiler Explorer) on “why C++ isn’t dead”
- Urbit is here
- Comparing APL and C (!)
- Remembering the era of Flash
- Clojure at “the immutable bank”
- Zen and the art of software maintenance
- Shareware and floppy disks …
- Reminiscing about different versions of Microsoft
- I recently discovered all the different “serverless” options in GCP
- Looking back at the first version of Redis, written in … wait for it … Tcl!!
- A periodic reminder about microkernels
- Why text editing is so hard to implement
- (Re-)Assembling the Apollo Guidance Computer (!)
I never thought of it this way:
Logic tells us what propositions exist (what sorts of thoughts we wish to express) and what constitutes a proof (how we can communicate our thoughts to others). Languages (in the sense of programming) tell us what types exist (what computational phenomena we wish to express) and what constitutes a program (how we can give rise to that phenomenon). Categories tell us what structures exist (what mathematical models we have to work with) and what constitutes a mapping between them (how they relate to one another). In this sense all three have ontological force; they codify what is, not how to describe what is already given to us.
In this sense they are foundational; if we suppose that they are merely descriptive, we would be left with the question of where these previously given concepts arise, leading us back again to foundations.
I came across this Wired article recently, and what I read sounded too science-fiction-y to be true, so I decided to go to the source and found this video (see below) by a researcher at HP, and it turns out to be both true and “science-fiction-y”.
We are used to thinking in terms of standard circuit elements — resistors, capacitors, inductors. The first establishes a relationship between voltage and current, the second between voltage and charge, and the third between magnetic flux and current.
Now it never occurred to me to really think about it this way (it’s one of those things that’s only obvious in hindsight), but there is a missing piece of symmetry here.
Look at that list again, and it might jump out at you that current, voltage, charge, and magnetic flux are related to each other in pairs, with the exception of charge and magnetic flux. Seeing this, it seems reasonable to speculate about another circuit element that would relate precisely those two. And indeed someone did, about forty years ago, and named the missing piece the memristor.
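To spell out the symmetry, here are the four pairwise relationships in differential form (these are the standard textbook definitions, with the memristor filling in the missing charge/flux link):

```latex
% The four basic two-terminal circuit relationships, in differential form.
% R, C, L are the usual resistance, capacitance and inductance;
% M is the memristance Chua proposed to complete the symmetry.
\begin{align*}
  \mathrm{d}v       &= R \,\mathrm{d}i  && \text{resistor: voltage and current} \\
  \mathrm{d}q       &= C \,\mathrm{d}v  && \text{capacitor: charge and voltage} \\
  \mathrm{d}\varphi &= L \,\mathrm{d}i  && \text{inductor: flux and current} \\
  \mathrm{d}\varphi &= M \,\mathrm{d}q  && \text{memristor: flux and charge}
\end{align*}
```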
Now I should acknowledge that there is a bit of controversy over whether what HP Labs claims to have discovered really matches up with this idea, so we’ll just have to wait a few years to test these claims, since the first commercial applications of this technology won’t be out for another five years at least.
But let’s continue. One of the observations made in the video linked above is that the memristance obeys an inverse-square law: the tinier the dimensions, the greater the observed effect. This also means it is something that belongs purely on a chip, and not something you’d be putting on a breadboard any time soon.
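For reference, this is the memristance expression from the HP Labs paper (Strukov et al., Nature 2008), as I understand it; the charge-dependent term carries a 1/D² factor, where D is the thickness of the titanium-dioxide film, which is exactly the inverse-square scaling mentioned above:

```latex
% Memristance of the HP TiO2 device: the state-dependent term scales as
% the inverse square of the film thickness D, so the effect only becomes
% significant at nanometre dimensions.
M(q) = R_{\mathrm{OFF}} \left( 1 - \frac{\mu_{V}\, R_{\mathrm{ON}}}{D^{2}}\, q(t) \right)
```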
The most exciting property, though, is that its behavior in the future depends on its past. So it is both a logic component and a storage component: you could build a dense cluster of these things and configure which parts perform which function, much like an FPGA on steroids.
I used to think (again, only because this is what I was taught) that the fundamental logic component was the NAND gate — but this turns out not to be true. Instead, if we treat the interaction between an input A and an input/output B, expressed using memristors, as an IMP (material implication) gate, then we can construct a NAND gate out of these.
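To make that concrete, here is a minimal truth-table check of the construction (plain Python, just the Boolean algebra rather than the memristor physics): treating IMP as ordinary material implication, NAND(p, q) falls out as q IMP (p IMP FALSE).

```python
# Sanity check: NAND built from material implication (IMP) and FALSE.

def imp(a: bool, b: bool) -> bool:
    """Material implication: (not a) or b."""
    return (not a) or b

def nand_from_imp(p: bool, q: bool) -> bool:
    # imp(p, False) computes NOT p;
    # imp(q, NOT p) computes (NOT q) OR (NOT p), i.e. NAND(p, q).
    return imp(q, imp(p, False))

if __name__ == "__main__":
    for p in (False, True):
        for q in (False, True):
            assert nand_from_imp(p, q) == (not (p and q))
    print("NAND(p, q) == q IMP (p IMP FALSE) for all inputs")
```

Since NAND is functionally complete, anything expressible as a Boolean circuit can in principle be built out of IMP gates plus a FALSE constant.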
Further, multiple layers of these memristors can be stacked above a conventional CMOS layout, and densely packed together, leading to unprecedented on-chip memory, perhaps on the order of petabits!
So, how would this change things? It would certainly deprecate the SRAM → DRAM → hard drive pyramid of caches we have right now, and we would not only have an ocean of universal memory, but our processing elements would be floating on this ocean, and entirely commingled with it!
We certainly won’t need to deal with the von Neumann bottleneck any more …
It is usually hard to get a sense of how much the time taken for various fundamental operations varies. It does matter, but it’s hard to feel it viscerally (intervals measured in nanoseconds, microseconds, and milliseconds aren’t really felt in the same way).
I came across the idea of representing the smallest number as a single second and everything else in terms of it, so that the relationships between the numbers are expressed on more of a human scale, which results in the following table:
I wanted to show this in a chart, but the range is so large that nothing beyond the last two values is even visible, so I had to break it down into a series of smaller charts. (I could have used a log scale instead, but that would have lessened the impact of seeing these numbers side by side.)
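For concreteness, here is a minimal sketch of that rescaling in Python; the latency figures below are the commonly cited ballpark numbers, used purely as an illustrative assumption rather than the exact values behind my table.

```python
# Rescale latency numbers so that the smallest one becomes one second,
# which puts the relative magnitudes on a human scale.
# The figures are commonly cited ballpark values, for illustration only.

LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "main memory reference": 100,
    "round trip within a datacenter": 500_000,
    "disk seek": 10_000_000,
    "packet CA -> Netherlands -> CA": 150_000_000,
}

def humanize(seconds: float) -> str:
    # Express a duration in the largest convenient unit.
    for unit, size in (("years", 365 * 24 * 3600), ("days", 24 * 3600),
                       ("hours", 3600), ("minutes", 60)):
        if seconds >= size:
            return f"{seconds / size:,.1f} {unit}"
    return f"{seconds:,.1f} seconds"

if __name__ == "__main__":
    smallest = min(LATENCIES_NS.values())
    for name, ns in sorted(LATENCIES_NS.items(), key=lambda kv: kv[1]):
        print(f"{name:35s} {humanize(ns / smallest)}")
```

With the L1 cache reference pegged at one second, a main memory reference already takes over three minutes, and a transatlantic packet round trip stretches out to years.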
What if you did the following:
- Take a Chromebook
- Modify the Chromium build it runs to embed SBCL.
- Create Lisp bindings to the internal surface, so that all UI elements can be created and manipulated from within the Lisp image.
- Allow downloading, compiling, and running arbitrary Lisp code
- One of the tabs is always a REPL
- Add a caching filesystem that would persist part or whole of the image
… might this create a modern-day Lisp machine? Maybe.
Did I miss anything obvious here? If not, this sounds doable in a few years.
I’m lazy, so if you like this idea (I’m sure there’s a guaranteed niche market for these machines), go ahead and throw it onto Kickstarter. Or something.
Many good things, to be sure, but more has been omitted.
Perhaps Kent Pitman expressed it best:
> My only problem with this is that things are ALREADY sped up. What’s the point of running a zillion times faster than the machines of yesteryear, yet still not be willing to sacrifice a dime of it to anything other than doing the same kinds of boring computations that you did before? I want speedups not just to make my same old boring life faster, but to buy me the flexibility to do something I wasn’t willing to do at slower speeds.
I wanted to make a foray into mobile app programming (ah, ok, nothing serious! Just a toy game or two) — and when I looked around, it seemed I had to deeply immerse myself in either the entire iOS ecosystem or the entire Android ecosystem.
Well, in fact, it’s worse — because I first have to make that choice!
So there are platform-independent alternatives — Xamarin (C#? no thanks), Titanium (maybe), and PhoneGap (heard good things!). Eventually, though, I came across [this nifty open-source framework], which seemed like it would fit my use case of “a toy game project” just fine.
It was super easy to get started (hey! a simulator in the browser! — contrast that with a completely non-working emulator for Android). But very soon I ran into (what seemed like) a huge problem — how the $#% was I supposed to debug anything here?
The running server (context: the app development environment is basically a node.js environment) just printed nonsensical error messages about “native view: undefined” or some such. This was horrible! How did anyone ever use this?
Yes, there is the whole “dynamic types => errors from typos” problem, and I ran into it pretty early on when a missing comma gave me a lot of grief. But this is somewhat made up for by the source-level debugging at the console, where I can see the problem and fix it right away!
WTF? Everything just works? And there are tons of libraries too!
And here I was, thinking that the solution to the Lisp GUI problem was to tie WebKit bindings to an MVC framework, to create a modern version of CLIM — but there’s already a (non-Lispy) version of that out there!