A giant CS/Lisp rant

Read the whole piece here; here’s a flavor:

Why are we not out there to offer a real database system with Common Lisp datatypes instead of the tragic mess that SQL imposes on us in the C-based APIs out there (not to mention that XML calamity)? Why are we not out there building the next planning system for interstate highway updates?

Why are we not building publication solutions that would allow a reversal of the most hostile of all hostile intellectual activities undertaken by mankind in the past 40,000 years – the flooding of innocent people with senseless loads of marketing crap – and building the foundation for pull advertising?

Where the hell did the intelligent agents go, anyway? Where is the grammar- and synonym-sensitive search engine that finds matches for articles with words you did not think of? Where is the dumbing-down service that can take a precisely formulated and primarily correct technical or scientific article and turn it into a meaningful piece of information for the 1000-word-vocabularians? Where is the research on machine representation of context going? Never mind the expert system that learns, I simply want an interface to an encyclopedia or specialized information database that expects me to remember what I read in some other article not all that long ago, so I do not need the full-blown version aimed at the relatively ignorant.

Where is the artificial intelligence that can actually take care of some of the things the human brain sucks at, like precision in its otherwise amazing memory? Where is the active suggestor, as opposed to the passive computer of what-if-scenarios? I want to let the network of company computers run what-if-I-had-thought-of-that-experiments and other Searches for Terrestrial Intelligence instead of wasting computrons on SETI. What if people were not so goddamn scared of machine intelligence higher than their own that they would keep computers as stupid as can be? Where are the people working on the future?

Where are the futurists that do the interesting stuff that will hit us all around the next bend? I mean, to hell with some practical extraction and reporting language, I want real progress, and I want it before I go mad with rage over the wastes of human ingenuity, such as it is, that go into writing yet another spyware “app” for Windows so yet another retard can send his obnoxious, insulting advertising to people who explicitly do not want that kind of information. For that matter, where is the spam filter that does the job of the intelligent, conscientious receptionist I can no longer afford because of the supposed labor-saving office automation that makes an ordinary business letter cost 20 times what it did in 1965 (adjusted for all important economic indicators)?

While I am at it – where is the real savings of the computer revolution? Who took all my money and gave me advertising for life insurance and Viagra?

Memristance is futile. Not.

I came across this Wired article recently, and what I read sounded too science-fiction-y to be true, so I decided to go to the source and found this video (see below) by a researcher at HP. It turns out to be both true and “science-fiction-y”.

We are used to thinking in terms of standard circuit elements — resistors, capacitors, inductors. The first establishes a relationship between voltage and current, the second between voltage and charge, and the third between magnetic flux and current.

Now it never occurred to me to really think about it this way (it’s one of those things that’s only obvious in hindsight), but there is a missing piece of symmetry here.

Look at that list again, and it might jump out at you that current, voltage, charge, and magnetic flux are related in pairs, with the exception of the pair charge and magnetic flux. Seeing this, it is reasonable to speculate that there should be a fourth circuit element relating precisely those two. And indeed someone did, about forty years ago (Leon Chua, in 1971), and named the missing piece the memristor.
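In symbols, using Chua’s differential form of the constitutive relations (a sketch, not a derivation), the pairing looks like this:

```latex
\mathrm{d}v = R\,\mathrm{d}i \quad \text{(resistor: voltage--current)}
\mathrm{d}q = C\,\mathrm{d}v \quad \text{(capacitor: charge--voltage)}
\mathrm{d}\varphi = L\,\mathrm{d}i \quad \text{(inductor: flux--current)}
\mathrm{d}\varphi = M\,\mathrm{d}q \quad \text{(memristor: flux--charge)}
```

The first three rows are the familiar elements; the fourth row is the one that was missing, and $M$ is the memristance.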

Now I should acknowledge that there is some controversy over whether what HP Labs claims to have discovered really matches up with this idea, so we’ll just have to wait a few years to test these claims, since the first commercial applications of this technology won’t be out for another five years at least.

But let’s continue. One of the observations made in the video linked above is that memristance obeys an inverse square law: the tinier the dimensions, the greater the observed effect. This also means it is something that belongs purely inside a chip, and not something you’d be putting on a breadboard any time soon.
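For reference, and as I understand the model in the HP paper (Strukov et al., Nature 2008), the memristance of their titanium-dioxide device comes out as roughly

```latex
M(q) = R_{\mathrm{OFF}}\left(1 - \frac{\mu_V\, R_{\mathrm{ON}}}{D^{2}}\, q(t)\right)
```

where $D$ is the thickness of the oxide film and $\mu_V$ is the dopant mobility. The $1/D^{2}$ factor is the inverse-square dependence: halving the film thickness quadruples the charge-dependent term.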

The most exciting property, though, is that its behavior in the future depends on its past. So it is both a logic component as well as a storage component. You could build a dense cluster of these things and configure which parts perform which function, much like an FPGA on steroids.

I used to think (again, only because this is what I was taught) that the fundamental logic component was the NAND gate — but it turns out not to be the only candidate. If we treat the interaction between input A and input/output B of a pair of memristors as an IMP (material implication) gate, then we can construct a NAND gate out of these.
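A minimal boolean sketch of why that works (this models only the truth tables, not the device physics; the function names are mine):

```python
def imp(a: bool, b: bool) -> bool:
    """Material implication: IMP(a, b) = (not a) or b.
    In memristor IMPLY logic this is the native two-device operation;
    here we just model it on plain booleans."""
    return (not a) or b

def nand(a: bool, b: bool) -> bool:
    """NAND built from IMP alone, plus a constant False:
    imp(b, False) == not b, and imp(a, not b) == (not a) or (not b),
    which is exactly NAND(a, b)."""
    return imp(a, imp(b, False))

# Check the construction against the definition of NAND on all four inputs.
for a in (False, True):
    for b in (False, True):
        assert nand(a, b) == (not (a and b))
print("NAND from IMP agrees on all four input pairs")
```

Since NAND is universal, IMP plus a constant is universal too, which is why a fabric of memristors can compute and not merely store.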

Further, multiple layers of these memristors can be stacked above a conventional CMOS layout, and densely packed together, leading to unprecedented on-chip memory, perhaps on the order of petabits!

So, how would this change things? It would certainly deprecate the SRAM → DRAM → hard-drive pyramid of caches we have right now, and we would not only have an ocean of universal memory, but our processing elements would be floating on this ocean, and entirely commingled with it!

We certainly won’t need to deal with the Von Neumann bottleneck any more …

Comments are explanations to the future maintainers of the code. Even if you’re the only person who will ever see and touch the code, even if you’re either immortal and never going to quit, or unconcerned with what happens after you leave (and have your code self-destruct in such an eventuality), you may find it useful to comment your code. Indeed, by the time you revisit your code, weeks, months or years later, you will find yourself a different person from the one who wrote it, and you will be grateful to that previous self for making the code readable.

Only in the mid-nineties, when the number of transistors on a single chip ceased to be the true bottleneck, may the “von Neumann bottleneck” have ceased to be the optimal solution. For the first time after fifty years of progress at break-neck speed, there was a glut of switching elements. This may have made it practical to consider architectures customized to high-level languages, but such an architecture would only be competitive at the expense of investments that only those firms committed to the von Neumann bottleneck can afford. So now, instead of a single such bottleneck, my next computer, with its quad-core chip, will have four of them. And I will be able to have my OCaml programs run wondrously fast. Will all those millions of transistors be used efficiently? Of course not: the famed “Road Ahead” is paved with waste.