Back to the future

I know not of these “end users” of whom you speak. There is only programming.

— Unknown programmer of the future

Jonathan Edwards recently wrote that an IDE is not enough. Breaking out of our current commitment to text files and text editors requires a new programming paradigm, not just a fancy IDE. I feel the same. I’ve spent the last two and a half years moving my interactive programming work forward, and I feel I’m now ready to start sharing. Bret Victor blew us away with his influential, Jonathan-Edwards-inspired, mindmelt of a demo but was conspicuously quiet on how to make that kind of stuff work. (Running and re-running fragments of imperative code that spit out effects against a shared state, for example, does not a paradigm make, even if it can be made to work most of the time. “Most of the time” is not good enough.) I won’t be blowing anyone’s minds with fancy demos, unfortunately, but over the next few months I’ll be saying something about how we can make some of this stuff work. I’ve been testing out the ideas with a proof-of-concept system called LambdaCalc, implemented in Haskell.

We can push the envelope of interactive programming from two directions simultaneously. From one direction we can use existing languages such as Clojure and JavaScript to build more interactive tools, perhaps for those very languages, with support for live programming. An example is Chris Granger’s high-profile, Bret-Victor-inspired, Light Table venture. From the other direction, we can explore new computational paradigms in which this kind of interactivity comes built-in. LambdaCalc is very much in the latter camp. It’s where I feel we’re ultimately headed. Plus coming at things from that direction is a lot more fun :)

In some ways we need to go backwards to go forwards. In Smalltalk systems such as Squeak, “programs” are interactive, computational universes where everything is live and connected. These systems date back at least to the 1980s. This Alternate Reality Kit (ARK) video from 1987, which Daniel Yokomizo recently tweeted, is really worth a watch. The Self language was very much in the same spirit.

Influential though these systems were, they didn’t cause a programming revolution. I think one reason is simply that they predated the internet. A live environment where you can connect computations together interactively is less interesting if the “universe” is a single process running on your local machine. If the universe is the global network with potentially billions of interacting agents, it’s a different story. Perhaps it’s an idea whose time has come.

Thinking outside the black box

I’ll briefly say something about my long-term goals, before I claw myself back to some examples of my meagre achievements so far. What I’m working towards is a computational medium into which you can drop fragments of code and have them unfold like proteins into their execution. All code is live, explorable, connectable and interactive. There are no black boxes: no computations, apart from some basic primitives, whose inner workings are in principle inaccessible. David Barbour reminded me there are boxes that we lack “the authority, time, or inclination to look into. And boxes that have yet to be built.” Much of course remains hidden for practical reasons such as these, but the point is that nothing is hidden in any kind of fundamental way. What this means is that (subject to David’s caveat) any end-user — not just a programmer — can browse into a running program, see how it works, change things, fix stuff. Moreover, any programmer can compose new applications on the fly out of parts of existing, running applications. It’s the mashup model of web programming, promoted from bespoke cobbling-together of web technologies into ubiquitous infrastructure.

Unifying user interface and execution, all the way down

I have several core components of a system like this working. I call the system LambdaCalc after VisiCalc and SuperCalc, the iconic spreadsheet systems of the late 70s and early 80s. Think of it as an incremental spreadsheet for functional programming. LambdaCalc is proof-of-concept, not something you can play with yet. Why? Well, it’s partly self-hosted — a significant chunk of the UI is written in LambdaCalc. That slows it down rather profoundly.

The irony of presenting “interactive programming: the non-interactive version” has not escaped me. Nevertheless, I shall plough on, undeterred. I’ll try to defend the self-hosting decision a bit towards the end. So let’s see what a running program looks like in LambdaCalc. Since it doesn’t run fast enough to be usable, I’ve used the GUI to generate some slides which simulate a plausible interaction. Just click on the figure below (you might want to download the PDF and view them full-screen instead):

The effect of expanding the ellipsis is similar to stepping into a function call in a debugger, but rather than having to run their program in a separate debugger, the user instead inspects the computation from the editor where they created it, by simply unfolding the source code in situ. (If you’re familiar with Subtext, you’ll know what I mean.) Modulo a few minor syntactic extensions to aid comprehension, such as the argument-binding convention shown in the slides, the “language of computation” is the language of programs. The programmer need only learn one notation and can understand the execution of a program in terms that are already familiar.

Incremental update

In LambdaCalc, when the user makes a change to the program their view of its execution is updated on the fly. The result is a new computation with some highlighting indicating the parts that changed. The system uses a memoisation-like scheme to determine when a particular part of the new computation notionally has the same “identity” as, although possibly a distinct value from, a part of the old computation. When a node of the computation is preserved in the new state, the information about how the user was browsing it can also be preserved, even though its contents may have changed. This allows the user to see the impact of the change expressed in terms of the aspect of the program they were viewing previously. (This is how I justify using the term “on the fly”, even though the update is slow at the moment.) The following set of slides illustrates this idea:
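To make the idea concrete, here is a minimal sketch in Haskell of one way such an identity-matching scheme could work. The `Key`, `Node` and `Delta` types are hypothetical illustrations of my own, not LambdaCalc’s actual representation: nodes of the old and new computation are matched by the source location and call path that produced them, so a node that survives under the same key counts as “the same node”, whether or not its value changed.

```haskell
import qualified Data.Map as Map

-- Identify a trace node by the source location that produced it plus the
-- call path that led there. Two runs that reach the same location via the
-- same path yield "the same node", even if its value differs between runs.
type Key = (String, [String])   -- (source location, call path)

data Node = Node { key :: Key, value :: Int }
  deriving (Eq, Show)

data Delta = Unchanged | Changed Int Int | Added Int | Removed Int
  deriving (Eq, Show)

-- Compare two computations node-by-node. Keys present in only one side are
-- reported as Added/Removed; keys present in both are compared by value.
diffTrace :: [Node] -> [Node] -> Map.Map Key Delta
diffTrace old new = Map.unionWith merge oldM newM
  where
    oldM = Map.fromList [ (key n, Removed (value n)) | n <- old ]
    newM = Map.fromList [ (key n, Added   (value n)) | n <- new ]
    merge (Removed a) (Added b)
      | a == b    = Unchanged
      | otherwise = Changed a b
    merge d _ = d
```

Because a preserved key means a preserved identity, any browsing state the user had attached to that node can be carried across the edit, which is exactly what makes the delta view possible.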

These interactions are queries that the user formulates and obtains answers to by browsing and making edits and observing the consequences. I call them “what if” questions because semantically they have the form “how would my program’s execution differ if it were changed in the following way?” The answers to such questions manifest as changes to the current view. If the user does not like the answer they can undo to return to their previous state, or continue with further diagnostic interactions or edits. This ability to frame “what if” questions by editing data values or programs is what makes LambdaCalc “spreadsheet-like”, albeit for functional programming. But unlike a normal spreadsheet, both edits and their consequences are always made visible via explicit delta-highlighting, until the user accepts the changes and moves on. (Accepting changes might be an implicit action triggered for example by starting a new edit.) This delta information can in principle be calculated efficiently by exploiting the dependency structure of the computation, but for now I make no attempt to do so.
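This accept/undo discipline can be sketched in Haskell as a committed state plus at most one pending, not-yet-accepted edit. All the names here are invented for illustration — nothing below is LambdaCalc’s actual API.

```haskell
-- Hypothetical model of a "what if" session: the committed state plus an
-- optional pending state that the user has not yet accepted.
data Session a = Session { committed :: a, pending :: Maybe a }

-- What the user currently sees: the pending state if there is one.
current :: Session a -> a
current (Session c p) = maybe c id p

-- Pose a "what if" question by applying an edit, without committing it.
whatIf :: (a -> a) -> Session a -> Session a
whatIf edit s = s { pending = Just (edit (current s)) }

-- The user does not like the answer: discard the pending state.
undo :: Session a -> Session a
undo s = s { pending = Nothing }

-- The user accepts the change (perhaps implicitly, by starting a new edit).
accept :: Session a -> Session a
accept s = Session (current s) Nothing
```

The delta-highlighting the user sees is then, conceptually, the difference between `committed s` and `current s`.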

If only we had them

Manuel Simoni introduced me to Jacques Mattheij’s notion of tool strapping. Whatever tool we happen to be building, we always seem to need that tool to build it. To write our compiler, we need to have already written that very compiler, etc etc. As I’ve suggested several times in the past (for example at the end of this post), I kind of feel this is where we are with next-gen programming tools. If only we had them, we could build them.

One way to think about the motivation for self-hosting is to think about how one might visualise deltas. A straightforward approach would be to write some Haskell code that directly generates a visualisation of a computation delta. An approach that I find more interesting is to think of the visualisations themselves as incrementally evolving values, computed by a LambdaCalc function in response to input changes. To make this possible requires linguistic reflection. We need to be able to reify a LambdaCalc computation from the meta-language (in this case Haskell) into a LambdaCalc value which can be provided as input to visualisation code written in LambdaCalc, and then be able to reflect the output of that function back into the meta-language for rendering.
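A toy version of that round trip might look like this in Haskell. The `Trace` and `Value` types are stand-ins of my own invention: `reify` exposes a meta-level trace as an ordinary object-language value built from constructors, so that visualisation code written in the object language can pattern-match on it, and `reflect` recovers a trace from such a value.

```haskell
-- Hypothetical meta-language representation of an object-language term/trace.
data Trace = Lit Int | Var String | Lam String Trace | App Trace Trace
  deriving (Eq, Show)

-- Hypothetical object-language values: constructors applied to arguments.
data Value = Con String [Value] | Num Int
  deriving (Eq, Show)

-- Reify: turn a meta-level trace into an ordinary object-language value.
reify :: Trace -> Value
reify (Lit n)   = Con "Lit" [Num n]
reify (Var x)   = Con "Var" [Con x []]
reify (Lam x t) = Con "Lam" [Con x [], reify t]
reify (App t u) = Con "App" [reify t, reify u]

-- Reflect: recover a trace from a well-formed object-language value.
-- Partial, since not every value encodes a trace.
reflect :: Value -> Maybe Trace
reflect (Con "Lit" [Num n])       = Just (Lit n)
reflect (Con "Var" [Con x []])    = Just (Var x)
reflect (Con "Lam" [Con x [], v]) = Lam x <$> reflect v
reflect (Con "App" [v, w])        = App <$> reflect v <*> reflect w
reflect _                         = Nothing
```

The point of the exercise is that `reflect . reify` is the identity on traces, so the UI code can treat executions as plain data without losing anything.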

This is indeed how I’ve currently implemented things, and the reason that LambdaCalc runs so slowly. The advantage is that we compute visualisations that can be structurally compared like any other LambdaCalc value. We then simply define the visualisation of a trace delta to be the delta of the respective visualisations. The execution of the UI code on a changed computation produces a precise description of the corresponding changes in the visualisation of that computation. Put another way, we can think of the UI code as having its own incremental execution semantics. We just need a layer of GTK+ code (~350 lines) which automatically highlights the parts of the view which are new or have changed. All the graphics presented in this post were produced in this way. This may be an approach doomed to failure. Or maybe there’s a clever way to get fast and stay self-hosted. We’ll see.
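The defining equation — the visualisation of a trace delta is the delta of the respective visualisations — can be sketched with a naive positional diff over rendered views. The types here are again hypothetical; the real GTK+ layer works over structured values, not lines of text.

```haskell
-- A view is just a list of rendered lines (a stand-in for real layout).
type View = [String]

data Mark = Same String | Changed String | New String
  deriving (Eq, Show)

-- Naive positional diff of two views: compare line-by-line, then mark any
-- trailing lines of the new view as New. (Lines dropped from the old view
-- are ignored in this toy version; a real implementation would exploit the
-- dependency structure of the computation instead.)
viewDelta :: View -> View -> [Mark]
viewDelta old new = zipWith mark old new ++ map New (drop (length old) new)
  where
    mark o n | o == n    = Same n
             | otherwise = Changed n
```

The highlighting layer then only has to paint `Changed` and `New` marks; it never needs to know how the views were computed.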
