Self-explaining computation

Behind every value lies a computation struggling to get out. That’s the idea behind what I call the explodable user interface. (Forget wearable. Explodable.) By “explodable” what I have in mind is the ability to pick any part of an application’s GUI that you happen to be interested in and interactively “unpack” it into a story that explains how it was computed. It should be as though code lurks behind everything you can see, all the way back to the data sources your application works with. In Lisp you can explode an atom into its constituent characters, but the relationship between a Lisp atom and its characters has no computational content to speak of. In a typical application, the relationship between a value and its parts is non-trivial. If it were otherwise, you wouldn’t call it an “application”: you’d call it “a bunch of data”. Whenever this non-trivial structure is present, you should be able to browse into it in order to understand it or change it.

One of the steps towards turning this idea into a reality involves making the sharing structure of a pure computation explicit. I discuss this and related ideas in the following video. (The video is a segment of a talk I’ll be giving in a few weeks here at MPI-SWS.) James Cheney came up with “self-explaining computation” as the title of a paper we submitted to POPL 2010. The paper wasn’t accepted, and eventually mutated into this ICFP paper, but I still like the old name! It’s a nod to self-adjusting computation, which is itself an allusion to self-adjusting binary trees. Ho hum.

Visualising sharing by sharing visualisation

I already need to represent a LambdaCalc computation as a DAG, with explicit sharing, so that I can capture re-use across computations as the partial inclusion of the previous state into the new state. But sharing also happens within a single state. Computations share whenever you invoke the same closure with an identical value (think memoisation, only with pointers) but, more importantly, values are shared all over the place. This kind of structural re-use is a familiar implementation technique from languages like ML and Haskell. Functional programmers often think about the way values are shared, even if technically it’s not part of the observable behaviour of their program.
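
To make this concrete, here is a minimal sketch of how sharing can be made explicit by interning nodes, in the hash-consing style familiar from ML and Haskell implementations. The Node and Dag types are hypothetical stand-ins, not LambdaCalc’s actual representation:

    import qualified Data.Map as Map
    import Control.Monad.State

    -- Nodes refer to their children by index, so shared substructure is
    -- explicit in the table rather than implicit in pointer identity.
    type NodeId = Int
    data Node = Leaf String | Branch NodeId NodeId
      deriving (Eq, Ord, Show)

    -- The DAG is an interning table plus the next fresh id.
    type Dag = (Map.Map Node NodeId, NodeId)

    -- Intern a node: if an identical node was built before, reuse its cell.
    intern :: Node -> State Dag NodeId
    intern n = do
      (table, next) <- get
      case Map.lookup n table of
        Just i  -> return i
        Nothing -> do
          put (Map.insert n next table, next + 1)
          return next

Interning the same node twice yields the same cell: evalState (do { a <- intern (Leaf "x"); b <- intern (Leaf "x"); return (a == b) }) (Map.empty, 0) comes out True.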

There are two kinds of value-sharing within a computation. The first is analogous to the tail-call optimisation. Consider the call graph for eval as it interprets a program. Some of these calls are tail calls. (In terms of big-step semantics: whenever you have an evaluation rule which returns v, where v is the value computed by the rightmost premise of the rule, you have a tail call.) For example a case analysis just returns the value of the selected branch; a function call just returns the value computed by the executed function body. Maximal chains of these tail calls in the interpreter can be visualised as single computation-cells in LambdaCalc, returning the value computed by the terminal computation in the chain. (And this observation now suggests an obvious improvement to the current LambdaCalc UI: make the linear control-flow within one of these computation-chains more explicit, by just showing the path selected in the tree of conditionals, rather than the whole tree.) The structural/visual re-use here is that many steps in the computation share a single value-cell.
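
As a tiny illustration, here is a big-step eval for a toy language (not LambdaCalc’s actual interpreter) with the tail positions marked:

    -- Toy object language: numbers, addition, and a zero-test conditional.
    data Expr = Num Int | Add Expr Expr | If0 Expr Expr Expr

    eval :: Expr -> Int
    eval (Num n)     = n
    eval (Add e1 e2) = eval e1 + eval e2   -- not a tail call on either side:
                                           -- the rule combines both premises
    eval (If0 c t f)
      | eval c == 0  = eval t              -- tail call: the value of the
      | otherwise    = eval f              -- selected branch is returned as-is

A nest of If0s would then render as a single computation-cell, sharing its value-cell with the Num at the end of the selected path.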

The second kind of visual sharing is more akin to memoisation. With memoisation, you avoid redundant computations by retrieving the cached value whenever you “match” a previously executed function call. In LambdaCalc, computation “is” the visualised structure. So memoisation is spatial: when you find yourself trying to construct a visualisation that’s already been built, you can reuse the cached version instead. This gives rise to explicit sharing in the UI itself. The sharing can be visualised using links (as shown in the video above) or by other visual cues that make it clear that two bits of UI are “the same”. For example you could transclude the memoised view, but have it so that selecting the transcluded view selects all occurrences of that view. You could even clone the mouse pointer when it hovers over a shared view.
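
Here is a sketch of what spatial memoisation might look like, under the assumption that a view is keyed by the (closure, argument) pair that produced it; ViewKey, View and memoView are all hypothetical names:

    import qualified Data.Map as Map
    import Data.IORef

    type ViewKey = (String, String)  -- stand-in for a (closure, argument) pair
    newtype View = View ViewKey      -- stand-in for a rendered view

    -- Build a view, unless an identical one is already cached, in which case
    -- the cached view is reused and can be shown as a link or transclusion.
    memoView :: IORef (Map.Map ViewKey View) -> (ViewKey -> View) -> ViewKey
             -> IO View
    memoView cacheRef build k = do
      cache <- readIORef cacheRef
      case Map.lookup k cache of
        Just v  -> return v
        Nothing -> do
          let v = build k
          modifyIORef cacheRef (Map.insert k v)
          return v

The cache hits are exactly the places where the UI should display sharing rather than a second copy.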

Exploding the user interface

I’m only scratching the surface of this idea in the video above. I show how you can collapse a computation all the way back to a value, and conversely expand a value out into an explanation of how it was computed. There’s a sizeable gap between this basic facility and explodable user interfaces proper, of course. Some prerequisites are straightforward UI improvements, in the form of more control-flow-symmetric visualisation: flattening conditional structure within a linear chain, as mentioned above; allowing “leading” context as well as “trailing” context to be collapsed into an ellipsis; and showing bindings for the pattern-variables that occur in a case analysis, as mentioned in the video. These are all relatively straightforward regularisations that will significantly improve the scalability and incremental explorability of the UI.

Beyond making the UI more symmetric and scalable, a more fundamental requirement is GUI-level integration between the meta-language – the host environment that provides LambdaCalc with a GUI, namely Haskell – and the object language, LambdaCalc itself. In particular, when the programmer writes GUI code (code that computes values of type “GUI”) in LambdaCalc, the computed GUI values must be visualised as GUI objects inline with the visualisation of the execution of the GUI program itself. This means being able to reflect a value of type “GUI” up to the meta-language so that it can be directly rendered in the host GUI. I’m guessing that this is the kind of thing that goes on in reflective GUIs like Squeak, but I haven’t actually looked at how they work.

I already have the raw components of this: a pair of functions called reflect and reify that coerce values between Haskell and LambdaCalc. This is how I’ve implemented some of the GUI for LambdaCalc itself. However, the integration is ad hoc and requires more work. When that’s in place, collapsing a computation back to a value will turn it back into the application’s GUI. At that point the application and its programming environment become one and the same beast.
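
As a rough sketch of the shape these might take (the Value type and the Rep class here are placeholders, not the real API), with reflect going up to the meta-language as described above:

    -- Hypothetical object-language values.
    data Value = VNum Int | VCon String [Value]

    -- Haskell types that can cross the boundary in both directions.
    class Rep a where
      reflect :: Value -> a  -- object language up to the meta-language
      reify   :: a -> Value  -- meta-language down to the object language

    instance Rep Int where
      reflect (VNum n) = n
      reflect _        = error "reflect: not a number"
      reify            = VNum

With a Rep instance for a GUI type, a computed value of type “GUI” could be reflected up and handed straight to the host renderer.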

This kind of destructuring interweaving of the visualisation of execution with the visualisation of output has so far been lacking in the live programming systems I’ve seen. The standard setup is a static program in one window and its output in another. The execution itself is hidden, and a fortiori so is the relationship between steps in the execution and elements of the output. As you hack the program, the output updates, but there’s no way to browse into the execution and see the value similarly broken down and distributed over the bits of the computation that contribute to it.