Sweet Home West Midlands

Not my actual home, obviously. I’ll still be living in Bristol. But Birmingham is going to be my spiritual home: where I go when I lose the faith and stop believing in life after Dynamic Aspects. My first day left me feeling refreshingly (if inaccurately) only mildly stupid. My supervisor Paul Levy seems to be an all-round nice bloke; handy, as we’ve got to hang out for the next three years.

I’m currently learning modal logic. It’s turning out to be awesome. I’m hoping, sometime (oh, in the next hundred years or so), to be able to use it to formalise the possible worlds view of interactive systems which I’ve been informally mincing about with for the last six years or so. The basic idea of modal logic is that a modal formula is a little automaton that can wander about in a structure of worlds which are mutually accessible through various relations, taking on different values at different worlds. Different access relations between worlds induce different kinds of structural properties on the space of worlds (or contexts, states, instants or points, depending on how you want to construe the relata).
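To make that concrete, here’s a toy rendering of the idea in Haskell (my own sketch, nothing canonical): a formula is evaluated at a world, and the box and diamond modalities quantify over the worlds reachable through the accessibility relation.

```haskell
-- A formula is an automaton evaluated at a world; Box/Dia move it
-- along the accessibility relation.
data Formula = Atom String
             | Not Formula
             | And Formula Formula
             | Box Formula        -- true at every accessible world
             | Dia Formula        -- true at some accessible world

data Model w = Model
  { access :: w -> [w]             -- the accessibility relation
  , val    :: String -> w -> Bool  -- which atoms hold at which worlds
  }

eval :: Model w -> w -> Formula -> Bool
eval m w (Atom p)  = val m p w
eval m w (Not f)   = not (eval m w f)
eval m w (And f g) = eval m w f && eval m w g
eval m w (Box f)   = all (\v -> eval m v f) (access m w)
eval m w (Dia f)   = any (\v -> eval m v f) (access m w)

-- Two worlds, 0 -> 1, with "p" holding only at world 1.
example :: Model Int
example = Model (\w -> [1 | w == 0]) (\p w -> p == "p" && w == 1)
```

At world 0, both `Dia (Atom "p")` and `Box (Atom "p")` hold (the only accessible world is 1, where p is true); at world 1, with no accessible worlds, `Box` holds vacuously and `Dia` fails.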

Any kind of labelled transition system can be talked about modally, but so can many other things, such as time. A simple example: defining two modalities F and P (future and past), plus their respective duals G and H, such that F and P are inverses means that theorems such as

p -> GPp

and

p -> HFp

are universally true of such systems. (Respectively: if something is true now, then at all future times it will have been true in the past; if something is true now, then at all past times it was going to be true in the future.) Temporal modal logic is obviously relevant to interaction [Pnu92], but modality in general seems to offer a nice framework for thinking about lots and lots of things: state, control, and dynamic scoping, to mention some existing applications, but also incremental computation, transactional concurrency, declarative concurrency, mobile code; the list goes on. And on.
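Those two theorems are easy enough to sanity-check by brute force. Here is my own toy check over the instants 0..3 with the usual order as the future relation (and its inverse as the past relation), trying every valuation of p at every instant:

```haskell
import Data.List (subsequences)

type Instant = Int

instants :: [Instant]
instants = [0 .. 3]

-- Future and past are inverse accessibility relations.
future, past :: Instant -> [Instant]
future t = [u | u <- instants, u > t]
past   t = [u | u <- instants, u < t]

-- p -> GPp: if p now, then at every future instant, p held at some past one.
gpp :: (Instant -> Bool) -> Instant -> Bool
gpp p t = not (p t) || all (any p . past) (future t)

-- p -> HFp: if p now, then at every past instant, p was coming in the future.
hfp :: (Instant -> Bool) -> Instant -> Bool
hfp p t = not (p t) || all (any p . future) (past t)

-- Both theorems survive every valuation of p and every instant.
bothHold :: Bool
bothHold = and [ gpp p t && hfp p t
               | s <- subsequences instants
               , let p = (`elem` s)
               , t <- instants ]
```

The witnessing past (or future) instant is always the current one, which is why the theorems can’t fail whenever the two relations really are inverses.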

That’s the theory side of things, for now. On the practical front, I’m trying to understand monads and monad transformers well enough to implement a backtracking interpreter in Haskell that addresses the famously poor interaction between a normal-order or lazy reduction strategy and function caching or memoisation. The problem is simple: to be any good as an incrementalisation technique, memoisation needs to know the values of arguments to functions up front. But normal-order reduction defers the evaluation of arguments until they’re needed. A prima facie conflict of interests.
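The conflict shows up already in the standard memoised Fibonacci (my illustration, not the interpreter itself): the memo table is indexed by the argument’s *value*, so every lookup forces the argument — precisely the eagerness a normal-order strategy tries to avoid.

```haskell
-- Memoisation via a lazily-built table indexed by the argument.
-- Indexing with (!!) forces n: no value, no lookup.
fibTable :: [Integer]
fibTable = map fib' [0 ..]
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)

fib :: Int -> Integer
fib n = fibTable !! n
```

The memoisation makes `fib 30` cheap, but only because `30` was evaluated up front to index the table.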

I have a vague notion of a solution, an idea gleaned from Avi Pfeffer’s work on IBAL [Pfe06], but much more investigation is required. The nice thing about implementing the interpreter using monads is that it should transfer easily into a modal language, as it turns out monads themselves are closely related to modal logic [Kob97][Pfe01].
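Not the interpreter itself, but the backtracking flavour that monads buy you can be had in a few lines (my own toy, using the humble list monad rather than a full transformer stack): alternatives are explored and failed branches pruned by `guard`, with no search machinery visible in the program text.

```haskell
import Control.Monad (guard)

-- Enumerate Pythagorean triples up to 20 by monadic backtracking:
-- each <- is a choice point, guard discards failing branches.
pythag :: [(Int, Int, Int)]
pythag = do
  a <- [1 .. 20]
  b <- [a .. 20]
  c <- [b .. 20]
  guard (a * a + b * b == c * c)
  return (a, b, c)
```

Swapping the list monad for a transformer stack is what lets the same interpreter text pick up state, caching, or other effects later.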

(Actually call-by-need – the technique usually used to implement call-by-name – uses something like memoisation to avoid evaluating arguments more than once. But call-by-need is still a just-in-time evaluation strategy. What I’m looking for is something with the useful termination characteristics of call-by-name, for unneeded arguments, but the eagerness, for needed arguments, of call-by-value, without requiring a full global strictness analysis.)
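The half of the bargain I want to keep is easy to see in miniature (my own illustration): under call-by-need an argument that is never demanded is never evaluated, so this returns 42 instead of crashing, whereas call-by-value would force the error first.

```haskell
-- Call-by-name/need termination behaviour: the unused argument
-- stays an unevaluated thunk and the error never fires.
ignoreFirst :: a -> b -> b
ignoreFirst _ y = y

demo :: Int
demo = ignoreFirst (error "never demanded") 42
```

What’s missing is the other half: evaluating the arguments that *will* be needed as eagerly as call-by-value does, without a global strictness analysis to tell the two cases apart.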

2 comments

  1. Tom says:

    Great! This avenue sounds fascinating (and valid, which is even higher praise). I’m delighted you’ve bitten the bullet and are going for a PhD, thereby finally admitting that you just want to think all day and live off Cup-a-Soup.

    I still haven’t written up, obviously.

  2. Daniel Faken says:

    Regarding memoization in lazy languages, see: John Hughes. Lazy memo-functions. In Jean-Pierre Jouannaud, editor, Functional Programming Languages and Computer Architecture, number 201 in Lecture Notes in Computer Science, pages 129–146, Nancy, France, September 1985. Springer-Verlag. The idea is that you memoize on a pointer to the lazily-computed result. The pointer will always correspond to the most recent (possibly partial) evaluation of the result. (In your case, the result is a parameter.)