Almost all software systems today are interactive, in that they maintain ongoing interactions with their environments, rather than simply producing a result on termination [GSW06]. Indeed, a consistent trend since the beginning of the digital era has been towards increasingly interactive systems. The trend has progressed on at least two fronts: enhancements to end-user interactivity, and increasingly integrated systems. The trend began with the first teletype and textual interfaces and continued through early GUIs and LAN-based operating systems. It continues with today’s 3D virtual worlds and applications deployed over the wide-area network.
With the Internet emerging as the “global operating system”, the pressure on our software to be interactive is greater than ever before. Consider how the following requirements can be understood in terms of enhanced interactivity:
- Ability to reconfigure or repair applications without taking them offline → interaction with code as well as data
- Long-running, continuously-available applications → interaction must be robust
- Sessions resumable from wherever we happen to be located → persistence of interactions
- Transparent recovery from latency problems and intermittent connectivity → interaction should not be semantically sensitive to the network
- Mobile code whose behaviour depends on execution context → dynamically scoped interactions
A variety of process algebras and other formalisms have been developed for modelling and reasoning about interactive systems. Yet despite the trend towards greater interactivity, we continue to lack a simple and coherent paradigm for building robust interactive systems. The main bugbear has been what we might characterise as an “impedance mismatch” between traditional algorithmic programming languages and the way interactive systems abstractly work. Whereas an algorithmic language treats a program as a black box which produces a final value on termination, an interactive system allows other systems to observe and influence its behaviour as it executes, and must adjust its internal state in response to each interaction to maintain the consistency of the computation.
In current desktop systems, the mismatch is usually resolved by representing the state of the system as a set of mutable stores and then employing a notification scheme to maintain the consistency of the state. Rather than the host language being used to execute a single sequential program to termination, it is employed to execute fragments of imperative code as interactions occur. Each executed fragment must produce exactly the side-effects required to synchronise the state of the system correctly.
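To make this concrete, here is a minimal sketch of the mutable-store-plus-notification style just described (all names here are hypothetical, invented for illustration): state lives in mutable cells, and fragments of imperative code run on each change to keep derived state consistent by hand.

```python
class Cell:
    """A mutable store that notifies observers when written."""
    def __init__(self, value):
        self._value = value
        self._observers = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for fn in self._observers:  # each fragment of imperative code must
            fn(new)                 # produce exactly the right side-effects

    def observe(self, fn):
        self._observers.append(fn)

# Keeping a derived "total" consistent with two source cells by hand:
price = Cell(10)
qty = Cell(3)
total = Cell(price.value * qty.value)

def resync(_):
    # Easy to get wrong as the system grows: forget one subscription, or
    # fire notifications in the wrong order, and the state silently diverges.
    total.value = price.value * qty.value

price.observe(resync)
qty.observe(resync)

qty.value = 5
print(total.value)  # → 50
```

Note that the consistency of `total` is nowhere stated declaratively; it is only an emergent property of the callbacks firing correctly, which is precisely the fragility the rest of this post is concerned with.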
Unfortunately this near-universal reliance on the imperative to support interaction has come at an enormous cost in complexity. It will be useful here to recall Brooks’ distinction between essential and accidental (or inessential) complexity [Bro87]. Complexity inherent in the problem itself (from a user’s perspective), or which can be attributed to human factors, is essential; what remains is accidental or inessential. Interactive systems as currently implemented are dominated by inessential complexity, the main culprits being this ad hoc management of state, explicit concern with flow of control, and unnecessary use of non-deterministic concurrency. Imperative languages (ab)used in this way are the “Turing tar-pit in which everything is possible but nothing of interest is easy” [Per82]. Web applications only complicate things further, by adding a variety of ill-defined staging policies and interaction models into the mix.
Ironically, what unifies much of this inessential imperative complexity is that it exists in the service of essentially declarative ends. Indeed, I suggest that the reason the current paradigm works at all is that systems implemented in this way approximate a much simpler, declarative model of interactive computation. Experienced software designers rely tacitly on something like this declarative model when deciding what counts as reasonable behaviour. The declarative paradigm is sometimes assumed to be ill-suited to interaction, perhaps because of the somewhat arcane feel of techniques such as monads [JW93][Wad99] for modelling effectful computation. On the contrary, with a simple orthogonal extension to support interactivity, it fits the abstract behaviour of interactive, stateful systems surprisingly well. But today, because this declarative behaviour is achieved only indirectly, interactive systems are significantly less reliable and understandable than they need to be, despite the widespread use of design patterns such as Observer [GHJV95] to manage some of the inessential complexity. State-related dysfunctions in particular are common: the gradual degradation of a system over time, spurious sensitivities to the order in which two operations take place, deadlocks, and race conditions. I propose that the best way to address the impedance mismatch between algorithmic languages and interactive systems is not to wallow in the tar-pit of imperative languages, but to lift declarative languages into a computational model which supports interaction directly.
The aim of this blog is to explore the conceptual foundations of such a programming model, which I shall refer to as the declarative interaction model. Developing a formal semantics will be a topic for later treatment. Like some other declarative approaches to interaction, the declarative interaction model maintains a clean separation of (stateless, deterministic) declarative computation and (stateful, non-deterministic) reactive computation. The model is distinguished by its modal construal of interaction, whereby an interactive system is taken to be a space of canonically represented “possible worlds”, each expressing a purely declarative computation, along with an index into that space indicating the “actual world”, which represents the system’s current state. To interact with such a system is to select a new actual state from this space of possible states. Interactions (effectful operations) are lifted to meta-programs that manipulate values of modal type; crucially, they are unable to interfere with the purely declarative semantics of each possible state, and in any case are only required when they are essential to application logic. Non-determinism arises from concurrent interactions, which are handled transactionally.
The declarative interaction model relates to many active research areas, including modal type systems, incremental computation, meta-programming, declarative concurrency, transactional concurrency, dataflow computing, interactive computing, and wide-area computing. As we shall see, four closely related concepts in particular are central to the model: modality, incrementality, concurrency, and persistence. A key validation of the model will be the development of an interactive programming language which supports the model directly.