Changing the past in open systems

This post is about something I call retroactive update and its significance for editing a live program. Execution environments that allow you to hack running programs are an idea constantly being reinvented. I take that as a sign that it’s a good idea. There are debuggers and virtual machines that allow you to change definitions while execution is suspended at a breakpoint, and then to resume under the new definitions. In live programming, which means different things to different people but which here I will just define by example (Impromptu, Bricks, fluxus, SuperCollider), you hack concurrent processes to change their ongoing behaviour. Lots of web frameworks have something like Rails “development mode”, where you change code and have it reloaded into the browser automatically without restarting the app. Erlang has hot code loading. And I haven’t even mentioned the old stuff (ok, I can’t help myself: Sketchpad, ThingLab, VisiProg, ARK), or the new stuff (Bret Victor, Light Table, VisiPro, Subtext, Khan Academy Computer Science, RDP). Then there are spreadsheet languages and visual programming and…I could go on. This just scratches the surface of what’s already been done. Although these things differ wildly in what they mean by “execution”, “program”, and “make a change”, there is something they have in common. My personal suspicion is that Light Table will be the thing that brings this “something” to the masses, and at that point the pressure will be on to understand what that something actually is.

I’ll call this general ability to modify a running program online update. Online update is important — essential, I would say — for lots of reasons beyond the “immediate feedback” motivation often touted. I really like David Barbour’s short list of reasons why we should care. But although online update is fast becoming essential, in most of its current forms it is ad hoc and unsound. It’s basically just hacking and hoping. The user’s interactions and the program together form a crazy hybrid computation that interweaves changes to the program with execution of the program, in an informal and unspecified way. Sometimes this informality is the point (informality by design); sometimes it isn’t the point, but does little harm. In general, though, it matters a lot: we want implementations of paradigms, not just “implementations”. By way of example, in all the discussion around Light Table, no-one seems to have wondered out loud, “what are we actually trying to do here?”. Somehow it doesn’t matter; we’ll just hack something together and see what happens.

Programming in the subjunctive

One important step towards a more systematic approach to online update is to make the dimension of interaction explicit. This is one of the things I’ve focused on in my own research, which I call interactive programming, although that term has probably already been laid claim to. I allow the user to step sideways in time, into a “counterfactual” execution where it is “as though” the program had been written differently from the outset. Inspired by Demaine et al.’s retroactive data structures, imperative data structures that permit modifications to the historical sequence of operations performed on them, I’ll refer to this notion of online update as retroactive update. Retroactive update allows the “computational past” to be changed. Self-adjusting computation (SAC) is another system based on retroactive update. SAC explores another crucial aspect of online update: efficient update, via an algorithm called change propagation. SAC’s commitment to retroactivity appears in the correctness criterion for change propagation, which is defined as consistency with a from-scratch run under the modified code. (The kinds of changes you can make in SAC are quite limited, but this need not distract us here.)
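As a concrete (if toy) illustration of the retroactive flavour of change propagation, here is a minimal dependency-tracking sketch in Python. It is my own caricature, not SAC’s algorithm — SAC works over execution traces — but the correctness criterion is the same: an update must be indistinguishable from a from-scratch run.

```python
# Toy retroactive update over a tiny dataflow graph. Changing an input
# re-runs only the computations that depend on it, but the observable
# result always equals a from-scratch evaluation. (Illustrative sketch
# only; real SAC uses traces and a dynamic dependence graph.)

class Cell:
    def __init__(self, compute=None, deps=(), value=None):
        self.compute, self.deps, self.value = compute, list(deps), value
        self.dependents = []
        for d in self.deps:
            d.dependents.append(self)
        self.dirty = compute is not None   # computed cells start unevaluated

    def read(self):
        if self.dirty:
            self.value = self.compute(*[d.read() for d in self.deps])
            self.dirty = False
        return self.value

    def write(self, value):
        # Retroactively change an input: it is "as though" it always
        # held this value, so transitive dependents must recompute.
        self.value = value
        self._invalidate()

    def _invalidate(self):
        for c in self.dependents:
            if not c.dirty:
                c.dirty = True
                c._invalidate()

x = Cell(value=2)
y = Cell(value=3)
s = Cell(compute=lambda a, b: a + b, deps=[x, y])
assert s.read() == 5
x.write(10)               # change the "past"
assert s.read() == 13     # consistent with a from-scratch run
```

Only `s` is recomputed after the write; a cell that did not depend on `x` would be left untouched, which is the efficiency point of change propagation.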

There is an apparent thorn in the side of retroactive update: computational effects. In order for it to be “as though” the previous computation never happened, effects have to be reversible. We see this in SAC, where all mutating operations must be revocable. (Related, but probably another distraction: effects also need to be persistent in spacetime, which amounts to idempotence. There must be no sense in which the “same” effect can happen “twice”.) The problem is when the application being updated is embedded in a larger computational system, as live systems usually are. Then its effectful interactions with other agents are typically irreversible. Writing output to a remote console, committing a financial transaction, and emitting a sound through a speaker are all examples of actions that cannot usually be revoked.
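Here is a minimal sketch of what revocability amounts to, assuming a simple keyed store (the names and API are mine, not SAC’s): every mutation logs an undo action, so a retroactive update can roll the store back to an earlier point before replaying under the new code.

```python
# Sketch: a store whose mutations are revocable, as retroactive update
# requires. Each write records enough information to undo it; rolling
# back to a mark makes it "as though" the later writes never happened.

class RevocableStore:
    def __init__(self):
        self.data = {}
        self.log = []                      # undo records, most recent last

    def write(self, key, value):
        had, old = key in self.data, self.data.get(key)
        self.log.append((key, had, old))   # remember how to revoke
        self.data[key] = value

    def mark(self):
        return len(self.log)

    def revoke_to(self, mark):
        while len(self.log) > mark:
            key, had, old = self.log.pop()
            if had:
                self.data[key] = old       # restore previous value
            else:
                del self.data[key]         # key did not exist before

store = RevocableStore()
store.write("x", 1)
m = store.mark()
store.write("x", 2)
store.write("y", 3)
store.revoke_to(m)                         # undo everything after the mark
assert store.data == {"x": 1}
```

The thorn, of course, is precisely that a remote console, a committed transaction or a loudspeaker offers no `revoke_to`.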

Even when we are able to revoke all the effects we would need to for semantic consistency with a from-scratch run, it might just be impractical to do so. For example, for a long-running application such as an online retail website, which writes many transactions to a database, an update semantically equivalent to re-running all past transactions with the new code against an empty database would quite possibly be infeasible to execute. I can imagine applications where this would make sense. But it cannot in general be a correctness requirement.

Retroactive update, at least naively construed, thus embodies a closed-world assumption: that every past interaction falls within the scope of the update. For open systems, we need an approach to online update which is not so committed to reinventing the past. Dynamic software updating (DSU) is one attempt at this. In its most basic form, DSU tries to balance flexibility of update – permitting as broad a class of updates as possible – with safety, meaning that subsequent execution will not fail with a runtime type error as a consequence of the update. I’ll explain the basics of DSU with reference to the Proteus system and its successor Proteus-tx, which I believe are roughly representative of the state of the art. What we’ll see is that, while consistency with a from-scratch run is apparently too strong a property for many applications, merely guaranteeing that an update will not introduce a type error is far too weak. It’s probably a necessary condition for online update in a typed language, but it’s insufficient. It’s essentially just a type-safe version of hack-and-hope. Proteus-tx is a partial fix, but I think what that improvement actually points towards is a revisiting of the retroactive semantics in a new light.

In Proteus, the user first specifies a collection of modifications called an update which they intend to apply to a running system. An update is a set of new bindings for selected global symbols, with no fine-grained information about how they changed. (This is in contrast with interactive programming, where changes are syntactic deltas, but this isn’t the root of the problem.) DSU also permits data type definitions to change. In that case, the update must additionally include a “migration” function which, when the update is applied, will be used to translate values created under the old data type definition to forms consistent with the new definition.
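To make the shape of an update to a data type definition concrete, here is a hypothetical migration rendered in Python (the type and field names are mine for illustration; Proteus expresses updates in its own C-like language). Suppose the new code splits a single name field into two: values created under the old definition must be translated when the update is applied.

```python
# Illustration of a DSU-style migration function. Live values built
# under the old definition (CustomerV1) are translated to the new
# definition (CustomerV2) at update time. Hypothetical types.

from dataclasses import dataclass

@dataclass
class CustomerV1:
    name: str                     # old definition: one field

@dataclass
class CustomerV2:
    first: str                    # new definition: split fields
    last: str

def migrate(old: CustomerV1) -> CustomerV2:
    first, _, last = old.name.partition(" ")
    return CustomerV2(first=first, last=last)

live_heap = [CustomerV1("Ada Lovelace"), CustomerV1("Alan Turing")]
live_heap = [migrate(c) for c in live_heap]   # applied when the update lands
assert live_heap[0] == CustomerV2("Ada", "Lovelace")
```

The migration function is the only fine-grained information the update carries about how the representation changed; the new function bindings themselves arrive wholesale.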

An update point is a static program location specified by the user. At runtime, if an update is pending and an update point is reached for which the update is deemed type-safe, the update is performed and then execution resumed. Otherwise execution continues as normal. What distinguishes a DSU update, compared with a retroactive one, is that invocations of functions that are active when the update is applied will finish executing with the old version of the code, and only subsequent invocations will use the new code. For this to be sound, DSU relies on a property called “con-freeness”. Intuitively, the idea behind con-freeness (for an update to a data type definition, say) is that old code that will resume after the update will not concretely manipulate values of that type. Accessing a field or pattern-matching counts as concrete usage; simply mentioning a value of that type, without relying on its representation, does not compromise con-freeness. Notions of con-freeness are also defined for function and variable update, and the authors give an algorithm which statically approximates con-freeness for a given update site.
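A tiny executable caricature of this execution model (my own sketch, nothing like Proteus’s implementation): dispatch goes through a mutable code table, so an invocation already in flight completes with the body it started with, while calls made after the update pick up the new definition.

```python
# Toy model of DSU's old-code/new-code split. An active invocation
# holds on to the body it was started with; only dispatches made after
# the update see the new binding.

def run_system():
    log = []
    code = {"f": lambda: log.append("f v1")}   # current definitions

    def apply_update():
        code["f"] = lambda: log.append("f v2")

    active = code["f"]    # an invocation of f is already in progress...
    apply_update()        # ...when the update is applied
    active()              # the active invocation completes with old code
    code["f"]()           # subsequent invocations use the new code
    return log

assert run_system() == ["f v1", "f v2"]
```

Type safety of this split is exactly what con-freeness is meant to guarantee; what it does not guarantee is the subject of the next example.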

This form of DSU ensures that updating is sound from a typing point of view. However, it doesn’t rule out other runtime errors which would not arise under retroactive update, but which can arise under a hybrid execution of old and new code. Here is an example, taken from the Proteus-tx paper:

   proc f () =                            proc f () =
      ...                                    ...
                                             g();

   proc h () =                            proc h () =
      ...                                    ...
      g();                                   // update point
      // update point                        f();
      f();

Suppose the program on the left is edited into the program on the right by moving the call of g in h into the body of f. Don’t worry about whether there are other calls of f which are adjusted similarly, i.e. whether this is a refactoring or a behaviour-changing edit. It doesn’t matter. We need only note that if we dynamically update the program during an invocation of h, just before h makes the call to f (at the point indicated above), then that call to f will invoke the new version of f, which will call g. Then the old definition of h, which has already called g itself, will finish executing, resulting in g being called twice, even though this is not possible under either version of the program alone. If g has side-effects, this could be catastrophic. In fact, this isn’t really a solution. It’s just a type-safe version of the problem.
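The failure is easy to reproduce. Here is the scenario rendered as a runnable Python toy (a dispatch table standing in for DSU’s code replacement; names as in the pseudocode above):

```python
# Hybrid execution of old h and new f: g runs twice, an outcome
# impossible under either version of the program alone.

calls = []

def g():
    calls.append("g")

def f_old():
    pass

def f_new():
    g()                      # the call that the edit moved into f

code = {"f": f_old}

def h_old():
    g()                      # old h still calls g itself
    code["f"] = f_new        # the update lands at the update point
    code["f"]()              # dispatches to the *new* f, which calls g

h_old()
assert calls == ["g", "g"]   # g called twice: the type-safe disaster
```

Both versions of the program, run from scratch, call g exactly once per invocation of h; only the hybrid calls it twice.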

Transactional version consistency

Needless to say, this observation didn’t escape the DSU folks. It motivated an improved approach to DSU, called transactional version consistency, which has been implemented in the Proteus-tx system mentioned above. The idea of transactional version consistency is to allow the user to designate blocks of code as transactions, whose execution will always be attributable to a single version of the program. A contextual effect system, for any expression, statically computes approximations of the effect of the computation that has already taken place (the prior effect), and of the effect that has yet to take place (the future effect). An update is permitted during the execution of a transaction if it will be “as though” that transaction had run under the new code from the outset, or under the old code from the outset, but not some hybrid of the two, using the prior and future effects of the update point to decide this conservatively. If neither of these is the case, either the update or the transaction must be rejected. This approach is quite effective for so-called “long-running” applications, since (as the designers note) such applications are in fact often structured around relatively short-lived concurrently executing user sessions. Each session either aborts or executes to completion, committing changes to a shared persistent store. For these systems, transactional version consistency offers a less insane version of DSU.
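In miniature, with plain sets standing in for the statically computed contextual effects, the version-consistency check looks something like this (my own sketch of the idea, not Proteus-tx’s actual analysis, which computes conservative approximations):

```python
# Toy version-consistency check. An update may be applied at a point
# inside a transaction only if the whole transaction can be attributed
# to a single version of the code.

def update_permitted(changed, prior, future):
    """changed: symbols the update modifies;
    prior/future: symbols the transaction used before this point /
    may still use after it."""
    runs_as_new = changed.isdisjoint(prior)    # as if new code from the outset
    runs_as_old = changed.isdisjoint(future)   # as if old code to completion
    return runs_as_new or runs_as_old

# The f/g/h example: the edit changes f and h, the transaction has
# already run (old) h and g, and will still call f and g. Neither
# attribution works, so the update is rejected at this point.
assert not update_permitted(changed={"f", "h"},
                            prior={"h", "g"}, future={"f", "g"})

# An update touching only symbols the transaction has not yet used can
# be attributed to the new version, so it is permitted.
assert update_permitted(changed={"f"}, prior={"h", "g"}, future={"f", "g"})
```

Rejection here means delaying the update to a later point, or aborting the transaction, which is exactly the trade-off discussed below.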

Stoyle and colleagues, in the Proteus paper, attribute the difficulties with DSU, such as those addressed by transactional version consistency, to the flexibility of being able to change the program in the future. But equally they can be attributed to the inflexibility of being unable to change the program in the past. So here’s my tentative conclusion: transactional version consistency is a hankering, in long-running systems, for some of the semantic consistency of retroactive update. Their observation is that what we mean, in practice, by a “long-running system” is not that the code is long-running, but rather that the effects of code (changes to data, etc) are external and long-lived. That’s a good observation. But what it suggests to me is that we can use retroactive update after all, even in a long-running system, by just amending the semantic correctness criterion: requiring consistency with a from-scratch run of the transaction or session, rather than a from-scratch run of a closed program.

What’s the advantage of thinking in terms of retroactive update, rather than transactional version consistency? Well, suppose we want to push a change to pricing, purchasing terms or other business logic while a session is active. With DSU, even with transactional version consistency, there are many undesirable outcomes. The session may not see the update, because it was applied too late in its execution (it will complete under the old code). This is consistent, but it may not be what we wanted. Or, the update may be rejected entirely, because there was no consistent update point available. Or, we might have pushed the update through, but only by rejecting some user transactions.

With retroactive update, we can reflect changes to the session at a point that is convenient to the user, but preserving retroactive consistency. Retroactive update certainly presents challenges of its own, in particular efficient implementation with arbitrary code changes, but I think these are outweighed by the benefits. Finally, another reason to prefer retroactive update is that implementing it properly implies reification of the computational history. This means we can show how the computation changed, by making the delta visible to the user. This is the philosophy behind LambdaCalc. After all, the use case the DSU folk are explicitly considering is deploying a change to an application in the middle of a transaction. It’s not hard to think of situations where it would be important for the user to have visibility on that. We’re yanking them into a parallel universe, so we should at least show them what happened differently.


  1. I like your goal of making the retroactive consistency more compositional in an open world.

    Though, you’ll still have difficulty correcting, say, sounds that have already been emitted from a speaker. And you might upset users who aren’t quite as careful as our electronic agents to observe changes in pricing, or who will get frustrated at changing their orders.

    If we had a robot we were controlling, we cannot retroactively change the route it already took, nor its current inertia, nor which objects are within its metal grasp. These could be understood as state updates, controlled by fine-grained transactions, I suppose, but it’s a bit unclear how to wrap transactions around actuator effects. I find dub

    What is “retroactive update” as the limit of transaction size approaches zero? i.e. as effects become continuous? It is instantaneous update – i.e. occurring at a well defined instant in logical time. This is essentially what I’m pursuing with RDP.

    • Seems I left a sentence unfinished. (“I find dub”-ious the use of transactions for general purpose…)

      I think the emphasis on transactions is generally a mistake in concurrency models. Transactions are a way to stretch logical instants over physical time – thumbing one’s nose at physical reality. Transactions operate poorly in scenarios that must observe changes from concurrent sessions (such as chatrooms), or where the most significant effects can only be observed after commit (such as blackboard metaphors, where effects are achieved indirectly by concurrent agents), or whose effects are not best described by events (which is usually the case).

      Anyhow, can we benefit from something similar to retroactive computation *even without* transactions?

      I believe so.

Changes to code in the present cannot reasonably affect the past, but they can affect the future. Further, our present may be determined partially by our future – e.g. if we are able to systematically anticipate future states (or a probabilistic projection of them) in a compositional manner, we can feed this projection into our present decisions. When we don’t like a future, we can change the present decisions and observe the newly anticipated future. This use of anticipation generally forms an open feedback loop, a bit like sticking one’s microphone near a speaker, but with some discipline (especially when developing shared-state models) it can be stabilized or dampened like feedback in any other system.

Retroactive update then occurs in that finite loop between present and projected future. Basically, we change our future on the premise that the previously projected future never happened. The previous projection doesn’t affect the new projection.

      Or, at least, there is no semantic effect. Memoization, caching, machine-learning, and non-deterministic constraint models may usefully reuse computations from the rejected future, resulting in more efficient, stable, and intelligent systems.

      I’ve been pursuing `probabilistic RDP` with an interest in projecting multiple futures at once, and to leverage these projections to select from among multiple pasts. I see this as sort of a many-worlds zipper: the past presents probable futures, the future filters fanciful pasts.

Retroactive computation serves very well for processing multiple futures in parallel – because it means there are no semantic dependencies between them; it ensures we really can compute in parallel. (Though, as above, we can share some non-semantic computations across the futures – caching, memoization, machine learning, non-deterministic decisions, etc.)

      • Roly Perera says:

        I agree that a transaction tries to extend logical instants over physical time. So does anticipation, though, doesn’t it? I’m not seeing a big difference between a transaction, and a speculative branch of execution about the future that might get discarded (semantically, with the option to re-use computation for efficiency or stability). “Retroactivity” is the ability of the transaction to translate itself into the “current” present of the agent it wants to influence, even if it began executing in a previous present. It’s a speculative execution that can refresh itself if it goes stale. It can’t affect the past of the system whose history it eventually forms part of, but rather its /own/ past can change: it will be “as though” the transaction started at some later point.

        You end up with agents carving their own paths through a many-worlds spacetime, near each other much of the time but with lots of local merging and branching. In such a system, the notion of “now”, “future” and “past” is quite ambiguous. On a small scale, time doesn’t even have a linear order.

        An analogue of this kind of speculative transaction in the physical world is “phi phenomena” (illusory movement), or at least some of the things you can expose with clever experiments. Libet studied some of these findings, and wondered whether the brain could somehow go back in time and change the past. More sensible people realised that our speculative interpretation of reality was being retroactively updated to be consistent with the present.

        • No. Anticipation does not extend a logical instant over physical time.

          First, it is not “a logical instant” – anticipation has no notion of temporal isolation. We can account for anticipated reactions from concurrent agents. Those reactions may occur at future logical instants (e.g. 200 milliseconds from now).

          Second, while we do depend on anticipation computing earlier than real-time, we do not depend on instantaneous computation. For example, we can ask questions of the form: “what if I press this button in one second?” then anticipate the future at two seconds. If it takes 300 physical milliseconds to compute that future, we still have 700 physical milliseconds to change our mind about pressing that button.

          • Roly Perera says:

Yes, I realised afterwards that I probably hadn’t properly understood what you meant by “logical instant”. So I take back those first couple of sentences.

            Responding to your comment below, yes, it seems what I’m calling a transaction is close to what you call anticipation. I’m unclear as to what the accidental complexity of a transaction comprises, that you want to replace by anticipation. You don’t want anticipation to be tied to commitment, obviously, but that feels like a transactional property.

            Transactions becoming invalid because of current events is normally a problem but not if transactions can be incrementally updated.

            So what other properties of transactions are accidental complexity? Isolation might sometimes be undesirable (as in the chatroom example) but often it’s exactly what’s wanted.

        • The use of transactions was motivated by a failure case – in which “update point” was annotated in code based on a control flow. I think if we more formally modeled the temporal properties of the code and the update, we could achieve a far more precise understanding of how the update interacts with ongoing behavior. Leveraging this understanding to achieve safe updates would avoid the need for transactions, and thus avoid the possibility of transactional failures (i.e. rejecting a transaction or update).

          You say we “can’t affect the past of the system… it will be as though the transaction started at some later point”. So it seems what you’re actually seeking is close to anticipation anyway. The features of transactions are an unnecessary burden, a cause of accidental complexity.

  2. Multiple possibilities exist (in the present) due to ambiguity or ambivalence. In both cases, they can be adjusted (in the future or from context) as we improve our information or commit to decisions. Of course, by the time the future comes around, we’ll have new ambiguous sensor information and new decisions to make… and so the many-worlds zipper continues.

    You have, in this article (and in the comment about Libet) framed the processing of multiple possibilities in terms of “retroactive” update. But we don’t need to model multiple possibilities in terms of retroactive update, i.e. temporally. Another option is to model multiple possibilities spatially – literally modeling multiple branches in state and communication.

    By improving our information and committing to decisions, we make future computations more efficient. I.e. there are fewer present possibilities that must be computed to determine future possibilities. If we can statically place fixed, stochastic, or asymptotic bounds on possibilities, we can even achieve real-time probabilistic computations.

    Note that retroactive correction is not part of this concept – no transactions, no anticipation, no rewind, no speculative evaluation. They simply aren’t necessary. We don’t need such features to explain subjunctive interfaces.

For RDP, I achieved anticipation early in my design, based on my temporal semantics and modularity properties. But anticipation only allowed me to consider one possible future at a time, which forced me to introduce a lot of explicit state and latency to consider multiple possible futures before choosing one. I needed an orthogonal concept if I was to process multiple present possibilities at once, which I’ve been developing as probabilistic RDP.

    Anticipation and probabilistic mechanisms synergize very well. At the intersection of anticipating the future and processing multiple present possibilities is the ability to anticipate multiple possible futures. If we can “choose” between present possibilities, we automatically get the ability to “choose” between possible futures. There are direct applications to command and control, conflict avoidance, continuous planning systems, and HCI.

  3. Roly Perera says:

    Now I’m more certain that we’re talking about similar things. The computational history that I’m talking about updating is represented spatially, but it captures something that can be thought of as having temporal structure (currently the big-step derivation tree, but I’m open to other structures). Now, it takes “actual” time to build that structure, because data dependencies impose partial sequentiality. But that must be equally true in what you have in mind. Time and space are computationally related (it takes time to explore space).

This is what I meant (although I didn’t take the time to explain) by saying that implementing retroactive update “properly” (i.e. efficiently) implies reified computation. Unless computation is a spatial structure, I can’t see it being physically possible to switch to an alternate timeline in time proportional to the delta between the two computations. You have to think of retroactive update as navigation. This has a temporal “interpretation”: you have to walk back along the dimension of the spatial structure that corresponds to computational dependency until you reach the most “recent” unaffected point in the computation, and then start building new computation, reconnecting with and reusing old computation where possible. Space and time are more intrinsically connected in this view of things. I should probably be more explicit about what I mean when I say “retroactive”.

    • In open systems, I think you’ll find your spatial structure – the one with a temporal “interpretation” – is frayed at the edges.

      • Roly Perera says:

        I don’t see a problem. It’s in its own local branch of spacetime (with lots of overlapping with nearby branches), until it’s incorporated into some larger computation, at which point it might turn out to be stale and in need of synchronisation. Feel free to be more explicit, though.

  4. RE: “if transactions can be incrementally updated”

    Of the few things that can be modeled effectively with transactions, even fewer can be modeled with incremental updates. It is not a technique suitable for general purpose programming.

    RE: “what other properties of transactions [cause] accidental complexity?”

    Against this role, the big ones are serializability, non-deterministic ordering, isolation, atomicity, and the need to break continuous or streaming data into transactions in the first place. See my article on transaction tribulations for more general issues. Serializability and non-deterministic ordering are more generally related to problems of eventful programming.

    RE: “Isolation [..] often it’s exactly what’s wanted.”

    Nonetheless, isolation is rarely what is needed. I have never seen a case where isolation is essential to a problem domain. Further, by introducing isolation, there is an opportunity cost – e.g. we cannot readily decompose into a multi-agent system, nor readily support cooperative work. Transactions help us resolve conflicts, but isolation hinders us from avoiding them in the first place.

Isolation is “often wanted” because it hides problems with a certain popular class of programming models – those that mix imperative programming with concurrency. Isolation answers problems that might not exist if we were using a different programming model.

  5. Roly Perera says:

    Of the few things that can be modeled effectively with transactions, even fewer can be modeled with incremental updates. It is not a technique suitable for general purpose programming.

    I don’t think this is true. One of the main things that limits the effectiveness of transactions is that they can’t be incrementally updated, but have to be thrown away. So updatability increases their applicability, rather than diminishing it. It certainly weakens your first two objections to transactions. At the very least, the jury is still out.

    Against this role, the big ones are serializability, non-deterministic ordering, isolation, atomicity, and the need to break continuous or streaming data into transactions in the first place. See my article on transaction tribulations for more general issues. Serializability and non-deterministic ordering are more generally related to problems of eventful programming.

I see non-determinism as an artefact of observing changes at a system boundary. Grow the scope of the computation, and those interactions are usually processed deterministically. Equally with events: what looks like an “event” at a system boundary is just a propagating state change from the vantage point of a larger system.

    I’m not proposing that what we currently call transactions or events are a good way to program. Re. your critique of the imperative, event-driven world we currently inhabit, I share your pain. But I’ve recently come to the conclusion that something like transactions and events are a well-justified perspective on the evolution of a multi-agent system. Each agent has its little world view that’s locally consistent. Inter-agent synchronisation isn’t continuous but happens in discrete units of update.

    Isolation is indeed often needed. I “need” my current interpretation of the world not to be continuously influenced by yours. If we’re both collaborating on the same document, I “need” not to see your edits in real-time. Instead, what I need is some local isolation. Moreover, some local isolation is physically inevitable: it is not possible for everything to influence everything else simultaneously.

    Atomicity may be a problem for large transactions, I agree. But we have collaborative models that don’t require atomicity, by for example permitting merging.

    What I’m lacking is a clear sense of what you have in mind. Perhaps you could explain how you imagine collaborative coding working, by way of an example.

You are quite optimistic about transactions, but I suspect your optimism hasn’t been tempered by experience. Let me know how you feel about it when you attempt to extend your programming model to problem domains that make it non-trivial – especially real-time systems with multiple sensors and actuators, FFI, continuous dataflows, digital signal processing, blackboard metaphors. Try sound, robotic motion, cooperative planning systems.

      Many transaction systems have approximated incremental updates by use of hierarchical transactions – i.e. such that the common case is that we only need to throw away and redo some sub-transactions and try again. When I was developing a transactional actors model back in 2008, they were implicitly hierarchical. But that didn’t help with the real problem cases.

      Here is a truth: “the set of operations that can be modeled effectively with transactions” is larger than “the set of operations that can be modeled effectively with transactions AND updated incrementally”. This is the truth you are rejecting in your first sentence, above. You posit that the main weakness of transactions is lack of incremental update. I posit the main weakness of transactions is that they are an unsuitable abstraction for many critical problem domains.

      “I don’t see a problem [..] it might turn out to be stale and in need of synchronisation”

The probability of it turning out stale and in need of synchronization grows with the number of agents and with the size of the transactions. Ad-hoc synchronization is itself a problem – it hinders reasoning about progress and starvation, and hinders the development of real-time systems.

      I “need” my current interpretation of the world not to be continuously influenced by yours.

This is not the same as the temporal isolation of transactions. We may, of course, compartmentalize data, such that you are continuously influenced only by my actions on shared resources, rather than by my interpretation of the world. I am an advocate of the object capability model and securable language designs.

      we have collaborative models that don’t require atomicity, by for example permitting merging

Merging of transactions is (a) very domain-specific, (b) often subtly a wrong thing to do.

      If we’re both collaborating on the same document, I “need” not to see your edits in real-time.

      It can be valuable to see edits in real-time, even better to see someone highlighting a chunk of text they intend to edit so you can avoid edit conflicts before they happen. (Conflict avoidance is preferable to conflict resolution.) But, if you do feel the dubious “need” to have your own copy of a text for editing, you can construct a copy without relying on transactional isolation. (Later, you can “need” a pony.)

      • Roly Perera says:

        I’m not suggesting transactions work for every interaction pattern, but just that it’s a reasonable and common pattern. Not a universal one.

        I’m still lacking a sense of what you propose as an alternative. I don’t feel the force of most of your points, and maybe I would if I could understand how you imagine collaboration and other forms of mutual influence working. Sure, excessive synchronisation will be problematic, but it’s the kind of problem that arises when you have multiple competing/collaborating agents with different opinions about how things should be. Chaos isn’t a viable alternative. There are costs to structure, but they’re matched by benefits. Do your solutions not make similar cost/benefit trade-offs? What are your alternatives to isolation, atomicity, and so on? What good is a copy if I have to manually synchronise it?

        • I am not suggesting chaos as an alternative.

          I favor use of temporal semantics to achieve logical, declarative synchronization. I directly model concurrent influence – i.e. at any given instant, a resource may be influenced by a set of values (which may change over time). This is composable with continuous queries or views. Since effects are logically concurrent, they do not need to be serialized.

          It is up to the resource how it resolves apparent conflicts. E.g. if told at the same time to turn left and right, an agent might arbitrarily favor left. While arbitrary, it is also programmatic and potentially deterministic – much easier to control and understand than the arbitrary before/after race conditions we see with serializing transactions. Simultaneously, the resource could report conflicts. With support for anticipation, we could anticipate the conflict and adjust our commands accordingly.
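A minimal sketch of this idea (entirely hypothetical – not RDP’s actual API): a resource receives its influences as a *set* of demand values at each logical instant, resolves conflicts with a deterministic rule, and can separately report that a conflict occurred:

```python
# Hypothetical sketch: a resource resolves a set of concurrent demands
# with a deterministic rule, instead of serializing them transactionally.

def resolve_turn(demands):
    """Resolve a set of 'left'/'right' demands at one logical instant.

    Returns (decision, conflict): the rule arbitrarily but
    deterministically favours 'left', and separately reports
    whether the demands actually conflicted.
    """
    active = set(demands)
    conflict = {"left", "right"} <= active
    if "left" in active:
        return "left", conflict
    if "right" in active:
        return "right", conflict
    return "straight", False  # no demands: fall back to a default

# Two agents disagree: the resolution is deterministic, and the
# conflict is visible to both rather than hidden in a retry loop.
decision, conflict = resolve_turn(["left", "right"])
```

The point of the sketch is that the arbitration rule is ordinary program logic – inspectable and deterministic – rather than an emergent before/after race.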

          I assume you at least know of temporal logic, temporal concurrent constraint programming, lightweight time warp protocol, discrete event logics, etc.. I believe it would be worth your time to drink deeply of those subjects, if you have not already.

          RDP’s temporal structure is, in the large, similar in nature to lightweight time warp, but much lighter weight by abandoning “messages”: the continuous signals make it easy to precisely compute and communicate what needs to be recomputed. (Anticipated signals also serve as a buffer of ‘mostly good’ data against network or scheduling hiccups, resulting in a very robust model – much more precise and expressive than dead reckoning.)

          Collaborative influence in RDP occurs at points of observation and influence of shared resources – especially demand monitors, stateless stability models, and shared state. (I’ve described some of these on my blog.)

          Compared to transactions, the tradeoffs I make are very different. Developers must model logical latency explicitly, statically. (A conservative estimate is acceptable.) The RDP component of any program is real-time. (Non real-time operations must be modeled by continuous or incremental processing of state, where each increment is real-time.) Control flow, in the unlikely case you might want it, must be modeled by use of intermediate state (e.g. to represent a message queue, or waiting on a return value from a procedure call).
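To illustrate “model logical latency explicitly, statically” (again a hypothetical sketch, not RDP’s actual API): each stage of a pipeline carries a conservative latency bound, and composition sums them, so the end-to-end latency is known before any value flows through:

```python
# Hypothetical sketch: stages declare a conservative logical latency,
# and composition computes the pipeline's total latency statically.

class Stage:
    def __init__(self, fn, latency_ms):
        self.fn = fn
        self.latency_ms = latency_ms  # conservative static estimate

def compose(*stages):
    """Compose stages; total latency is known before running anything."""
    total = sum(s.latency_ms for s in stages)
    def run(x):
        for s in stages:
            x = s.fn(x)
        return x
    return run, total

pipeline, latency = compose(Stage(lambda x: x * 2, 5),
                            Stage(lambda x: x + 1, 10))
# latency is 15 ms by construction, before any value is processed.
```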

          RE: What good is a copy if I have to manually synchronise it?

          You can still automate such synchronization, e.g. attach it to a timeout after you stop editing, or a commit button. But I’m not suggesting you favor a copy, unless you need your own version long-term. You don’t “need” that isolation in the first place.

          • Roly Perera says:

            Ok, thanks for clarifying. I can reasonably intuit your approach being useful for many of the “real” (non-digital) scenarios you mentioned earlier. I still worry that it doesn’t have the kind of properties that are needed for effective digital collaboration, co-operation and competition, but without more study (both of your approach, and mine), they remain mostly unsupported intuitions.

            I don’t know what you mean by “model logical latency explicitly”, but I guess I’ll read up on some of those topics you mentioned (temporal logic being the only one I’ve even drunk shallowly of), and maybe then you can elucidate.

          • Modeling concurrent effects via a set of values (in the role of demands) is excellent for collaboration and cooperation. Processing demands as a set, rather than one at a time:

            – eliminates a state burden to track concurrent demands
            – simplifies awareness of concurrent demands
            – makes it easier to seek solutions that meet many demands
            – requires a more formal and comprehensible expression of priority

            While I mentioned the `left vs. right` conflict, the more common case is that demands aren’t in direct conflict – i.e. there is often a solution available that satisfies most or all demands. And when there is no such solution, there is sometimes a compromise available that at least seems fair and safe.
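As a hypothetical sketch of such a compromise: when demands are numeric rather than discrete, a resource can often pick a value that minimises total dissatisfaction. Here an imagined shared thermostat receives a set of temperature demands and takes their median, which minimises the total absolute distance to all demands:

```python
# Hypothetical sketch: a shared resource compromises among a set of
# numeric demands by taking the median, which minimises the sum of
# absolute dissatisfactions across all demands.
import statistics

def compromise(demands, default=20.0):
    """Pick a setpoint from a set of concurrent temperature demands."""
    if not demands:
        return default  # no demands: fall back to a default
    return statistics.median(demands)

# Three agents demand 18, 21 and 22 degrees; nobody gets exactly what
# they asked for, but the outcome is fair, safe and predictable.
setpoint = compromise([18.0, 21.0, 22.0])
```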

            Combined with anticipation, collaboration and cooperation improve further. Anticipation makes it easier to report probable conflicts, since if we anticipate conflicts we can change our minds and avoid them before they become actual conflicts. Anticipation also makes it easier to perform short-term tactical planning and transition strategies that can avoid most conflicts – i.e. resulting in a smoother experience.

            By analogy, the first is like telegraphing that you plan to change lanes with your car, so the other cars may cooperatively open a hole. Transactions, by comparison, would smash the cars together when they move into the same lane, then say “oops! undo!”. You express worry about my approach to cooperation, but you should be much more concerned about the quality of your approach.

            I assume by “digital competition” you mean something like a chess game, or a first person shooter, or trading in a stock market. In these (rather specialized) cases, we might wish to restrict anticipation, so as to not telegraph our intentions.

            Restricting anticipation in RDP would be relatively trivial to model as a behavior that hides information about the future of a signal (e.g. reducing its quality or cutting it off entirely) then parcels the changes out over real time. I’ve mentioned such a behavior before, as an `opsec` behavior, which could be introduced at certain boundaries between competitive systems.
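A toy sketch of such a behavior (hypothetical – `opsec` is a name I mentioned, not an existing implementation): if a signal is modelled as a list of `(time, value)` updates, hiding its future amounts to truncating everything beyond a short horizon, so observers cannot anticipate our longer-term intentions:

```python
# Hypothetical sketch of an 'opsec'-style behavior: truncate a
# signal's anticipated future to a short horizon, hiding intentions
# from competitive observers.

def restrict_anticipation(signal, now, horizon):
    """Keep only updates at or before `now + horizon`."""
    return [(t, v) for (t, v) in signal if t <= now + horizon]

plan = [(0, "hold"), (5, "advance"), (30, "strike")]
visible = restrict_anticipation(plan, now=0, horizon=10)
# Only the near-term portion of the plan is exposed to observers.
```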

            The need for cooperation and collaboration (potentially with mutual distrust) is more common than the need for true competition. Insofar as direct communication goes, we want competition only at rare system boundaries. If we have fighting robots, they’re more likely to simply not talk to their opposition.

          • Roly Perera says:

            We seem to have run out of available nesting depth, so maybe we should take that as a sign to bring this conversation to a (temporary) end soon ;-)

            By “competition”, I mean multiple agents “attempting” to influence the overall behaviour of a bigger system that they are part of. Real-world examples are cars on a motorway, genes (which have pressures to compete as well as to co-operate), cells in a body (ditto), financial transactions against a bank account, alternative interpretations of perceptual stimuli, shoppers on eBay. Competitive processes are ubiquitous. Dynamic phenomena are often solving optimisation problems.

            Transactions, by comparison, would smash the cars together when they move into the same lane, then say “oops! undo!”.

            Not the transactions I’ve been talking about. Multiple cars compete to merge into a lane; one car is lucky enough to be the first winner, updating the top-level state; any car that wants to influence the outcome of the whole system can only do so by updating its knowledge and intentions to match the new state; at some point another car wins; repeat. There is no undo, only a continual process of state weaving and agent update. Having to update is the cost of doing the speculative execution. The opportunity to “win” is the benefit.
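A minimal sketch of the pattern just described, under the usual optimistic-concurrency assumptions (names are illustrative): agents race to commit against versioned shared state; a loser does not undo anything – it re-reads the new state and re-derives its intention:

```python
# Hypothetical sketch: optimistic compare-and-swap over versioned
# state. Losers refresh their view and retry; there is no undo.

class VersionedState:
    def __init__(self, value):
        self.version = 0
        self.value = value

    def read(self):
        return self.version, self.value

    def try_commit(self, seen_version, new_value):
        """Commit only if nobody else committed since we read."""
        if seen_version != self.version:
            return False  # stale view: caller must re-read, not undo
        self.version += 1
        self.value = new_value
        return True

lane = VersionedState(value=[])  # cars currently merged into the lane

def merge(state, car):
    while True:  # speculate, then retry from the fresh state on loss
        version, cars = state.read()
        if state.try_commit(version, cars + [car]):
            return

for car in ["a", "b", "c"]:
    merge(lane, car)
```

Each retry is the “continual process of state weaving and agent update”: the cost of speculation is re-deriving the intention against the winner’s state.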

            (As you mentioned once before, words can only get one so far, when intuitions don’t already align. In this case, ours don’t yet align, so we maybe should resume this when we have something more concrete to discuss.)

          • By “competition”, I mean multiple agents “attempting” to influence the overall behaviour of a bigger system

            I think you might want to find a different word for your meaning. Not all influence on systems is competitive in nature. Nor is it something we need to “attempt” to do. (For example: put a boulder on a highway, and it will influence the behavior of the larger system without even trying.)

            In any case, it is easy to model competitive influence on a system in RDP. For example, that `left vs. right` example I mentioned earlier. One can add weights to influences for priority.

            If you’re interested in this sort of thing, I suggest you look up “soft” constraint logic models. I also describe some of it with regard to my stateless stability model, on my blog.

            one car is lucky enough to be the first winner, updating the top-level state;

            Transactional systems aren’t prescient. The conflict of two transactions is typically only discovered after the collision.