Modularity in many dimensions

I recently read Doug Janzen and Kris De Volder’s paper Programming with Crosscutting Effective Views. I grok a little more clearly now what Gregor Kiczales meant by the word “effective” in the OOPSLA 2007 talk I discussed in my last post. In Janzen and De Volder’s terms, a view is effective when it can mediate changes to the underlying model. The problem of making a view effective, sometimes called the problem of bidirectionality, is a tricky one, but won’t concern us here. Instead I’m going to talk about what Kiczales and others call fluid AOP, for which effective views are essential.

One program, many faces

Aspect-oriented programming (AOP) is the latest development in our ongoing battle with the problem of modularity. But although AOP, in its standard form, represents a useful step forward, it ultimately succumbs to the very problem it sets out to address: the fact that some aspects or features of a system cross-cut others. By rearchitecting an OO system into a set of aspect declarations, one often merely inverts the modularity problem, distributing the behaviour of the original classes over the implementation of various aspects. What would previously have been a localised change to the class structure may now involve a change to several aspects.

Fluid AOP addresses this issue by taking a much less static view of programs. The idea is to treat aspects as editable — “effective” — views which are generated on demand for particular coding activities. For persistence purposes, one view is nominated as the “primary decomposition”. (With a legacy language this may just be the regular source view, but in principle the primary decomposition can be any view that suffices to determine all the others.) Making these views effective is where bidirectionality comes in: each view must define how edits are “put back” as edits to the primary decomposition.
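To make the put-back idea a little more concrete, here is a minimal sketch of an effective view as a get/put pair over some primary decomposition. The interface and the toy `Method`/`NameView` types are my own illustration, not an API from Janzen and De Volder's paper:

```java
// A minimal sketch of an "effective" view as a get/put pair.
// The names and types here are illustrative only.
interface EffectiveView<Source, View> {
    View get(Source source);                // derive the view on demand
    Source put(Source source, View edited); // put an edited view back
}

// A toy primary decomposition: a method with a name and a body.
final class Method {
    final String name, body;
    Method(String name, String body) { this.name = name; this.body = body; }
    @Override public boolean equals(Object o) {
        return o instanceof Method
            && ((Method)o).name.equals(name)
            && ((Method)o).body.equals(body);
    }
    @Override public int hashCode() { return name.hashCode() * 31 + body.hashCode(); }
}

// A view onto one facet of the model: the method's name.
class NameView implements EffectiveView<Method, String> {
    public String get(Method m) { return m.name; }
    public Method put(Method m, String newName) {
        return new Method(newName, m.body);  // rename, keep the rest
    }
}
```

An edit made through the view round-trips: `put(m, get(m))` leaves `m` unchanged, and `get(put(m, v))` returns `v` — the usual laws one would want of a bidirectional transformation.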

Before we come to fluid AOP, I want to revisit its motivation, and in particular the modularity challenge which is sometimes called The Expression Problem. My claim will be that, although some languages may be able to solve the Expression Problem, the more general modularity problem requires more than just fancy language features. Some kind of “fluidity”, it seems, is essential. I’ll close by suggesting that fluidity be understood in meta-linguistic terms: views must be seen as syntax, and inspection and manipulation of views as a form of meta-programming, often refactoring. “Effectiveness” (bidirectionality) is a challenge that needs to be tackled within this framework.

But I’m getting ahead of myself. Let’s start with modularity. I’ll understand modularity to mean, roughly, the extent to which a change to or extension of functionality requires only local changes. I want to make the case that the pursuit of “absolute” modularity is doomed: that we must always ask “modularity with respect to what kind of change?” In effect the question “how modular is my software?” is one that should be qualified by a class of transformation. If you’re already familiar with the Expression Problem, you can skip the next section.

Modularity with respect to what?

Imagine we’re writing an interpreter in, say, Java, for a simple language for integer arithmetic. We crank out the usual OO class hierarchy:

abstract class SyntaxNode {
    public abstract int eval ();
}

class Add extends SyntaxNode {
    public SyntaxNode left, right;
    public int eval () {
        return left.eval() + right.eval();
    }
}

class IntegerLiteral extends SyntaxNode {
    public int val;
    public int eval () {
        return val;
    }
}

(Don’t worry about the fact that the fields are public; that’s just me being lazy.)

Notice that adding a new kind of node involves only local changes: we just add a new node class and implement eval:

class Multiply extends SyntaxNode {
    public SyntaxNode left, right;
    public int eval () {
        return left.eval() * right.eval();
    }
}

What we have identified is a modularity property: modularity with respect to the addition of new node types.

Now let’s consider a different kind of change. Instead of a new kind of node, we want to provide a new kind of operation on nodes, say a toString method. One might be tempted to add the following code to the SyntaxNode class:

public final String toString () {
    if (this instanceof IntegerLiteral) {
        IntegerLiteral op = (IntegerLiteral)this;
        return Integer.toString(op.val);
    }
    if (this instanceof Add) {
        Add op = (Add)this;
        return op.left.toString() + " + " + op.right.toString();
    }
    throw new AbstractMethodError();
}

The problem with this approach, as we all know from standard OO doctrine, is that our über-method has to know about every kind of syntax node; in particular if we want to add a new kind of syntax node, we need to update this method as well to add a new clause:

    if (this instanceof Multiply) {
        Multiply op = (Multiply)this;
        return op.left.toString() + " * " + op.right.toString();
    }

So what we don’t like about this approach is that it violates modularity with respect to the addition of new node types.

In a functional language like Haskell this particular kind of modularity failure isn’t considered bad; in fact it’s normal! Our class hierarchy would correspond to an algebraic data type:

data SyntaxNode = Add SyntaxNode SyntaxNode
                | IntegerLiteral Int

All our functions would start with a case analysis, the functional counterpart of a type switch:

eval :: SyntaxNode -> Int
eval x = case x of
    Add a b -> eval a + eval b
    IntegerLiteral n -> n

toString :: SyntaxNode -> String
toString x = case x of
    IntegerLiteral n -> show n
    Add a b -> toString a ++ "+" ++ toString b

and adding a new kind of node would require modifying every function to add the corresponding clause.

But returning to the world of OO, the “right” way to define toString is as follows:

abstract class SyntaxNode {
    public abstract String toString ();
}

class Add extends SyntaxNode {
    public String toString () {
        return left.toString() + " + " + right.toString();
    }
}

This allows us to maintain modularity with respect to the addition of new node types. As before, we need only define a new class and provide an implementation of the various abstract methods:

class Multiply extends SyntaxNode {
    public SyntaxNode left, right;

    public int eval () {
        return left.eval() * right.eval();
    }

    public String toString () {
        return left.toString() + " * " + right.toString();
    }
}

But clearly this kind of modularity actually came at the expense of modularity of a different kind. To make it easy to add node types, we had to spread our toString implementation across the class hierarchy. In other words adding new operations entails modifying each class; the design thus lacks modularity with respect to the addition of new node operations.

FPers usually inhabit the other corner of the trade-off space: they enjoy modularity with respect to new operations, but not with respect to new node types. We could achieve a similar inversion of modularity in Haskell, using a type class:

class SyntaxNode a where
    eval :: a -> Int
    toString :: a -> String

data IntegerLiteral = IntegerLiteral Int
instance SyntaxNode IntegerLiteral where
    eval (IntegerLiteral n) = n
    toString (IntegerLiteral n) = show n

data SyntaxNode a => Add a = Add a a
instance SyntaxNode a => SyntaxNode (Add a) where
    eval (Add a b) = eval a + eval b
    toString (Add a b) = toString a ++ "+" ++ toString b

data SyntaxNode a => Multiply a = Multiply a a
instance SyntaxNode a => SyntaxNode (Multiply a) where
    eval (Multiply a b) = eval a * eval b
    toString (Multiply a b) = toString a ++ "*" ++ toString b

In either paradigm, it seems that we’re torn between operations cross-cutting data types, and data types cross-cutting operations (although OO prefers the former, and FP the latter). Which of these two alternatives is the “more modular” simply depends on whether it’s operations, or data types, that we intend to modify. That’s not to say that whether a design is modular is a mere matter of taste or opinion, but just that modularity is relative to what you’re trying to do.

The multi-dimensional modularity problem

It’s tempting to look for a “linguistic” solution to this problem. Perhaps with the right language feature (or the right design pattern), it would be possible to make a program modular with respect to both kinds of transformation simultaneously. Some time ago Philip Wadler dubbed this challenge the Expression Problem. (Although discussion of the problem itself goes back further: see for example this chapter of SICP. Thanks Noel!) CLOS-style multi-methods are one linguistic approach to this problem; design patterns such as Extensible Visitor are another.
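To give a flavour of the design-pattern route, here is a minimal sketch of a (plain, non-extensible) Visitor over the interpreter's nodes — my own illustration, not code from any of the cited papers. A new operation becomes one self-contained class, but the set of node types is now frozen into the Visitor interface:

```java
// Visitor makes adding an *operation* a local change, at the price of
// fixing the set of node types in the Visitor interface.
interface Visitor<R> {
    R visitLiteral(IntegerLiteral n);
    R visitAdd(Add n);
}

abstract class SyntaxNode {
    public abstract <R> R accept(Visitor<R> v);
}

class IntegerLiteral extends SyntaxNode {
    public int val;
    IntegerLiteral(int val) { this.val = val; }
    public <R> R accept(Visitor<R> v) { return v.visitLiteral(this); }
}

class Add extends SyntaxNode {
    public SyntaxNode left, right;
    Add(SyntaxNode l, SyntaxNode r) { left = l; right = r; }
    public <R> R accept(Visitor<R> v) { return v.visitAdd(this); }
}

// An entire operation in one place: the functional decomposition again.
class Eval implements Visitor<Integer> {
    public Integer visitLiteral(IntegerLiteral n) { return n.val; }
    public Integer visitAdd(Add n) {
        return n.left.accept(this) + n.right.accept(this);
    }
}
```

The Extensible Visitor pattern complicates this basic scheme precisely in order to win back some modularity with respect to new node types.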

But although there certainly are linguistic approaches to the Expression Problem, there is usually a cost in terms of design-time cohesion. Multi-methods and similar approaches work precisely by making programs more granular (less cohesive), and then relying on a dispatch mechanism to compose the fragments (restore cohesion) at run-time. The point of the granularity is precisely to achieve modularity: behaviour can often be added or modified by just dropping in a new code fragment. But the loss of any unified view of individual features or behaviours means the programmer has to mentally execute the composition rules in order to know what will happen when the program runs. And the cohesion problem only gets worse when you try to generalise these approaches to multiple dimensions of composition.

Standard “static” AOP is not really a response to the Expression Problem but to this more general problem of multi-dimensional composition. It avoids the granularity problem associated with multi-methods and similar approaches by allowing us to specify aspects as cohesive design-time modules. But the very cohesion of these definitions means that they are inevitably cross-cut by other features of the system. These features can now only be understood by reasoning about how the aspects are woven into them. The benefit of increased modularity in some respects has, as with our interpreter example earlier, been bought at the price of reduced modularity in others.

The basic problem is that there is no static language feature that can universally resolve the tension between modularity and cohesion. What we really need is greater fluidity. Since no design can anticipate its evolution, the capacity to “remodularise” a program quickly is much more valuable than any particular contingent modularity property. Our programming tools should allow us to eliminate degrees of freedom which we no longer wish to exploit (so they don’t distract us from the task at hand), and introduce, in their place, the degrees of freedom required to achieve that task, safely and easily. This is the promise of tool-based approaches such as fluid AOP: “just-in-time cohesion”.

Cohesion-on-demand turns weaving and unweaving into a browsing activity. The programmer no longer has to reason about complex composition rules à la multi-methods or static aspects, as she can simply inspect the content of the relevant view.

Where fluid tools become fluid languages

So it seems like fluid AOP is the only plausible approach to the problem of multi-dimensional modularity. It’s hugely exciting, but also rather daunting when one thinks about what’s involved in making the vision a practical reality. The words “indistinguishable” and “magic” come to mind. I’m well out of time and space resources, so I’ll wrap up for now by just sketching what I see as the way forward.

Shortcomings aside, linguistic approaches have one indispensable property: they offer well-defined semantics for multi-dimensional composition. (As an example, see this paper, which extends Hindley-Milner type inference with static advice weaving.) The future of fluid AOP lies I think in harnessing this linguistic precision in an interactive tool.

After all, if we hope to rely on “fluid aspect” views to visualise key features of our software, as well as mediate critical changes to it, then they really are “the source files of the future”. How the contents of the various views are combined, how edits are put back, and so forth, will have to be precisely defined. In turn, this will require fluid aspects to have well-defined syntax.

Indeed the difference between fluid aspects and traditional syntax trees is not one of kind, but only of lifecycle: they are built on the fly in response to the user’s changing goals, and may co-exist with other effective views that share the same underlying model. But otherwise they are just plain ol’ syntax. We see a taste of this in the “virtual source files” of Janzen and De Volder, as well as in the “fluid source code views” of Desmond et al.

(Of course I don’t mean that we should necessarily store such syntactic artifacts in text files, or present them to users as flat, uninterpreted text; but in an uncontroversial sense they are “just syntax”. This was the point of my post Why syntax won’t go away.)

A corollary of this perspective is that we must regard our interactions with these views as a form of meta-programming. Matt Hellige expressed the same idea when he suggested that we think of “programming-time” as a computational stage. To interact is to effect a syntactic transformation. Switching to a new view of a program is really just a refactoring.

A mature AOP, then, is not a static language feature as such, but an interactive meta-programming activity: the process of refactoring a program into a view that provides a single locus of change for the edit you want to apply. Refactoring back to the original view distributes (“puts back”) your change into all the relevant places. We could for example implement the toString method we defined earlier as a single type switch, for convenience; expand it over a class hierarchy, in order to add a new node type like Multiply; collapse it back to a single method again, perhaps to modify its signature; expand it again, to add more node types; and so on.

The toggling, in this hypothetical scenario, between two different points in modularity space is little more than the iterated application of the refactoring Inline method and its inverse. For an overriding method, Inline method injects the body of the method to be inlined into a new clause of a type switch in the overridden method. There are really just two syntactic presentations of the “same” system here, and a function that computes one view from the other, that happens to be able to compute backwards as well as forwards. (Ok, ok: maybe there’s just an incey, wincey little bit of work to do to distinguish this from magic.)
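To make the toggle concrete, here is a sketch (my own reconstruction, not tool output) of what collapsing the per-class eval methods into a single type switch might produce; the inverse refactoring would distribute each clause back to its class:

```java
// The "collapsed" presentation: each subclass's eval body has been
// inlined as a clause of a type switch in the superclass.
abstract class SyntaxNode {
    public int eval () {
        if (this instanceof IntegerLiteral)
            return ((IntegerLiteral)this).val;        // inlined from IntegerLiteral.eval
        if (this instanceof Add) {
            Add op = (Add)this;
            return op.left.eval() + op.right.eval();  // inlined from Add.eval
        }
        throw new AbstractMethodError();              // no other node types (yet)
    }
}

class IntegerLiteral extends SyntaxNode {
    public int val;
    IntegerLiteral(int v) { val = v; }
}

class Add extends SyntaxNode {
    public SyntaxNode left, right;
    Add(SyntaxNode l, SyntaxNode r) { left = l; right = r; }
}
```

Either presentation determines the other, which is what makes this a candidate pair of effective views.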


  1. Kevlin says:

    One minor point I would make that threw me a little is how the challenge of modularity is presented. It is first introduced with the phrase “our ongoing battle with the problem of modularity”. The first reading of this suggests that modularity is a problem and it is the problem that we are trying to tackle. It is not until later that it becomes clear that this is not what was intended, and that it is the challenge of modularising all aspects of interest that is the problem, so just an ambiguity in phrasing.

    In performing the comparison between the Haskell and Java versions I think there are a few other concerns that could be worth stating, although I see that they could also take you off topic. For example, although implied, it could be worth stating that it is not simply cultural habit that differentiates the two styles; it is the unit of modularity itself. The article focuses on criteria for modularity, but the fact that the two choices are composed of different ingredients is key. Data type and operation are inverted from one case to the other, which leads to an inversion in the trade-offs. In this particular case the reduction is that simple, and makes for a nice, controlled comparison. Different criteria have led to different modular concepts.
    This theme is very much in the article, but is perhaps worth stating more clearly.

    As an aside, it is also interesting to note that the natural style for a functional solution is similar to that for a procedural one. Although
    there are key differences between functional and procedural styles, in this example the dominant unit of modularity in both is comparable in granularity and intent. Differences between the styles in terms of control flow, state manipulation, etc., are not relevant in this example. So, a nice controlled example in this respect.

  2. Roly Perera says:

    Thanks for picking up my ambiguous phrasing. You’re right, talking of “the problem of modularity” is like talking of the “the problem of gender”. Gender isn’t a problem; gender is…well, gender. Of course what I meant was “the challenge of achieving modularity”, in particular when different concerns pull the software in different directions.

    I also agree that it’s worth clarifying that the modularity “preferences” (as I put it) of FP and OOP are biases somehow inherent to those paradigms, rather than mere cultural differences. Maybe we’ll one day end up characterising the essential difference between these two paradigms in terms of these modularity properties. With a truly fluid language, they just become two different (but in some sense equivalent) ways of viewing a program.

    The basis for this is the formal duality of FP and OOP, which I had originally hoped to discuss in the article. Given suitably idealised versions of FP and OOP, the correspondence has algebraic data types dual to class hierarchies, functions over those algebraic data types dual to virtual methods, and case analysis for the data types dual to virtual method dispatch. I suppose this also explains the similarity (vis-à-vis granularity and intent) that you mentioned between the Haskell and Java units of modularity.