Recently, the unordered-containers library has included some changes that made a function much more strict than is sensible in a lazy language like Haskell. The changes were meant to improve consistency on a performance-related issue, namely whether the container should be “key-strict” or “value-strict” or some combination thereof. However, I think that the recent blooper is a sign that there is a lot of confusion around these notions. In this post, I want to clear up the issue and explain a general principle for reasoning about space usage in Haskell, namely the concept of *space invariants* as I would like to call them. One consequence of this concept will be that the word “strict” in the term “value-strict” is actually a bad idea because knowing whether a function is *strict* is not enough to reason about its *space* usage. (Hence my use of quotation marks.)

Another reason for writing this post is that some time ago, several people pointed out to me that my library reactive-banana had a couple of unpleasant space leaks. I finally found the time to investigate and fix them, but I had a hard time proving to myself that I had actually fixed the space leaks for good, until space invariants gave me an excellent way to reason about it.

Most importantly, *space invaders* make a great pun on space invariants.

Anyway, space invariants are about expressing the folklore advice of adding strictness annotations to your data types as an actual property of the data structure that can be reasoned about. The essence of space invariants is

to ensure that whenever we evaluate a value to weak head normal form (WHNF), we automatically know a bound on its memory usage.

The idea of attaching invariants to WHNFs has proven very successful (see Chris Okasaki’s book [pdf]) for reasoning about the *time complexity* of lazy functional programs; this is known as the debit method. Hopefully, this will help us with space usage as well.

In the next section, I will give a short overview aimed at more experienced Haskellers. Then we’ll have a section aimed at Haskell beginners and a section concerning the relation to strictness. Finally, I’ll present a short case study describing how I fixed a space leak in my reactive-banana library.

Consider a container like `Data.Map`. Here is an example container:

```
container = insert "key1" 'a'
          . delete "key2" . insert "key2" 'b' $ empty
```

As written, this expression is a rather inefficient representation of the container which contains a single value `'a'` at the key `"key1"`. After all, the expression still contains a reference to another value `'b'`, which will take up space in memory. It is only when this expression is evaluated that the container will contain a single value. Or will it?

spine-strict = if the container is in WHNF, then the space used by the container is the total of the space used by the values that are contained in the container (times a constant)

Apparently, when evaluating this expression, only a *spine-strict container* will give you the seemingly obvious guarantee that it stores references only to those values that it actually contains. You see, the value of the `container` is *denotationally* equivalent to

`container2 = fromList [("key1",'a')]`

but these two expressions are represented very differently *operationally* in the computer memory. To be able to reason about the space usage, it is convenient to link the denotational side to the operational side via an *invariant*. The great thing about denotation is that it is independent of how the expression is represented. Using an invariant like spine-strictness, you can now say “Oh, I know which elements are in the container, so I can conclude how much space it takes up”, regardless of how you constructed the container in the course of the program. For languages based on lazy evaluation, such a link is usually not available.
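We can check the denotational equivalence directly. Here is a runnable sketch using `Data.Map` (the qualified names are my addition; the post's original snippet left the functions unqualified):

```
import qualified Data.Map as Map

-- The inefficient expression from above …
container :: Map.Map String Char
container = Map.insert "key1" 'a'
          . Map.delete "key2" . Map.insert "key2" 'b' $ Map.empty

-- … and its denotationally equal counterpart.
container2 :: Map.Map String Char
container2 = Map.fromList [("key1", 'a')]
-- container == container2, even though the two expressions are
-- represented differently in memory before evaluation.
```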

It is not necessarily a bad thing that denotation and space usage can be separate from each other, because the denotation can also be much larger than the actual expression. For instance, lists are a data structure that is *not* spine-strict, and we can easily express lists of infinite (denotational) size in small space:

`squares = map (^2) [1..]`
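Because the spine stays unevaluated, we can consume any finite prefix of this infinite list in finite space. A small sketch (`firstSquares` is my own name):

```
-- The infinite list of squares: a tiny expression denoting an infinite value.
squares :: [Integer]
squares = map (^ 2) [1 ..]

-- Taking a prefix only forces as much of the spine as is needed.
firstSquares :: Int -> [Integer]
firstSquares n = take n squares
```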

By the way, this distinction between denotation and operational behavior is also why the word “strict” in the term “spine-strict” is a bad idea. Strictness is a purely denotational concept and hence cannot give you any of the operational guarantees that we need to control space usage. (For time complexity, this is actually not a problem because up to a constant factor, lazy evaluation is always faster than eager evaluation.) The concepts are certainly related, as I will discuss below, but strictness alone doesn’t help you with space usage at all.

The other important space invariant for a container is the property of *value-strictness*:

value-strict = if the container is in WHNF, then each value stored in the container is in WHNF

In other words, the container will make sure that all values are stored in an evaluated form, which in turn allows us to apply other space invariants for the values and hence control the total memory usage of the container. Again, my reservations about the use of the word “strict” apply here as well.

As you can see, space invariants are usually of the form “if this is in WHNF, then other things are in WHNF as well”. It is generally hard to say something about expressions that are not in WHNF; their size usually doesn’t bear any relation to their denotation.

How do you create space invariants? Well, that is easy: by using the `seq` function. A fresh view on the definition

``x = a `seq` b``

is to not see it as a specification of evaluation order, but rather as a declaration of a relation between the evaluation of `x` and the evaluation of `a`: whenever `x` is in WHNF, so is `a`. Taking this further, another example of a space invariant is a property that I would like to call *deep-space*:

deep-space = if the value is in WHNF, then it is already in full normal form (NF)

In other words, a value is deep-space if it is either unevaluated or already in full normal form. This time, I have avoided the word “strictness” for good effect. :)
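One way to establish the deep-space invariant for a whole value is the `force` function from the `deepseq` package. A sketch (the wrapper name `mkDeepSpace` is my own):

```
import Control.DeepSeq (NFData, force)

-- Link the WHNF of the result to the NF of the argument:
-- whenever the returned value is in WHNF, it is in full normal form.
mkDeepSpace :: NFData a => a -> a
mkDeepSpace = force
```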

For algebraic data types, we can encode applications of the `seq` combinator directly in the constructors via “strictness annotations”. The plain data type

`data Tree a = Bin (Tree a) (Tree a) | Leaf a`

has the spine-strict variant

`data Tree a = Bin !(Tree a) !(Tree a) | Leaf a`

and the both spine-strict and value-strict variant

`data Tree a = Bin !(Tree a) !(Tree a) | Leaf !a`

The exclamation mark before a constructor argument indicates that the corresponding value is in WHNF whenever the constructor of the algebraic data type is in WHNF. Note that binary trees do not have a value-strict variant that is not spine-strict as well, because you can build trees such as `Bin thunk1 thunk2` out of unevaluated leaves. On the other hand, abstract container types can be made purely value-strict by making the `insert` function use `seq` on the value first.
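Such a value-strict `insert` wrapper can be sketched as follows (`insertValueStrict` is my own name; `Data.Map.Strict` already provides insertion with this behavior):

```
import qualified Data.Map as Map

-- Whenever the resulting map is in WHNF, the inserted value is, too.
insertValueStrict :: Ord k => k -> v -> Map.Map k v -> Map.Map k v
insertValueStrict k v m = v `seq` Map.insert k v m
```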

Before continuing with space invariants and their relation to strictness, I would like to make an interlude for readers that are Haskell beginners and spell out some general information about lazy evaluation and space usage that I have already assumed implicitly in the previous section. If you’re an experienced Haskeller, you can skip to the next section.

The main reason why space usage in a lazy language like Haskell is hard to reason about is that it is heavily *context-dependent*. For example, the expression `[1..n]` uses *O(n)* memory when evaluated in full, but when it appears in a different context, for instance in the expression `head [1..n]`, evaluating the whole program takes only *O(1)* memory. In contrast, the value represented by the expression `[1..n]` is independent of the context; it is *compositional*. It does not matter how you represent the list of integers from `1` to `n` in the computer, calculating its `head` will always give the same result `1`.
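A quick sketch of the constant-space behavior (`firstOf` is my own name):

```
-- `head` forces only the first cons cell of [1..n];
-- the rest of the list is never built.
firstOf :: Integer -> Integer
firstOf n = head [1 .. n]
```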

Since memory usage depends on context, it seems that in order to reason about space usage, you have to trace the evaluation of a Haskell program in detail. Edward Yang has a beautiful visualization for how to do that, but unfortunately, it is utterly impossible to trace the evaluation for anything but trivial programs in this way. We need tools that abstract the details away: we don’t want to know about the details of how GHC manipulates heap objects, and we don’t really want to know how lazy evaluation works on a more abstract level either, we just want some sort of compositionality, just like we have in the case of semantics.

As a first step, understanding GHC’s evaluation machinery has fortunately never been necessary. You can understand lazy evaluation entirely on the source code level, in terms of *graph reduction*. The basic idea is that lazy evaluation deals with expressions. For instance, every one of the following lines

```
5
2*3 - 1
1 + (1 + (1 + (1 + 1)))
```

is an expression that *denotes* the value 5, but they are still *different* expressions for the same semantic value. The *size* of an expression tells you how much *memory* is needed to store it in the computer. The last expression is very long and takes five memory cells to store, while the first expression is small and takes only one memory cell to store.

Executing a Haskell program is about evaluating expressions, essentially by applying the definitions of functions like `*` or `+` repeatedly. Usually, evaluation of an expression stops when it has reached *weak head normal form* (WHNF). Of the example expressions above, only `5` is in WHNF. For tree-like data types, WHNF means that we can see the outermost constructor, but its arguments may still contain arbitrary expressions. For instance, for the type

`data Tree a = Bin (Tree a) (Tree a) | Leaf a`

the expressions

```
Leaf (1+1)
Bin (insert 17 (Leaf 1)) undefined
```

are in WHNF.

Concerning terminology, we will call an expression that is not in WHNF an *unevaluated expression*, or *thunk* for short. However, keep in mind that the latter term originally referred to a particular representation of unevaluated expressions in GHC. I would like to repeat again: no understanding of GHC internals is necessary to reason about space usage in Haskell, it can all be done on the level of expressions (graphs). The more details you have to keep track of, the less you understand your program. But it seems to me that the term “thunk” has become synonymous with “unevaluated expression” in current usage, so I will use it in this way as well.

An even higher level of abstraction is the notion of *strictness*. A function `foo` is called *strict* if evaluating `foo expr` to WHNF will also evaluate the expression `expr` to WHNF. In other words, to evaluate the result, you have to look at the argument. The great thing about strictness is that it can be formulated in terms of denotations alone, which are independent of evaluation order. In particular, the function `foo` is strict iff

`foo ⊥ = ⊥`

where `⊥` is called “bottom” and denotes a “diverging” value, for instance an expression whose evaluation does not terminate. In other words, if evaluating the argument to WHNF will throw you into an infinite loop, then evaluating the function result will also do that, because it needs to peek at the argument. (See the Wikibook for more on ⊥.)
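We can observe this at runtime by substituting `undefined` for ⊥ (a sketch; the runtime exception stands in for divergence, and the two names are my own):

```
import Control.Exception (SomeException, evaluate, try)

-- `const 1` is not strict in its second argument: ⊥ is never inspected.
nonStrictResult :: Int
nonStrictResult = const 1 (undefined :: Int)

-- `(+ 1)` is strict: forcing the result forces the argument, hence ⊥.
strictOnBottom :: IO (Either SomeException Int)
strictOnBottom = try (evaluate ((undefined :: Int) + 1))
```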

So much for the general setup.

As we have seen, strictness is a denotational property that can express some relations between WHNF, but it is not powerful enough to express space invariants in general. Neither do space invariants imply strictness properties. However, I feel there is some correspondence between the two.

To explore the correspondence, let us focus once again on container structures. The main functionality that a container offers is the `lookup` function

`lookup :: key -> Map key value -> Maybe value`

If the container is spine-strict, then the following property seems natural for the `lookup` function:

If looking up any one key gives ⊥, then the container was ⊥ to begin with.

In formulas:

∀ key. (lookup key container = ⊥ → container = ⊥)

This formula may take a while to digest; it looks a bit like a “reverse strictness” of the `lookup` function. The idea is that if you have evaluated the container to WHNF (`container ≠ ⊥`), then you already “evaluated the possible lookups” to WHNF as well (`lookup key container ≠ ⊥`). If you imagine a tree, this means that the branches may not contain thunks; otherwise, you could replace a thunk with ⊥ and blow up when you try to evaluate a lookup that takes this particular branch.

Unfortunately (and contrary to what I excitedly thought for some time), this property alone is not enough to imply a space invariant. In fact, it is not necessarily implied by a space invariant either. You can always have a funny implementation like

``lookup key container = container `seq` ...``

that has this strictness property, but doesn’t give any guarantees about space usage of the container itself. Likewise, it is also possible to have extra data in the container, so that a diverging `lookup` is not enough to guarantee that the whole container has to be ⊥ as well.

Still, I think that this denotational property is very much a close relative to a space invariant. In particular, this formulation allows us to gain some partial confidence in whether a container library really does provide a spine-strict data structure, for example by using StrictCheck for testing.

In the same spirit, a strictness property corresponding to the space invariant of value-strictness (oh, the misnomer!) would be

If looking at a value gives ⊥, then the container was ⊥ to begin with.

In formulas:

∀ key. (lookup key container = Just ⊥ → container = ⊥)

This description may take some time to understand as well. Note how we make use of the lazy `Maybe` type here: in the spine-strict case, our premise is that the lookup yields a result at all, while in the case of value-strictness, the premise is that the lookup succeeds with a value (but we don’t say anything about the case where it diverges).

I’m sure that “reverse strictness” formulas such as the ones above have been considered in projection-based strictness analysis.

I found all of the above considerations very helpful for fixing some space leaks in my reactive-banana library, as it gave me a clear picture of what I can rely on and what is impossible.

Namely, trying to understand the details of the evaluation process is absolutely hopeless; you never really know how soon an expression is going to be evaluated to WHNF. If you have an expression `foo` that you want to be in WHNF, you can try to force its evaluation by writing `evaluate foo` instead, but that doesn’t help at all, because how do you ensure that the larger expression `evaluate foo` itself is evaluated? Oops. This is the same as putting a bang pattern on a function that is already strict, a common act of desperation amongst Haskell programmers when confronted with the evil space leak invaders.

However, what you *can* do is to *link* the evaluation of two expressions. This is precisely what a space invariant expresses and what the `seq` combinator can do: by writing `seq expr1 expr2`, you indicate that `expr1` will be in WHNF whenever `expr2` is. “If you shoot my friend, then you also have to shoot me.” We have no idea whether a value will ever be evaluated, but when it is, then another value will be evaluated as well.
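This linking can be made visible with `Debug.Trace` (a sketch; the names are mine, and the trace message appears on stderr exactly when `linked` is first forced):

```
import Debug.Trace (trace)

linked :: Int
linked = a `seq` b
  where
    a = trace "a was forced" (1 :: Int)
    b = 2
-- Forcing `linked` to WHNF also forces `a`, printing the trace message.
```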

The situation in reactive-banana was quite hairy. Namely, I had a lazy state monad

`type Eval a = Control.Monad.State.Lazy.State (Container,Hangman) a`

and profiling indicated that the state of type `Container` was leaking like a bucket without a bottom. The obvious try would be to use a strict state monad, but unfortunately, I couldn’t do that: I needed the laziness to support a tie-a-knot-around-your-neck kind of recursion with the `Hangman` type, and the values in the `Container` type got populated in a somewhat recursive fashion as well. It was simply not possible to make this state monad strict and control the evaluation of the `Container` type in a step-by-step fashion.

At first, I was not sure whether using a spine-strict and value-strict container would help me in this situation, but then I became happy with the idea of only being able to link evaluation instead of being able to force it. Running the state monad was part of a larger computation step, so sneaking an `evaluate` into the IO monad at the end of every step should allow the laziness I needed and yet force my container to attain a predictable size in the end. Thanks to space invariants, I can be confident that this strategy works.

Unfortunately, that didn’t quite help yet. The reason was that building the new container involved reading values from the old container, and many values were actually of function type, like

```
value :: [A] -> [B]
value = map (fromJust $ lookup key oldContainer)
```

The problem is that this expression is (almost) in WHNF already, so the `lookup` didn’t get evaluated and `oldContainer` is still lingering about. In other words, evaluating the values to WHNF was *not* sufficient to ensure a predictable memory size. Fortunately, the solution is to add a new space invariant that links the WHNF of the `value` to the WHNF of the expression involving `lookup`. Problem solved!
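The fix can be sketched like this (a simplified reconstruction, not the actual library code; the container and key are placeholders of my own):

```
import qualified Data.Map as Map
import Data.Maybe (fromJust)

-- A stand-in for the old container; it stores function-typed values.
oldContainer :: Map.Map String (Int -> Int)
oldContainer = Map.fromList [("key", (+ 1))]

-- Link the WHNF of `value` to the WHNF of the lookup result,
-- so forcing `value` no longer retains a thunk over `oldContainer`.
value :: [Int] -> [Int]
value = f `seq` map f
  where
    f = fromJust (Map.lookup "key" oldContainer)
```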

So much for space invariants. I think they offer a fresh perspective on how to reason about space usage in Haskell.

In particular, they explain why the changes to the unordered-containers library mentioned in the introduction were a rather odd idea: the strictness of a function has, unfortunately, little to do with a guarantee on space usage.
