## 24 June 2011

### The Essence of the Iterator Pattern

"The Essence of the Iterator Pattern" (EIP) is the paper I liked the most last year. It gave me a brand new look at something I had been using for years: the `for` loop.

In this post I'll try to present some ideas of that paper and show how to implement the solution described by the authors using Scalaz-like code. Minimal previous exposure to functional programming, functors and monads will definitely help!

## What's in a for loop?

That was really what hooked me. What do you mean, "what's in a `for` loop"? Is there anything magical in that construct I've been using for years?

The introduction of EIP shows an example of a `for` loop to iterate on elements (not the C-like `for` used with an index). I'm transmogrifying it here to Scala but the idea remains the same:

```scala
val basket: Basket[Fruit] = Basket(orange, apple)
var count = 0
val juices = Basket[Juice]()

for (fruit <- basket) {
  count = count + 1
  juices.add(fruit.press)
}
```

We start from a "container" of fruits: `Basket`. It could actually be anything, a `List`, a `Tree`, a `Map`... Then the `for` loop actually does 3 things:

1. it returns a container having the same "shape"; `juices` is still a `Basket`
2. it accumulates some kind of measure. Here, the number of fruits in the `count` variable
3. it maps the elements to other elements: pressing the fruits to get some juice

And this `for` loop is actually not the most complex:

• the `count` variable could influence the mapping of elements: `juices.add(fruit.press(harder=count))`
• we could have several variables depending on each other: `cumulative = cumulative + count`
• the mapping could also influence a "measure" variable: `liquid = liquid + fruit.press.quantity`

The purpose of EIP is to show that the "essence" of what happens in the `for` loop above can be abstracted by an Applicative Traversal. And the authors go on showing that given this Applicative abstraction, we get an incredible modularity for programming.

## The Applicative typeclass

How can an Applicative traversal be better than a for loop, and what does that even mean? EIP has a lot of sentences and expressions which can be hard to grasp if you don't have a strong functional programming / Haskell background. Let's try to dissect that slowly and start with the formal definitions anyway.

### What is a Functor?

The first thing we need to talk about is the `Functor` typeclass:

```scala
trait Functor[F[_]] {
  def fmap[A, B](f: A => B): F[A] => F[B]
}
```

One way of interpreting a `Functor` is to describe it as a computation of values of type `A`. For example `List[A]` is a computation returning several values of type `A` (a non-deterministic computation), `Option[A]` is a computation which may or may not return a value, `Future[A]` is a computation of a value of type `A` that you will get later, and so on. Another way of picturing it is as some kind of "container" for values of type `A`.

Saying that those computations are `Functors` is essentially showing that we can have a very useful way of combining them with regular functions. We can apply a function to the value that is being computed. Given a value `F[A]` and a function `f`, we can apply that function to the value with `fmap`. For example, `fmap` is a simple `map` for a `List` or an `Option`.
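For instance, `fmap` specialized to `List` and `Option` is just the standard library's `map`:

```scala
// fmap for List and Option is the standard map: the function is applied
// "inside" the container, without changing the container's shape
val incremented: List[Int]  = List(1, 2, 3) map (i => i + 1)
val nameLength: Option[Int] = Some("functor") map (s => s.length)
val noLength: Option[Int]   = (None: Option[String]) map (s => s.length)
```

`incremented` is `List(2, 3, 4)`, `nameLength` is `Some(7)`, and `noLength` stays `None`.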

### Pointed Functor

By the way, how do you even create a value of type `F[A]`? One way to do that is to say that `F[_]` is `Pointed`:

```scala
trait Pointed[F[_]] {
  def point[A](a: => A): F[A]
}
```

That is, there is a `point` function taking a (call by name) value of type `A` and returning a `F[A]`. For example, a regular `List` is `Pointed` just by using the constructor for Lists:

```scala
object PointedList extends Pointed[List] {
  def point[A](a: => A) = List(a)
}
```

Then combining the 2 capabilities, pointed and functor, gives you a `PointedFunctor`:

```scala
trait PointedFunctor[F[_]] {
  val functor: Functor[F]
  val pointed: Pointed[F]

  def point[A](a: => A): F[A] = pointed.point(a)
  def fmap[A, B](f: A => B): F[A] => F[B] = functor.fmap(f)
}
```

The `PointedFunctor` trait is merely the aggregation of a `Pointed` and a `Functor`.

What about `Applicative` then? We're getting to it, the last missing piece is `Applic`.

### Applic

`Applic` is another way to combine a "container" with a function.

Instead of using `fmap` to apply the function to the computed value we suppose that the function is itself a computed value inside the container `F` (`F[A => B]`) and we provide a method `applic` to apply that function to a value `F[A]`:

```scala
trait Applic[F[_]] {
  def applic[A, B](f: F[A => B]): F[A] => F[B]
}
```

Let's take an example. Say I have a way to compute the price of a `Fruit` when the market is open:

```scala
def pricer(market: Market): Option[Fruit => Double]
```

If the market is closed, `pricer` returns `None`, because we don't know what the prices are. Otherwise it returns a pricing function. Now if I have a `grow` function possibly returning a `Fruit`:

```scala
def grow: Option[Fruit]
```

Then, using the `Applic` instance, you can price the `Fruit`:

```scala
val price: Option[Double] = applic(pricer(market)).apply(grow)
```

The `price` will necessarily be an `Option` because you may have neither a pricer nor a `Fruit` to price. And a bit of renaming and pimping reveals why we're using the term "Applicative":

```scala
val pricingFunction = pricer(market)
val fruit           = grow

val price: Option[Double] = pricingFunction ⊛ fruit
```

In a way we're doing a normal function application, but we're doing it inside the Applicative container. Now we have all the pieces to build the `Applicative` functor that EIP is talking about.
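As a hedged sketch of what that pimping could look like for `Option` (the `ApplicOps` class and `toApplicOps` conversion are made up for this post; Scalaz's actual implicits are organized differently):

```scala
// adds ⊛ to an Option-wrapped function; an illustration, not Scalaz's code
class ApplicOps[A, B](f: Option[A => B]) {
  def ⊛(fa: Option[A]): Option[B] = for { ff <- f; a <- fa } yield ff(a)
}
implicit def toApplicOps[A, B](f: Option[A => B]): ApplicOps[A, B] = new ApplicOps(f)

// a pricing function by weight, standing in for pricer(market)
val pricingFunction: Option[Int => Double] = Some((weight: Int) => weight * 0.5)
val fruit: Option[Int] = Some(4)

val price: Option[Double] = pricingFunction ⊛ fruit   // Some(2.0)
```

If either the function or the value is `None`, the whole application is `None`.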

### Applicative Functor

An `Applicative Functor` is the aggregation of an `Applic` and a `PointedFunctor`:

```scala
trait Applicative[F[_]] {
  val pointedFunctor: PointedFunctor[F]
  val applic: Applic[F]

  def functor: Functor[F] = new Functor[F] {
    def fmap[A, B](f: A => B) = pointedFunctor fmap f
  }
  def pointed: Pointed[F] = new Pointed[F] {
    def point[A](a: => A) = pointedFunctor point a
  }

  def fmap[A, B](f: A => B): F[A] => F[B]     = functor.fmap(f)
  def point[A](a: => A): F[A]                 = pointed.point(a)
  def apply[A, B](f: F[A => B]): F[A] => F[B] = applic.applic(f)
}
```

Let's see how that can be implemented for a `List`. `fmap` and `point` are straightforward:

```scala
def fmap[A, B](f: A => B): F[A] => F[B] = (l: List[A]) => l map f
def point[A](a: => A): F[A]             = List(a)
```

`apply` turns out to be more interesting because there are 2 ways to implement it, both of them being useful:

1. apply a list of functions to each element and gather the results in a `List`:

```scala
def apply[A, B](f: F[A => B]): F[A] => F[B] =
  (l: List[A]) => for { a <- l; func <- f } yield func(a)
```

2. `zip` the list of functions with the list of elements to apply each function to each element:

```scala
def apply[A, B](f: F[A => B]): F[A] => F[B] =
  (l: List[A]) => (l zip f) map (p => p._2 apply p._1)
```
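A small example makes the difference between the two implementations visible (the values are chosen just for illustration):

```scala
val fs: List[Int => Int] = List((i: Int) => i + 1, (i: Int) => i * 10)
val xs = List(1, 2)

// 1. "cross-product" application: every function applied to every element
val crossed = for { a <- xs; func <- fs } yield func(a)   // List(2, 10, 3, 20)

// 2. "zip" application: functions and elements are paired positionally
val zipped = (xs zip fs) map (p => p._2 apply p._1)       // List(2, 20)
```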

There is even a third way to use `List` as an `Applicative` by using the fact that `List` is a `Monoid`. But more on that later, for now we still have to see how all of this relates to the `for` loop...

## Traversing the structure

When we do a `for` loop, we take a "structure" containing some elements and we "traverse" it to return:

• that same structure containing other elements
• a value computed from the structure elements
• some combination of above

Gibbons & Oliveira argue that any kind of for loop can be represented as the following `traverse` operation:

```scala
trait Traversable[T[_]] {
  def traverse[F[_] : Applicative, A, B](f: A => F[B]): T[A] => F[T[B]]
}
```

That is, if the container/structure of type `T` has this `traverse` function using an `Applicative F`, then we can do whatever we would do with a `for` loop on it.

To get a better feel for this `traverse` function, we're going to implement the `Traversable` trait for a binary tree and then we'll see how we can `loop` on that tree.

### A Binary Tree

For all the other examples in this post, we're going to use a very simple binary tree:

```scala
sealed trait BinaryTree[A]
case class Leaf[A](a: A) extends BinaryTree[A]
case class Bin[A](left: BinaryTree[A], right: BinaryTree[A]) extends BinaryTree[A]
```

However, the first shot at the `Traversable` implementation is barely readable!

```scala
def BinaryTreeIsTraversable[A]: Traversable[BinaryTree] = new Traversable[BinaryTree] {
  def createLeaf[B] = (n: B) => (Leaf(n): BinaryTree[B])
  def createBin[B]  = (nl: BinaryTree[B]) =>
    (nr: BinaryTree[B]) => (Bin(nl, nr): BinaryTree[B])

  def traverse[F[_] : Applicative, A, B](f: A => F[B]):
    BinaryTree[A] => F[BinaryTree[B]] = (t: BinaryTree[A]) => {
    val applicative = implicitly[Applicative[F]]
    t match {
      case Leaf(a)   => applicative.apply(applicative.point(createLeaf[B]))(f(a))
      case Bin(l, r) =>
        applicative.apply(applicative.apply(applicative.point(createBin[B]))(traverse[F, A, B](f).apply(l))).
          apply(traverse[F, A, B](f).apply(r))
    }
  }
}
```

This is a shame because the corresponding Haskell code is so concise:

```haskell
instance Traversable Tree where
  traverse f (Leaf x)  = pure Leaf ⊛ f x
  traverse f (Bin t u) = pure Bin  ⊛ traverse f t ⊛ traverse f u
```

A bit of pimping to the rescue and we can improve the situation:

```scala
def traverse[F[_] : Applicative, A, B](f: A => F[B]): BinaryTree[A] => F[BinaryTree[B]] = (t: BinaryTree[A]) =>
  t match {
    case Leaf(a)   => createLeaf[B] ∘ f(a)
    case Bin(l, r) => createBin[B]  ∘ (l traverse f) <*> (r traverse f)
  }
```

Informally the `traverse` method applies the function `f` to each node and "reconstructs" the tree by using the `apply` method (`<*>`) of the `Applicative` functor.

That's certainly still some Ancient Chinese to you (as it was for me) so we'd better see the `traverse` method at work now. But we need to take another detour :-)

### Applicative Monoid

One simple thing we might want to do, when iterating on a `BinaryTree`, is to get the contents of that tree in a `List`. To do that, we're going to use the 3rd way to use `List` as an `Applicative`, as mentioned earlier. Indeed, it turns out that any `Monoid` (what is it?) gives rise to an `Applicative` instance, but in a way that's a bit surprising.
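The `Const` instances below rely on a `Monoid` typeclass with a neutral element `z` and an `append` operation; a minimal version consistent with the names used in this post could look like this (a sketch, not Scalaz's actual definition):

```scala
// a minimal Monoid: a neutral element and an associative binary operation
trait Monoid[M] {
  def z: M
  def append(m1: M, m2: M): M
}

// List[A] is a Monoid, with Nil as the neutral element and concatenation as append
implicit def ListIsMonoid[A]: Monoid[List[A]] = new Monoid[List[A]] {
  def z = Nil
  def append(m1: List[A], m2: List[A]) = m1 ++ m2
}
```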

```scala
/** Const is a container for values of type M, with a "phantom" type A */
case class Const[M, +A](value: M)

implicit def ConstIsPointed[M : Monoid] = new Pointed[({type l[A]=Const[M, A]})#l] {
  def point[A](a: => A) = Const[M, A](implicitly[Monoid[M]].z)
}
implicit def ConstIsFunctor[M : Monoid] = new Functor[({type l[A]=Const[M, A]})#l] {
  def fmap[A, B](f: A => B) = (c: Const[M, A]) => Const[M, B](c.value)
}
implicit def ConstIsApplic[M : Monoid] = new Applic[({type l[A]=Const[M, A]})#l] {
  def applic[A, B](f: Const[M, A => B]) = (c: Const[M, A]) =>
    Const[M, B](implicitly[Monoid[M]].append(f.value, c.value))
}
implicit def ConstIsPointedFunctor[M : Monoid] = new PointedFunctor[({type l[A]=Const[M, A]})#l] {
  val functor = Functor.ConstIsFunctor
  val pointed = Pointed.ConstIsPointed
}
implicit def ConstIsApplicative[M : Monoid] = new Applicative[({type l[A]=Const[M, A]})#l] {
  val pointedFunctor = PointedFunctor.ConstIsPointedFunctor
  val applic = Applic.ConstIsApplic
}
```

In the code above, `Const` is the `Applicative` instance for a given `Monoid`. `Const` contains values of type `M` where `M` is a `Monoid`, and we progressively establish the properties that `Const` must satisfy to be `Applicative`.

• it must first be `Pointed`. Informally, the `point` method puts the neutral element of the `Monoid` in a `Const` instance

• then it must be a `Functor`. Here the `fmap` function doesn't do anything but changing the type of `Const` from `Const[M, A]` to `Const[M, B]`

• finally it must be an `Applic` where the `apply` method of `Applic` uses the `append` method of the `Monoid` to "add" 2 values and return the result in a `Const` instance.

There is unfortunately a lot of type voodoo here:

• the type declaration for `Const` is `Const[M, +A]`. It has a type parameter `A` which is actually not represented by a value in the `Const` class! It is a phantom type. But it is actually indispensable to match the type declarations of the typeclasses

• the type `F` that is supposed to be `Applicative` is... `({type l[A] = Const[M, A]})#l`. Ouch, this deserves some explanation!

What we want is not so hard. The type `Const[M, A]` has 2 type parameters. We need a way to fix the first one, `M`, and get a resulting type which will have only one type parameter. The expression above is the most concise way to get this desired type:

• `{ type l = SomeType }` is an anonymous type with a type member called `l`. We can access that type `l` in Scala by using `#`: `{ type l = SomeType }#l`

• Then, in `{ type l[A] = SomeType[T, A] }#l`, `l` is a higher-kinded type, having a type variable `A` (actually `SomeType[T, A]` where `T` is fixed)
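When the partially-applied type is used in several places, a plain type alias denotes the same one-parameter type constructor more readably (`CountOf` is a name made up for this illustration):

```scala
case class Const[M, +A](value: M)

// the type lambda ({type l[A] = Const[List[Int], A]})#l written as a named alias
type CountOf[A] = Const[List[Int], A]

// both forms denote the same one-parameter type constructor
val c: CountOf[String] = Const[List[Int], String](List(1, 2, 3))
```

The inline type-lambda form is only needed where an alias can't be declared, such as in the middle of a method call's type arguments.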

That was a really long detour for a mere `for` loop, wasn't it? Now... profit!

### Contents of a BinaryTree...

We're going to use the `Traversable` instance for the `BinaryTree` and the `List Monoid Applicative` to get the contents of a `BinaryTree`:

```scala
import Applicative._

val f    = (i: Int) => List(i)
val tree = Bin(Leaf(1), Leaf(2))

(tree.traverse[...](f)).value must_== List(1, 2)
```

Simple: for each element of the tree, we put it in a `List`, then we let the `List Monoid` do its magic and aggregate all the results as we traverse the tree. The only difficulty here comes from the limits of Scala's type inference. The `...` stands for type annotations that the compiler requires:

```scala
tree.traverse[Int, ({type l[A]=Const[List[Int], A]})#l](f)
```

Not pretty :-(

Update: As pointed out by Ittay Dror in the comments, `List[Int]` is not an applicative by itself and we need to put this list into a `Const` value to make it usable by the `traverse` function.

This is actually done by an implicit conversion method, `liftConst`, provided by the `Applicative object`:
```scala
implicit def liftConst[A, B, M : Monoid](f: A => M): A => Const[M, B] =
  (a: A) => Const[M, B](f(a))
```

#### Profit time

Not everything is lost! We can encapsulate some of the complexity in this case. We can extract part of the code above and create a `contents` method which will work on any `Traversable` instance (assume I'm pimping the following examples so that I can write `tree.method` instead of `method(tree)`):

```scala
val tree: BinaryTree[Int] = Bin(Leaf(1), Leaf(2))

tree.contents must_== List(1, 2)
```

This is based on the following definition:

```scala
def contents[A]: T[A] => List[A] = {
  val f = (a: A) => Const[List[A], Any](List(a))
  (ta: T[A]) => traverse[({type l[U]=Const[List[A], U]})#l, A, Any](f).apply(ta).value
}
```

It also turns out that the `contents` function is a specialized version of something even more generic, the `reduce` function, working with any `Monoid`:

```scala
def contents[A]: T[A] => List[A] = reduce((a: A) => List(a))

def reduce[A, M : Monoid](reducer: A => M): T[A] => M = {
  val f = (a: A) => Const[M, Any](reducer(a))
  (ta: T[A]) => traverse[({type l[A]=Const[M, A]})#l, A, Any](f).apply(ta).value
}
```

The `reduce` function can traverse any `Traversable` structure with a function mapping each element to a `Monoid` element. We've used it to get the contents of the tree, but we can just as easily get the number of elements:

```scala
def count[A]: T[A] => Int = reduce((a: A) => 1)

tree.count must_== 2
```

Can it get simpler than this :-)? Actually in that case it can! Since we don't need `(a: A)` at all we can use `reduceConst`:

```scala
def reduceConst[A, M : Monoid](m: M): T[A] => M = reduce((a: A) => m)

def count[A]: T[A] => Int = reduceConst(1)
```

It's like the standard Scala `reduce` on steroids: you don't need to provide a binary operation, you just need a `Monoid` instance.

### ... and shape of a BinaryTree

We've addressed the question of doing some kind of accumulation based on the elements in the tree, now we're going to "map" them.

The following `map` method can be derived from the `traverse` method (note that no type annotations are necessary in that case, yes!):

```scala
def map[A, B](mapper: A => B) =
  (ta: T[A]) => traverse((a: A) => Ident(mapper(a))).apply(ta).value
```

Here we're traversing with an `Applicative` which is very simple, the `Ident` class:

```scala
case class Ident[A](value: A)
```

The `Ident` class is a simple wrapper around a value, nothing more. That simple class is an `Applicative`. But how?

Easy. `Ident` is actually a `Monad` and we can construct an `Applicative` instance from every Monad. This comes from the fact that a `Monad` is both a `PointedFunctor` and an `Applic`:

```scala
trait Monad[F[_]] {
  val pointed: Pointed[F]
  val bind: Bind[F]

  def functor: Functor[F] = new Functor[F] {
    def fmap[A, B](f: A => B): F[A] => F[B] = (fa: F[A]) =>
      bind.bind((a: A) => pointed.point(f(a))).apply(fa)
  }
  def pointedFunctor: PointedFunctor[F] = new PointedFunctor[F] {
    val functor = Monad.this.functor
    val pointed = Monad.this.pointed
  }
  def applic: Applic[F] = new Applic[F] {
    def applic[A, B](f: F[A => B]) = a =>
      bind.bind[A => B, B](ff => functor.fmap(ff)(a))(f)
  }
  def applicative: Applicative[F] = new Applicative[F] {
    val pointedFunctor = Monad.this.pointedFunctor
    val applic = Monad.this.applic
  }
}
```

And the `Ident` class is trivially a `Monad` (having a `pointed` and a `bind` member):

```scala
implicit def IdentIsMonad = new Monad[Ident] {
  val pointed = new Pointed[Ident] {
    def point[A](a: => A): Ident[A] = Ident(a)
  }
  val bind = new Bind[Ident] {
    def bind[A, B](f: A => Ident[B]): Ident[A] => Ident[B] =
      (i: Ident[A]) => f(i.value)
  }
}
```

We can use our brand new `map` function now:

```scala
tree.map((i: Int) => i.toString) must_== Bin(Leaf("1"), Leaf("2"))
```

We can even use it to get the "shape" of our container and discard all the elements:

```scala
tree.shape must_== Bin(Leaf(()), Leaf(()))
```

The `shape` method just maps each element to `()`.
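Given `map`, `shape` is a one-liner. Here is a self-contained sketch, with `map` written directly by recursion on the tree (the post derives it from `traverse` and `Ident` instead):

```scala
sealed trait BinaryTree[A]
case class Leaf[A](a: A) extends BinaryTree[A]
case class Bin[A](left: BinaryTree[A], right: BinaryTree[A]) extends BinaryTree[A]

// a direct recursive map, behaviorally equivalent to the traverse-based one
def map[A, B](f: A => B)(t: BinaryTree[A]): BinaryTree[B] = t match {
  case Leaf(a)   => Leaf(f(a))
  case Bin(l, r) => Bin(map(f)(l), map(f)(r))
}

// shape keeps the structure and discards the elements
def shape[A](t: BinaryTree[A]): BinaryTree[Unit] = map((a: A) => ())(t)
```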

### Decompose / Compose

Let's recap. We implemented a very generic way to iterate over a structure, any kind of structure (as long as it's `Traversable`), containing elements, any kind of element, with a function which does an "application", any kind of application. Among the possible "applications", we've seen 2 examples: collecting and mapping, which are the essential operations that we usually do in a `for` loop.

Specifically we were able to get the `contents` of a tree and its `shape`. Is there a way to compose those 2 operations into a `decompose` operation that would get both the content and the shape at once? Our first attempt might be:

```scala
def decompose[A] = (t: T[A]) => (shape(t), contents(t))

tree.decompose must_== (Bin(Leaf(()), Leaf(())), List(1, 2))
```

This works, but it is pretty naive because it requires 2 traversals of the tree. Is it possible to do just one?

#### Applicative products

This is indeed possible by noticing the following: the product of 2 Applicatives is still an Applicative.

Proof, proof. We define `Product` as:

```scala
case class Product[F1[_], F2[_], A](first: F1[A], second: F2[A]) {
  def tuple = (first, second)
}
```

I'll spare you the full definition of `Product` as an `Applicative` and just focus on the `Applic` instance:

```scala
implicit def ProductIsApplic[F1[_] : Applic, F2[_] : Applic] =
  new Applic[({type l[A]=Product[F1, F2, A]})#l] {
    val f1 = implicitly[Applic[F1]]
    val f2 = implicitly[Applic[F2]]

    def applic[A, B](f: Product[F1, F2, A => B]) = (c: Product[F1, F2, A]) =>
      Product[F1, F2, B](f1.applic(f.first).apply(c.first),
                         f2.applic(f.second).apply(c.second))
  }
```

That's not too complicated, you just have to follow the types. What's more troubling is the amount of type annotations which are necessary to implement `decompose`. Ideally we would like to write:

```scala
def decompose[A] = traverse((a: A) => shape(a) ⊗ contents(a))
```

Where `⊗` is an operation taking 2 `Applicative`s and returning their product. Again the lack of partial type application for `Const` muddies the whole thing (upvote SI-2712 please!):

```scala
val shape   = (a: A) => Ident(())
val content = (a: A) => Const[List[A], Unit](List(a))
val product = (a: A) => (shape(a).⊗[({type l[T] = Const[List[A], T]})#l](content(a)))

implicit val productApplicative =
  ProductIsApplicative[Ident, ({type l1[U] = Const[List[A], U]})#l1]

(ta: T[A]) => {
  val (Ident(s), Const(c)) =
    traverse[({type l[V] = Product[Ident, ({type l1[U] = Const[List[A], U]})#l1, V]})#l, A, Unit](product).
      apply(ta).tuple
  (s, c)
}
```

We can improve the code slightly by moving the implicit definition for `productApplicative` inside the `Applicative` companion object:

```scala
object Applicative {
  ...
  implicit def ProductWithListIsApplicative[A[_] : Applicative, B] =
    ProductIsApplicative[A, ({type l1[U] = Const[List[B], U]})#l1]
}
```

Then no `implicit val productApplicative` is necessary and the `Applicative` imports will be all we need.

#### Collection and dispersal

There is another way to do things "in parallel" while traversing the structure. The `collect` method that we're going to build will do 2 things:

• it will accumulate some kind of state, based on the elements that we meet

• it will map each element to another kind of element

So, as we're iterating, we can do a regular mapping while computing some kind of measure. But before that, we need to take a little detour (again? Yes, again) with the `State` monad.

##### The `State` monad

The `State` Monad is defined by:

```scala
trait State[S, +A] {
  def apply(s: S): (S, A)
}
```

It is basically:

• an object keeping some previous "state", of type `S`
• a method taking that "state" and computing both a new "state" of type `S` and a meaningful value of type `A`

For example, a simple counter for the number of elements in a `List[Int]` can be implemented by:

```scala
val count = state((n: Int) => (n+1, ()))
```

It takes the previous "count" number `n` and returns the new state `n+1` and the extracted value (`()` here, because we don't need to extract anything special).
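The `state` helper used here isn't shown in the post's snippets; a minimal sketch consistent with the trait's single `apply` method could be:

```scala
trait State[S, +A] {
  def apply(s: S): (S, A)
}

// build a State from a transition function (a sketch of the helper used in the post)
def state[S, A](f: S => (S, A)): State[S, A] = new State[S, A] {
  def apply(s: S) = f(s)
}

// the counter: increment the state, extract nothing interesting
val count = state((n: Int) => (n + 1, ()))
```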

The `State` type above is a `Monad`. I encourage you to read "Learn You a Haskell" to get a better understanding on the subject. I will just show here that the `flatMap` (or `bind`) method of the `Monad` typeclass is central in putting that `State` to work:

```scala
val count = (s: String) => state((n: Int) => (n+1, s + n))

(count("a-") flatMap count flatMap count).apply(0) must_== (3, "a-012")
```

The `count` function takes the latest computed string and returns a `State` where we increment the current "state" by 1 and we have a new String as the result, where the current count is appended. So when we start with the string `"a-"` and we `flatMap` `count` 2 times, we get `(3, "a-012")` where 3 is the number of times we've applied the `n+1` function and `"a-012"` the result of appending to the current string.

By the way, why do we need to `apply(0)`?

When we do all the `flatMap`s, we actually store "stateful computations". And they are executed only once we provide the initial state: `0`!

##### Collecting elements

Let's now define a `collect` operation on `Traversable` which will help us to count:

```scala
def collect[F[_] : Applicative, A, B](f: A => F[Unit], g: A => B) = {
  val applicative = implicitly[Applicative[F]]
  import applicative._

  val application = (a: A) => point((u: Unit) => g(a)) <*> f(a)
  traverse(application)
}
```

This `collect` operation, defined in EIP, is different from the `collect` operation on Scala collections which is the equivalent of `filter + map`. The `collect` of EIP is using 2 functions:

• `f: A => F[Unit]` which collects data from each element "effectfully" (that is, possibly keeping state)

• `g: A => B` which maps each element to something else

So we could say that the EIP `collect` is a bit like `fold + map`. Knowing this, we can use `collect` to count elements and do some mapping:

```scala
val count = (i: Int) => state((n: Int) => (n+1, ()))
val map   = (i: Int) => i.toString

tree.collect[({type l[A]=State[Int, A]})#l, String](count, map).apply(0) must_==
  (2, Bin(Leaf("1"), Leaf("2")))
```

Here again the type annotations are obscuring the intent a bit and if type inference was perfect we would just read:

```scala
val count = (i: Int) => state((n: Int) => (n+1, ()))
val map   = (i: Int) => i.toString

tree.collect(count, map).apply(0) must_== (2, Bin(Leaf("1"), Leaf("2")))
```

I don't know about you, but I find this a bit magical. With the `Applicative` and `Traversable` abstractions, we can assemble our program based on 2 independent functions possibly developed and tested elsewhere.

##### Dispersing elements

The next utility function proposed by EIP is the `disperse` function. Its signature is:

```scala
def disperse[F[_] : Applicative, A, B, C](f: F[B], g: A => B => C): T[A] => F[T[C]]
```

What does it do?

• `f` is the `Applicative` context that's going to evolve when we traverse the structure, but regardless of what the elements of type `A` are
• `g` is a function which, for each element of type `A`, says what to do with the current context value, `B`, and how to map that element back into the structure

Say I want to mark each element of a `BinaryTree` with its "number" in the traversal (the "label"). Moreover I want to use the element's value to qualify this label:

```scala
// a BinaryTree of Doubles
val tree: BinaryTree[Double] = Bin(Leaf(1.1), Bin(Leaf(2.2), Leaf(3.3)))

// the "label" state returning integers in sequence
val labelling: State[Int, Int] = state((n: Int) => (n+1, n+1))

// for each element in the tree, and its label,
// produce a String with the name and label
val naming: Double => Int => String = (p1: Double) => (p2: Int) => p1+" node is "+p2

// testing by applying an initial state (label `0`) and
// taking the second element of the pair `(last label, resulting tree)`
tree.disperse[elided for sanity](labelling, naming).apply(0)._2 must_==
  Bin(Leaf("1.1 node is 1"), Bin(Leaf("2.2 node is 2"), Leaf("3.3 node is 3")))
```

Note that the `naming` function above is curried. A more familiar way to write it would be:

```scala
val naming: (Double, Int) => String = (p1: Double, p2: Int) => p1+" node is "+p2
```

But then you would have to curry that function to be able to use it with the `disperse` function:

```scala
tree.disperse[...](labelling, naming.curried)
```

The implementation of `disperse` is:

```scala
def disperse[F[_] : Applicative, A, B, C](f: F[B], g: A => B => C) = {
  val applicative = implicitly[Applicative[F]]
  import applicative._

  val application = (a: A) => point(g(a)) <*> f
  traverse(application)
}
```

It uses the core capabilities of the applicative functor: the `point` method and the `<*>` application.

### An overview of traversals

We've seen in the 2 examples above that we get different, specialized, versions of the `traverse` function by constraining how mapping and `Applicative` effects occur. Here's a tentative table for classifying other specialized versions of the `traverse` function:

| function      | map element | create state | mapped depends on state | state depends on element |
|---------------|-------------|--------------|-------------------------|--------------------------|
| `collect`     | X           | X            |                         | X                        |
| `disperse`    | X           | X            | X                       |                          |
| `measure`     | X           | X            |                         |                          |
| `traverse`    | X           | X            | X                       | X                        |
| `reduce`      |             | X            |                         | X                        |
| `reduceConst` |             | X            |                         |                          |
| `map`         | X           |              |                         |                          |

The only function we haven't shown before is `measure`. It maps elements and accumulates state, but the accumulation does not depend on the current element. Here's an example:

```scala
val crosses = state((s: String) => (s+"x", ()))
val map     = (i: Int) => i.toString

tree.measure(crosses, map).apply("") must_==
  ("xxx", Bin(Leaf("1"), Bin(Leaf("2"), Leaf("3"))))
```

Other than not looking very useful, the code above is also lying! It is not possible to have a `measure` function accepting a `State` monad without having to provide the usual ugly type annotations. So the actual example is:

```scala
tree.measureState(crosses, map).apply("") must_==
  ("xxx", Bin(Leaf("1"), Bin(Leaf("2"), Leaf("3"))))
```

where `measureState` is a specialization of the `measure` method to `State`s. I think that one take-away of this post is that it might be beneficial to specialize a few generic functions in Scala, like `traverse`, `collect`... for `Const` and `State` in order to avoid type annotations.

## Composing traversals

There's another axis of composition that we haven't exploited yet.

In a `for` loop, without thinking about it, you may write:

```scala
for (a <- as) {
  val currentSize = a.size
  total += currentSize
  result.add(total)
}
```

In the body of that `for` loop, you have statements with dependency on each other. In an `Applicative` traversal, this translates to the Sequential composition of Applicatives. From 2 `Applicatives`, we can create a third one which is their Sequential composition. More precisely, this means that if `F1[_]` and `F2[_]` are `Applicatives` then `F1[F2[_]]` is an `Applicative` as well. You want the demonstration? Ok, go.

First, we introduce a utility function on `ApplicFunctor`s:

```scala
def liftA2[A, B, C](function: A => B => C): F[A] => F[B] => F[C] =
  fa => applic.applic(functor.fmap(function)(fa))
```

`liftA2` lifts a regular function of 2 arguments to a function working on `Applicative` arguments. This is using the fact that an `ApplicFunctor` is a `Functor`, so we can apply `function: A => B => C` to the "`a` in the box" to get a `F[B => C]` "in the box". And then, an `ApplicFunctor` is an `Applic`, so we can "apply" it to `F[B]` to get a `F[C]`.

Armed with this function, we can write the `applic` method for `F1[F2[_]]`:

```scala
implicit val f1ApplicFunctor = implicitly[ApplicFunctor[F1]]
implicit val f2ApplicFunctor = implicitly[ApplicFunctor[F2]]

val applic = new Applic[({type l[A]=F1[F2[A]]})#l] {
  def applic[A, B](f: F1[F2[A => B]]) = (c: F1[F2[A]]) => {
    f1ApplicFunctor.liftA2((ff: F2[A => B]) => f2ApplicFunctor.apply(ff))(f).apply(c)
  }
}
```

It's not so easy to get an intuition for what the code above is doing, except to say that we're using the previous definitions to allow a `F1[F2[A => B]]` to be applied to a `F1[F2[A]]`.

In mere mortal terms, this means that if we do an `Applicative` computation inside a loop and if we reuse that computation in another `Applicative` computation, we still get an `Applicative` computation. The EIP illustration of this principle is a crazy function, the `assemble` function.

### The `assemble` function

The `assemble` function takes the shape of a `Traversable` and a list of elements. If there are enough elements it returns `Some[Traversable]` filled with all the elements (+ the remainder of the list), otherwise it returns `None` (and an empty list). Let's see it in action:

```scala
// the "shape" to fill
val shape: BinaryTree[Unit] = Bin(Leaf(()), Leaf(()))

// we assemble the tree with an exact list of elements
shape.assemble(List(1, 2)) must_== (List(), Some(Bin(Leaf(1), Leaf(2))))

// we assemble the tree with more elements
shape.assemble(List(1, 2, 3)) must_== (List(3), Some(Bin(Leaf(1), Leaf(2))))

// we assemble the tree with not enough elements
shape.assemble(List(1)) must_== (List(), None)
```

What's the implementation of the `assemble` function? The implementation uses 2 `Monads` (which are also `Applicatives` as we know now):

• the `State[List[Int], _] Monad` is going to keep track of what we've already consumed
• the `Option[_] Monad` is going to provide, or not, an element to put in the structure
• the composition of those 2 monads is `State[List[Int], Option[_]]` (our `F1[F2[_]]` in the `ApplicFunctor` definitions above)

So we just need to traverse the `BinaryTree` with one function:

```scala
def takeHead: State[List[B], Option[B]] = state { s: List[B] =>
  s match {
    case Nil     => (Nil, None)
    case x :: xs => (xs, Some(x))
  }
}
```

The `takeHead` function is a `State` instance where each state application removes the first element from the list, if possible, and returns it in an `Option`.
This is why the result of the `assemble` function, once applied to a list of elements, has type `(List[Int], Option[BinaryTree[Int]])`.
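To get a feel for the state threading, here's a minimal, Scalaz-free sketch where the `State` action is modelled as a bare function from the remaining elements to a pair (remaining elements, optional head):

```scala
// A sketch only: the State monad is modelled as a plain function
// List[B] => (List[B], Option[B]), without the Scalaz machinery.
def takeHead[B](s: List[B]): (List[B], Option[B]) = s match {
  case Nil     => (Nil, None)
  case x :: xs => (xs, Some(x))
}

// each application consumes one element and returns it in an Option
val (rest, taken) = takeHead(List(1, 2, 3))
// rest == List(2, 3), taken == Some(1)
```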

#### A recursive implementation

Just for the fun of comparison, I'm going to write a recursive version doing the same thing:

```scala
def assemble(es: List[Int], s: BinaryTree[Unit]): (List[Int], Option[BinaryTree[Int]]) = {
  (es, s) match {
    case (Nil, _)              => (es, None)
    case (e :: rest, Leaf(())) => (rest, Some(Leaf(e)))
    case (_, Bin(left, right)) =>
      assemble(es, left) match {
        case (l, None)       => (l, None)
        case (Nil, Some(l))  => (Nil, None)
        case (rest, Some(l)) => assemble(rest, right) match {
          case (r, None)            => (r, None)
          case (finalRest, Some(r)) => (finalRest, Some(Bin(l, r)))
        }
      }
  }
}

assemble(List(1, 2, 3), shape) must_== (List(3), Some(Bin(Leaf(1), Leaf(2))))
```

It works, but it makes my head spin!

#### A classical for-loop implementation

By the way, what would be the real `for` loop version of that functionality? This one is not so easy to write because, as far as I know, there's no direct way to iterate over a `BinaryTree` and rebuild a similar `BinaryTree` with just a `for` loop! So, for the sake of the argument, we're going to do something similar with just a `List` structure:

```scala
def assemble[T](es: List[T], shape: List[Unit]) = {
  var elements = es
  var list: Option[List[T]] = None
  for (u <- shape) {
    if (!elements.isEmpty) {
      list match {
        case None    => list = Some(List(elements.head))
        case Some(l) => list = Some(l :+ elements.head)
      }
      elements = elements.drop(1)
    } else {
      list = None
    }
  }
  (elements, list)
}

assemble(List(1, 2, 3), List((), ())) must_== (List(3), Some(List(1, 2)))
```

Contrast and compare with:

```scala
List((), ()).assemble(List(1, 2, 3)) must_== (List(3), Some(List(1, 2)))
```

where you just define `List` as a `Traversable`:

```scala
implicit def ListIsTraversable: Traversable[List] = new Traversable[List] {
  def traverse[F[_] : Applicative, A, B](f: A => F[B]): List[A] => F[List[B]] =
    (l: List[A]) => {
      val applicative = implicitly[Applicative[F]]
      l match {
        case Nil       => applicative.point(List[B]())
        case a :: rest => ((_: B) :: (_: List[B])).curried ∘ f(a) <*> (rest traverse f)
      }
    }
}
```

The Applicative composition is indeed very powerful, but we're going to see that there are other ways to compose functions and use them with `Traversables`.

#### Monadic composition

This section explores the fine relationship between applicative composition and monadic composition when doing traversals. We've seen that `Applicative` instances can be composed and that `Monads` can be `Applicative`. But `Monads` can also be composed, using the so-called Kleisli composition. If we have:

```scala
val f: B => M[C]
val g: A => M[B]
```

Then

```scala
val h: A => M[C] = f ∎ g // also a function from a value to a Monad
```

So, given 2 "monadic" functions `f` and `g`, we can compose them in the Kleisli sense and use the composed function for a traversal. Indeed we can, but does that traversal have "nice properties"? Specifically, do we have:

```scala
traverse(f ∎ g) == traverse(f) ∎ traverse(g)
```
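For intuition, here's what Kleisli composition looks like in plain Scala with the `Option` monad (the `∎` operator is written as an ordinary `kleisli` function, and all the names here are mine, not Scalaz's):

```scala
// Kleisli composition sketch for the Option monad:
// run g first, then feed its result to f.
def kleisli[A, B, C](f: B => Option[C], g: A => Option[B]): A => Option[C] =
  (a: A) => g(a).flatMap(f)

val parse: String => Option[Int]  = s => scala.util.Try(s.toInt).toOption
val sqrt:  Int => Option[Double]  = n => if (n >= 0) Some(math.sqrt(n)) else None

// sqrt ∎ parse: a String => Option[Double]
val parseThenSqrt = kleisli(sqrt, parse)
// parseThenSqrt("9")  == Some(3.0)
// parseThenSqrt("-4") == None
```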

EIP shows that, if the `Monad` is commutative, then this will always be true. What is a commutative `Monad` you ask?

A `Monad` is commutative if for all `mx: M[X]` and `my: M[Y]` we have:

```scala
val xy = for {
  x <- mx
  y <- my
} yield (x, y)

val yx = for {
  y <- my
  x <- mx
} yield (x, y)

xy == yx
```

This is not the case with the `State Monad` for example:

```scala
val mx = state((n: Int) => (n+1, n+1))
val my = state((n: Int) => (n+1, n+1))

xy.apply(0) must_== (2, (1, 2))
yx.apply(0) must_== (2, (2, 1))
```
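By contrast, `Option` is a commutative monad. A quick sketch in plain Scala (no Scalaz needed):

```scala
// Option is commutative: sequencing mx and my in either order
// yields the same result.
val mx: Option[Int] = Some(1)
val my: Option[Int] = Some(2)

val xy = for { x <- mx; y <- my } yield (x, y)
val yx = for { y <- my; x <- mx } yield (x, y)
// xy == yx == Some((1, 2))
```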

Another slightly different situation is when we have a non-commutative `Monad` but commutative functions:

```scala
val plus1  = (a: A) => state((n: Int) => (n+1, a))
val plus2  = (a: A) => state((n: Int) => (n+2, a))
val times2 = (a: A) => state((n: Int) => (n*2, a))
```

Here `plus1` and `times2` are not commutative:

```scala
(0 + 1) * 2 != (0 * 2) + 1
```

However it is obvious that `plus1` and `plus2` are commutative. What does that mean when we do a traversal?

If we traverse a simple List of elements using monadic composition we get:

• `List(1, 2, 3).traverse(times2 ∎ plus1) === 22`
• `List(1, 2, 3).traverse(times2) ∎ List(1, 2, 3).traverse(plus1) === 32`

We get different results. However, when `f` and `g` commute we get the same result:

• `List(1, 2, 3).traverse(plus2 ∎ plus1) === 10`
• `List(1, 2, 3).traverse(plus2) ∎ List(1, 2, 3).traverse(plus1) === 10`
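The figures above can be reproduced with a plain-Scala sketch, where each traversal over 3 elements is just 3 transitions composed on the counter. I'm assuming here an initial state of 1 and reading `f ∎ g` as "`g` then `f`", which is what the numbers above correspond to:

```scala
// Each "monadic function" only transforms the Int counter, so a traversal
// over a 3-element list is just 3 transitions composed left to right.
type S = Int => Int
def traverse3(f: S): S = f andThen f andThen f

val plus1:  S = _ + 1
val plus2:  S = _ + 2
val times2: S = _ * 2

// traverse(times2 ∎ plus1): plus1 then times2, at each element
val composedFirst  = traverse3(plus1 andThen times2)(1)              // 22
// traverse(times2) ∎ traverse(plus1): the whole plus1 traversal, then times2
val traversedFirst = (traverse3(plus1) andThen traverse3(times2))(1) // 32

// with commuting functions both give the same result
val c1 = traverse3(plus1 andThen plus2)(1)                           // 10
val c2 = (traverse3(plus1) andThen traverse3(plus2))(1)              // 10
```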

#### Applicative composition vs Monadic composition

Another question we can ask ourselves is: if we consider the monadic functions as applicative functions (because each `Monad` is `Applicative`), do we get the nice "distribution" property we're after? The answer is yes, even when the functions are not commutative:

• `List(1, 2, 3).traverse(times2 ⊡ plus1) === 4`
• `List(1, 2, 3).traverse(times2) ⊡ List(1, 2, 3).traverse(plus1) === 4`

Well... more or less. The real situation is a bit more complex: `List(1, 2, 3).traverse(times2 ⊡ plus1)` returns a `State[Int, State[Int, List[Int]]]` while the second expression returns a `State[Int, List[State[Int, Int]]]`, so what I'm hiding here is some extra manipulation to be able to query the final result with some kind of `join`.

## Conclusion

You wouldn't believe it but I've only shown here half of the ideas presented in EIP!

To finish off this post, here are 3 take-away points that I learned while writing it:

• functional programming is also about mastering some of these higher-level control structures like `Applicative`. Once you master them, your toolbox expands considerably in power (just consider the `assemble` example)

• Scalaz is an incredible library but somewhat obscure to the beginner. For this post I've rewritten all the typeclasses I needed to have, and lots of examples (using specs2 of course). That gave me a much better understanding of the Scalaz functionality. You may consider doing the same to learn Scalaz (my code is available on github)

• Scala is lagging behind Haskell in terms of type inference, and that's a real pain for higher-order, generic programming. This can sometimes be encapsulated away by specializing generic functions to very common types (like `traverseState` instead of `traverse`). Again, please upvote SI-2712!

Finally, I want to mention that there are many other Haskell functional pearls waiting to be transliterated to Scala. I mean, it's a shame that we don't yet have any equivalent of "Learn You a Haskell" or the "Typeclassopedia" in the Scala world. I hope that my post, like this other one by Debasish Ghosh, will also contribute to bridging the gap.