29 September 2018

ICFP 2018

This year again, ICFP and the Haskell Symposium are full of interesting talks that I want to briefly present. I highlighted a single sentence in each section to try to summarize the main idea, so that you can skim the whole thing and stop at what looks interesting to you.

ICFP Day 1

Capturing the Future by Replaying the Past (Functional Pearl)

Delimited continuations are incredibly powerful because they allow us to implement any monadic effect. They also enable some elegant implementations; for example the 8-queens problem can be solved using a non-determinism effect. Unfortunately delimited continuations are not supported by every programming language. Or are they? The authors of this paper show that mutable state + exceptions is all that is required to implement delimited continuations (except in C, read the paper for the details). There is a performance hit compared to languages with native support for delimited continuations, but it is not that bad.

For me this paper was the opportunity, one more time, to try to wrap my head around continuations. For example I realized that the notation with `reset` and `shift` was a bit confusing. In an expression such as `reset (1 + shift (\k -> k 2))` I actually need to focus my attention on the body of the `shift` rather than the body of the `reset`. And the body of the shift says “do `k 2`”. What’s `k`? Well, it is whatever is outside the `shift` and inside the `reset`, so that’s `1 + _`, and eventually the result is `3`. It becomes a bit trickier to interpret expressions like `reset (shift (\k -> 1 + k 2) * shift (\k -> 1 + k 3))` but still possible. “Do `1 + k 2`”. What is `k`? It is `_ * shift (\k -> 1 + k 3)`, which by the previous reasoning is `k = \x -> 1 + x * 3`. Applied in `1 + k 2` this gives `1 + (1 + 2 * 3)`, which equals `8`. This also explains the term “thermometer continuations”: the authors store a list of executions created by the various `shift`s in a `reset` expression.
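Both examples (with the `1 + _` context and the lambda binders written out explicitly) can be replayed with the `Cont` monad from the transformers package; this is a direct shift/reset encoding, not the paper's replay-based implementation:

```haskell
import Control.Monad.Trans.Cont (evalCont, reset, shift)

-- reset (1 + shift (\k -> k 2)) == 3
ex1 :: Int
ex1 = evalCont $ reset $ do
  x <- shift (\k -> return (k 2))   -- k is the context: \x -> 1 + x
  return (1 + x)

-- reset (shift (\k -> 1 + k 2) * shift (\k -> 1 + k 3)) == 8
ex2 :: Int
ex2 = evalCont $ reset $ do
  x <- shift (\k -> return (1 + k 2))
  y <- shift (\k -> return (1 + k 3))
  return (x * y)
```

Tracing `ex2` by hand gives exactly the reasoning above: the inner continuation is `\y -> 2 * y`, so the second shift yields `1 + 2 * 3 = 7`, and the outer one `1 + 7 = 8`.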

Versatile Event Correlation with Algebraic Effects

This paper presents the design of a language supporting “versatile joins”, that is, the many ways we would like to correlate different sources of (asynchronous) events: “when you receive a ‘stock price update’ event and a ‘user buy’ event for the same quantity and a ‘market is open’ event, then emit a ‘stock buy’ event”. Conceptually this means making a cartesian product of all the event sources and restricting the resulting tuples to the ones that are “interesting”. Said another way, this can be interpreted as having a way to enumerate this cartesian product, filter out some tuples, combine them in some way and `yield` the result. It turns out that all those actions can be implemented as effects in a language supporting algebraic effects and effect handlers.

For example, selecting the last event for a given event stream corresponds to the handling of a `push` effect for that stream, and joining several streams corresponds to the interpretation of several `push 1, push 2, ..., push n` effects. This provides a very modular way to describe combinations like “with the latest occurrence of stock and price for a given article, emit an availability event”.

The paper shows that with a limited set of effects, like `push` but also `trigger` to trigger the materialization of a tuple or `get/set` to temporarily store events, we can reproduce most of the behaviours exposed by so-called “complex event processing” (CEP) libraries: windowing, linear/affine consumption, zipping, and so on.

The simple essence of automatic differentiation

One of the distinguished papers of the conference, classic Conal Elliott work, a work of art. The explosion of machine learning methods using neural networks brought back the need for efficient ways to compute the derivative of functions. When you try to fit the parameters of a neural network to match observed data, you typically use “gradient descent” methods, which require computing the partial derivatives of functions with thousands of input variables and generally just one output!

The basic idea behind “automatic differentiation” is to build functions together with their associated derivative functions. Since many functions are not differentiable, what we can do is build only the ones that are! You start by creating simple functions for which you know the derivative and use some rules for creating more complex functions from the simple ones, calculating the corresponding derivative as you go using well-known differentiation rules; for example the “chain rule” gives the derivative of the composition of 2 functions. In practice the derivative of a function can be built out of a few operations: function composition, parallel product, cartesian product, and linear functions (like `+`).
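The “function paired with its derivative” idea can be sketched with forward-mode dual numbers (my own minimal sketch, not the paper's categorical construction):

```haskell
-- a value together with its derivative with respect to some input
data D = D { val :: Double, der :: Double } deriving Show

instance Num D where
  D x x' + D y y' = D (x + y) (x' + y')
  D x x' * D y y' = D (x * y) (x' * y + x * y')   -- product rule
  negate (D x x') = D (negate x) (negate x')
  abs (D x x')    = D (abs x) (x' * signum x)
  signum (D x _)  = D (signum x) 0
  fromInteger n   = D (fromInteger n) 0           -- constants have derivative 0

-- differentiate a function built from +, * and constants, at a point
diff :: (D -> D) -> Double -> Double
diff f x = der (f (D x 1))
```

For example `diff (\x -> x * x * x) 2` evaluates to `12.0`, and the chain rule falls out of plain function composition of `D -> D` functions.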

Actually those operations are pretty universal. If we abstract a bit using category theory concepts, we can define the derivative of a function in terms of operations from a category, a “monoidal” category, a “cartesian” category. Then by varying the category, taking categories other than Haskell functions for example, we can derive very useful programs. This is not new and was presented in “Compiling to Categories”; there is even a Haskell plugin supporting this construction automatically!

The paper builds on this idea for automatic differentiation and shows that using the “continuation of a category”, or the “dual of a category”, or taking matrices as a category, we get straightforward implementations of the differentiation of functions. In particular we get a simple implementation of the “reverse mode” of differentiation which, unlike traditional algorithms, does not mutate state and can hence be easily parallelized.

ICFP Day 2

Competitive Parallelism: Getting Your Priorities Right

This is yet another “let’s create a language to solve a specific problem” paper, but the problem is an important one: the definition of priorities in a concurrent program. What we typically want is to be able to specify a partial order defining which thread “has a higher priority”, make sure that high-priority threads don’t depend on low-priority ones (the “priority inversion” problem), get an efficient way to schedule those threads on processors, and get some bounds on the total computation time/latency of a high-priority thread for a given program.

The authors have defined and implemented a language called “PriML”, extending Standard ML with some new primitives (`priority`, `order`), a modal type system and a scheduler to support all these objectives. I wonder how we could design languages and compilers so that anyone can benefit from those features sooner rather than later, but this seems to provide a good solution to the priority inversion problem that jeopardized the Mars Pathfinder mission.

Fault Tolerant Functional Reactive Programming (Functional Pearl)

Ivan Perez and his team have already shown how to use a “monadic stream” representation to implement all the operators of various FRP (Functional Reactive Programming) libraries in Haskell. FRP can be used to represent the various components of a Mars rover, for example, where sensors and processors control the behaviour of the rover.

Since this “monadic streams” representation allows you to change the “base” monad for streaming values, they now use a variant of the `Writer` monad to represent fault information in computations: “what is the probability that the value returned by a sensor is incorrect?”, “if it is incorrect, what is the likely cause of the failure?”. Combining different reactive values then accumulates failure probabilities. However you can also introduce redundant components, which will reduce the failure rate! Their library also tracks failure causes at the type level, so that you have ways to handle a failure and the compiler can check that you have handled all possible failures for a given system.
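The probability bookkeeping can be sketched with a tiny Writer-like wrapper; the names and the combination rules below are my own illustration, not the paper's API:

```haskell
-- a sensor reading together with the probability that it is faulty
data Faulty a = Faulty { value :: a, pFault :: Double } deriving Show

-- combining two independent readings: the result is faulty
-- as soon as either input is faulty
combine :: (a -> b -> c) -> Faulty a -> Faulty b -> Faulty c
combine f (Faulty a p) (Faulty b q) = Faulty (f a b) (1 - (1 - p) * (1 - q))

-- two redundant copies of the same sensor: the result is faulty
-- only if both copies are faulty, so the failure rate goes down
redundant :: Faulty a -> Faulty a -> Faulty a
redundant (Faulty a p) (Faulty _ q) = Faulty a (p * q)
```

With two 10%-faulty copies of a sensor, `redundant` brings the failure probability down to 1%, which is the effect described above.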

Report on ICFP and Climate Change

Not a technical paper, but a report on what SIGPLAN plans to do to reduce carbon emissions. Benjamin Pierce presented the rationale for doing something and announced that next year ICFP 2019 might become carbon neutral by raising the price of tickets to buy carbon offsets. I personally bought my own carbon offset for my trip to ICFP this year and I hope this will inspire other people to do the same; we just don’t have much time left to act on climate change.

What You Needa Know about Yoneda: Profunctor Optics and the Yoneda Lemma (Functional Pearl)

What is category theory good for again? Well, some results from category theory show up in many contexts and give us practical ways to transform some problems into others that are easier to deal with. The Yoneda lemma is one such result. It roughly says that the result of “transforming” a value `a`, written `f a`, is entirely determined by how all the objects that have a relation with `a` are being transformed: `forall x . (a -> x) -> f x`.

Using Haskell, and assuming that `f` is a functor, the equivalence is pretty obvious. If I have `forall x . (a -> x) -> f x` I can just set `x` to be `a` to get an `f a`. In the other direction, if I have an `f a` I also have a `forall x . (a -> x) -> f x`: that’s the `fmap` operation! In retrospect that’s kind of meh, what’s the big deal? Well, we can prove tons of stuff with this lemma. For example monads have a `bind` operation: `bind :: m a -> (a -> m b) -> m b`. By using the Yoneda lemma we can prove that this is totally equivalent to having a `join` operation: `join :: m (m a) -> m a`!
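In Haskell the two directions of the lemma are a few lines (standard definitions, written out here only for illustration):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Yoneda: for a Functor f, `f a` is isomorphic to `forall x. (a -> x) -> f x`
toYoneda :: Functor f => f a -> (forall x. (a -> x) -> f x)
toYoneda fa = \f -> fmap f fa     -- one direction is just fmap

fromYoneda :: (forall x. (a -> x) -> f x) -> f a
fromYoneda y = y id               -- the other instantiates x := a

-- the join/bind equivalence, definable in both directions
joinFromBind :: Monad m => m (m a) -> m a
joinFromBind mma = mma >>= id

bindFromJoin :: Monad m => m a -> (a -> m b) -> m b
bindFromJoin ma f = joinFromBind (fmap f ma)
```

Round-tripping a value through `toYoneda` and `fromYoneda` gives it back unchanged, which is the isomorphism the lemma promises.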

Another application of this lemma, and the meat of this paper, is the derivation of the profunctor optics found in the Haskell lens library from the “classical” representation as a getter and a setter. Why is that even useful? It is useful because the profunctor formulation uses a simple function to support lens operations. This means that to compose 2 lenses you can just use function composition. The Yoneda lemma helped us go from one representation to a more useful one!

Generic Deriving of Generic Traversals

This is a new Haskell library which gives us lenses for any datatype deriving `Generic`. One application is to solve the “record problem” in Haskell, where there can be clashes when 2 different records use the same name for one of their fields. In that case `person ^. field @"name"` will return the name of a `Person` and `dog ^. field @"name"` will work similarly. But the library can do much more. It allows you to select/update elements in a data structure by type, position, constraint (all the elements of the `Num` typeclass for example) and even by structure! So you can focus on all the fields of a `Scrollbar` that make it a `Widget` and apply a function to modify the relevant `Widget` fields. This will return a `Scrollbar` with modified fields.

ICFP Day 3

Partially-Static Data as Free Extension of Algebras

Some programs can use “multi-stage” programming to create much faster versions of themselves when some of the parameters are known. For example a `power` function for a given power, say 4, can be specialized into `x * x * x * x`, which requires no conditionals and no recursion. However this misses another optimisation: once we have computed the first `x * x` we could reuse it to compute the final result, `let y = x * x in y * y`.
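In plain Haskell the two stages of the specialization look like this (a hand-written sketch of the resulting code; the paper itself works with MetaOCaml-style staged programs):

```haskell
-- the generic function: a conditional and a recursive call at every step
power :: Int -> Int -> Int
power 0 _ = 1
power n x = x * power (n - 1) x

-- what naively staging `power 4` yields: straight-line code,
-- but `x * x` is computed more often than necessary
power4 :: Int -> Int
power4 x = x * (x * (x * x))

-- what the paper's partially-static (algebraic) optimisation recovers:
-- the common sub-expression is shared
power4' :: Int -> Int
power4' x = let y = x * x in y * y
```

All three compute the same result; the point of the paper is to get from the second form to the third automatically, using the algebraic structure of the generated expressions.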

This paper shows how to use the properties of some algebras to optimize code generated by staged programs. This is a nice improvement and general framework for a domain that is not mainstream yet (staged programming) but we know that this is probably the future for writing programs which are both nicely abstract and very efficient.

Relational Algebra by Way of Adjunctions

Wonderful presentation by Jeremy Gibbons and Nicolas Wu, where they decided to present their paper as a conversation around a cup of coffee, with equations and diagrams on napkins. The essence of their paper is to present operations and optimisations on SQL tables (the “relational algebra”) as given by adjunctions. Adjunctions are a tool from category theory showing how some “problems” in a given domain can be translated and solved in another domain, and how the solution can then be “brought back” to the original domain (this is all of course a lot more formal than what I just wrote :-)).

Something to note: in their description of table joins they need to use a “graded” monad, where each `bind` operation aggregates some “tags”, here the keys to the elements of the tables. This is way over my head, but circling around the subject over and over I might understand everything one day :-).

Strict and Lazy Semantics for Effects

If you consider throwing exceptions as a monadic effect and laziness (consuming a value or not) as a comonadic effect, then you can translate expressions having those effects into a pure language with monads and comonads. Unfortunately the two don’t always compose, unless they are “distributive”: when there is a way to permute the monad `m` and the comonad `c`. The paper proposes to “force” one interpretation or the other to resolve this dilemma: either give “monadic priority”, which yields a strict semantics for expressions having both effects, or give “comonadic priority”, which yields a lazy semantics.

The cool theorem is that if the monad and comonad distribute, then both choices give the same result. The corollary is that if they don’t distribute, then choosing a lazy or a strict semantics always gives different results!

What’s the Difference? A Functional Pearl on Subtracting Bijections

This is more of a mathematical puzzle than something useful for day-to-day programming. When trying to count combinations (“how many ways can 3 people in a group have the same birthday?”), it can be useful to “remove” parts of some sets that we can count because they are in bijection with other well-known sets (like the set of all the natural numbers). This is a tricky operation.

The paper proves that there is an algorithm for subtracting bijections which makes sense and always terminates. I still have the naive feeling that the algorithm they presented could be simplified, but I’m probably very wrong.

Ready, Set, Verify! Applying hs-to-coq to Real-World Haskell Code (Experience Report)

Good news: the Haskell containers library is bug-free! How was that proven? By taking the Haskell code and translating it to Coq code, using typeclass laws and QuickCheck properties from the test suite as a specification. This tool, `hs-to-coq`, is now used in other contexts to write specifications in Haskell and prove them with Coq. It cannot yet be used to prove concurrency properties, but the team is thinking about it.

Neither Web nor Assembly

Indeed, WebAssembly is designed to be a low-level VM, performant and secure. The design of WebAssembly is supported by a formal specification and a mechanised proof in Isabelle (Coq and K versions are underway). It is entirely standardised and, for the first time, supported by all major browser vendors. So much so that the proposal process mandates that any proposal must be implemented by at least 2 major implementations.

So far mostly C/C++ and Rust are supported, because some higher-level language features required by other languages are still not available, like tail calls (kind of important for functional languages :-)), exception handling or garbage collection. A Haskell compiler (Asterius) is underway, developed by Tweag, and the vision is to be able to replace GHCJS in the mobile client wallets which execute code for the Cardano blockchain.

AutoBench

This is a neat tool coupling QuickCheck and Criterion to benchmark different functions and see which ones have the best runtime behaviour. Some limitations though: benchmarks are only performed across one notion of “size”, whereas general functions might depend on various parameters, and it is limited to pure functions for now.

Improving Typeclass Relations by Being Open

A great idea: introduce a typeclass “morphism” to automatically create instances. For example, if you have a typeclass for a partial order, you would like to automatically get an instance of it every time you have an `Ord a` instance for a type `a`. This new declaration, `class morphism Ord a => PartialOrd a`, would have solved many compatibility issues with the introduction of the infamous `Functor / Applicative / Monad` modification in Haskell.

Rhine: FRP with Type-Level Clocks

This is an addition to the “monadic streaming” framework introduced by Ivan Perez and his team. This provides some clock abstraction and synchronization mechanisms to make sure that different event sources emitting events at different rates can properly work together. It also provides a conceptual unification of “events” and “behaviours” from FRP: “events are clocks and behaviours are clock-polymorphic”.

Ghosts of departed proofs (Functional Pearl)

“Existential types” are a great way to hide information in an API. Indeed, a function can give you a value of a given type without you being able to do anything with that value except return it to the API, proving that you are really returning something you have been given, not something that you have fabricated. But you don’t necessarily need a value to enforce constraints on your API: you can use a “phantom type” to tag a computation with a specific type for which there is no corresponding value.

One cool thing you can do is encode proofs with these phantom types. For example you can embed the proof that a list has been sorted. This is all free at runtime because those proofs only exist at compile time. Even better, you can create a type representing proofs, with all the classical logical combinators `and`, `or`, `implies`, …, and associate them with values. This is all implemented in the gdp library.
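A minimal sketch of the idea, with names of my own invention (the gdp library is richer: it names values and provides actual proof combinators):

```haskell
import Data.List (sort)

-- a compile-time-only proof token: it has no values and no runtime cost
data IsSorted

-- a value tagged with a phantom proof; in a real module the constructor
-- would be kept private so clients cannot forge the proof
newtype Tagged proof a = Tagged { unTagged :: a }

-- the only way to obtain `Tagged IsSorted`: actually sorting
sorted :: Ord a => [a] -> Tagged IsSorted [a]
sorted = Tagged . sort

-- merging two sorted lists is O(n) and preserves sortedness,
-- so the output keeps the proof tag
merge :: Ord a => Tagged IsSorted [a] -> Tagged IsSorted [a] -> Tagged IsSorted [a]
merge (Tagged xs) (Tagged ys) = Tagged (go xs ys)
  where
    go [] bs = bs
    go as [] = as
    go (a : as) (b : bs)
      | a <= b    = a : go as (b : bs)
      | otherwise = b : go (a : as) bs
```

A function requiring `Tagged IsSorted [a]` can then rely on sortedness without re-checking it, and the tag vanishes at runtime.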

Deriving Via: or, How to Turn Hand-Written Instances into an Anti-pattern

`DerivingVia` is a way to reduce boilerplate which just landed in GHC 8.6.1. With `DerivingVia` you can declare trivial instances for data types by reusing known instances for other, representationally equal, types. For example you can declare a `Monoid` instance for the datatype `newtype Pair = Pair (Int, Int)` with `deriving Monoid via (Sum Int, Product Int)`. The generated instance will add the first elements of the pairs and multiply the second ones.
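A compilable version of that example, using `Sum` and `Product` from `Data.Monoid` (note that `Pair` wraps a tuple so that it is representationally equal to the via type):

```haskell
{-# LANGUAGE DerivingVia #-}

import Data.Monoid (Product (..), Sum (..))

newtype Pair = Pair (Int, Int)
  deriving (Show, Eq)
  -- reuse the Monoid instance of (Sum Int, Product Int):
  -- add the first components, multiply the second ones
  deriving (Semigroup, Monoid) via (Sum Int, Product Int)
```

With this, `Pair (1, 2) <> Pair (3, 4)` gives `Pair (4, 8)`, and `mempty` is `Pair (0, 1)`.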

Type Variables in Patterns

This is the complement to type applications in expressions: type applications in patterns. Being able to “extract” types from a pattern allows us to properly name a given type in order to use it to type another expression. For example it will be possible to bind the existential type `a` in `data MyGadt b where Mk :: (Show a) => a -> MyGadt Int` in a pattern match by writing

```
case expr of
  Mk @a x ->
    -- here the type `a` has been bound in the pattern match
    let render :: a -> Text
        render = T.pack . show
    in render x
```

The implementation should start before the end of the year.

The Thoralf Plugin: For Your Fancy Type Needs

This Haskell plugin uses a SMT solver to solve some of the obvious type constraints which GHC is not able to solve by itself.

Suggesting Valid Hole Fits for Typed-Holes (Experience Report)

Another useful tool for programming in Haskell. In Haskell we can use “typed holes”, giving the type that is expected in a given place and also some values which can fit that hole. Unfortunately typed holes can be pretty useless with some expressions, using lenses for example. Now the suggestions for what can be put in the hole have been vastly improved, by looking at all the functions in scope which could create a value for the hole. For example `_ :: [Int] -> Int` will suggest `Prelude.head` and `Prelude.last`. But we can go further and ask for some “refinement”: which function, itself having a hole, could be used to fill the hole? In the example above `foldr` could also be used, provided that `foldr` gets the right parameters.

And… this is all available in GHC 8.6.1 already and even able to use the new `doc` feature adding documentation to the suggestions.

A Promise Checked Is a Promise Kept: Inspection Testing

When the documentation of a library like `generic-lens` says “the generated code is performant because it is the same as manually written code”, can you really trust it? It turns out that version `0.4.0.0` of that library was breaking that promise; it was fixed in `0.4.0.1`.

Fortunately, `inspection-testing` is a GHC plugin that saves the day. With that plugin you can instruct GHC to check the generated GHC Core code: `inspect $('genericAgeLens === 'manualAgeLens)`. You can also test whether you have unwanted allocations in your code. For example the `text` package has many functions which are supposed to be “subject to fusion”, but inspection testing proves that some of them are not.

Branching Processes for QuickCheck Generators

Dragen is a library generating QuickCheck `Arbitrary` instances for recursive data structures. The cool thing is that you can specify the “depth” and the distribution of the constructors in the generated values. So you can ask for trees of depth 5 with twice as many `Int` nodes as `Text` nodes.

Coherent Explicit Dictionary Application for Haskell

Finally, a way to avoid the “only one instance of a typeclass per type!” limitation. This implemented proposal allows you to materialize a typeclass instance as a “dictionary” and then do an explicit “dictionary application” to specify which dictionary you want to apply. For example you can write `nubCI = nub @{ eqOn (map toLower) }` where `eqOn` produces a dictionary.

The main concern with this kind of proposal is that we could break coherence, which is guaranteed when there is only one instance per type. `Set` operations, for example, cannot be used with 2 different `Ord` dictionaries. This can actually be checked by looking at the “role” of the type variable `a` in `Set a`: it is “nominal”, whereas this proposal only works with “representational” roles. Otherwise, if a dictionary application could possibly give you 2 different instances of the same typeclass, you get a warning.

Theorem Proving for All: Equational Reasoning in Liquid Haskell (Functional Pearl)

Such a brilliant idea, since theorems are types and proofs are code, let’s write the theorems as Liquid Haskell types and proofs as Haskell code! Then LiquidHaskell is able check if your proof of `reverse (reverse xs) == xs` is correct. You will not get all the support you can get from a general theorem prover environment but I think this is a neat teaching tool at least. And the proofs disappear at runtime, nice.