Last year I already blogged about ICFP, the awesome conference on functional programming, to share a neat trick. This year I would like to do a small brain dump on some of the sessions I really liked.
Workshop on higher-order programming with effects
Programming a web server with effects
This is one of several talks showing the benefits of programming in a language which natively supports the description of effects in its type system: Koka. Not only is Daan Leijen a great presenter whose enthusiasm is contagious, but with Koka it feels entirely natural to write asynchronous code with interleaving, cancellation, resource cleanup and so on.
The talk also hints at something which got me really interested: the `yield` operator used to create iterators can be seen as an effect. Coupled with the async support above, this makes me think that a whole streaming library could be created as a set of effects!
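The idea translates outside Koka too. Here is a minimal Haskell sketch (my own, not from the talk) of `yield` as an effect: a computation that can pause, emit a value, and resume, which you then interpret however you like.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A generator either finishes with a result, or yields a value and continues.
data Gen o a = Done a | Yield o (Gen o a)
  deriving Functor

instance Applicative (Gen o) where
  pure = Done
  Done f    <*> g = fmap f g
  Yield o k <*> g = Yield o (k <*> g)

instance Monad (Gen o) where
  Done a    >>= f = f a
  Yield o k >>= f = Yield o (k >>= f)

-- The 'yield' effect: emit one value, then resume with ().
yield :: o -> Gen o ()
yield o = Yield o (Done ())

-- One possible interpreter: collect everything that was yielded.
toList :: Gen o a -> [o]
toList (Done _)    = []
toList (Yield o k) = o : toList k

countdown :: Int -> Gen Int ()
countdown 0 = pure ()
countdown n = yield n >> countdown (n - 1)
```

Other interpreters could interleave two generators or feed values to a consumer, which is exactly the streaming-library direction the talk suggests.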
Workshop on type-driven development
Type-directed diffing of structured data
A fantastic idea: can we do better than line-based diffing and diff code as ASTs? The presenter introduced a language for representing AST patches (not as easy as it seems), then different possibilities for diffing algorithms (clearly a hard problem if you want to find the minimum diff).
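To make the flavour concrete, here is a toy sketch (mine, far cruder than the patch language from the talk): copy the spine where constructors match, and fall back to wholesale replacement where they don't.

```haskell
-- A generic syntax tree: a constructor name and its children.
data Tree = Node String [Tree] deriving (Eq, Show)

-- A toy patch language: either keep a constructor and recurse,
-- or replace a whole subtree.
data Patch = Cpy String [Patch]
           | Rpl Tree Tree
  deriving Show

-- Diff two trees: share the spine when constructor and arity agree.
diff :: Tree -> Tree -> Patch
diff s@(Node x xs) t@(Node y ys)
  | x == y && length xs == length ys = Cpy x (zipWith diff xs ys)
  | otherwise                        = Rpl s t

-- Apply a patch; fails (Nothing) when the source tree doesn't match.
apply :: Patch -> Tree -> Maybe Tree
apply (Rpl from to) u
  | u == from = Just to
  | otherwise = Nothing
apply (Cpy x ps) (Node y cs)
  | x == y && length ps == length cs =
      Node y <$> traverse (uncurry apply) (zip ps cs)
  | otherwise = Nothing
```

Even this toy shows why it is subtle: `apply` is partial, and nothing here tries to detect moved or shared subtrees, let alone minimise the patch.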
ICFP
Super 8 languages for making movies
I’m not a big fan of dynamic languages but the power of Racket is impressive. The presenter showed how she created a “tower of DSLs” to build a video-editing tool, complete with parser, GUI, documentation and integration with video libraries, in far fewer lines of code than anything else I know.
Faster coroutine pipelines
Coroutine pipelines are the functional way to create streaming libraries, but they can be made faster. How? By using an encoding based on continuations rather than the “direct” encoding. This trick seems to be pervasive: for example, for algebraic effects, scala-effekt uses such an encoding and gets much better performance than eff.
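A rough illustration of why the continuation encoding wins, using plain producers (my own sketch, not the paper's code): the direct encoding allocates and pattern-matches a constructor per element, while the Church/continuation encoding turns combinators like `map` into plain function composition.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Direct encoding: every step builds a constructor that a consumer
-- must pattern-match on.
data Prod o = Halt | Emit o (Prod o)

-- Continuation (Church) encoding: a producer *is* its own fold.
newtype ProdC o = ProdC { runProd :: forall r. r -> (o -> r -> r) -> r }

-- Produce the integers lo..hi.
upC :: Int -> Int -> ProdC Int
upC lo hi = ProdC (\halt emit ->
  let go n | n > hi    = halt
           | otherwise = emit n (go (n + 1))
  in go lo)

-- No intermediate structure: mapping just pre-composes the emit continuation.
mapC :: (a -> b) -> ProdC a -> ProdC b
mapC f p = ProdC (\halt emit -> runProd p halt (emit . f))

-- Interpret a producer as a list.
toListC :: ProdC o -> [o]
toListC p = runProd p [] (:)
```

Stacked `mapC`s fuse into one pass over the elements with no `Emit` cells in between, which is the heart of the speed-up.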
A unified approach for solving seven programming problems
Racket is at it again but with a secret weapon, miniKanren. If you view a computation as a “logical” relationship between a program and some outputs, it is very reasonable to ask your solver to find the programs which give you the right outputs. And find, for example, Quines which are programs which output themselves!
Generic functional parallel algorithms: parallel scan
Scans, which compute the prefix sums of a list of numbers, are an important ingredient in some algorithms, like regular expression search. If they can be performed in parallel, even better! This talk shows that a scan algorithm can be defined for generic data structures built from sums, products and functor composition. Then, by adjusting the decomposition, you get different trade-offs between parallelism and the work needed to combine results.
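The talk derives this generically over functor structure; the plain list case already shows the key combination step (a simple divide-and-conquer sketch of mine, not the paper's generic derivation): scan each half independently (in parallel, in principle), then shift the right half by the total of the left.

```haskell
-- Inclusive prefix sums, divide-and-conquer style.
scan :: [Int] -> [Int]
scan xs
  | length xs <= 1 = xs
  | otherwise =
      let (ls, rs) = splitAt (length xs `div` 2) xs
          ls'      = scan ls           -- these two recursive calls are
          rs'      = scan rs           -- independent: parallel in principle
      in ls' ++ map (+ last ls') rs'   -- adjust the right half by the
                                       -- total of the left half
```

Changing how the input is decomposed (pairs of halves, many small chunks, etc.) changes how much runs in parallel versus how much adjusting work the combine step does, which is exactly the trade-off the talk explores.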
Effect-driven quickchecking of compilers
A very entertaining presentation showing that it is entirely possible to generate well-typed terms to test a compiler.
Compiling to categories
My favourite talk this year. What if you considered `lambda` and `apply` in the lambda calculus as 2 operations which could be interpreted differently? Conal Elliott developed a GHC plugin to transform Haskell expressions into morphisms in other categories, where composing functions becomes composing morphisms. What does that give us? Instead of applying functions and getting results, we can generate circuits computing them, differentiate them, or run them over intervals instead of plain values.
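Conal's plugin reinterprets `lambda` and `apply` themselves, which I won't reproduce here; but plain overloading already gives a taste of "one expression, many interpretations". A sketch of mine (not Conal's code) where the same polymorphic function runs on numbers or on intervals:

```haskell
-- A closed interval of doubles.
data Interval = I Double Double deriving (Eq, Show)

-- Interval arithmetic: each operation returns the tightest interval
-- containing all possible results.
instance Num Interval where
  I a b + I c d  = I (a + c) (b + d)
  I a b - I c d  = I (a - d) (b - c)
  I a b * I c d  = let ps = [a * c, a * d, b * c, b * d]
                   in I (minimum ps) (maximum ps)
  negate (I a b) = I (negate b) (negate a)
  abs (I a b)
    | a >= 0    = I a b
    | b <= 0    = I (negate b) (negate a)
    | otherwise = I 0 (max (negate a) b)
  signum (I a b) = I (signum a) (signum b)
  fromInteger n  = I (fromInteger n) (fromInteger n)

-- One definition, two interpretations: numbers, or ranges of numbers.
f :: Num a => a -> a
f x = x * x + 3
```

The plugin goes much further: it reinterprets the full lambda calculus rather than just the `Num` operations, so ordinary function definitions, not just arithmetic, get new meanings.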
Staged Generic Programming
Generic programming is cool: write a few lines of code and let the compiler figure out the right data structure and / or program. Unfortunately the performance can be very poor compared to hand-written code. With staging we can let the compiler be smarter and generate “unrolled” code which performs as well as the hand-written version.
I was also glad to get to talk to Jeremy Yallop, the presenter, who is a very nice guy, now working at Docker on unikernels, something to keep an eye on!
Whip: higher-order contracts for modern services
This looks more like a CUFP talk, with real applications in industry :-). The idea is to install some “decorators” on web services to check where contracts are being broken. They have a nice website and I am tempted to try this at work.
Haskell Symposium
Algebraic graphs with Class
A simple but very powerful way to describe graphs “algebraically”. Some atoms: the empty graph and the singleton graph, and 2 operations: `overlay` (a sort of addition) and `connect` (a sort of multiplication). Based on these you can represent any graph and write any algorithm against that representation, which can then be optimised for different purposes.
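The core of the representation fits in a few lines. A stripped-down sketch along the lines of the paper (names approximate, nothing like the full library):

```haskell
import Data.List (nub, sort)

data Graph a = Empty
             | Vertex a
             | Overlay (Graph a) (Graph a)  -- union of vertices and edges
             | Connect (Graph a) (Graph a)  -- overlay, plus every edge
                                            -- from the left to the right
edge :: a -> a -> Graph a
edge x y = Connect (Vertex x) (Vertex y)

vertexList :: Ord a => Graph a -> [a]
vertexList = nub . sort . vs
  where
    vs Empty         = []
    vs (Vertex x)    = [x]
    vs (Overlay g h) = vs g ++ vs h
    vs (Connect g h) = vs g ++ vs h

edgeList :: Ord a => Graph a -> [(a, a)]
edgeList = nub . sort . es
  where
    es Empty         = []
    es (Vertex _)    = []
    es (Overlay g h) = es g ++ es h
    es (Connect g h) =
      es g ++ es h ++ [(x, y) | x <- vertexList g, y <- vertexList h]
```

The nice part is that the four constructors obey algebraic laws (`Overlay` is commutative and associative, `Connect` distributes over it), which is what lets you optimise a graph expression before interpreting it.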
Quickspec / Speculate
Those 2 tools will generate the equations that your API is supposed to satisfy. For example, given `[]`, `++` and `reverse`, they should be able to generate `reverse ([] ++ xs) == reverse xs`. Even better, they can now generate implications and inequalities!
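A toy of the idea (nothing like the real tools, which enumerate terms from a signature and prune candidate laws far more cleverly): pair up candidate expressions and keep those that agree on a handful of sample inputs.

```haskell
-- Candidate terms over the signature, paired with a printable name.
sig :: [(String, [Int] -> [Int])]
sig = [ ("reverse . reverse", reverse . reverse)
      , ("id",                id)
      , ("reverse . ([]++)",  reverse . ([] ++))
      , ("reverse",           reverse)
      ]

-- A handful of sample inputs to test candidate equations on.
samples :: [[Int]]
samples = [[], [1], [2, 1], [1, 2, 3]]

-- Conjecture an equation whenever two terms agree on all samples.
conjectures :: [(String, String)]
conjectures =
  [ (n1, n2)
  | ((n1, f), i) <- zip sig [0 :: Int ..]
  , (n2, g)      <- drop (i + 1) sig
  , all (\xs -> f xs == g xs) samples
  ]
```

The real tools build the terms themselves from the signature and use proper random testing, so agreeing on samples is evidence rather than proof, but the shape of the output is the same: conjectured laws about your API.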
Composable networks and remote monads
The remote monad explores different ways of “grouping” operations when addressing remote services. After having tried to do something similar for `eff`, I appreciate the subject a lot more :-).
Testing with crowbar
An impressive example of finding bugs in a Unicode library in 10 minutes which QuickCheck cannot find in a week. The secret? Fuzzing and code coverage: generate random sequences, turn them into test cases, and mutate the sequences until you reach some “interesting” part of the code. Those parts are likely to contain bugs.
ML Family workshop
State machines all the way down
My periodic reminder that I need to find some time to use Idris. Even more so after Edwin Brady said that state-machine programming, with the type system tracking the before/after states, subsumes the effects library! He also told me that he was implementing the Idris compiler in Idris, and that this was a good example for him of when not to go too far with types.
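Idris tracks such state machines with dependent types; even in Haskell a watered-down version of the idea works with `DataKinds` and a GADT (my sketch, not Edwin's code): each command is indexed by the state before and after it runs, so invalid sequences simply don't typecheck.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data DoorState = Open | Closed

-- A command indexed by its pre- and post-state.
data Cmd (pre :: DoorState) (post :: DoorState) where
  OpenDoor  :: Cmd 'Closed 'Open
  CloseDoor :: Cmd 'Open 'Closed
  Knock     :: Cmd 'Closed 'Closed
  Seq       :: Cmd a b -> Cmd b c -> Cmd a c

-- A well-typed program: knock, open, close.
program :: Cmd 'Closed 'Closed
program = Knock `Seq` OpenDoor `Seq` CloseDoor

-- This would be rejected at compile time:
--   bad = OpenDoor `Seq` OpenDoor

-- A trivial interpreter, just to observe the sequence of actions.
run :: Cmd a b -> [String]
run OpenDoor  = ["open"]
run CloseDoor = ["close"]
run Knock     = ["knock"]
run (Seq a b) = run a ++ run b
```

What Idris adds on top is that the post-state can *depend on run-time results* (did the login succeed or fail?), which is where the dependent types really pay off.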
Effects without monads: non-determinism
Oleg Kiselyov is back, implementing non-determinism with TTFI (typed tagless final interpreters). This example is particularly interesting because he shows that monads are not necessary to interpret this mini-language. Not only that, but they cannot even be used if your interpretation generates code, because you cannot write code which will write code!
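The tagless-final style is easy to show in miniature (a Haskell rendering of the style, not Oleg's actual code): the language is a type class, each interpreter is just an instance, and notably neither interpreter below is a monad.

```haskell
-- The "syntax" of the mini-language: a class of operations,
-- with no fixed datatype for terms.
class NonDet repr where
  lit   :: a -> repr a
  fail' :: repr a
  amb   :: repr a -> repr a -> repr a   -- non-deterministic choice

-- Interpreter 1: enumerate all possible results.
newtype L a = L { runL :: [a] }

instance NonDet L where
  lit a             = L [a]
  fail'             = L []
  amb (L xs) (L ys) = L (xs ++ ys)

-- Interpreter 2: just count the outcomes. Clearly not a monad —
-- it doesn't even look at the values.
newtype C a = C { runC :: Int }

instance NonDet C where
  lit _           = C 1
  fail'           = C 0
  amb (C m) (C n) = C (m + n)

-- A term of the language, polymorphic in its interpretation.
coin :: NonDet repr => repr Bool
coin = amb (lit True) (lit False)
```

A code-generating interpreter would be a third instance whose carrier is strings or AST fragments; that is where a monadic interface stops being an option, as the talk argues.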
Bioinformatics: the typed tagless final way
A concrete example of people using TTFI in real life for large and extensible DSLs.
CUFP
Bonsai: DSL for serverless decisioning
This DSL is pretty simple, just decision trees (`if / then / else`), but compiled for maximum throughput for the ad industry.
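A decision-tree DSL is a few lines in any functional language. A hypothetical sketch (the names and the ad-flavoured example are mine, not Bonsai's):

```haskell
-- A decision tree: predicates at the nodes, outcomes at the leaves.
data Decision ctx a = Leaf a
                    | If (ctx -> Bool) (Decision ctx a) (Decision ctx a)

-- Walk the tree for a given request context.
decide :: Decision ctx a -> ctx -> a
decide (Leaf a)   _ = a
decide (If p t f) c = decide (if p c then t else f) c

-- A made-up ad-serving context and policy.
data Ctx = Ctx { country :: String, bid :: Double }

policy :: Decision Ctx String
policy = If ((== "US") . country)
            (If ((> 1.0) . bid) (Leaf "premium") (Leaf "standard"))
            (Leaf "skip")
```

The interesting engineering in the talk is not this interpreter but compiling such trees down to branch-predictable code to hit the throughput the ad industry needs.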
Gens N’ Roses: appetite for reduction
Jacob Stanley’s talk on Hedgehog, where shrinks come for free and failures are beautiful. A “must-use” for Haskell users.
Haskell implementors workshop
Syntactic musings
This was a lightning talk but it made me laugh. These are proposals to improve Haskell syntax with almost zero chance of being adopted :-):
How to write `bracket` without parentheses. Instead of

    bracket
      (acquire some resources)
      (do your stuff which might be
       pretty long)
      (and clean up)

go (yes, bullet points!):

    bracket
      . acquire some resources
      . do your stuff which might be
        pretty long
      . and clean up
Then, a way to set a value for a bunch of functions:

    context fixed (value) where
      add a = value + a
      mul a = value * a

to avoid repeating `value` and to avoid accidentally messing with it. This screams “module/functor envy” to me.
The last one is crazy but brilliant: do away with grouping parentheses by removing whitespace where you want to indicate higher precedence:

    a+b * c+d

Why not :-)?
An experiment in fragment-based code distribution
I thought this idea could never work but now I am intrigued. Following Joe Armstrong’s “Why do we need modules at all?”, Philipp Schuster implemented a build system for Haskell where each program is decomposed into “slices” containing just one function and its dependencies, everything identified by numbers. The advantages: compile and package just what you need, and never care about version numbers. But the job of curating all of that into a coherent snapshot, oh boy! As a data point, running anything on Scotty requires 12,000 slices.
Conclusion
A very intense week, as in the previous editions I have been to. I don’t know yet what will stick with me and influence my future work, but there’s plenty to think about. I also enjoyed very much the discussions I had with Jonathan and Philipp, the authors of scala-effekt.