The computational view of monads

I originally wrote this article as a comment on the programming subreddit; however, I’ve found myself referring to it a few times since then, so I reproduce it here for posterity. It attempts to motivate monads for people coming from imperative, stateful languages (already living in IO).


In Haskell, a purely functional language, we use monads to provide a library-based approach to sequential, imperative evaluation: the famous IO monad. No wonder Haskell people talk about it a lot: library-based imperative programming in a purely functional language is fun.
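
For concreteness, here is a minimal sketch (not from the original post) of what library-based sequencing looks like: ordinary IO actions chained by the monad’s bind, written with do-notation.

main :: IO ()
main = do
  putStrLn "What is your name?"      -- first effect
  name <- getLine                    -- its result feeds the next action
  putStrLn ("Hello, " ++ name)       -- the order is fixed by bind (>>=)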

But monads don’t stop there! In languages with built-in sequential evaluation, the IO monad must seem boring or silly; no wonder it all seems like a puzzle.

This is where it gets interesting. Since monads provide an abstraction over evaluation order, they can also be used, in other languages and in Haskell, to implement *non*-sequential evaluation orders as a library!
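
As a concrete example (a sketch, not from the original text), the list monad gives non-deterministic evaluation as a plain library: each bind runs the rest of the computation once for every alternative. The function name pairsSummingTo below is just illustrative.

import Control.Monad (guard)

-- Non-deterministic "choice": every x is paired with every y,
-- and guard prunes the branches we don't want.
pairsSummingTo :: Int -> [(Int, Int)]
pairsSummingTo n = do
  x <- [1 .. n]
  y <- [1 .. n]
  guard (x + y == n)
  return (x, y)

main :: IO ()
main = print (pairsSummingTo 5)   -- [(1,4),(2,3),(3,2),(4,1)]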

So instead of special language support for, say, exceptions and callcc, you just implement the corresponding monads as a library, and get those features for free.

Perhaps this gives non-Haskell people a flavour of why they’re so useful. Continuations as a library! Non-deterministic evaluation as a library!
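
To make “continuations as a library” concrete, here is a small sketch using callCC from the Cont monad in the standard mtl/transformers libraries; the divide function is just an illustrative name, not something from the original post.

import Control.Monad (when)
import Control.Monad.Cont (Cont, callCC, runCont)

-- callCC captures the current continuation as the function 'exit',
-- which we call to escape early; no special language support needed.
divide :: Int -> Int -> Cont r (Either String Int)
divide x y = callCC $ \exit -> do
  when (y == 0) $ exit (Left "division by zero")
  return (Right (x `div` y))

main :: IO ()
main = do
  print (runCont (divide 10 2) id)   -- Right 5
  print (runCont (divide 10 0) id)   -- Left "division by zero"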

So just as it’s well known that with continuations you can implement threaded control or exceptions as a library, with monads you can implement continuations themselves as a library, along with all the other fun toys. Then, using monad transformers, you can compose separate monads, providing, say, exception handling over a state encapsulation: you can specify precisely which programming language concepts a particular program requires.
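
As a sketch of that composition (using the standard mtl transformers, with illustrative names not taken from the original post), here is exception handling layered over a state encapsulation: ExceptT on the outside, State underneath.

import Control.Monad.Except (ExceptT, runExceptT, throwError)
import Control.Monad.State  (State, get, put, runState)

-- ExceptT adds failure on top of an Int-valued state.
type Counter a = ExceptT String (State Int) a

tick :: Counter ()
tick = do
  n <- get
  if n >= 3
    then throwError "counter overflow"   -- exception layer
    else put (n + 1)                     -- state layer

main :: IO ()
main = do
  print (runState (runExceptT (tick >> tick)) 0)                  -- (Right (),2)
  print (runState (runExceptT (tick >> tick >> tick >> tick)) 0)  -- (Left "counter overflow",3)

The stacking order is itself a design choice: with ExceptT over State the final state survives a thrown error, whereas stacking the state transformer over the exception monad would discard it on failure.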

Users of other languages are likely, and rightly so, to be less interested in the monadic solution to IO, and far more interested in things like library-based continuations or exceptions (see, for example, monadic delimited continuations in OCaml).

Hope that gives a hint towards what this monad stuff is all about.

Building GHC in under 8 minutes

In response to my last post, Simon Marlow suggested building the first GHC stage with -O, in the hope that an optimised stage-1 compiler would in turn be faster at compiling the libraries and the second stage. Here’s the result:

With the following build.mk:

SRC_HC_OPTS     = -H64m -Onot -fasm
GhcStage1HcOpts = -O -fasm
GhcStage2HcOpts = -Onot -fasm
GhcLibHcOpts    = -Onot -fasm
GhcLibWays      =
SplitObjs       = NO

On a 2-processor, 4-core Linux machine, running with -j10:

make -j10 > /dev/null  1015.77s user 155.19s system 249% cpu 7:49.50 total

Great! A new GHC build record (I think). Can’t wait till the 16 core machine arrives…