Today at HacPDX I did more work on the Judy bindings described yesterday. Here is judy 0.2.1 (a lunchtime release). These are bindings to the (in)famous Judy arrays library, allowing Haskell keys and values to be stored in fast, mutable maps and hash structures. You can currently index only via words (i.e. you have to hash keys yourself), and you can store either value types or pointers to Haskell values allocated with stable pointers.
Today, I started by filling out the API some more and benchmarking how each API function scales. The functions I’m interested in are:
```haskell
size   :: JudyL a -> IO Int
insert :: JA a => Key -> a -> JudyL a -> IO ()
lookup :: JA a => Key -> JudyL a -> IO (Maybe a)
delete :: Key -> JudyL a -> IO ()
```
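To get a feel for the interface before measuring it, here is a minimal usage sketch. The `Data.Judy` module name and the `new` constructor are taken from the released judy package rather than spelled out above, so treat them as assumptions:

```haskell
import qualified Data.Judy as J

main :: IO ()
main = do
    j <- J.new :: IO (J.JudyL Int)  -- fresh, empty judy array
    J.insert 42 100 j               -- key 42 now maps to 100
    v <- J.lookup 42 j
    print v                         -- Just 100
    J.delete 42 j
    n <- J.size j
    print n                         -- 0
```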
Let’s see how the performance of these functions scales with input. Benchmarking is performed via criterion (newly released to the public!), with random input keys and values generated from the Mersenne twister.
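A harness along these lines can reproduce the insert measurements. This is a sketch, not the post’s actual benchmark code: it assumes the mersenne-random package’s `System.Random.Mersenne` interface for key generation, and uses criterion’s `defaultMain`/`bench`/`whnfIO` style, which may differ from the API of the 2009 release:

```haskell
import Criterion.Main
import qualified Data.Judy as J
import System.Random.Mersenne (getStdGen, randoms)

-- Insert the given random keys into a fresh judy array,
-- then force its size so the work isn't discarded.
buildJudy :: [Word] -> IO Int
buildJudy ks = do
    j <- J.new :: IO (J.JudyL Int)
    mapM_ (\k -> J.insert k 1 j) ks
    J.size j

main :: IO ()
main = do
    g  <- getStdGen
    ks <- randoms g :: IO [Word]    -- lazy stream of random keys
    defaultMain
        [ bench "insert 100k" $ whnfIO (buildJudy (take 100000  ks))
        , bench "insert 1M"   $ whnfIO (buildJudy (take 1000000 ks))
        ]
```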
Implementation methodology
My general strategy for constructing a high-performance Haskell library is to:
- Pick a candidate benchmark set.
- Pick a good candidate data structure to make the benchmark faster.
- Carefully implement the type, or the binding to the foreign type, and its representation in Haskell.
- Carefully abstract away any unsafety in the low-level interface via type or language features.
- Function-by-function, check by eye the optimized code that GHC produces (with ghc-core), looking in particular at inlining, specialization and unboxing.
- Function-by-function, benchmark the absolute performance as input grows (with criterion), with reference to model structures.
- Use QuickCheck to show correctness against a model, with HPC to confirm code coverage of the test (a sketch of such a model test follows this list).
- Document extensively with haddock.
- Collect ideas on how to generalize the interface (via type classes, more flexible types, representation-changing interfaces via type families).
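As an example of the model-based testing step, here is a sketch of a QuickCheck property checking the bindings against Data.Map as the model. It is hypothetical test code, assuming QuickCheck’s monadic interface and an Arbitrary instance for Word:

```haskell
import qualified Data.Judy as J
import qualified Data.Map  as M
import Test.QuickCheck
import Test.QuickCheck.Monadic

-- Model test: a judy array built from a list of key/value pairs
-- should agree with a Data.Map built from the same pairs.
prop_lookup_model :: [(Word, Int)] -> Word -> Property
prop_lookup_model kvs k = monadicIO $ do
    j <- run (J.new :: IO (J.JudyL Int))
    run $ mapM_ (\(key, v) -> J.insert key v j) kvs
    actual <- run (J.lookup k j)
    -- Data.Map.fromList keeps the last value for duplicate keys,
    -- matching judy's overwrite-on-insert semantics.
    assert (actual == M.lookup k (M.fromList kvs))

main :: IO ()
main = quickCheck prop_lookup_model
```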
size
The size function tells you how many keys are in the Judy array. Empirically, it is O(1): measured here on randomized arrays with Int keys and values, the cost is the same whether the structure holds 100k, 1M or 10M elements. size is fast and scales.
lookup
A common operation on associative arrays is lookup(). It had better be fast, and it had better not degrade as the structure grows.
If we plot the mean lookup times against N, we see that lookup time grows very slowly, taking only twice as long when the data is 100 times bigger. (N.B. I forgot to change the titles on the graphs above.)
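A harness for this kind of measurement times lookups against pre-built arrays of increasing size. Again a hypothetical sketch: it uses sequential keys for brevity, where the post’s measurements used random ones:

```haskell
import Criterion.Main
import qualified Data.Judy as J

-- Build a judy array holding keys 0..n-1.
table :: Int -> IO (J.JudyL Int)
table n = do
    j <- J.new
    mapM_ (\k -> J.insert k 0 j) (take n [0..])
    return j

main :: IO ()
main = do
    small <- table 100000       -- 100k elements
    mid   <- table 1000000      -- 1M elements
    big   <- table 10000000     -- 10M elements
    defaultMain
        [ bench "lookup/100k" $ whnfIO (J.lookup 12345 small)
        , bench "lookup/1M"   $ whnfIO (J.lookup 12345 mid)
        , bench "lookup/10M"  $ whnfIO (J.lookup 12345 big)
        ]
```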
delete
Unsurprisingly, delete has an almost identical profile to insert: the time to delete an element grows very slowly with N, and in absolute terms it is very fast.