This is a partial transcription of an extended-abstract talk I gave at the TyDe 2018 Type-Driven Development workshop, co-located with ICFP.
The main idea was to revisit an old concept — monoid comprehensions — and to explore what it gives us in the context of a functional programming language with support for type classes, as well as how it compares to the more traditional monad comprehension approach.
In summary, we argue that for expressing queries over collections of data, monoid comprehension can be more flexible, simpler, more efficient, and safer than its monadic counterpart. I substantiate these points in the following blog post, and recapitulate them in the conclusion.
In a future follow-up to this post, I will transcribe the rest of my TyDe talk. We will see how to embed and extend the original monoid comprehension calculus (MCC), using the host language’s static type system to enforce some properties of monoid queries at compile time.
Background on Comprehension
Note: This section just explains some background on the ‘comprehension’ syntax. If you already know this stuff or are not interested, skip to the next section!
Origins
The comprehension syntax dates all the way back to Cantor’s work on set comprehension (circa 1874). A set comprehension is an expression of the form { x | f(x) }, which denotes the set of all x for which f(x) holds.
Unfortunately, this approach turned out to lead to inconsistencies, as it was too flexible. For example, consider set S defined as:
S = { x | x ∉ S }
Then, what should be the value of the expression “S ∈ S”? If it is true, then by definition it must be false, and vice versa. (This is known as Russell’s paradox.)
With their ZFC theory, Zermelo (1908) and Fraenkel (1922) restricted the syntactic form of set comprehensions, requiring the elements of the defined set to “range” over an existing set X, where neither X nor the predicate f is allowed to refer to the set being defined:
S = { x | x ∈ X, f(x) }
where S ∉ FV(X) ∪ FV(f)
This avoided the paradox, but required new axioms to be added to the theory, as a result of the reduction in the expressiveness of set comprehension.
On the other hand, restricting the scope of the comprehension syntax actually made it useful for programming! As early as the 1970s, programming languages started incorporating an analogous syntax for building lists of elements from pre-existing lists. For example:
L = [ (x, y) | x ← X, f(x), y ← Y, g(x,y) ]
which iterates over two existing lists X and Y and produces a list of pairs, while filtering the output using predicates f and g.
Now, of course someone had to ask “why just lists, though?”, and indeed Philip Wadler showed (1990) that the comprehension syntax could be generalized to any monad! And since in Haskell the input/output effect type IO is a monad, one could now conveniently write effectful expressions such as:
P = [ () | x ← getLine, () ← print x ]
where P is given type IO (), the type of programs that perform side effects and then return the unit value ().
As an aside, a different syntax was soon preferred for such monadic expressions, the “do”-notation, which looked closer to the imperative programming constructs that it could be used to express:
P = do { x ← getLine ; () ← print x ; return () }
Comprehension for Queries
We’ve known for a while that list comprehensions are a very natural way to express certain queries over relational databases [Trinder & Wadler (1991), Grust (2014)]. For example, consider the following SQL query:
q0 = SELECT name, age FROM Persons WHERE age > 18
…which can be expressed as the list comprehension:
q0 = [ (name p, age p) | p ← Persons, age p > 18 ]
However, not all SQL query features fit well in this encoding. For example, it is not easy to express grouping aggregations (the SQL GROUP BY form) and ordering (ORDER BY in SQL).
Thus, Peyton Jones & Wadler (2007) extended the syntax of list comprehension with generalizations of these two constructs, and Giorgidze et al. (2011) generalized them to all monadic comprehensions. Below is an example of that extended syntax (we explain it later in this post):
[ f b | (a,b) ← xs, then group by a using g ]
In parallel to all this, another interpretation of comprehension in the context of programming was devised by Fegaras et al. (1995, 2000), who generalized list comprehension to monoid comprehension, instead of going for monads. Here is what that approach looks like:
++{ (x, y) | x ← X, f(x), y ← Y, g(x,y) }
where ++ here is list concatenation. It indicates that we are building a list as a result of the comprehension, telling us how to combine intermediate results. We will explain in more detail how this works, and why I think it is often better at expressing queries than monad comprehension.
The goal of my talk was to shed some light on the possibilities of monoid comprehension in the context of functional programming with type classes. I believe the approach has not received the interest it deserves from our community, partly because it had never been properly embedded within the type system of a functional language.
What’s the Deal with Monoid Comprehension?
Before we go further, let’s quickly recall the semantics of monad comprehension and then explore that of monoid comprehension, based on some examples.
Semantics of List and Monad comprehension
Given a list comprehension such as:
q1 = [ g x y | x ← xs, y ← ys, f x y ]
the Haskell compiler will (at least conceptually) emit the following code:
q1 = concatMap (\x -> concatMap (\y -> if f x y then [g x y] else []) ys) xs
where concatMap :: (a -> [b]) -> [a] -> [b] takes a list, maps a list-returning function over it, and returns the concatenation of all the resulting lists.
The monadic generalization is very similar, but obviously uses monadic instead of list operators:
q1 = (>>= \x -> (>>= \y -> if f x y then return $ g x y else mzero) ys) xs
The first thing to notice is that this form of comprehension requires full homogeneity between the sources being iterated, as well as the result being built. In the example above, xs, ys, and q1 must all be of the same Monad type m.
Embedding Monoid comprehension in Haskell
In this post, I propose an embedding of monoid comprehension in Haskell where type class resolution is used to implicitly resolve which monoid instance we want to use.
That is, instead of writing ++{ e | … } (for the list monoid) or &&{ e | … } (for the boolean ‘and’ monoid) as shown in the previous section, we will write just { e | … } and let the type checker infer which monoid interpretation to use based on the type of expression e.
This may seem like a very minor change, but I argue that it actually unleashes the expressive power of monoid comprehension. Indeed, not all monoids can be characterized by a single operator, as some monoid instances are derived automatically from more primitive ones. We will see interesting examples of that towards the end of this post.
Semantics of Monoid comprehension
Given the following hypothetical syntax for monoid comprehension:
q2 = { g x y | x ← xs, y ← ys, f x y }
The translation would be very close to that of list (or monad) comprehension, except that instead of concatMap (or >>=) we use foldMap, and instead of wrapping the yielded expression into a singleton list (or return), we do not wrap it at all:
q2 = foldMap (\x -> foldMap (\y -> if f x y then g x y else mempty) ys) xs
where foldMap :: (Foldable f, Monoid m) => (a -> m) -> f a -> m takes a monoid-returning function, maps it over anything that can be folded, and returns the monoidal merge of all the results.
(Recall that a monoid is a type with an associative binary operation and an ‘identity’ element.)
The important thing to notice now is that, contrary to the monadic interpretation, this requires no homogeneity at all between the different parts of the query: xs and ys can be different Foldable types, and q2 can be any Monoid!
This becomes more apparent if we examine the type of the query after parameterizing it with all its free variables:
query f g xs ys = { g x y | x ← xs, y ← ys, f x y }
query :: (Monoid m, Foldable f1, Foldable f2) =>
  (a → b → Bool) → (a → b → m) → f1 a → f2 b → m
…where we can see that query can be invoked with any combination of two Foldable inputs and a Monoid output.
As a concrete example of monoid comprehension defined on heterogeneous types, the following query computes a sum from iterating over a list and a set:
{ Sum (length y + x) | x ← [1,2,3], y ← Set.fromList ["a","bb","ccc"] }
= Sum {getSum = 36}
(In Haskell, the Sum “newtype” is used to wrap a numeric type, giving it the monoid instance corresponding to summation; indeed, there are other possible monoid interpretations for these types, such as Product.)
Encoding
It is clear that the result of a monad comprehension [ e | c… ] that works on foldable inputs c… and builds a monoid result e (which is the case of most comprehensions on collections of data) should correspond to the result of an equivalent monoid comprehension { e | c… }, so we say that monoid comprehension “subsumes” monoidal monad comprehension.
On the other hand, we can encode a monoid comprehension with a list comprehension in a straightforward way. All we need to do is wrap the comprehension in a fold, and wrap each source in a toList (both from the Data.Foldable module):
q3 = { g x y | x ← xs, y ← ys, f x y }
q3 = fold [ g x y | x ← toList xs, y ← toList ys, f x y ]
so monoid comprehension offers no gain in terms of pure expressive power. But besides the slight reduction in syntactic overhead and the added flexibility afforded by heterogeneity, it still has several advantages over monad comprehension, as we will see below.
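As a sanity check, here is a runnable sketch of this encoding (the names are mine, and Data.Set stands in for an arbitrary non-list Foldable source):

```haskell
import Data.Foldable (fold, toList)
import Data.Monoid (Sum (..))
import qualified Data.Set as Set

-- The monoid comprehension { Sum (x + y) | x <- xs, y <- ys, x < y }
-- encoded as a list comprehension: fold over the list, toList on each source.
q3Encoded :: Set.Set Int -> [Int] -> Sum Int
q3Encoded xs ys = fold [ Sum (x + y) | x <- toList xs, y <- toList ys, x < y ]
```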
Update [2018.10.09]: as pointed out in the reddit comments, it is also possible to encode monoid comprehension using the continuation monad and conversions from the input collections that wrap up a call to foldMap in the continuation, so that the result is precisely equivalent to the desugaring of monoid comprehension presented above. This was demonstrated in a gist by user ElvishJerricco.
Space Efficiency
First of all, notice that using list comprehension to model database queries needlessly creates a lot of intermediate list data structures.
In the case of q3 as defined in the previous section, not only will we create one extraneous list for each source collection (if it is not already a list), but we will also create the Cartesian product of xs and ys as a list of size length xs × length ys, and fold the result into the monoid returned by g (though if that monoid is sufficiently lazy, the Cartesian product list will not be fully materialized in memory, and will instead be progressively produced and consumed in constant space).
Compare that with the monoid comprehension approach, which does not create any intermediate data structures at all and is thus asymptotically better in terms of space complexity.
So, why not comprehend monoids directly?
Update [2018.10.09]: it turns out that GHC is smarter than that, and desugars simple list comprehensions to something more efficient than calls to concatMap, so that it does not create intermediate data structures. (While concatMap is still the official semantics of list comprehension, the Haskell language definition does not mandate a particular desugaring.) Neat! However, monad comprehensions cannot use the same trick, and have to pay the cost of the >>=-based desugaring.
SQL-like Grouping and Ordering
One of the nice things about monoid comprehension is that it gives us constructs for grouping and ordering for free, without any additions to the syntax. But first, let us explain how it is currently done in monad comprehension.
Grouping in monad comprehension
In query languages such as SQL, it is possible to group the result of a query by some fields from the original input. For example, the SQL query:
q4 = SELECT Avg(p.Salary) FROM Person p GROUP BY p.Age
…computes, for each known age, the average salary of the persons of that age found in table Person.
The grouping construct of SQL does not fit well within the monadic interpretation of comprehension, which is why the list/monad comprehension syntax was extended to accommodate precisely that construct, along with a construct for ordering.
For example, the SQL query above can now be written in Haskell as:
[ average (map salary p) | p ← persons, then group by age p using groupWith ]
where groupWith is the function used to do the grouping, and average simply computes the average of a list. The trick is that after the `then group by` expression (and also on the left-hand side of the |), the meaning of the binding p changes from “the person currently iterated” to “the group of persons currently iterated”, where the groups are determined by the arguments given to `group by` and `using`.
So while p has type Person right after the “p ← persons” generator, it becomes a list of type [Person] after the `group by` statement, and in the “map salary p” expression.
While clever, this is a nontrivial and quite idiosyncratic mechanism (I don’t know of any other language in which a construct modifies the meaning of certain bindings without even mentioning them). And to add to the scoping conundrum, the variables defined in the comprehension are not in scope in the function passed to using, whereas they syntactically seem to be.
Moreover, if we want to remember the age associated with each average, we have to write:
q4 = [ (the a, average s) | p ← persons,
       let a = age p, let s = salary p, then group by a using groupWith ]
or slightly more succinctly:
q4 = [ (the age, average salary) | Person{age,salary} ← persons,
       then group by age using groupWith ]
where the :: Eq a => [a] -> a returns the head of a list and makes sure all the elements in that list are equal. This is somewhat unsatisfactory, because the is a partial function that throws runtime exceptions, and it is easy to get that part wrong. For example, writing age $ the p on the left-hand side of the | (where p is the [Person] variable) crashes at runtime; the correct version extracts age from the current Person (also named p) on the right-hand side of the |, and then writes the age on the left-hand side, as above.
Grouping in monoid comprehension
Now, what does it take to have grouping in monoid comprehension? Perhaps surprisingly, the answer is nothing at all! In fact, grouping is trivial in monoid comprehension.
Remember that all grouping does is aggregate elements into “buckets” based on some “key” extracted from the iterated elements. Well, this is what we would normally use a Map for: we just need a Map with a Monoid instance that combines the values of shared keys based on their own Monoid (or Semigroup) instance, such as the Data.HashMap.Monoidal type provided by the monoidal-containers package.
Here is how to write the example above using a monoid comprehension:
q4 = { Average (salary p) `groupBy` age p | p ← persons }
where Average is a newtype to aggregate values using their Fractional instance, and groupBy = flip Data.HashMap.Monoidal.singleton is just syntactic sugar for creating a singleton Map object.
Assuming the salary field of Person is a Float, the expression above results in a HashMap Int (Average Float) mapping each age to the average salary of the persons of that age.
Performance of grouping
According to the spec, the monad comprehension encoding of q4 essentially desugars to:
groupWith (\(age,salary) -> age) [ (age,salary) | Person{age,salary} ← persons ] >>= \ys ->
  case (fmap (\(age,salary) -> age) ys, fmap (\(age,salary) -> salary) ys) of
    (age,salary) -> return (the age, average salary)
Notice how this expression computes many intermediate lists. Worse, it traverses the entire data a grand total of at least six times! (The traversals are done by the call to groupWith, the conversion of the input list to a list of tuples, the two fmap applications, and finally the calls to the and average.)
On the other hand, the monoid comprehension form desugars to just:
foldMap (\Person{age,salary} -> Average salary `groupBy` age) persons
…which computes its result in a single list traversal and creates no intermediate lists at all. Of course, we are building a Map, with overall complexity n·log(n), but this complexity is also present in the monadic form, since groupWith needs to pre-sort its input list.
Note that list fusion will likely not remove all the extraneous intermediate lists of the monadic form (even when specialized to lists), since it makes non-linear use of lists like ys and the argument to groupWith.
Generality of grouping
You may be thinking that the extended monad comprehension syntax is more flexible, because it allows us to define separate aggregations on the result, as in:
q5 = [ (sum x, average y) | (x,y,z) ← ls, then group by z using groupWith ]
but in fact, this is trivially expressed in monoid comprehension as well, thanks to the fact that a tuple of monoids is also a monoid, as in:
q5 = { (Sum x, Average y) `groupBy` z | (x,y,z) ← ls }
…which, contrary to the monadic version, only performs a single traversal of the source data, accumulating the results of Sum and Average “in parallel”. In fact, there are other parallel monoid aggregations which cannot actually be expressed with a pure monad comprehension (using only the ‘group by’ extended syntax), such as:
{ ( Sum x `groupBy` y, Average y `groupBy` x ) | (x,y) ← ls }
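The underlying fact, that a tuple of monoids is itself a monoid combined component-wise, is easy to check in isolation (sumAndProduct is an illustrative name of mine, standing in for the Sum/Average pair above):

```haskell
import Data.Monoid (Product (..), Sum (..))

-- A pair of monoids is itself a monoid, so a single foldMap traversal can
-- run several aggregations "in parallel". Here: sum and product in one pass.
sumAndProduct :: [Int] -> (Sum Int, Product Int)
sumAndProduct = foldMap (\x -> (Sum x, Product x))
```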
Another interesting flexibility of the monad comprehension form is that one can use an arbitrary grouping function in the using clause, not just groupWith. This flexibility is also present in the monoid comprehension form, where we would simply use a Map type with a different Monoid instance as the result of a new groupBy' function.
Ordering
The extended monad comprehension syntax also has built-in support for processing the current results of the comprehension, which can be used for ordering them, dropping some elements, etc.
For example, the following expression first “drops” 1 element of its input list, groups the remaining elements while computing their sums, and then “takes” the first two such grouped sums:
q6 = [ sum x | (x,y) ← xs, then drop 1, then group by y using groupWith, then take 2 ]
The monoid comprehension syntax does not have a built-in way of doing that, but it is easily encoded by directly applying the processing functions in question:
q6 = take 2 { Sum x `groupBy` y | (x,y) ← drop 1 xs }
On the other hand, we can again define a new SQL-like construct by simply wrapping things into a type with the right monoid instance. In the case of ordering, all we have to do is use a monoidal map that orders its elements, such as Data.Map.Monoidal provided by the monoidal-containers package. We can define the syntactic sugar orderBy = flip Data.Map.Monoidal.singleton, and then use it as in:
q7 = { ( count, [ x ] `orderBy` Down y ) `groupBy` z | (x,y,z) ← xs }
…which has type HashMap z (Sum Int, Map (Down y) [x]). This query iterates over the (x,y,z) elements of xs, creating one group for each distinct z, and in each of these groups:

- counts the number of elements in it (where count = Sum 1);
- creates one list of x for each distinct y, ordering these lists by y in descending order (Down is a newtype that reverses the canonical Ord order of a type).
This query is interesting because it is neither expressible in SQL nor in pure (extended) monad comprehension, which demonstrates the versatility of using monoid instances to compose queries with various meanings.
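To see the Down trick in isolation, here is a small self-contained sketch using a plain ordered Map (the names are mine): keying the map by Down k makes its elements iterate in descending key order, which is exactly what orderBy relies on.

```haskell
import qualified Data.Map.Strict as Map
import Data.Ord (Down (..))

-- Using Down k as the map key reverses the order in which Map.elems
-- enumerates the values: ascending Down k means descending k.
valuesDescending :: Ord k => [(k, v)] -> [v]
valuesDescending kvs = Map.elems (Map.fromList [ (Down k, v) | (k, v) <- kvs ])
```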
Conclusions
In summary, we have seen that monoid comprehension is a useful alternative to list or monad comprehension, especially in the context of expressing queries over collections of data. Indeed, in this specific context it is:

- more flexible, since it allows iterating over heterogeneous data sources without coercions while being able to produce any monoid result in one go, and since it can express advanced queries (that monad comprehension cannot directly) simply by composing monoid instances together;
- simpler, as it has a straightforward desugaring, and does not need any extensions for expressing SQL-like queries (we have grouping and ordering “for free”);
- more efficient, because it requires fewer traversals of the processed data and fewer intermediate collections, in fact exhibiting asymptotically better space efficiency;
- safer, as it does not require the use of partial functions like the to perform basic tasks such as grouping while retaining the grouping key.
In an upcoming blog post, I will touch on the second part of my talk, which dealt with encoding the original monoid comprehension calculus (MCC) in the type system, to statically prevent unspecified/undesirable semantics of monoid queries in the context of heterogeneous comprehension.
There are also well-known optimization techniques for MCC, which could probably be applied via GHC rewrite rules.
Further reading
Trinder, Philip W. “Comprehensions, a Query Notation for DBPLs.” DBPL. 1991.