Category Theory For Programmers I - Monoids

This is a series of posts aiming to demystify some of the core concepts of category theory and by extension functional programming. This is part 1.

Category Theory

Category theory is the study of how maths works. If that sounds mental, that's because it is. Usefully the things discovered in category theory have some practical applications for programmers, and in some ways form the foundations of a lot of functional programming.

Learning about category theory, then, is to learn about functional programming. This is why it's important. In fact some languages, like Haskell, make very little sense until we can grasp these basic concepts. With that, let's take a few of these concepts in turn and see what they are all about.


Before we look at what a monoid is, let's talk about abstractions.

As programmers we work in abstractions. To utilise an abstraction is to find some commonality, isolate it, then share it. An example in Ruby:

  class Square
    def initialize(side)
      @side = side
    end

    def area
      @side * @side
    end

    def perimeter
      @side * 4
    end
  end

  class Circle
    def initialize(radius)
      @radius = radius
    end

    def area
      @radius * @radius * Math::PI
    end

    def perimeter
      2 * Math::PI * @radius
    end
  end

The commonality here is the methods they implement. This commonality hints that both Circle and Square are two variations of the same type of thing. If we were to name the type, we would call it a Shape. We can imagine more Shapes, but each one would have to have both an area and a perimeter. That is to say, no matter what else they may or may not have, any shape must have an area and a perimeter. We can isolate these properties and make them available to instances of those types by doing something like this:

  class Shape
    def area
      raise NotImplementedError
    end

    def perimeter
      raise NotImplementedError
    end
  end

  class Square < Shape
    def initialize(side)
      @side = side
    end

    def area
      @side * @side
    end

    def perimeter
      @side * 4
    end
  end

  class Circle < Shape
    def initialize(radius)
      @radius = radius
    end

    def area
      @radius * @radius * Math::PI
    end

    def perimeter
      2 * Math::PI * @radius
    end
  end

This means if we tried to make a Triangle we would have to implement an area and a perimeter method for it.

The flip side of this is that if we know something is a Shape, we know that it has an area and a perimeter method. If we know those things for certain, there are certain assumptions we can make about shapes. We could write this method, for example, and know that it will work for any shape we pass it, by definition:

    def fits_in_box?(shape)
      shape.area < 10
    end
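To see the guarantee in action, here is a condensed sketch of the same hierarchy (using Struct to keep it short, and Math::PI for pi; the sizes are just illustrative):

```ruby
# Condensed versions of the Shape subclasses above.
Square = Struct.new(:side) do
  def area
    side * side
  end
end

Circle = Struct.new(:radius) do
  def area
    radius * radius * Math::PI
  end
end

def fits_in_box?(shape)
  shape.area < 10
end

fits_in_box?(Square.new(2))  # => true  (area is 4)
fits_in_box?(Circle.new(2))  # => false (area is ~12.57)
```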

Pretty powerful stuff.


On to monoids. A monoid is a type of abstraction. To see which kind, we are going to look at different instances of things that are monoids and identify the commonality between them. Then we can look at what that buys us.

Addition of Numbers

What can we say about addition of numbers? The first thing we can note is that it is Binary. That is to say, it requires two numbers to add together, but returns precisely one number: 1 + 2 = 3. Three is a bigger number, but it is just one number.

The next thing we can say is that addition of numbers is Associative. What this means is if we add three numbers together, it doesn't matter if we do it like this:

    (1 + 2) + 3 == 6
Or like this:
    1 + (2 + 3) == 6
we'll get the same answer.

Note that associativity IS NOT ORDER. It is better described as grouping. For addition of numbers we can say that order also doesn't matter. What that means is

    1 + 2 == 3
    2 + 1 == 3
This is known as being Commutative.

The final thing to note about addition of numbers is that there is an Identity value. An identity value is a zero value: a value whereby if you add any other number to it, you get that number back unchanged. For addition of numbers the zero value is literally 0:

    1 + 0     == 1     # get 1 back unchanged
    45 + 0    == 45    # get 45 back unchanged
    0 + 45854 == 45854 # you get the picture
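These properties are exactly what Ruby's inject leans on when you give it a seed. A quick sketch:

```ruby
# The seed (0) is the identity value; + is the binary operation.
total = [1, 2, 3, 4].inject(0) { |sum, n| sum + n }  # => 10

# Because 0 is the identity, even an empty list reduces safely.
empty = [].inject(0) { |sum, n| sum + n }            # => 0
```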

Concatenating Arrays

Now let's look at another example - concatenating arrays.

To concatenate two arrays, you take all the elements out of the first array, then all the elements out of the second array, and put them all into a new array.

[1,2].concat([3,4]) == [1,2,3,4]

In order to do that you need two arrays, and we can see from above that we are returned one array. It's a bigger array, but it is just one array. So concatenation of arrays is Binary.

Now imagine concatenating three arrays. It doesn't matter if we do it like this:

    [1,2].concat([3,4]).concat([5,6]) == [1,2,3,4,5,6]

Or like this:

    [1,2].concat([3,4].concat([5,6])) == [1,2,3,4,5,6]

we'll get the same answer. Concatenating arrays is Associative.

Note here however that order DOES matter.

[1,2].concat([3,4]) != [3,4].concat([1,2])
Concatenating arrays is NOT commutative.

Now imagine concatenating an array with an empty array. We would take nothing out of the first array, and the values out of the second array, and make a new array containing just those values.

[].concat([1,2]) == [1,2]
This means whenever we concatenate an empty array we are returned the array we concatenate it with unchanged.
[1].concat([])     == [1]     # get [1] back unchanged
[45].concat([])    == [45]    # get [45] back unchanged
[].concat([45854]) == [45854] # you get the picture
Arrays, then, have an Identity Value - []
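Again, these properties are what lets us reduce a whole list of arrays down to one. A sketch (using + here, which concatenates without mutating):

```ruby
arrays = [[1, 2], [3, 4], [5]]

# [] is the seed/identity; + is the binary, associative operation.
flattened = arrays.inject([], :+)  # => [1, 2, 3, 4, 5]
```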

Merging of Hashes

The last example we will look at is the merge operation for hashes.

Binary - you need two hashes to merge together, and you are returned one hash as a result.

Identity - the identity value is an empty hash {}

Associativity - the grouping of merges doesn't matter:

  a = {a: :b}
  c = {c: :d}
  e = {e: :f}

  a.merge(c.merge(e)) == {a: :b, c: :d, e: :f}
  a.merge(c).merge(e) == {a: :b, c: :d, e: :f}

HOWEVER, note that order does matter. Imagine this scenario:

  first  = { foo: :bar }
  second = { foo: :fighters }
  first.merge(second) == {foo: :fighters}
  second.merge(first) == {foo: :bar}

When the two hashes being merged share a key, the order in which you merge the hashes determines which value wins.
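Since merge is binary and associative with {} as the identity, the same reduce pattern from the array example applies. A sketch:

```ruby
hashes = [{ a: 1 }, { b: 2 }, { c: 3 }]

# {} is the seed/identity; merge is the binary, associative operation.
merged = hashes.inject({}, :merge)  # => { a: 1, b: 2, c: 3 }
```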

Bringing it all together

The commonality here is these properties:

  • Identity Value
  • Associativity
  • Binary

This commonality hints that the adding of numbers, the concatenating of arrays, and the merging of hashes are three variations of the same type of thing. If we were to name that type of thing, we might call it being Reduceable. We can imagine more Monoids, but each one would have to be Reduceable by obeying these three rules. That is to say, no matter what else they may or may not have, any monoid must be associative, binary and have an identity value. We can isolate these properties and ensure they hold for new kinds of monoids by doing something like this:

require "spec_helper"
require "number"

RSpec.describe Number do
  let(:ten)            { Number.new(10) }
  let(:three)          { Number.new(3) }
  let(:five)           { Number.new(5) }
  let(:identity_value) { Number.new(1) }

  it "Is Binary" do
    expect(ten.multiply(three).class).to eq Number
  end

  it "Is Associative" do
    expect(ten.multiply(five).multiply(three)).to eq ten.multiply(five.multiply(three))
  end

  it "Has an Identity value" do
    expect(ten.multiply(identity_value)).to eq ten
    expect(identity_value.multiply(ten)).to eq ten
  end
end

class Number
  def initialize(value)
    @value = value
  end

  attr_reader :value

  def multiply(other)
    return other if value == 1
    return self if other.value == 1

    Number.new(value * other.value)
  end

  def ==(other)
    other.value == value
  end
end
This means we can be sure our new monoid would obey those laws.

If we are sure what we have are monoids, then we can do things which leverage those laws and know that our monoid type things will not break. But what kind of things can we do?

What do monoids buy us?

If we know that something is associative, then we know that, as long as order is maintained, we can break up the total workload in any arbitrary way and we will always get the same result. Imagine we had to compute an expensive addition operation. We could split the overall problem into many subproblems and pass each subproblem off to different threads. We could still guarantee the answer would be the same as if we hadn't, because we know the law of associativity holds!
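A minimal sketch of that idea, summing in chunks on separate threads and combining the partial results:

```ruby
numbers = (1..100_000).to_a

sequential = numbers.sum

# Split the workload arbitrarily; associativity guarantees that
# the grouping doesn't change the answer.
partials = numbers.each_slice(25_000).map do |chunk|
  Thread.new { chunk.sum }  # each thread reduces its own chunk
end

parallel = partials.map(&:value).sum  # combine the partial sums

parallel == sequential  # => true
```

(In MRI the global VM lock means this won't actually speed up a pure-Ruby sum, but the correctness guarantee is the point.)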

Pretty powerful stuff.

Back to Front(end)

This is the story of my recent foray back into the world of front end web development. I have been working almost exclusively as a Ruby developer for a little while now, but was recently offered the chance to head up a new project at work: implementing a frontend to a bunch of micro services that form our internal tools. In javascript.

I love Ruby, but I also love learning new things. And challenging myself… And self harm, apparently.

It will be fun I thought. I’ve made websites I thought. I know javascript, I thought.

You see it turns out the first thing I had to do was learn javascript. ES6 to be specific. With just a little bit of ES7. Or is that ES2015? Even though it’s 2017.

Either way javascript isn’t javascript anymore. Except to run it in most browsers it has to be javascript again, so looks like I’ll need Babel. And probably Webpack.

I’ll just read up on them.

Yes, it’s as scary as it looks.

It looks like React is a good choice for our use case, and there will be some relatively complex state management so we’ll probably use Redux. I’ll just learn about those.

I’m three articles deep, I think I know what JSX is and I am thoroughly convinced that html in javascript is actually fine. I’m even coming round to inline styles. Probably ready to give this baby a whirl!**

I’ll just have a look at the react docs and see how we go about starting a react app…

Nope! Nope. Absolutely nope. I’m using create-react-app. In fact, their README is a bit good, wish I’d seen that earlier.

I’m not going to be the only developer on this (I hope) and so when it’s my turn to cry in the corner, I’d like the developer that follows to have some conventions to, um, follow. Which means Linting!

It looks like on a different app we used SUIT CSS naming conventions, which worked out marvellously. Let me just brush up on what that is.

Luckily there weren't four thousand choices, so stylelint for CSS and eslint for javascript will do me nicely. You can even steal Airbnb's configuration! I made a couple of npm scripts to kick off linting, and ran them on the scaffolded app to ensure the settings were all good.

Oh. It wants me to call js files jsx. Seems reasonable.

Wait now the react server can’t find the js file. Because we changed it to jsx…

Okay we got there. One WORKING app, with react, that compiles AND lints our css and javascript. Only a small part of me is wishing I learnt Elm.

Except it’s not working exactly, because we haven’t actually written any code yet. I’m sure my product manager will understand.

Okay, code. So, tests? I just need to write a test! Should be easy enough….

Holy functional monads. My kingdom for RSpec!

To test in javascript you need a test runner, an assertion library, a mocking library, (instead of just rspec), a browser environment (this is all just for unit tests) AND a test framework. Plus some extra if you want code coverage.

That’s fine, there can’t be that many choices and anyway I’m sure the community has come to some kind of consensus. We only have to choose from:

Airbnb also wrote enzyme, because you need that too. I love those people.

I carefully selected Jest (because it came with create react app) and I’ll worry about E2E tests later.

So now we have it. A passing smoke test, with a component that I managed to apply linted styles to!

Just need the rest of the functionality now….

Part 2 is here.

**disclaimer: please don’t whirl babies

Back to Front(end) II

Javascript and Me, 2017

In Part 1, I set up a new react project with tests, linting and redux. Apparently that wasn’t enough because ‘the app has to actually do stuff’. Well, excuseee me Mr Product Manager.

Since single page apps aren't single pages anymore, the next thing we needed was routing. I think there's only react router (no I didn't check), I'll just read up on that.

Oh, it looks like I just spent half a day reading about the old react router. Turns out there's a version 3. Still, it's probably mostly similar…

Oh, looks like I just spent half a day reading about an old react router. Turns out there's a version 4. Still, it's probably nearly the same… I wonder how elm handles routing.

If you can sift through the blog posts and stack overflow questions to ensure you’re reading about the correct version, it actually makes a lot of sense. I wonder how we have to configure the server when we deploy this thing… I’ll let dev-ops worry about that.

By this point it had occurred to me that there's a small obligation for this thing to actually look nice, and that furthermore, I am not a designer. Not at all. As such, the hunt began for something like bootstrap** that looked not like bootstrap. And lo, the universe provided Material UI, and I saw that it was good.

As mentioned in part one this project is the front end to a bunch of microservices, which means we’re about to make plenty of API calls.

I initially assumed that axios calls in componentDidMount would be fine. Then I read this, and decided redux-thunk would be a good choice. Then I read this and decided redux-saga would be a good choice. Then I read this (written by the creator of redux) and decided to have a lie down.

Thunks can become a little difficult to test. I figured I could avoid that headache, preempt it even, by learning sagas. Sort of like the way you might avoid ever stubbing your toe by cutting off your foot. Or like when you strategically avoid the hangover by continuing to drink — if I never stop reading about these libraries, no one can make me use them!

I’m exaggerating of course, ES6 generators weren’t thattt hard. In fact the hardest part was convincing my co-workers that all this really was worth it. “All that just for easier tests?!” (Um, what do you mean just?). Just wait until I tell them we should scrap sagas anyway and re-write*** the whole thing with graphQL! And none of that REST nonsense, what is this, 2016?!

In the end the team switched to using Admin-on-rest, a library that implements all those things I learnt, in such a way that you don’t really need to learn them. Wonderful. So glad I learnt them.

The moral

There are two morals to this tale. Those more experienced amongst you will note that perhaps I was premature in including all these technologies. That I shouldn’t have been making choices upfront, but should have started the app and refactored into the libraries that made sense.

I completely agree. The truth is each library has its tradeoffs. That means, if you reach the pain point the library was birthed to solve, you already know that the library's tradeoffs will probably be a net win. You don't have to take it on faith that thunks become hard to test, because you know it already.

That’s exactly why more experienced devs can move quickly; they already know lots of the pros and cons of different approaches. But here’s my point, they didn’t just magically learn that. At some point that investment is needed.

From DHH in a recent quora question on why Rails is still worth learning in 2017:

“Back then the complexity merchant of choice was J2EE, but the complaints are uncannily similar to those levelled against JavaScript today. That people spent hours, if not days, just setting up the skeletons. The basic build configurations. Assembling and cherry-picking from lots of little libraries and frameworks to put together their own snowflake house variety.”

Oof. Right in the feels. He goes on to say:

“It’s somewhat surprising to me that despite the astounding success of Rails, that there haven’t been more competition for this proposition.”

I couldn’t agree more. Configuration comes at a cost, and that cost is complexity. Configuration kills productivity in the short term - that was the insight from Rails. It may often pay off in the long run (that was the insight from vim), but pursuing it doesn’t mean we have to sacrifice the short term gains of convention over configuration. The two aren’t mutually exclusive. Even Vim has sensible.vim.

Or as Alfred North Whitehead said:

It is a profoundly erroneous truism… that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them

Javascript, please, don’t make me think.

**in fairness I would have totally used bootstrap, but react bootstrap hadn’t made a version 1.0 yet. ***once we’ve actually written it of course.


Currying. Another in a long list of words that anyone outside of programming hears and just assumes you are making up.

Currying is a way to partially apply a function. What the hell does that mean?

It turns a function that takes two arguments into a function that: takes-one-argument-and-returns-a-function-which-takes-the-other-argument.



In javascript, instead of having this:

let add = function(x, y) {
  return x + y
}

we could do this:

let add = function(x) {
  return function(y) {
    return x + y
  }
}
We could then call it like so:
let addTen = add(10);

When we do this the 10 is passed in as x:

let add = function(10) {  // illustrative: imagine 10 substituted for x
  return function(y) {
    return 10 + y
  }
}

which means we are returned this function:

function(y) { return 10 + y };

So when you call:

addTen()

you are really calling:

function(y) { return 10 + y };

Which means if you do this:

addTen(4)

it's the same as:

function(4) { return 10 + 4 }  // 14
So our addTen() function always adds ten to whatever we pass in. We can make similar functions in the same way:
let addTwo = add(2)       // addTwo() will add two to whatever you pass in
let addSeventy = add(70)  // ... and so on...
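As an aside, Ruby gives you the same mechanics out of the box via Proc#curry. A minimal sketch:

```ruby
add = lambda { |x, y| x + y }.curry

add_ten = add.call(10)  # partially apply: x is now fixed to 10
add_ten.call(4)         # => 14

add_two = add.call(2)
add_two.call(5)         # => 7
```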

Curry! Huh! What is it good for...?

Okay so we get the gist of how currying works. But what on earth would we use it for?

Well, it turns out to be good for several things. First of all, let's write a function without currying.

var dinner = function(firstIngredient, secondIngredient, restOfTheFuckingIngredients) {
  return firstIngredient + secondIngredient + restOfTheFuckingIngredients
}

And if we wanted to make dinner we would do this:

var tuesdaysDinner   = dinner(chicken, rice, tumeric)
var wednesdaysDinner = dinner(beef, rice, tumeric)

Easy enough.

But we are lazy programmers. We hate repetition - we can do better! Rice and tumeric is time consuming to create. If we can create a massive batch of rice and tumeric, we could use it for both meals! But how could we do this...?

YOU GUESSED IT. With currying.

First of all let's turn our function that takes three arguments into a nested series of functions that each take one argument and return a function.

var partiallyAppliedDinner = function(firstIngredient) {
  return function(secondIngredient) {
    return function(lastIngredient) {
      return firstIngredient + secondIngredient + lastIngredient
    }
  }
}
Now we have curried our function, we can do this:

var prep = partiallyAppliedDinner(rice)

rice will be passed in as firstIngredient which means we are returned a function that looks like this:

function(secondIngredient) {
  return function(lastIngredient) {
    return rice + secondIngredient + lastIngredient
  }
}
which means we can do this:
var preppedDinner = partiallyAppliedDinner(rice)(tumeric)
and tumeric will be passed in as secondIngredient.

This is amazing. Our preppedDinner var evaluates to a function that looks like this:

function(lastIngredient) {
  return rice + tumeric + lastIngredient
}
So we can now do this:
var tuesdaysDinner   = preppedDinner(chicken)
var wednesdaysDinner = preppedDinner(beef)
We now have the expensive rice / tumeric operation memoized for re-use!

We have also discovered an abstraction. We could name this abstraction something like sideDish. We reduced duplication of a common operation (the adding of rice and tumeric) by abstracting it up to a function of its own - our preppedDinner variable. Crucially, notice how we didn't need to write any other functions to achieve this abstraction. Without currying we could have written two functions, an addRiceAndTumeric() and an addLastIngredient(). Then we could have done some sort of addRiceAndTumeric(rice, tumeric) + addLastIngredient(chicken) travesty. Observe:

Without Currying

var addRiceAndTumeric = function(rice, tumeric) {
  return rice + tumeric
}

var addLastIngredient = function(lastIngredient, restOfTheIngredients) {
  return lastIngredient + restOfTheIngredients
}

var tuesdaysDinner   = addLastIngredient(chicken, addRiceAndTumeric(rice, tumeric))
var wednesdaysDinner = addLastIngredient(beef, addRiceAndTumeric(rice, tumeric))

With Currying

var partiallyAppliedDinner = function(firstIngredient) {
  return function(secondIngredient) {
    return function(lastIngredient) {
      return firstIngredient + secondIngredient + lastIngredient
    }
  }
}

var tumericRiceAnd = partiallyAppliedDinner(rice)(tumeric)

var tuesdaysDinner   = tumericRiceAnd(chicken)
var wednesdaysDinner = tumericRiceAnd(beef)


To summarise, currying is the process of taking a function that takes multiple arguments and turning it into a nested series of functions that each take a single argument and return a function. Currying allows us to:

  1. memoize an expensive operation
  2. calculate part of a function before we have all of the parameters available
  3. achieve abstractions in functional paradigms
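Since the rest of this series leans on Ruby, here is the dinner example sketched with Ruby's built-in Proc#curry (the ingredient strings are stand-ins):

```ruby
dinner = lambda { |first, second, last| "#{first} + #{second} + #{last}" }.curry

# The shared prep, applied once and reused for both meals.
prepped = dinner.call("rice").call("tumeric")

tuesdays_dinner   = prepped.call("chicken")  # => "rice + tumeric + chicken"
wednesdays_dinner = prepped.call("beef")     # => "rice + tumeric + beef"
```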

All the tests you cannot see

I’ve been thinking about testing lately and recently came across a situation at work which served to illustrate brilliantly something I’ve been trying to articulate.

I work for an e-commerce website. As part of the backend we have batch files. These provide a way for non-technical staff to be able to add, update and remove things from the DB safely. For example, they might add information that links a product to a particular category.

I recently began implementing a way for Customer Service Agents to be able to add fraudulent email addresses to a table, such that orders made with those email addresses were marked as ‘risky’.

I wrote something like this:

  def process
    RiskyEmail.find_or_create_by!(email: contents)
  end

where contents was the contents of the batchfile.

Then I thought, wait I need a test. So I wrote:

  it "adds an email to the risky_emails table" do
    expect { validate_and_process(batchfile) }.to change { RiskyEmail.count }.from(0).to(1)
  end

Job done!

You might have already spotted my mistake, but instead of pointing it out yet let’s imagine I had written the test first. I would have put:

  it "adds an email to the risky_emails table" do
    expect { validate_and_process(batchfile) }.to change { RiskyEmail.count }.from(0).to(1)
  end

Then the code I would have written to pass it would have been something like:

  def process
    RiskyEmail.find_or_create_by!(email: "")
  end

Wait, now I need a new test. It's obvious that the production code isn't going to give me the behaviour I need if every time I process the batch file it adds the email "". Worryingly, this means that there is some kind of case my original test isn't covering. What could that be?

My next test would have looked something like:

  it "inserts the correct data" do
    batchfile = create(:batchfile, contents: "fraud@example.com")
    validate_and_process(batchfile)
    expect(RiskyEmail.first.email).to eq "fraud@example.com"
  end

The test that was missing was checking that the batchfile saved the data that we wanted. As it turned out, this was an important part to miss. Because of how the batchfiles were implemented, when I wrote:

  def process
    RiskyEmail.find_or_create_by!(email: contents)
  end

the contents was actually returning ["fraud@example.com"] - an array containing the email, rather than the email string itself.

Had I written the test first, my test would have failed and said umm buddy, you told us you wanted "fraud@example.com" but the code gave us ["fraud@example.com"]. I could have said "Oh yes" and marvelled at my own stupidity in silence. I would have put:

  def process
    RiskyEmail.find_or_create_by!(email: contents.first)
  end

there and then, instead of having to hastily add a new PR.

That would have been nice.

If you write your code first and then test it, you force yourself into having to imagine what kind of problems you will encounter.

If you write your test first, you no longer have to imagine what situations you might encounter because you will simply encounter them. If you are disciplined about only taking the smallest step needed to make your test pass, then you will expose ways your code could fail that mightn't have been obvious otherwise.