
In functional programming, a monad is a programming structure that represents computations. Monads are a kind of abstract data type constructor that encapsulate program logic instead of data in the domain model. A defined monad allows the programmer to chain actions together and build pipelines that process data in a series of steps, in which each action is decorated with additional processing rules provided by the monad; for example, a sequence of arithmetic operations can be controlled to avoid division by zero in intermediate results. Programs written in functional style can make use of monads to structure procedures that include sequenced operations,[1][2] or to define arbitrary control flows (like handling concurrency, continuations, side effects such as input/output, or exceptions).

Formally, a monad is constructed by defining two operations (bind and return) and a type constructor M that must fulfill several properties to allow the correct composition of monadic functions (i.e. functions that use values from the monad as their arguments). The return operation takes a value from a plain type and puts it into a monadic container of type M. The bind operation performs the reverse process, extracting the original value from the container and passing it to the associated next function in the pipeline.

A programmer uses an existing monad by composing monadic functions to define a new data-processing pipeline. The monad acts as a framework: it is a reusable behavior that decides the order in which the specific monadic functions in the pipeline are called, and manages all the undercover work required by the computation.[3] The bind and return operators interleaved in the pipeline are executed after each monadic function returns control, and take care of the particular aspects handled by the monad.

The name is taken from the mathematical monad construct in category theory, though the two concepts are not identical.

## History

Eugenio Moggi first described the general use of monads to structure programs.[4] Several people built on his work, including programming language researchers Philip Wadler and Simon Peyton Jones (both of whom were involved in the specification of Haskell). Early versions of Haskell used a problematic "lazy list" model for I/O, and Haskell 1.3 introduced monads as a more flexible way to combine I/O with lazy evaluation.

In addition to I/O, scientific articles and Haskell libraries have successfully applied monads to topics including parsers and programming language interpreters. The concept of monads along with the Haskell do-notation for them has also been generalized to form arrows.

Haskell and its derivatives were for a long time the only major users of monads in programming. There also exist formulations in Scheme, Perl, Racket, Clojure and Scala, and monads have been an option in the design of a new ML standard. More recently, F# has included a feature called computation expressions or workflows, which are an attempt to introduce monadic constructs within a syntax more palatable to programmers with an imperative background.[5]

Effect systems are an alternative way of describing side effects as types.

## Background

Some primary uses of monads in functional programming are to express input/output (I/O) operations and changes in state without using language features that introduce side effects.[6] Although a function cannot directly cause a side effect, it can construct a value describing a desired side effect that the caller should apply at a convenient time. In such a monad, a value can represent a state of the world. Passing a world state to a monadic function yields a new world state, changed according to the function's return value. The state created in this way can be passed to another function, thus defining a series of functions which apply in order as steps of state changes. This process is similar to how a temporal logic represents the passage of time using only declarative propositions.

In the following example, putStrLn and getLine are monadic functions defined in terms of the I/O monad (see below). The monad controls the changes of state in the input and output streams, and threads each world state from one line to the next in the do-block.

do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Nice to meet you, " ++ name ++ "!")


However, I/O and state management are by no means the only uses of monads. They are useful in any situation where the programmer wants to carry out a purely functional computation while a related computation is carried out on the side. In imperative programming the side effects are embedded in the semantics of the programming language; with monads, they are made explicit in the monad definition, thus avoiding errors by action at a distance.

The name monad derives from category theory, a branch of mathematics that describes patterns applicable to many mathematical fields. (As a minor terminological mismatch, the term monad in functional programming contexts is usually used with a meaning corresponding to that of the term strong monad in category theory, a specific kind of category-theoretical monad.)[4]

The Haskell programming language is a functional language that makes heavy use of monads, and includes syntactic sugar to make monadic composition more convenient. All of the code samples in this page are written in Haskell unless noted otherwise.

## Concepts

### Definition

A monad is a construction that, given an underlying type system, embeds a corresponding type system (called the monadic type system) into it (that is, each type in the underlying system has a corresponding monadic type). This monadic type system preserves all significant aspects of the underlying type system, while adding features particular to the monad.

The usual formulation of a monad for programming is known as a Kleisli triple, and has the following components:

1. A type construction that defines, for every underlying type, how to obtain a corresponding monadic type. In Haskell's notation, the name of the monad represents the type constructor. If M is the name of the monad and t is a data type, then "M t" is the corresponding type in the monad.
2. A unit function that maps a value in an underlying type to a value in the corresponding monadic type. The result is the "simplest" value in the corresponding type that completely preserves the original value (simplicity being understood appropriately to the monad). In Haskell, this function is called return due to the way it is used in the do-notation described later. The unit function has the polymorphic type t→M t.
3. A binding operation of polymorphic type (M t)→(t→M u)→(M u), which Haskell represents by the infix operator >>=. Its first argument is a value in a monadic type, its second argument is a function that maps from the underlying type of the first argument to another monadic type, and its result is in that other monadic type. The binding operation can be understood as having four stages:
1. The monad-related structure on the first argument is "pierced" to expose any number of values in the underlying type t.
2. The given function is applied to all of those values to obtain values of type (M u).
3. The monad-related structure on those values is also pierced, exposing values of type u.
4. Finally, the monad-related structure is reassembled over all of the results, giving a single value of type (M u).
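These four stages can be seen concretely in the list monad (defined in the Collections section below); the following sketch uses an illustrative monadic function `step`:

```haskell
-- A sketch of bind's four stages, using the list monad as the example.
step :: Int -> [Int]          -- a monadic function of type t -> M u
step x = [x, x * 10]

result :: [Int]
result = [1, 2, 3] >>= step
-- 1. the structure of [1,2,3] is "pierced", exposing the values 1, 2 and 3;
-- 2. step is applied to each, giving [1,10], [2,20], [3,30] :: [[Int]];
-- 3. those inner structures are also pierced, exposing values of type Int;
-- 4. the results are reassembled (here, by concatenation) into the single
--    value [1,10,2,20,3,30] :: [Int]
```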

In object-oriented programming terms, the type construction would correspond to the declaration of the monadic type, the unit function takes the role of a constructor method, and the binding operation contains the logic necessary to execute its registered callbacks (the monadic functions).

In practical terms, a monad (seen as special result values carried throughout the pipeline) stores function results and side-effect representations. This allows side effects to be propagated through the return values of functions without breaking the pure functional model. For example, Haskell's Maybe monad can have either a normal return value, or Nothing. Similarly, error monads (such as Either e, for some type e representing error information) can have a normal return value or an error value. How these behaviors apply to the Maybe type is explained in the Examples section below.

### Axioms

For a monad to behave correctly, the definitions must obey a few axioms.[7] (The ≡ symbol is not Haskell code, but indicates an equivalence between two Haskell expressions.)

• "return" acts approximately as a neutral element of >>=.
(return x) >>= f ≡ f x
m >>= return ≡ m

• Binding two functions in succession is the same as binding one function that can be determined from them.
(m >>= f) >>= g ≡ m >>= ( \x -> (f x >>= g) )


In the last rule, the notation \x -> defines an anonymous function that maps any value x to the expression that follows.

In mathematical notation, the axioms are:

$(\text{return} \; x) \gg\!= f \equiv f \; x$
$m \gg\!= \text{return} \equiv m$
$(m \gg\!= f) \gg\!= g \equiv m \gg\!= \lambda x \; . \; (f \, x \gg\!\!= g)$

The axioms can also be expressed in do-block style (see below):

do { f x } ≡ do { v <- return x; f v }

do { m }   ≡ do { v <- m; return v }

do { x <- m;
y <- f x;
g y }
≡
do { y <- do { x <- m; f x };
g y }
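The axioms can be spot-checked on concrete values. A minimal sketch in the Maybe monad, where f and g are arbitrary illustrative functions:

```haskell
f :: Int -> Maybe Int
f x = Just (x + 1)

g :: Int -> Maybe Int
g x = if x > 0 then Just (x * 2) else Nothing

leftIdentity, rightIdentity, associativity :: Bool
leftIdentity  = (return 3 >>= f) == f 3                            -- return is left identity
rightIdentity = (Just 3 >>= return) == Just 3                      -- return is right identity
associativity = ((Just 3 >>= f) >>= g) == (Just 3 >>= \x -> f x >>= g)
-- all three evaluate to True
```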


A monad can optionally define a "zero" value for every type. Binding a zero with any function produces the zero for the result type, just as 0 multiplied by any number is 0.

mzero >>= f ≡ mzero


Similarly, binding any m with a function that always returns a zero results in a zero

m >>= (\x -> mzero) ≡ mzero


Intuitively, the zero represents a value in the monad that has only monad-related structure and no values from the underlying type. In the Maybe monad, "Nothing" is a zero. In the List monad, "[]" (the empty list) is a zero.

An additive monad is a monad endowed with a monadic zero and an operation (called mplus) satisfying the monoid laws, with the monadic zero as unit. The operation has type M t → M t → M t (where M is the monad constructor and t is the underlying data type), satisfies the associative law and has the zero as both left and right identity. (Thus, an additive monad is also a monoid.)

## An introductory example as a control structure

The following example shows usage of a monad as a control structure in the well-known context of real division. It uses the Maybe monad, which is designed to handle exceptional cases in operations, to check for divisions by 0.

This requires the definition of a special division operator // to perform a sequence of divisions, each of which could result in an error. The // operator checks whether the divisor is 0, in which case it returns a special value Nothing that is chained through the other operations in the procedure. The // operator is defined in terms of the Maybe constructor and its corresponding bind and return operators.

The following pseudocode defines a safeDivisions function that performs a sequence of divisions and assignments to variables. Called with the input value 2 it returns the value 1; called with the input value 0 it attempts an invalid division by zero and returns the special value Nothing.

safeDivisions x = do
  val1 <- 6' // x'
  val2 <- val1' // 3'
  return val2

safeDivisions 2
-> 1'

safeDivisions 0
-> Nothing


The values 6', 3' and the variables x', val1', val2' are modified versions of the equivalent numbers transformed to match the correct type required by the // operator. Operations on these transformed values are augmented with the error-control aspect provided by the Maybe monad. In this particular case, the monad is used to define the behavior of a not-a-number special value, which is often found in programming languages as part of the language definition but in this case is provided as a library function.

### Motivation

Checking for the presence of a previous error is performed in the Maybe monad definition, so it doesn't need to be replicated in every new operator that uses it. The operator only needs to test for errors it produces, not previous errors in the pipeline; the monad itself abstracts in a reusable way the error-checking in composite functions.

The whole process in the example above is written in a functional programming style, thus retaining referential transparency. This implies that all computation steps, even jumps in the control flow, are expressed as a part of the program. Contrast this with exceptions in imperative programming in which the effects of changes in execution state are defined outside the programming language, encapsulated as opaque language primitives.

The division function used in the previous example is undefined for some known values, such as zero. Division might occur repeatedly in a calculation. A programmer might want to express a sequence of divisions without caring for error control, like the following one which returns the resistance of two electrical resistors in parallel:

-- par is a function that takes two real numbers and returns another.
-- It is an example taken from electrical engineering (see above) to
--   illustrate why implementing NaN-semantics on top of the current
--   division system is useful; it is not related to monads.
par :: Float -> Float -> Float -- resistance(r1), resistance(r2) -> resistance(r1 in parallel with r2)
par r1 r2 = 1 / ((1 / r1) + (1 / r2)) -- formula from electrical engineering


Instead of avoiding any errors by checking whether each divisor is zero, it might be convenient to have a modified division operator that does the check implicitly, as in the following pseudocode:

-- // is an operator that takes two "Maybe Float"s and returns another.
-- "Maybe Float" extends the Float type to represent calculations that may fail.
(//) :: Maybe Float -> Maybe Float -> Maybe Float
_ // Just 0 = Nothing
Just x // Just y = Just (x / y)
_ // _ = Nothing

parM :: Float -> Float -> Maybe Float
parM r1 r2 = 1' // ((1' // r1') +' (1' // r2'))
-- where 1', r1', r2' are the "Maybe" versions of 1, r1, r2, and +' "adds" Maybe Floats.
-- See the Maybe monad section below for details.


With the // operator, dividing by zero anywhere in the computation will result in the entire computation returning a special value of the Maybe monad called "Nothing", which indicates a failure to compute a value; when one // operator returns Nothing, the definition of the Maybe monad ensures that the rest of the expression will not be executed. Otherwise, the computation will produce a numerical result, contained in the other Maybe value, which is called "Just". The result of this division operator can then be passed to other functions. This concept of "maybe values" is one situation where monads are useful.

## do-notation

Although there are times when it makes sense to use the bind operator >>= directly in a program, it is more typical to use a format called do-notation (perform-notation in OCaml, computation expressions in F#), that mimics the appearance of imperative languages. The compiler translates do-notation to expressions involving >>=. For example, the following code:

a = do x <- [3..4]
       [1..2]
       return (x, 42)


is transformed during compilation into:

a = [3..4] >>= (\x -> [1..2] >>= (\_ -> return (x, 42)))


It is helpful to see the implementation of the list monad, and to know that concatMap maps a function over a list and concatenates (flattens) the resulting lists:

instance Monad [] where
  m >>= f  = concatMap f m
  return x = [x]
  fail s   = []


Therefore, the following transformations hold and all the following expressions are equivalent:

a = [3..4] >>= (\x -> [1..2] >>= (\_ -> return (x, 42)))
a = [3..4] >>= (\x -> concatMap (\_ -> return (x, 42)) [1..2] )
a = [3..4] >>= (\x -> [(x,42),(x,42)] )
a = concatMap (\x -> [(x,42),(x,42)] ) [3..4]
a = [(3,42),(3,42),(4,42),(4,42)]


Notice that the list [1..2] is not used. The lack of a left-pointing arrow, translated into a binding to a function that ignores its argument, indicates that only the monadic structure is of interest, not the values inside it, e.g. for a state monad this might be used for changing the state without producing any more result values. The do-block notation can be used with any monad as it is simply syntactic sugar for >>=.

The following definitions for safe division for values in the Maybe monad are also equivalent:

x // y = do
  a <- x  -- Extract the values "inside" x and y, if there are any.
  b <- y
  if b == 0 then Nothing else Just (a / b)

x // y = x >>= (\a -> y >>= (\b -> if b == 0 then Nothing else Just (a / b)))


A similar example in F# using a computation expression:

let readNum () =
  let s = Console.ReadLine()
  let succ, v = Int32.TryParse(s)
  if (succ) then Some(v) else None

let secure_div =
  maybe {
    let! x = readNum()
    let! y = readNum()
    if (y = 0)
    then None
    else return (x / y)
  }


The syntactic sugar of the maybe block would get translated internally to the following expression:

maybe.Delay(fun () ->
  maybe.Bind(readNum(), fun x ->
    maybe.Bind(readNum(), fun y ->
      if (y = 0) then None
      else maybe.Return(x / y))))


Given values produced by safe division, we might want to carry on doing calculations without having to check manually if they are Nothing (i.e. resulted from an attempted division by zero). We can do this using a "lifting" function, which we can define not only for Maybe but for arbitrary monads. In Haskell this is called liftM2:

liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
liftM2 op mx my = do
  x <- mx
  y <- my
  return (op x y)


Recall that arrows in a type associate to the right, so liftM2 is a function that takes a binary function as an argument and returns another binary function. The type signature says: If m is a monad, we can "lift" any binary function into it. For example:

(.*.) :: (Monad m, Num a) => m a -> m a -> m a
x .*. y = liftM2 (*) x y


defines an operator (.*.) which multiplies two numbers, unless one of them is Nothing (in which case it again returns Nothing). The advantage here is that we need not dive into the details of the implementation of the monad; if we need to do the same kind of thing with another function, or in another monad, using liftM2 makes it immediately clear what is meant (see Code reuse).
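A self-contained sketch of (.*.) in use, with the standard liftM2 from Control.Monad:

```haskell
import Control.Monad (liftM2)

(.*.) :: (Monad m, Num a) => m a -> m a -> m a
x .*. y = liftM2 (*) x y

ex1, ex2 :: Maybe Int
ex1 = Just 3 .*. Just 4   -- Just 12
ex2 = Just 3 .*. Nothing  -- Nothing: a missing operand poisons the result
```

Because (.*.) is defined for any monad, the same operator works unchanged in the list monad: `[1,2] .*. [10]` is `[10,20]`.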

Mathematically, the liftM2 operator is defined by:

$\text{liftM2} \colon \forall M \colon \text{monad}, \; (A_1 \to A_2 \to R) \to M \, A_1 \to M \, A_2 \to M \, R$
$= op \mapsto m_1 \mapsto m_2 \mapsto \text{bind} \; m_1 \; (a_1 \mapsto \text{bind} \; m_2 \; (a_2 \mapsto \text{return} \; (op \, a_1 \, a_2)))$

## Examples

### I/O

A monad for I/O operations is usually implemented in the language implementation rather than being defined publicly. The following example demonstrates the use of an I/O monad to interact with the user.

do
  putStrLn "What is your name?"
  name <- getLine
  putStrLn ("Nice to meet you, " ++ name ++ "!")


The do notation of this procedure resembles an imperative program. But the expansion of the do-block shows how the procedure is defined in pure functional terms thanks to the I/O monad, and makes explicit the information flow from one action to the next:

putStrLn "What is your name?" >>=
  (\_ ->
    getLine >>=
      (\name ->
        putStrLn ("Nice to meet you, " ++ name ++ "!")))


The pipeline structure of the bind operator ensures that the getLine and putStrLn operations get evaluated only once and in the given order, so that the side-effects of extracting text from the input stream and writing to the output stream are correctly handled in the functional pipeline. This remains true even if the language performs out-of-order or lazy evaluation of functions.

### Maybe monad

The Maybe monad is an option type: it handles exceptional cases in operations as special values in an underlying type.

A Maybe type is defined as the combination of "just the underlying type" (represented by wrapping the type with the Just constructor), with a value representing "nothing", i.e. undefined.

data Maybe t = Just t | Nothing


The Maybe value corresponding to an underlying value is just that value (represented by wrapping it with Just).

return x = Just x


Binding a function to something that is just a value means applying it directly to that value (the function must return a monadic type). Binding a function to nothing produces nothing.

(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
(Just x) >>= f = f x
Nothing >>= f = Nothing


For the safe division example, (/) is the underlying function, (//) is the safe monadic version. There are two Maybe inputs. If either input is Nothing, then Nothing is returned. Otherwise the inputs are Just x and Just y, from which the operator extracts the x and y values in the underlying type. If y is zero, (/) cannot be applied, so Nothing is returned, otherwise Just (x / y) is returned:

(//) :: Maybe Float -> Maybe Float -> Maybe Float
_       // Nothing = Nothing
Nothing // _       = Nothing
_       // Just 0  = Nothing
Just x  // Just y  = Just (x / y)


A more general version that applies to all types m such that m is an instance of the Monad class:

(//) :: (Fractional a, Monad m) => a -> a -> m a
_ // 0 = fail "//: divide by zero"
x // y = return (x / y)
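
The Maybe-specialized operator behaves as follows; a minimal self-contained sketch (the operator name // is local to this example):

```haskell
(//) :: Maybe Float -> Maybe Float -> Maybe Float
_       // Nothing = Nothing
Nothing // _       = Nothing
_       // Just 0  = Nothing
Just x  // Just y  = Just (x / y)

ex1, ex2, ex3 :: Maybe Float
ex1 = Just 6 // Just 2   -- Just 3.0
ex2 = Just 6 // Just 0   -- Nothing: division by zero
ex3 = Nothing // Just 2  -- Nothing: an earlier failure propagates
```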


This is the definition of the same Maybe monad in the F# language:[8]

type MaybeBuilder() =
  member this.Bind(x, f) =
    match x with
    | Some(x) -> f(x)
    | _ -> None
  member this.Delay(f) = f()
  member this.Return(x) = Some x

let maybe = MaybeBuilder()


The following definitions complete the original motivating example of the "par" function.

add x y = do
  x' <- x
  y' <- y
  return (x' + y')

par x y = let one = return 1
              jx  = return x
              jy  = return y
          in one // (add (one // jx) (one // jy))


If the result of any division is Nothing, it will propagate through the rest of the expression.
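Putting the pieces together gives a runnable sketch (specializing to Float for concreteness; names as in the text):

```haskell
(//) :: Maybe Float -> Maybe Float -> Maybe Float
_       // Nothing = Nothing
Nothing // _       = Nothing
_       // Just 0  = Nothing
Just x  // Just y  = Just (x / y)

add :: Maybe Float -> Maybe Float -> Maybe Float
add x y = do
  x' <- x
  y' <- y
  return (x' + y')

par :: Float -> Float -> Maybe Float
par x y = let one = return 1
              jx  = return x
              jy  = return y
          in one // add (one // jx) (one // jy)

ex1, ex2 :: Maybe Float
ex1 = par 2 2  -- Just 1.0: two 2-ohm resistors in parallel give 1 ohm
ex2 = par 0 2  -- Nothing: the inner division 1/0 fails and propagates
```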

The // operator actually requires parameters of type Maybe a, so the pseudocode example in the introduction paragraph is invalid. Here is the correct version which lifts values of the basic type into monadic ones:

safeDivisions x = do
  val1 <- return 6 // return x
  val2 <- return val1 // return 3
  return val2


From the category theory point of view, the Maybe monad is derived from the adjunction between the free functor and the underlying functor between the category of sets and the category of pointed sets.

### Identity monad

The simplest monad is the identity monad, which attaches no information to values.

Id t = t
return x = x
x >>= f = f x


A do-block in this monad performs variable substitution; do {x <- 2; return 3*x} results in 6.

From the category theory point of view, the identity monad is derived from the adjunction between identity functors.

### Collections

Some familiar collection types, including lists, sets, and multisets, are monads. The definition for lists is given here.

-- "return" constructs a one-item list.
return x = [x]
-- "bind" concatenates the lists obtained by applying f to each item in list xs.
xs >>= f = concat (map f xs)
-- The zero object is an empty list.
mzero = []


List comprehensions are a special application of the list monad. For example, the list comprehension [ 2*x | x <- [1..n], isOkay x] corresponds to the computation in the list monad do {x <- [1..n]; if isOkay x then return () else mzero; return (2*x)}.
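Assuming an illustrative predicate (even in place of isOkay), the correspondence can be checked directly:

```haskell
import Control.Monad (mzero)

comprehension, monadic :: [Int]
comprehension = [ 2 * x | x <- [1..10], even x ]

-- the same computation expressed in the list monad
monadic = do
  x <- [1..10]
  if even x then return () else mzero  -- the comprehension's filter
  return (2 * x)
-- both are [4,8,12,16,20]
```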

The notation of list comprehensions is similar to the set-builder notation, but sets cannot be made into a monad, since they require their elements to be comparable for equality, whereas a monad places no constraints on the types of computations; Set is instead a restricted monad.[9] The monads for collections naturally represent nondeterministic computation. The list (or other collection) represents all the possible results from different nondeterministic paths of computation at that given time. For example, when one executes x <- [1,2,3,4,5], one is saying that the variable x can non-deterministically take on any of the values of that list. If one were to return x, it would evaluate to a list of the results from each path of computation. Notice that the bind operator above follows this theme by performing f on each of the current possible results, and then concatenating the result lists together.

Statements like if condition x y then return () else mzero are also often seen; if the condition is true, the non-deterministic choice is being performed from one dummy path of computation, which returns a value we are not assigning to anything; however, if the condition is false, then the mzero = [] monad value non-deterministically chooses from 0 values, effectively terminating that path of computation. Other paths of computations might still succeed. This effectively serves as a "guard" to enforce that only paths of computation that satisfy certain conditions can continue. So collection monads are very useful for solving logic puzzles, Sudoku, and similar problems.
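A small sketch of this guard pattern in the list monad (the search problem is purely illustrative):

```haskell
-- find all pairs of factors of 12 drawn from 1..4
pairs :: [(Int, Int)]
pairs = do
  x <- [1..4]           -- x nondeterministically takes each value of the list
  y <- [1..4]
  if x * y == 12        -- the "guard": [()] continues this path of
    then return ()      --   computation,
    else []             --   [] terminates it
  return (x, y)
-- pairs == [(3,4),(4,3)]
```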

In a language with lazy evaluation, like Haskell, a list is evaluated only to the degree that its elements are requested: for example, if one asks for the first element of a list, only the first element will be computed. With respect to usage of the list monad for non-deterministic computation that means that we can non-deterministically generate a lazy list of all results of the computation and ask for the first of them, and only as much work will be performed as is needed to get that first result. The process roughly corresponds to backtracking: a path of computation is chosen, and then if it fails at some point (if it evaluates mzero), then it backtracks to the last branching point, and follows the next path, and so on. If the second element is then requested, it again does just enough work to get the second solution, and so on. So the list monad is a simple way to implement a backtracking algorithm in a lazy language.

From the category theory point of view, collection monads are derived from adjunctions between a free functor and an underlying functor between the category of sets and a category of monoids. Taking different types of monoids, we obtain different types of collections.

| Type of collections | Type of monoids |
|---|---|
| list | monoid |
| finite multiset | commutative monoid |
| finite set | idempotent commutative monoid |
| finite permutation | idempotent non-commutative monoid |

### State monads

A state monad allows a programmer to attach state information of any type to a calculation. Given any value type, the corresponding type in the state monad is a function which accepts a state, then outputs a new state along with a return value.

type State s t = s -> (t, s)


Note that this monad, unlike those already seen, takes a type parameter, the type of the state information. The monad operations are defined as follows:

-- "return" produces the given value without changing the state.
return x = \s -> (x, s)
-- "bind" modifies m so that it applies f to its result.
m >>= f = \r -> let (x, s) = m r in (f x) s


Useful state operations include:

get      = \s -> (s, s)    -- Examine the state at this point in the computation.
put s'   = \s -> ((), s')  -- Replace the state.
modify f = \s -> ((), f s) -- Update the state.


Another operation applies a state monad to a given initial state:

runState :: State s a -> s -> (a, s)
runState t s = t s


do-blocks in a state monad are sequences of operations that can examine and update the state data.
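The definitions above use a bare type synonym for clarity; an actual Haskell instance requires a newtype wrapper. A runnable sketch under that assumption (including the Functor/Applicative boilerplate modern GHC requires):

```haskell
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f m = State $ \s -> let (x, s') = runState m s in (f x, s')

instance Applicative (State s) where
  pure x = State $ \s -> (x, s)                  -- "return": leave the state unchanged
  mf <*> mx = State $ \s -> let (f, s')  = runState mf s
                                (x, s'') = runState mx s'
                            in (f x, s'')

instance Monad (State s) where
  return = pure
  m >>= f = State $ \s -> let (x, s') = runState m s  -- thread the state through
                          in runState (f x) s'

get :: State s s
get = State $ \s -> (s, s)        -- examine the state

put :: s -> State s ()
put s' = State $ \_ -> ((), s')   -- replace the state

-- increment a counter, returning the value before the increment
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

-- runState tick 5 == (5, 6)
```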

Informally, a state monad of state type S maps the type of return values T into functions of type $S \rarr T \times S$, where S is the underlying state. The return function is simply:

$\text{return} \colon T \rarr S \rarr T \times S = t \mapsto s \mapsto (t, s)$

The bind function is:

$\text{bind} \colon (S \rarr T \times S) \rarr (T \rarr S \rarr T' \times S) \rarr S \rarr T' \times S$
$= m \mapsto k \mapsto s \mapsto (k \, t \, s') \quad \text{where} \; (t, s') = m \, s$

From the category theory point of view, a state monad is derived from the adjunction between the product functor and the exponential functor, which exists in any cartesian closed category by definition.

### Environment monads

The environment monad (also called the reader monad and the function monad) allows a computation to depend on values from a shared environment. The monad type constructor maps a type T to functions of type E→T, where E is the type of the shared environment. The monad functions are:

$\text{return} \colon T \rarr E \rarr T = t \mapsto e \mapsto t$
$\text{bind} \colon (E \rarr T) \rarr (T \rarr E \rarr T') \rarr E \rarr T' = r \mapsto f \mapsto e \mapsto f \, (r \, e) \, e$

The following monadic operations are useful:

$\text{ask} \colon E \rarr E = \text{id}_E$
$\text{local} \colon (E \rarr E) \rarr (E \rarr T) \rarr E \rarr T = f \mapsto c \mapsto e \mapsto c \, (f \, e)$

The ask operation is used to retrieve the current context, while local executes a computation in a modified subcontext. As in the state monad, computations in the environment monad may be invoked by simply providing an environment value and applying it to an instance of the monad.
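A runnable sketch of the environment monad, again assuming a newtype wrapper for the function type:

```haskell
newtype Reader e a = Reader { runReader :: e -> a }

instance Functor (Reader e) where
  fmap f (Reader r) = Reader (f . r)

instance Applicative (Reader e) where
  pure = Reader . const                               -- "return": ignore the environment
  Reader rf <*> Reader rx = Reader $ \env -> rf env (rx env)

instance Monad (Reader e) where
  return = pure
  Reader r >>= f = Reader $ \env -> runReader (f (r env)) env  -- pass env to both steps

ask :: Reader e e
ask = Reader id                                       -- retrieve the current context

local :: (e -> e) -> Reader e a -> Reader e a
local f (Reader c) = Reader (c . f)                   -- run c in a modified subcontext

-- a computation that reads a name from the shared environment
greet :: Reader String String
greet = do
  name <- ask
  return ("Nice to meet you, " ++ name ++ "!")

-- runReader greet "monad" == "Nice to meet you, monad!"
```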

### Writer monads

The writer monad allows a program to compute various kinds of auxiliary output which can be "composed" or "accumulated" step-by-step, in addition to the main result of a computation. It is often used for logging or profiling. Given the underlying type T, a value in the writer monad has type W × T, where W is a type endowed with an operation satisfying the monoid laws. The monad functions are simply:

$\text{return} \colon T \rarr W \times T = t \mapsto (\epsilon, t)$
$\text{bind} \colon (W \times T) \rarr (T \rarr W \times T') \rarr W \times T' = (w, t) \mapsto f \mapsto (w * w',\, t') \quad \text{where} \; (w', t') = f \, t$

where ε and * are the identity element of the monoid W and its associative operation, respectively.

The tell monadic operation is defined by:

$\text{tell} \colon W \rarr (W \times 1) = w \mapsto (w, ())$

where 1 and () denote the unit type and its trivial element. It is used in combination with bind to update the auxiliary value without affecting the main computation.
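A runnable sketch of the writer monad, using a list of log messages as the monoid W:

```haskell
newtype Writer w a = Writer { runWriter :: (a, w) }

instance Functor (Writer w) where
  fmap f (Writer (x, w)) = Writer (f x, w)

instance Monoid w => Applicative (Writer w) where
  pure x = Writer (x, mempty)                         -- "return": empty auxiliary output
  Writer (f, w) <*> Writer (x, w') = Writer (f x, w <> w')

instance Monoid w => Monad (Writer w) where
  return = pure
  Writer (x, w) >>= f = let Writer (y, w') = f x      -- accumulate outputs with <>
                        in Writer (y, w <> w')

tell :: w -> Writer w ()
tell w = Writer ((), w)                               -- emit auxiliary output

-- double a number, logging what happened
double :: Int -> Writer [String] Int
double x = do
  tell ["doubling " ++ show x]
  return (2 * x)

-- runWriter (double 3 >>= double) == (12, ["doubling 3", "doubling 6"])
```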

### Continuation monads

A continuation monad with return type R maps type T into functions of type $\left( T \rarr R \right) \rarr R$. It is used to model continuation-passing style. The return and bind functions are as follows:

$\text{return} \colon T \rarr \left( T \rarr R \right) \rarr R = t \mapsto f \mapsto f \, t$
$\text{bind} \colon \left( \left( T \rarr R \right) \rarr R \right) \rarr \left( T \rarr \left( T' \rarr R \right) \rarr R \right) \rarr \left( T' \rarr R \right) \rarr R$
$= c \mapsto f \mapsto k \mapsto c \, \left( t \mapsto f \, t \, k \right)$

The call-with-current-continuation function is defined as follows:

$\text{call/cc} \colon \left( \left( T \rarr \left( T' \rarr R \right) \rarr R \right) \rarr \left( T \rarr R \right) \rarr R \right) \rarr \left( T \rarr R \right) \rarr R$
$= f \mapsto k \mapsto \left( f \left( t \mapsto x \mapsto k \, t \right) \, k \right)$
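A minimal runnable sketch of the continuation monad (without call/cc), assuming a newtype wrapper:

```haskell
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont c) = Cont $ \k -> c (k . f)

instance Applicative (Cont r) where
  pure x = Cont ($ x)                       -- "return": pass the value to the continuation
  Cont cf <*> Cont cx = Cont $ \k -> cf (\f -> cx (k . f))

instance Monad (Cont r) where
  return = pure
  Cont c >>= f = Cont $ \k -> c (\x -> runCont (f x) k)  -- chain continuations

-- add two numbers in continuation-passing style; the final
-- continuation supplied to runCont receives the sum
addCPS :: Int -> Int -> Cont r Int
addCPS x y = return (x + y)

-- runCont (addCPS 1 2) id == 3
```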

### Others

Researchers have expressed a number of other concepts as monads as well, including the parsers and programming language interpreters mentioned in the History section above.

## fmap and join

Although Haskell defines monads in terms of the return and bind functions, it is also possible to define a monad in terms of return and two other operations, join and fmap. This formulation fits more closely with the definition of monads in category theory. The fmap operation, with type (t→u) → (M t→M u), takes a function between two types and produces a function that does the "same thing" to values in the monad. The join operation, with type M (M t)→M t, "flattens" two layers of monadic information into one.

The two formulations are related as follows. As before, the ≡ symbol indicates equivalence between two Haskell expressions.

(fmap f) m ≡ m >>= (\x -> return (f x))
join n ≡ n >>= id

m >>= g ≡ join ((fmap g) m)


Here, m has the type M t, n has the type M (M r), f has the type t→u, and g has the type t → M v, where t, r, u and v are underlying types.
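The relation m >>= g ≡ join ((fmap g) m) can be spot-checked in the list monad, using the standard join from Control.Monad (g' is an illustrative monadic function):

```haskell
import Control.Monad (join)

g' :: Int -> [Int]
g' x = [x, x * 10]

lhs, rhs :: [Int]
lhs = [1, 2, 3] >>= g'              -- bind directly
rhs = join (fmap g' [1, 2, 3])      -- map, then flatten
-- lhs == rhs == [1,10,2,20,3,30]
```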

The fmap function is defined for any functor in the category of types and functions, not just for monads. It is expected to satisfy the functor laws:

fmap id = id
fmap (f . g) = (fmap f) . (fmap g)


The return function characterizes pointed functors in the same category, by accounting for the ability to "lift" values into the functor. It should satisfy the following law:

return . f = fmap f . return


The join operation characterizes monads. It should satisfy the following laws:

join . fmap join = join . join
join . fmap return = join . return = id
join . fmap (fmap f) = fmap f . join
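
Again using the list monad as a concrete check (an illustrative script with arbitrary sample values, where the deepest nesting is needed to exercise the associativity law):

```haskell
import Control.Monad (join)

main :: IO ()
main = do
  let x = [[[1], [2, 3]], [[4]]] :: [[[Int]]]
      y = [[1, 2], [3]]          :: [[Int]]
      f = (+ 1)
  -- join . fmap join = join . join  (flattening order does not matter)
  print (join (fmap join x) == join (join x))
  -- join . fmap return = join . return = id
  print (join (fmap return y) == y && join (return y) == y)
  -- join . fmap (fmap f) = fmap f . join
  print (join (fmap (fmap f) y) == fmap f (join y))
```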


## Comonads

Comonads are the categorical dual of monads. They are defined by a type constructor W T and two operations: extract with type W T → T for any T, and extend with type (W T → T') → W T → W T'. The operations extend and extract are expected to satisfy these laws:

$\text{extend} \,\, \text{extract} = \text{id}$
$\text{extract} \circ (\text{extend} \, f) = f$
$(\text{extend} \, f) \circ (\text{extend} \, g) = \text{extend} \, (f \circ (\text{extend} \, g))$

Alternatively, comonads may be defined in terms of operations fmap, extract and duplicate. The fmap and extract operations define W as a copointed functor. The duplicate operation characterizes comonads: it has type W T → W (W T) and satisfies the following laws:

$\text{extract} \circ \text{duplicate} = \text{id}$
$\text{fmap} \, \text{extract} \circ \text{duplicate} = \text{id}$
$\text{duplicate} \circ \text{duplicate} = \text{fmap} \, \text{duplicate} \circ \text{duplicate}$

The two formulations are related as follows:

$\text{fmap}: (A \rarr B) \rarr \mathrm{W} \, A \rarr \mathrm{W} \, B = f \mapsto \text{extend} \, (f \circ \text{extract})$
$\text{duplicate}: \mathrm{W} \, A \rarr \mathrm{W} \, \mathrm{W} \, A = \text{extend} \, \text{id}$
$\text{extend}: (\mathrm{W} \, A \rarr B) \rarr \mathrm{W} \, A \rarr \mathrm{W} \, B = f \mapsto (\text{fmap} \, f) \circ \text{duplicate}$
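
These interdefinitions can be written down directly in Haskell as default method implementations. The following is an illustrative sketch, with an infinite stream as a sample instance (production code would normally use Control.Comonad from the comonad package instead of a hand-rolled class):

```haskell
-- A comonad class in which extend and duplicate are definable from each
-- other, mirroring the interdefinitions above.
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)
  duplicate = extend id
  extend    :: (w a -> b) -> w a -> w b
  extend f  = fmap f . duplicate

-- An infinite stream is a comonad: extract reads the head, and
-- duplicate produces the stream of all suffixes.
data Stream a = Cons a (Stream a)

instance Functor Stream where
  fmap f (Cons a as) = Cons (f a) (fmap f as)

instance Comonad Stream where
  extract (Cons a _)      = a
  duplicate s@(Cons _ as) = Cons s (duplicate as)

nats :: Stream Int
nats = go 0 where go n = Cons n (go (n + 1))

takeS :: Int -> Stream a -> [a]
takeS 0 _           = []
takeS n (Cons a as) = a : takeS (n - 1) as

-- A context-dependent function: sum the two values at the focus.
pairSum :: Stream Int -> Int
pairSum (Cons a (Cons b _)) = a + b

main :: IO ()
main = print (takeS 4 (extend pairSum nats))
```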

Whereas monads can be said to represent side effects, a comonad W represents a kind of context. The extract function extracts a value from its context, while the extend function may be used to compose a pipeline of "context-dependent functions" of type W A → B.

The identity comonad is the simplest comonad: it maps type T to itself. The extract operator is the identity and the extend operator is function application.
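
In Haskell the identity comonad might be sketched as follows (standalone functions rather than a class instance, purely for illustration):

```haskell
-- Identity comonad: the context is trivial.
newtype Identity a = Identity a

extract :: Identity a -> a
extract (Identity a) = a

-- extend is just function application wrapped back up.
extend :: (Identity a -> b) -> Identity a -> Identity b
extend f w = Identity (f w)

main :: IO ()
main = print (extract (extend (\(Identity n) -> n + 1) (Identity (41 :: Int))))
```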

The product comonad maps type T into tuples of type $C \times T$, where C is the context type of the comonad. The comonad operations are:

$\text{extract}: (C \times T) \rarr T = (c, t) \mapsto t$
$\text{extend}: ((C \times A) \rarr B) \rarr C \times A \rarr C \times B = f \mapsto (c, a) \mapsto (c, f \, (c, a))$
$\text{fmap}: (A \rarr B) \rarr (C \times A) \rarr (C \times B) = f \mapsto (c, a) \mapsto (c, f \, a)$
$\text{duplicate}: (C \times A) \rarr (C \times (C \times A)) = (c, a) \mapsto (c, (c, a))$
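
A Haskell transcription of these operations, using plain pairs for the product (an illustrative sketch; `addLen` is a made-up context-dependent function for the usage example):

```haskell
-- Product (environment) comonad over a context type c.
extract :: (c, a) -> a
extract (_, a) = a

-- The context c is threaded through unchanged.
extend :: ((c, a) -> b) -> (c, a) -> (c, b)
extend f (c, a) = (c, f (c, a))

duplicate :: (c, a) -> (c, (c, a))
duplicate (c, a) = (c, (c, a))

main :: IO ()
main = do
  let w = ("config", 5) :: (String, Int)
      -- A context-dependent function: value plus the context's length.
      addLen (c, a) = a + length c
  print (extract (extend addLen w))
```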

The function comonad maps type T into functions of type $M \rarr T$, where M is a type endowed with a monoid structure. The comonad operations are:

$\text{extract}: (M \rarr T) \rarr T = f \mapsto f \, \varepsilon$
$\text{extend}: ((M \rarr A) \rarr B) \rarr (M \rarr A) \rarr M \rarr B = f \mapsto g \mapsto m \mapsto f \, (m' \mapsto g \, (m * m'))$
$\text{fmap}: (A \rarr B) \rarr (M \rarr A) \rarr M \rarr B = f \mapsto g \mapsto (f \circ g)$
$\text{duplicate}: (M \rarr A) \rarr M \rarr (M \rarr A) = f \mapsto m \mapsto m' \mapsto f \, (m * m')$

where ε is the identity element of M and * is its associative operation.
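
This comonad (sometimes called the traced comonad) can be sketched in Haskell with the Monoid class supplying ε (mempty) and * (<>); the `shift` helper in the usage example is a made-up context-dependent function:

```haskell
import Data.Monoid (Sum (..))

-- Function (traced) comonad: values of type m -> a for a monoid m.
extract :: Monoid m => (m -> a) -> a
extract f = f mempty

-- extend offsets every later lookup by the accumulated monoid value.
extend :: Monoid m => ((m -> a) -> b) -> (m -> a) -> (m -> b)
extend k f = \m -> k (\m' -> f (m <> m'))

main :: IO ()
main = do
  let f (Sum n) = n * n :: Int      -- a "position-dependent" value
      shift d g = g (Sum d)         -- sample g at offset d
  print (extract f)                 -- f at the identity element
  print (extract (extend (shift 3) f))
  print (extend (shift 3) f (Sum 2))
```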

The costate comonad maps a type T into type $(S \rarr T) \times S$, where S is the base type of the store. The comonad operations are:

$\text{extract}: ((S \rarr T) \times S) \rarr T = (f, s) \mapsto f \, s$
$\text{extend}: (((S \rarr A) \times S) \rarr B) \rarr ((S \rarr A) \times S) \rarr (S \rarr B) \times S = f \mapsto (g, s) \mapsto ((s' \mapsto f \, (g, s')), s)$
$\text{fmap}: (A \rarr B) \rarr ((S \rarr A) \times S) \rarr (S \rarr B) \times S = f \mapsto (f', s) \mapsto (f \circ f', s)$
$\text{duplicate}: ((S \rarr A) \times S) \rarr (S \rarr ((S \rarr A) \times S)) \times S = (f, s) \mapsto (s' \mapsto (f, s'), s)$
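
In Haskell this is usually written as a Store type pairing an accessor with a current focus (an illustrative sketch; `avgNbrs`, which averages the values adjacent to the focus, is a made-up context-dependent function):

```haskell
-- Store (costate) comonad: an accessor s -> a together with a focus s.
data Store s a = Store (s -> a) s

extract :: Store s a -> a
extract (Store f s) = f s

-- extend re-focuses the whole store before applying the function.
extend :: (Store s a -> b) -> Store s a -> Store s b
extend k (Store f s) = Store (\s' -> k (Store f s')) s

duplicate :: Store s a -> Store s (Store s a)
duplicate (Store f s) = Store (Store f) s

main :: IO ()
main = do
  let w = Store (^ 2) 10 :: Store Int Int
      -- Average the values just before and just after the focus.
      avgNbrs (Store f s) = (f (s - 1) + f (s + 1)) `div` 2
  print (extract w)
  print (extract (extend avgNbrs w))
```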

## References

1. ^ Wadler, Philip. Comprehending Monads. Proceedings of the 1990 ACM Conference on LISP and Functional Programming, Nice. 1990.
2. ^ Wadler, Philip. The Essence of Functional Programming. Conference Record of the Nineteenth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages. 1992.
3. ^ A physical analogy for monads, explaining monads as assembly lines.
4. ^ Moggi, Eugenio (1991). "Notions of computation and monads". Information and Computation 93 (1).
6. ^ Peyton Jones, Simon L.; Wadler, Philip. Imperative Functional Programming. Conference record of the Twentieth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Charleston, South Carolina. 1993