
\chapter{Functors and contrafunctors\label{chap:Functors,-contrafunctors,-and}}

\global\long\def\gunderline#1{\mathunderline{greenunder}{#1}}%
\global\long\def\bef{\forwardcompose}%
\global\long\def\bbnum#1{\custombb{#1}}%
Type constructors such as \lstinline!Seq[A]! or \lstinline!Array[A]!
are data structures that hold or \textsf{``}wrap\textsf{''} zero or more values of
a given type \lstinline!A!. These data structures are fully parametric:
they work in the same way for every type \lstinline!A!. Working with
parametric \textsf{``}data wrappers\textsf{''} or \textsf{``}data containers\textsf{''} turns out
to be a powerful design pattern of functional programming. To realize
all its benefits, we will formalize the concept of data wrapping through
a set of mathematical laws. We will then extend that design pattern
to all data types for which the laws hold.

\section{Practical use}

\subsection{Motivation. Type constructors that wrap data}

How can we formalize the idea of wrapped data? An intuitive view is that
the data is \textsf{``}still there\textsf{''}, i.e., we should be able to manipulate
the data held within the wrapper. In functional programming, to manipulate
means to apply functions to data. So, if an integer value $123$ is
\textsf{``}wrapped\textsf{''}, we should be able somehow to apply a function such
as \lstinline!{x => x * 2}! and obtain a \textsf{``}wrapped\textsf{''} value $246$.

Let us look at some often used type constructors defined in the Scala
standard library, such as \lstinline!Seq[A]!, \lstinline!Try[A]!,
and \lstinline!Future[A]!. We notice the common features:
\begin{itemize}
\item There are some methods for creating a data structure that wraps zero
or more values of a given type. For example, the Scala code \lstinline!List.fill(10)(0)!
creates a list of ten zeros of type \lstinline!Int!.
\item There are some methods for reading the wrapped values, if they exist.
For example, the \lstinline!List! class has the method \lstinline!headOption!
that returns a non-empty \lstinline!Option! when the first element
exists.
\item There are some methods for manipulating the wrapped values while \emph{keeping}
them wrapped. For example, \lstinline!List(10, 20, 30).map(_ + 5)!
evaluates to \lstinline!List(15, 25, 35)!.
\end{itemize}
The data types \lstinline!Seq[A]!, \lstinline!Try[A]!, and \lstinline!Future[A]!
express quite different kinds of wrapping. The data structure implementing
\lstinline!Seq[A]! can hold a variable number of values of type \lstinline!A!.
The data structure \lstinline!Try[A]! holds either a successfully
computed value of type \lstinline!A! or a failure. The data structure
\lstinline!Future[A]! implements a computation that has been scheduled
to run but may not have finished yet, and may compute a value of type
\lstinline!A! (or fail) at a later time.

Since the meaning of the \textsf{``}wrappers\textsf{''} \lstinline!Seq!, \lstinline!Try!,
and \lstinline!Future! is quite different, the methods for creating
and reading the wrapped values have different type signatures for
each wrapper. However, the method \lstinline!map! is similar in all
three examples. We can say generally that the \lstinline!map! method
will apply a given function $f^{:A\rightarrow B}$ to the data of
type $A$ held inside the wrapper, and the new data (of type $B$)
will remain within a wrapper of the same type:
\begin{lstlisting}
val a = List(x,y,z).map(f) // Result is List(f(x), f(y), f(z)).
val b = Try(x).map(f)      // Result is Try(f(x)).
val c = Future(x).map(f)   // Result is Future(f(x)).
\end{lstlisting}
This motivates us to use the \lstinline!map! function as the requirement
for the wrapping functionality: A type constructor \lstinline!Wrap[A]!
is a \textsf{``}wrapper\textsf{''} if there exists a function \lstinline!map! with
the  type signature
\begin{lstlisting}
def map[A, B]: Wrap[A] => (A => B) => Wrap[B]
\end{lstlisting}

We can see that \lstinline!Seq!, \lstinline!Try!, and \lstinline!Future!
are \textsf{``}wrappers\textsf{''} because they have a suitable \lstinline!map! method.
This chapter focuses on the properties of \lstinline!map! that are
common to \emph{all} wrapper types. We will ignore all other features
\textemdash{} reading data out of the wrapper, inserting or deleting
data, waiting until data becomes available, etc. \textemdash{} implemented
by different methods specific to each wrapper type.

\subsection{Extended example: \texttt{Option} and the identity law\label{subsec:f-Example:-Option-and}}

As another example of a \textsf{``}data wrapper\textsf{''}, consider the type constructor
\lstinline!Option[A]!, which is written in the type notation as 
\[
\text{Opt}^{A}\triangleq\bbnum 1+A\quad.
\]
The type signature of its \lstinline!map! function is
\[
\text{map}^{A,B}:\bbnum 1+A\rightarrow\left(A\rightarrow B\right)\rightarrow\bbnum 1+B\quad.
\]
This function produces a new \lstinline!Option[B]! value that wraps
transformed data. We will now use this example to develop intuition
about manipulating data in a wrapper.

Two possible implementations of \lstinline!map! fit the type signature:

\begin{wrapfigure}{l}{0.585\columnwidth}%
\vspace{-0.7\baselineskip}
\begin{lstlisting}
def mapX[A, B](oa: Option[A])(f: A => B): Option[B] = None

def mapY[A, B](oa: Option[A])(f: A => B): Option[B] =
  oa match {
    case None      => None
    case Some(x)   => Some(f(x))
  }
\end{lstlisting}
\vspace{-1\baselineskip}
\end{wrapfigure}%

\noindent The code of \lstinline!mapX! loses information\index{information loss}
since it always returns \lstinline!None! and ignores all input. The
implementation \lstinline!mapY! is more useful since it preserves
information. 

How can we formulate this property of \lstinline!mapY! in a rigorous
way? The trick is to choose the argument $f^{:A\rightarrow B}$ in
the expression \lstinline!map(oa)(f)! to be the identity function
$\text{id}^{:A\rightarrow A}$ (setting \lstinline!map!\textsf{'}s type parameters
as $A=B$, so that the types match). Applying an identity function
to a value wrapped in an \lstinline!Option[A]! should not change
that value. To verify that, substitute the identity function instead
of \lstinline!f! into \lstinline!mapY! and compute:

\begin{wrapfigure}{l}{0.58\columnwidth}%
\vspace{-0.6\baselineskip}
\begin{lstlisting}
mapY[A, A](x: Option[A])(identity[A]: A => A): Option[A]
  == x match {
        case None      => None        // No change.
        case Some(x)   => Some(x)     // No change.
     } == x
\end{lstlisting}
\vspace{0.1\baselineskip}
\end{wrapfigure}%

\noindent The result is always equal to \lstinline!x!. We can write
that fact as an equation,\vspace{-0.3\baselineskip}
\[
\forall x^{:\text{Opt}^{A}}.\,\,\text{map}\,(x)(\text{id})=x\quad.
\]
\vspace{-0.85\baselineskip}

\noindent This equation is called the \textbf{identity law}\index{identity laws!of functors}
of \lstinline!map!. The identity law is a formal way of expressing
the information-preserving property of the \lstinline!map! function.
The implementation \lstinline!mapX! violates the identity law since
it always returns \lstinline!None!: we have \lstinline!mapX(oa)(id) == None!,
which is not equal to \lstinline!oa! for arbitrary values of \lstinline!oa!.
A data wrapper should not unexpectedly lose information when we manipulate
the wrapped data. So, the correct implementation of \lstinline!map!
is \lstinline!mapY!. The code notation for \lstinline!map! is\vspace{-0.4\baselineskip}
\[
\text{map}^{A,B}\triangleq p^{:\bbnum 1+A}\rightarrow f^{:A\rightarrow B}\rightarrow p\triangleright\begin{array}{|c||cc|}
 & \bbnum 1 & B\\
\hline \bbnum 1 & \text{id} & \bbnum 0\\
A & \bbnum 0 & f
\end{array}\quad.
\]

When writing code, it is convenient to use the \lstinline!map! method
defined in the Scala library. However, when reasoning about the properties
of \lstinline!map!, it turns out to be more convenient to flip the
order of the curried arguments and to use the equivalent function,
called \lstinline!fmap!, with the type signature
\[
\text{fmap}^{A,B}:\left(A\rightarrow B\right)\rightarrow\bbnum 1+A\rightarrow\bbnum 1+B\quad.
\]
The Scala implementation and the code notation for \lstinline!fmap!
are shorter than those for \lstinline!map!:

\begin{wrapfigure}{l}{0.55\columnwidth}%
\vspace{-0\baselineskip}
\begin{lstlisting}
def fmap[A, B](f: A => B): Option[A] => Option[B] = {
  case None      => None
  case Some(x)   => Some(f(x))
}
\end{lstlisting}
\vspace{-2\baselineskip}
\end{wrapfigure}%

~\vspace{-0.7\baselineskip}
\begin{equation}
\text{fmap}\,(f^{:A\rightarrow B})\triangleq\begin{array}{|c||cc|}
 & \bbnum 1 & B\\
\hline \bbnum 1 & \text{id} & \bbnum 0\\
A & \bbnum 0 & f
\end{array}\quad.\label{eq:f-def-opt-fmap-matrix-notation}
\end{equation}
\vspace{-0.5\baselineskip}

The identity law also looks simpler if expressed in terms of \lstinline!fmap!,
namely $\text{fmap}\,(\text{id})=\text{id}$. In writing $\text{fmap}\,(\text{id})=\text{id}$,
we omitted the type parameters $A$ and $B$, which must be equal to each other.

Note that the type signature of \lstinline!fmap! looks like a transformation
from functions of type \lstinline!A => B! to functions of type \lstinline!Option[A] => Option[B]!.
This transformation is called \textbf{lifting}\index{lifting} because
it \textsf{``}lifts\textsf{''} a function $f^{:A\rightarrow B}$ operating on simple
values into a function operating on \lstinline!Option!-wrapped values. 

So, the identity law can be formulated as \textsf{``}a lifted identity function
is also an identity function\textsf{''}. If we lift an identity function and
apply the resulting function to a wrapper, we expect the wrapped data
not to change. The identity law expresses this expectation in a mathematical
equation.
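The identity law can also be checked by running code on sample values. The following sketch repeats the \lstinline!fmap! function for \lstinline!Option! shown above and applies a lifted identity function to sample data:
\begin{lstlisting}
def fmap[A, B](f: A => B): Option[A] => Option[B] = {
  case None      => None
  case Some(x)   => Some(f(x))
}

// A lifted identity function leaves any Option value unchanged.
assert(fmap(identity[Int])(Some(123)) == Some(123))
assert(fmap(identity[Int])(None) == None)
\end{lstlisting}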

\subsection{Motivation for the composition law}

The main feature of a \textsf{``}data wrapper\textsf{''} is to allow us to manipulate
the data inside it by applying functions to that data. The corresponding
Scala code is \lstinline!p.map(f)!, where \lstinline!p! is a value
of a wrapper type. It is natural to expect that lifted functions behave
in the same way as the \textsf{``}unlifted\textsf{''} ones. For example, suppose we
need to increment a counter \lstinline!c! of type \lstinline!Option[Int]!.
The \lstinline!Option! type means that the counter may be empty or
non-empty; if it is non-empty, we increment the integer value wrapped
inside the \lstinline!Option! using the incrementing function
\[
\text{incr}\triangleq x^{:\text{Int}}\rightarrow x+1\quad.
\]
In order to apply a function to the counter \lstinline!c!, we need
to lift that function. The Scala code is
\begin{lstlisting}
def incr: Int => Int = x => x + 1
val c: Option[Int] = Some(0)

scala> c.map(incr)
res0: Option[Int] = Some(1) 
\end{lstlisting}
If we apply the lifted function twice, we expect that the counter
will be incremented twice:
\begin{lstlisting}
scala> c.map(incr).map(incr)
res1: Option[Int] = Some(2)
\end{lstlisting}
This result is the same as when applying a lifted function $x\rightarrow x+2$:

\begin{wrapfigure}{l}{0.3\columnwidth}%
\vspace{-0.8\baselineskip}
\begin{lstlisting}
scala> c.map(x => x + 2)
res2: Option[Int] = Some(2)
\end{lstlisting}
\vspace{-1.2\baselineskip}
\end{wrapfigure}%

\noindent It would be confusing and counter-intuitive if \lstinline!c.map(x => x + 2)!
did not give the same result as \lstinline!c.map(incr).map(incr)!. 

We can formulate this property more generally: liftings should preserve
function composition for arbitrary functions $f^{:A\rightarrow B}$
and $g^{:B\rightarrow C}$. This is written as
\begin{lstlisting}
c.fmap(f).fmap(g) == c.fmap(f andThen g) == c.fmap(x => g(f(x))) 
\end{lstlisting}
\[
c^{:F^{A}}\triangleright\text{fmap}\,(f^{:A\rightarrow B})\triangleright\text{fmap}\,(g^{:B\rightarrow C})=c\triangleright\text{fmap}\,(f)\bef\text{fmap}\,(g)=c\triangleright\text{fmap}\,(f^{:A\rightarrow B}\bef g^{:B\rightarrow C})\quad.
\]
This equation is called the \textbf{composition law}\index{composition law!of functors}.
The law has the form $c\triangleright p=c\triangleright q$ with some
functions $p$ and $q$, which is the same as $\forall c.\,p(c)=q(c)$.
This means an equality between functions, $p=q$. So we may omit the
argument $c$ and rewrite the law in a shorter form as 
\[
\text{fmap}\,(f)\bef\text{fmap}\,(g)=\text{fmap}\,(f\bef g)\quad.
\]
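Before proving the law symbolically, we can check it on concrete values. This sketch repeats \lstinline!Option!'s \lstinline!fmap! for completeness and uses two arbitrarily chosen sample functions:
\begin{lstlisting}
def fmap[A, B](f: A => B): Option[A] => Option[B] = {
  case None      => None
  case Some(x)   => Some(f(x))
}
val f: Int => Int = _ + 1
val g: Int => Int = _ * 2

// Both sides of the composition law give the same result on sample data.
assert(fmap(g)(fmap(f)(Some(10))) == fmap(f andThen g)(Some(10)))   // Some(22).
assert(fmap(g)(fmap(f)(None: Option[Int])) == fmap(f andThen g)(None))
\end{lstlisting}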

Let us verify the composition law of the \lstinline!Option! type.
To practice the code derivations, we will perform the calculations
by using both the code notation and the Scala syntax.

The Scala code for the function \lstinline!fmap! was given in Section~\ref{subsec:f-Example:-Option-and}.
To evaluate $\text{fmap}\,(f\bef g)$, we apply \lstinline!fmap(f andThen g)!,
where \lstinline!f: A => B! and \lstinline!g: B => C! are arbitrary
functions, to an arbitrary value \lstinline!oa:Option[A]!. In Scala
code, it is convenient to use the  method \lstinline!map! and write
\lstinline!oa.map(f)! instead of the equivalent expression \lstinline!fmap(f)(oa)!:
\begin{lstlisting}
fmap(f andThen g)(oa) == oa.map(f andThen g) == oa match {
  case None      => None
  case Some(x)   => Some((f andThen g)(x))
}
\end{lstlisting}
Since \lstinline!(f andThen g)(x) == g(f(x))!, we rewrite the result
as
\begin{lstlisting}
oa.map(f andThen g) == oa match {
  case None      => None
  case Some(x)   => Some(g(f(x)))
}
\end{lstlisting}
Now we consider the left-hand side of the law, $\text{fmap}\,(f)\bef\text{fmap}\,(g)$,
and write the Scala expressions:

\begin{wrapfigure}{l}{0.33\columnwidth}%
\vspace{-0.8\baselineskip}
\begin{lstlisting}
oa.map(f).map(g) == (oa match {
  case None      => None
  case Some(x)   => Some(f(x))
}).map(g) == (oa match {
  case None      => None
  case Some(x)   => Some(f(x))
}) match {
  case None      => None
  case Some(y)   => Some(g(y))
} == oa match {
  case None      => None
  case Some(x)   => Some(g(f(x)))
}
\end{lstlisting}
\vspace{-5\baselineskip}
\end{wrapfigure}%

\noindent We find that the two sides of the law have identical code.

The derivation is much shorter in the matrix notation; we use Eq.~(\ref{eq:f-def-opt-fmap-matrix-notation})
as the definition of \lstinline!fmap! and omit the types:\vspace{-0.45\baselineskip}
\begin{align*}
 & \text{fmap}\,(f)\bef\text{fmap}\,(g)=\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & f
\end{array}\,\bef\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & g
\end{array}\\
{\color{greenunder}\text{matrix composition}:}\quad & =\begin{array}{||cc|}
\text{id}\bef\text{id} & \bbnum 0\\
\bbnum 0 & f\bef g
\end{array}=\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & f\bef g
\end{array}\\
{\color{greenunder}\text{definition of fmap}:}\quad & =\text{fmap}\,(f\bef g)\quad.
\end{align*}
\vspace{-1.4\baselineskip}

These calculations prove that the \lstinline!map! method of the \lstinline!Option!
type satisfies the composition law. If the composition law did not
hold, we would not be able to understand how \lstinline!map! manipulates
data within the \lstinline!Option! wrapper. Looking at the Scala
code example above, we expect \lstinline!c.map(incr).map(incr)! to
increment the data wrapped by \lstinline!c! two times. If the result
of \lstinline!c.map(incr).map(incr)! were not \lstinline!Some(2)!
but, say, \lstinline!Some(1)! or \lstinline!None!, our ordinary
intuitions about data transformations would become incorrect. In other
words, violations of the composition law prevent us from understanding
the code via mathematical reasoning about transformation of data values.

The composition law is a rigorous formulation of the requirement that
wrapped data should be transformed (by lifted functions) in the same
way as ordinary data. For example, the following associativity property
holds for lifted functions:

\subsubsection{Statement \label{subsec:f-Statement-composition-associativy-law}\ref{subsec:f-Statement-composition-associativy-law}}

For arbitrary functions $f^{:A\rightarrow B}$, $g^{:B\rightarrow C}$,
and $h^{:C\rightarrow D}$, we have
\[
\text{fmap}\,(f)\bef\text{fmap}\,(g\bef h)=\text{fmap}\,(f\bef g)\bef\text{fmap}\,(h)\quad.
\]


\subparagraph{Proof}

The left-hand side is rewritten as
\begin{align*}
 & \text{fmap}\,(f)\bef\,\gunderline{\text{fmap}\,(g\bef h)}\\
{\color{greenunder}\text{composition law for }\left(g\bef h\right):}\quad & =\text{fmap}\,(f)\bef\left(\text{fmap}\,(g)\bef\text{fmap}\,(h)\right)\\
{\color{greenunder}\text{associativity law (\ref{eq:associativity-of-function-composition})}:}\quad & =\gunderline{\left(\text{fmap}\,(f)\bef\text{fmap}\,(g)\right)}\bef\text{fmap}\,(h)\\
{\color{greenunder}\text{composition law for }\left(f\bef g\right):}\quad & =\text{fmap}\,(f\bef g)\bef\text{fmap}\,(h)\quad,
\end{align*}
which now equals the right-hand side. This proves the statement.
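The statement can be illustrated by running code on sample values. This sketch repeats \lstinline!Option!'s \lstinline!fmap! and uses arbitrarily chosen functions $f$, $g$, $h$:
\begin{lstlisting}
def fmap[A, B](f: A => B): Option[A] => Option[B] = {
  case None      => None
  case Some(x)   => Some(f(x))
}
val f: Int => Int    = _ + 1
val g: Int => String = _.toString
val h: String => Int = _.length

// Both groupings of the lifted compositions give the same function.
val lhs = fmap(f) andThen fmap(g andThen h)
val rhs = fmap(f andThen g) andThen fmap(h)
assert(lhs(Some(99)) == rhs(Some(99)))   // Both sides give Some(3).
\end{lstlisting}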

\subsection{Functors: definition and examples\label{subsec:Functors:-definition-and-examples}}

Separating the functionality of \textsf{``}data wrapper\textsf{''} from any other
features of a data type, we obtain:
\begin{itemize}
\item A data type with a type parameter, e.g., \lstinline!L[A]!. We will
use the notation $L^{\bullet}$ (in Scala, \lstinline!L[_]!) for
the type constructor itself when the name of the type parameter is
not needed.
\item A \index{fully parametric!function}fully parametric function \lstinline!fmap!
with type signature
\[
\text{fmap}_{L}:\left(A\rightarrow B\right)\rightarrow L^{A}\rightarrow L^{B}\quad.
\]
\item Two laws obeyed by the function \lstinline!fmap!:
\begin{align}
{\color{greenunder}\text{identity law of }L:}\quad & \text{fmap}_{L}(\text{id}^{:A\rightarrow A})=\text{id}^{:L^{A}\rightarrow L^{A}}\quad,\label{eq:f-identity-law-functor-fmap}\\
{\color{greenunder}\text{composition law of }L:}\quad & \text{fmap}_{L}(f^{:A\rightarrow B}\bef g^{:B\rightarrow C})=\text{fmap}_{L}(f^{:A\rightarrow B})\bef\text{fmap}_{L}(g^{:B\rightarrow C})\quad.\label{eq:f-composition-law-functor-fmap}
\end{align}
\end{itemize}
A type constructor $L^{\bullet}$ with these properties is called
a \textbf{functor}\index{functor}. The laws~(\ref{eq:f-identity-law-functor-fmap})\textendash (\ref{eq:f-composition-law-functor-fmap})
are the functor laws of identity and composition. 

When a law involves function compositions, it is helpful to draw a
type diagram\index{type diagram} to clarify how the functions transform
various types involved in the law. A \textbf{type diagram}\index{type diagram}
is a directed graph whose vertices are types and edges are functions
mapping one type to another. Function composition corresponds to following
a path in the diagram. A type diagram for the composition law~(\ref{eq:f-composition-law-functor-fmap})
is shown\begin{wrapfigure}{l}{0.4\columnwidth}%
\vspace{-1.9\baselineskip}
\[
\xymatrix{\xyScaleY{1.5pc}\xyScaleX{3pc} & L^{B}\ar[rd]\sp(0.6){~~\text{fmap}_{L}(g^{:B\rightarrow C})}\\
L^{A}\ar[ru]\sp(0.4){\text{fmap}_{L}(f^{:A\rightarrow B})\ ~}\ar[rr]\sb(0.5){\text{fmap}_{L}(f^{:A\rightarrow B}\bef g^{:B\rightarrow C})\ } &  & L^{C}
}
\]

\vspace{-2\baselineskip}
\end{wrapfigure}%
at left. There are two paths from $L^{A}$ to $L^{C}$; by Eq.~(\ref{eq:f-composition-law-functor-fmap}),
both paths must give the same result. Mathematicians call such diagrams
\textbf{commutative}\index{commutative diagram}.

Type diagrams are easier to read when using the \emph{forward} composition
$\left(f\bef g\right)$ because the order of edges is the same as
the order of functions in the composition. To see this, compare Eq.~(\ref{eq:f-composition-law-functor-fmap})
and the type diagram above with the same law written using the backward
composition,
\[
\text{fmap}_{L}(g^{:B\rightarrow C}\circ f^{:A\rightarrow B})=\text{fmap}_{L}(g^{:B\rightarrow C})\circ\text{fmap}_{L}(f^{:A\rightarrow B})\quad.
\]

The function \lstinline!map! is equivalent to \lstinline!fmap! and
can be defined through \lstinline!fmap! by
\begin{align*}
 & \text{map}_{L}:L^{A}\rightarrow\left(A\rightarrow B\right)\rightarrow L^{B}\quad,\\
 & \text{map}_{L}(x^{:L^{A}})(f^{:A\rightarrow B})=\text{fmap}_{L}(f^{:A\rightarrow B})(x^{:L^{A}})\quad.
\end{align*}
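This equivalence can be written out in Scala; as a sketch, here is \lstinline!map! defined through \lstinline!fmap! for the \lstinline!Option! functor:
\begin{lstlisting}
def fmap[A, B](f: A => B): Option[A] => Option[B] = {
  case None      => None
  case Some(x)   => Some(f(x))
}

// Define `map` by flipping the order of the curried arguments of `fmap`.
def map[A, B](x: Option[A])(f: A => B): Option[B] = fmap(f)(x)

assert(map(Some(5))(_ * 2) == Some(10))
\end{lstlisting}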

Each of the type constructors \lstinline!Option!, \lstinline!Seq!,
\lstinline!Try!, and \lstinline!Future! has its own definition of
\lstinline!map!; but the functor laws remain the same. We use the
subscript $L$ when writing $\text{map}_{L}$ and $\text{fmap}_{L}$,
in order to indicate clearly the type constructor those functions
work with.

We will now look at some examples of type constructors that are functors.

\paragraph{Standard data structures}

Many type constructors defined in the Scala library have a \lstinline!map!
method, and almost all of them are functors. The most often used functors
are:
\begin{itemize}
\item The standard disjunctive types \lstinline!Option!, \lstinline!Try!,
and \lstinline!Either[A, B]! (where, by default, transformations
apply to the type parameter \lstinline!B!).
\item The linear sequence \lstinline!Seq! and its various derived classes
such as \lstinline!List!, \lstinline!Range!, \lstinline!Vector!,
\lstinline!IndexedSeq!, and \lstinline!Stream!.
\item The \textsf{``}task-like\textsf{''} constructors: \lstinline!Future! and its alternatives:
\lstinline!Task! (provided by the \texttt{monix} library), \lstinline!Async!
and \lstinline!Concurrent! (provided by the \texttt{cats-effect}
library), \lstinline!ZIO! (provided by the \texttt{zio} library).
\item Dictionaries: \lstinline!Map[K, V]! with respect to the type parameter
\lstinline!V!. The method is called \lstinline!mapValues! instead
of \lstinline!map!: it transforms the values in the dictionary, leaving
the keys unchanged.
\end{itemize}
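A few of these behaviors can be checked directly in code. Note that, since Scala 2.13, \lstinline!mapValues! is available via the dictionary's \lstinline!.view! (the exact method location differs between Scala versions):
\begin{lstlisting}
// `Either[A, B].map` transforms the type parameter `B` by default.
assert(Right(2).map((x: Int) => x + 1) == Right(3))
assert((Left("error"): Either[String, Int]).map(_ + 1) == Left("error"))

// `Map[K, V]` transforms values while keeping the keys unchanged.
assert(Map("a" -> 1, "b" -> 2).view.mapValues(_ * 10).toMap == Map("a" -> 10, "b" -> 20))
\end{lstlisting}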
Application-specific, custom type constructors defined by the programmer,
such as case classes with type parameters, are often functors. Their
structure is simple and helps build intuition for functors, so let
us now consider some examples of case classes that are functors. In
this book, they are called polynomial functors.

\paragraph{Polynomial functors}

\index{types!polynomial type constructors}Type constructors built
with primitive types, type parameters, products, and disjunctions
(or \textsf{``}sums\textsf{''}) are often used to represent application-specific data.
Consider the code
\begin{lstlisting}
final case class Counted[A](n: Int, a: A) {
  def map[B](f: A => B): Counted[B] = Counted(n, f(a))
}
\end{lstlisting}
The data type \lstinline!Counted[A]! may be used to describe \lstinline!n!
repetitions of a given value \lstinline!a: A!. The code already defines
the  method \lstinline!map! for the \lstinline!Counted! class, which
can be used like this,
\begin{lstlisting}
scala> Counted(10, "abc").map(s => "prefix " + s)
res0: Counted[String] = Counted(10,prefix abc) 
\end{lstlisting}
It is often more convenient to implement \lstinline!map! as a class
method rather than as a separate function such as
\begin{lstlisting}
def map[A, B](c: Counted[A])(f: A => B): Counted[B] = c match {
  case Counted(n, a) => Counted(n, f(a))
}
\end{lstlisting}

The type notation for \lstinline!Counted! is
\[
\text{Counted}^{A}\triangleq\text{Int}\times A\quad,
\]
showing that \lstinline!Counted[_]! is a polynomial type constructor.
The existence of a \lstinline!map! method suggests that \lstinline!Counted[_]!
is a functor. We still need to check that the functor laws hold for
it.

\subsubsection{Example \label{subsec:f-Example-Int-x-A}\ref{subsec:f-Example-Int-x-A}\index{solved examples}}

Verify that the above implementation of \lstinline!map! for \lstinline!Counted!
satisfies the functor laws. 

\subparagraph{Solution}

The implementation of \lstinline!map! is fully parametric since it
does not perform any type-specific operations; it uses the value \lstinline!n:Int!
as if \lstinline!Int! were a type parameter. It remains to check
that the laws hold. We will first verify the laws using the Scala
syntax and then using the code notation.

The identity law means that for all \lstinline!n: Int! and \lstinline!a: A!
we must have
\begin{lstlisting}
Counted(n, a).map(identity) == Counted(n, a)
\end{lstlisting}
To verify this, we substitute the code of \lstinline!map! and find
\begin{lstlisting}
Counted(n, a).map(identity) == Counted(n, identity(a)) == Counted(n, a)
\end{lstlisting}

The composition law means that for all \lstinline!n: Int!, \lstinline!a: A!,
\lstinline!f: A => B!, and \lstinline!g: B => C!, we must have
\begin{lstlisting}
Counted(n, a).map(f).map(g) == Counted(n, a).map(f andThen g)
\end{lstlisting}
Substitute the Scala code of \lstinline!map! into the left-hand side:
\begin{lstlisting}
Counted(n, a).map(f).map(g) == Counted(n, f(a)).map(g) == Counted(n, g(f(a)))
\end{lstlisting}
The right-hand side can be transformed to the same expression:
\begin{lstlisting}
Counted(n, a).map(f andThen g) == Counted(n, (f andThen g)(a)) == Counted(n, g(f(a)))
\end{lstlisting}
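These substitutions can also be verified by running the code on sample values (an executable sketch with arbitrarily chosen functions):
\begin{lstlisting}
final case class Counted[A](n: Int, a: A) {
  def map[B](f: A => B): Counted[B] = Counted(n, f(a))
}
val f: Int => Int = _ + 1
val g: Int => Int = _ * 2

assert(Counted(10, 5).map(identity) == Counted(10, 5))                    // Identity law.
assert(Counted(10, 5).map(f).map(g) == Counted(10, 5).map(f andThen g))  // Composition law.
\end{lstlisting}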

Let us now write a proof in the code notation, formulating the laws
via the \lstinline!fmap! method:
\[
\text{fmap}_{\text{Counted}}(f^{:A\rightarrow B})\triangleq\big(n^{:\text{Int}}\times a^{:A}\rightarrow n\times f(a)\big)\quad.
\]
To verify the identity law, we write
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{fmap}_{\text{Counted}}(\text{id})\\
{\color{greenunder}\text{definition of }\text{fmap}_{\text{Counted}}:}\quad & =\big(n\times a\rightarrow n\times\gunderline{\text{id}\,(a)}\big)\\
{\color{greenunder}\text{definition of }\text{id}:}\quad & =\left(n\times a\rightarrow n\times a\right)=\text{id}\quad.
\end{align*}
To verify the composition law,
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{fmap}_{\text{Counted}}(f\bef g):}\quad & \text{fmap}_{\text{Counted}}(f)\bef\text{fmap}_{\text{Counted}}(g)\\
{\color{greenunder}\text{definition of }\text{fmap}_{\text{Counted}}:}\quad & =\left(n\times a\rightarrow n\times f(a)\right)\bef\left(n\times b\rightarrow n\times g(b)\right)\\
{\color{greenunder}\text{compute composition}:}\quad & =n\times a\rightarrow n\times\gunderline{g(f(a))}\\
{\color{greenunder}\text{definition of }\left(f\bef g\right):}\quad & =\left(n\times a\rightarrow n\times(f\bef g)(a)\right)=\text{fmap}_{\text{Counted}}(f\bef g)\quad.
\end{align*}

We will prove later that all polynomial type constructors have a definition
of \lstinline!map! that satisfies the functor laws. It will be clear
without proof that our definition of \lstinline!map! for \lstinline!Counted!
is correct. 

What would be an \emph{incorrect} implementation of \lstinline!map!?
As an example, \lstinline!map! could transform \lstinline!Counted(n, a)!
as before, except that the value \lstinline!n! is now used to count
the number of times \lstinline!map! is applied:
\begin{lstlisting}
def map_bad[A, B](c: Counted[A])(f: A => B): Counted[B] = c match {
  case Counted(n, a) => Counted(n + 1, f(a))
}
\end{lstlisting}
This implementation may appear reasonable. However, it violates both
functor laws; for instance,
\begin{lstlisting}
Counted(n, a) != map_bad(Counted(n, a))(identity) == Counted(n + 1, a)
\end{lstlisting}
The failure of functor laws leads to surprising behavior because a
code refactoring changes the result:
\begin{lstlisting}
map_bad(map_bad(Counted(n, a))(incr))(incr) != map_bad(Counted(n, a))(x => x + 2)
\end{lstlisting}

Let us look at some other simple examples of polynomial type constructors.

\subsubsection{Example \label{subsec:f-Example-A-A-A}\ref{subsec:f-Example-A-A-A}}

Implement the \lstinline!fmap! function for the type constructor
\begin{lstlisting}
case class Vec3[A](x: A, y: A, z: A)
\end{lstlisting}


\subparagraph{Solution}

Begin by implementing a fully parametric function:
\begin{lstlisting}
def fmap[A, B](f: A => B): Vec3[A] => Vec3[B] = {
  case Vec3(x, y, z) => Vec3(f(x), f(y), f(z))  // Apply `f` to all data of type `A`.
}
\end{lstlisting}
Since all three values \lstinline!f(x)!, \lstinline!f(y)!, \lstinline!f(z)!
have type \lstinline!B!, the code of \lstinline!fmap! would still
satisfy the required type signature by returning, say, \lstinline!Vec3(f(z), f(x), f(x))!
or some other combination of these values. However, that implementation
does not preserve information about the values \lstinline!x!, \lstinline!y!,
\lstinline!z! and about the ordering of these values in the original
data \lstinline!Vec3(x, y, z)!. For this reason, we use the implementation
of \lstinline!fmap! shown first.

The type notation for the type constructor \lstinline!Vec3[_]! is
\[
\text{Vec}_{3}{}^{A}\triangleq A\times A\times A\quad,
\]
and the code notation for \lstinline!fmap! is
\[
\text{fmap}_{\text{Vec}_{3}}(f^{:A\rightarrow B})\triangleq x^{:A}\times y^{:A}\times z^{:A}\rightarrow f(x)\times f(y)\times f(z)\quad.
\]
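A short usage example for this \lstinline!fmap! (a sketch with a sample function chosen for illustration):
\begin{lstlisting}
case class Vec3[A](x: A, y: A, z: A)

def fmap[A, B](f: A => B): Vec3[A] => Vec3[B] = {
  case Vec3(x, y, z) => Vec3(f(x), f(y), f(z))
}

assert(fmap((n: Int) => n * n)(Vec3(1, 2, 3)) == Vec3(1, 4, 9))
// The identity law holds: the values and their ordering are preserved.
assert(fmap(identity[Int])(Vec3(1, 2, 3)) == Vec3(1, 2, 3))
\end{lstlisting}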


\subsubsection{Example \label{subsec:f-Example-P+QxA}\ref{subsec:f-Example-P+QxA}}

Implement the \lstinline!fmap! function for the type constructor
\[
\text{QueryResult}^{A}\triangleq\text{String}+\text{String}\times\text{Long}\times A\quad.
\]


\subparagraph{Solution}

Begin by implementing the type constructor in Scala,
\begin{lstlisting}
sealed trait QueryResult[A]
case class Error[A](message: String)                     extends QueryResult[A]
case class Success[A](name: String, time: Long, data: A) extends QueryResult[A]
\end{lstlisting}
Now implement a fully parametric, information-preserving function
with the type signature of \lstinline!fmap! for this type constructor:
\begin{lstlisting}
def fmap[A, B](f: A => B): QueryResult[A] => QueryResult[B] = {
  case Error(message)              => Error(message)
  case Success(name, time, data)   => Success(name, time, f(data))
}
\end{lstlisting}
As in the previous example, we treat specific types (\lstinline!Long!,
\lstinline!String!) as if they were type parameters. In this way,
we obtain a correct implementation of \lstinline!fmap! that satisfies
the functor laws.
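A usage example for this \lstinline!fmap! (a sketch; the sample query names and values are chosen for illustration): the function is applied to the data in a \lstinline!Success! and passes an \lstinline!Error! through unchanged.
\begin{lstlisting}
sealed trait QueryResult[A]
case class Error[A](message: String)                     extends QueryResult[A]
case class Success[A](name: String, time: Long, data: A) extends QueryResult[A]

def fmap[A, B](f: A => B): QueryResult[A] => QueryResult[B] = {
  case Error(message)              => Error(message)
  case Success(name, time, data)   => Success(name, time, f(data))
}

assert(fmap((n: Int) => n.toString)(Success("q1", 100L, 42)) == Success("q1", 100L, "42"))
assert(fmap((n: Int) => n.toString)(Error[Int]("timeout")) == Error[String]("timeout"))
\end{lstlisting}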

\paragraph{Recursive polynomial functors}

Recursive disjunctive type constructors shown in Section~\ref{sec:Lists-and-trees:recursive-disjunctive-types},
such as lists and trees, are functors. Their \lstinline!fmap! methods
are recursive functions; they usually \emph{cannot} be directly implemented
with tail recursion. 

\subsubsection{Example \label{subsec:Example-rec-poly-functor}\ref{subsec:Example-rec-poly-functor}}

Define a list of \emph{odd} length as a recursive type $\text{LO}^{\bullet}$,
\begin{align}
\text{LO}^{A} & \triangleq A+A\times A\times\text{LO}^{A}\label{eq:f-lo-def}\\
 & \cong A+A\times A\times A+A\times A\times A\times A\times A+...\nonumber 
\end{align}
and implement \lstinline!fmap! for it.

\subparagraph{Solution}

The Scala definition of the type constructor \lstinline!LO[_]! is

\begin{lstlisting}
sealed trait LO[A]
final case class LO1[A](x: A)                    extends LO[A]
final case class LO2[A](x: A, y: A, tail: LO[A]) extends LO[A]
\end{lstlisting}

We can implement \lstinline!fmap! as a recursive function:
\begin{lstlisting}
def fmap[A, B](f: A => B): LO[A] => LO[B] = {
  case LO1(x)            => LO1[B](f(x))
  case LO2(x, y, tail)   => LO2[B](f(x), f(y), fmap(f)(tail))
}
\end{lstlisting}
This code for \lstinline!fmap! is not tail-recursive because \lstinline!fmap!
is called inside the case class constructor \lstinline!LO2!. 
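As a usage sketch, here is \lstinline!fmap! applied to a list of odd length $3$ (sample values chosen for illustration):
\begin{lstlisting}
sealed trait LO[A]
final case class LO1[A](x: A)                    extends LO[A]
final case class LO2[A](x: A, y: A, tail: LO[A]) extends LO[A]

def fmap[A, B](f: A => B): LO[A] => LO[B] = {
  case LO1(x)            => LO1[B](f(x))
  case LO2(x, y, tail)   => LO2[B](f(x), f(y), fmap(f)(tail))
}

// The list [1, 2, 3] of odd length, with each element incremented by 10.
val lo: LO[Int] = LO2(1, 2, LO1(3))
assert(fmap((n: Int) => n + 10)(lo) == LO2(11, 12, LO1(13)))
\end{lstlisting}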

The type constructor $\text{LO}^{\bullet}$ is a \textbf{recursive}
\index{polynomial functor!recursive}\textbf{polynomial} \textbf{functor}
because it is defined by a recursive type equation~(\ref{eq:f-lo-def})
that uses only polynomial type operations (sums and products) in its
right-hand side. For the same reason, lists and trees are recursive
polynomial functors.

\subsection{Functor block expressions\index{functor block}}

Computations with wrapped values often require a chain of \lstinline!map!
methods, e.g.
\begin{lstlisting}
scala> val result = Map(1 -> "one", 2 -> "two", 3 -> "three").
  map { case (i, name) => (i * i, name) }.         // Compute i * i.
  map { case (x, name) => (x, s"$name * $name") }. // Compute product message.
  map { case (x, product) => s"$product is $x" }   // Compute final message.
result: Seq[String] = List(one * one is 1, two * two is 4, three * three is 9)
\end{lstlisting}
Such code can be rewritten equivalently in the \index{functor block}\textbf{functor
block} syntax:
\begin{lstlisting}
val result = for {
  (i, name) <- Map(1 -> "one", 2 -> "two", 3 -> "three") // For each (i, name)...
  x = i * i                    // define `x` by computing i * i...
  product = s"$name * $name"   // define `product`...
} yield s"$product is $x"      // and put these expressions into the `result` sequence.
result: Seq[String] = List(one * one is 1, two * two is 4, three * three is 9) 
\end{lstlisting}
Written in this way, the computations are easier to understand for
two main reasons:
\begin{itemize}
\item There is less code to read and to write: no \lstinline!map! methods,
no \lstinline!case! patterns, and fewer curly braces.
\item In a chain of \lstinline!map! methods, values such as \lstinline!name!
and \lstinline!x! need to be kept in tuples and passed from one \lstinline!map!
function to another, but any line in a functor block can directly reuse
all values defined in previous lines.
\end{itemize}
The functor block is an important idiom in functional programming
because it replaces a chain of \lstinline!map! methods (as well as
\lstinline!filter! and \lstinline!flatMap! methods, as we will see
in later chapters) by a visually clearer sequence of definitions and
expressions. Scala defines a functor block via the keywords \lstinline!for!
and \lstinline!yield!. We will see many examples of functor blocks
throughout this book. In this chapter, we only consider functor blocks
that are equivalent to a chain of \lstinline!map! operations on a
functor value \lstinline!p: L[A]!. These functor blocks can be recognized
because they contain \emph{only one} left arrow (in the first line).
Here is how to replace a chain of \lstinline!map! operations by a
functor block:
\begin{lstlisting}
p.map(x => f(x)).map(y => g(y)).map(z => h(z)) == for {
   x <- p           // The first line must contain a left arrow before a functor value `p`.
   y = f(x)         // Some computation involving `x`.
   z = g(y)         // Another computation, uses `y`.
} yield h(z)        // The `yield h(z)` replaces the inner result of the last `map`.
\end{lstlisting}
Translating functor blocks back into a chain of \lstinline!map! operations
is straightforward except for one complication: if some lines in the
functor block make use of variables defined in earlier lines, the
\lstinline!map! operations may need to create some intermediate tuples
that are not present in the functor block syntax. Consider the code
\begin{lstlisting}
val result: L[B] = for {
  x <- p           // The first line must contain a left arrow before a functor value `p`.
  y = f(x)         // Some computation involving `x`.
  z = g(x, y)      // Another computation, uses `x` and `y`.
  ...
} yield q(x, y, z) // The `yield` may use `x`, `y`, `z`, and any other defined variables.
\end{lstlisting}
The above functor block code assumes that \lstinline!q(x, y, z)!
has type \lstinline!B!, and is equivalent to
\begin{lstlisting}
val result: L[B] = p
  .map { x => (x, f(x)) }  // Create a tuple because we need to keep `x` and `f(x)`.
  .map { case (x, y) => (x, y, g(x, y)) }   // Need to keep `x`, `y`, and `g(x, y)`.
  ...
  .map { case (x, y, z) => q(x, y, z) }       // Here, we can use `x`, `y`, and `z`.
\end{lstlisting}
This code creates intermediate tuples only because the values \lstinline!x!,
\lstinline!y!, \lstinline!z! need to be used in later calculations.
The functor block code is easier to read, write, and modify. 
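
To verify this equivalence on a small runnable example (the functions \lstinline!f!, \lstinline!g!, \lstinline!q! below are arbitrary choices for illustration), we may compare both forms:
\begin{lstlisting}
def f(x: Int): Int = x + 10             // Arbitrary example functions.
def g(x: Int, y: Int): Int = x * y
def q(x: Int, y: Int, z: Int): String = s"$x, $y, $z"

val p = List(1, 2, 3)

// Functor block version: later lines reuse `x`, `y`, `z` directly.
val viaBlock: List[String] = for {
  x <- p
  y = f(x)
  z = g(x, y)
} yield q(x, y, z)

// Equivalent chain of `map` methods with intermediate tuples.
val viaMap: List[String] = p
  .map { x => (x, f(x)) }
  .map { case (x, y) => (x, y, g(x, y)) }
  .map { case (x, y, z) => q(x, y, z) }

// Both are equal to List("1, 11, 11", "2, 12, 24", "3, 13, 39").
\end{lstlisting}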

If desired, functor blocks may be written in a single line by using
semicolons to separate the individual steps:
\begin{lstlisting}
scala> for { x <- List(1, 2, 3); y = x * x; z = y + 2 } yield z
res0: List[Int] = List(3, 6, 11)
\end{lstlisting}

A confusing feature of the \lstinline!for!/\lstinline!yield! syntax
is that, at first sight, functor blocks (such as the code\begin{wrapfigure}{l}{0.36\columnwidth}%
\vspace{-0.8\baselineskip}
\begin{lstlisting}
for { x <- p; ... } yield expr(x)
\end{lstlisting}

\vspace{-0.8\baselineskip}
\end{wrapfigure}%
 shown at left) appear to compute (or to \textsf{``}yield\textsf{''}) the value \lstinline!expr(x)!.
However, this is not so. As the above examples show, if \lstinline!p!
is a sequence then the functor block also computes a \emph{sequence}.
In general, the result of a functor block is a \textsf{``}wrapped\textsf{''} value,
where the type of the \textsf{``}wrapper\textsf{''} is determined by the first line
of the functor block. The first line must have a left arrow followed
by a \textsf{``}source\index{functor block!source}\textsf{''}, which must be an expression
of a functor type, i.e., of type \lstinline!L[A]! for some functor
\lstinline!L[_]!. The result\textsf{'}s type will be \lstinline!L[B]! where
\lstinline!B! is the type of the expression after the \lstinline!yield!
keyword.

For instance, the first line of the following functor block contains
an \lstinline!Option! value, \lstinline!Some(123)!, as the \textsf{``}source\textsf{''}.
Because of that, the value of the entire functor block expression
will also be of type \lstinline!Option!:

\begin{wrapfigure}{l}{0.47\columnwidth}%
\vspace{-0.85\baselineskip}
\begin{lstlisting}
scala> for {
  x <- Some(123)   // "Source" is Option[Int].
  y = (x - 3) / 10
} yield { if (y > 0) s"Have $y" else "Error" }
res1: Option[String] = Some(Have 12)
\end{lstlisting}

\vspace{-1.2\baselineskip}
\end{wrapfigure}%

\noindent In this code, the \lstinline!yield! keyword is followed
by an expression of type \lstinline!String!. So, the result of the
entire functor block is of type \lstinline!Option[String]!. Note
that the expression after the \textsf{``}\lstinline!yield!\textsf{''} can be a block
of arbitrary code containing new \lstinline!val!s, new \lstinline!def!s,
and/or other \lstinline!for!/\lstinline!yield! functor blocks if
needed.

Functor blocks can be used with any functor that has a \lstinline!map!
method, not only with library-defined type constructors such as \lstinline!Seq!
or \lstinline!Option!. Here are some examples of defining the \lstinline!map!
methods and using functor blocks with disjunctive types.

The type constructor \lstinline!QueryResult[_]! may define the \lstinline!map!
method on the trait itself and split its implementation between the
case classes like this:
\begin{lstlisting}
sealed trait QueryResult[A] {
  def map[B](f: A => B): QueryResult[B] // No implementation here.
}
case class Error[A](message: String)                       extends QueryResult[A] {
  def map[B](f: A => B): QueryResult[B] = Error(message)
}
case class Success[A](name: String, time: Long, data: A)   extends QueryResult[A] {
  def map[B](f: A => B): QueryResult[B] = Success(name, time, f(data))
}
\end{lstlisting}
After these definitions, we can use \lstinline!QueryResult! in functor
blocks:
\begin{lstlisting}
val q: QueryResult[Int] = Success("addresses", 123456L, 10)
scala> val result = for {
  x <- q
  y = x + 2
} yield s"$y addresses instead of $x"
result: QueryResult[String] = Success(addresses,123456,12 addresses instead of 10)
\end{lstlisting}

As another example, let us define the \lstinline!map! method on the
\lstinline!LO! trait (a recursive disjunctive type):
\begin{lstlisting}
sealed trait LO[A] {
  def map[B](f: A => B): LO[B]
}
final case class LO1[A](x: A)                      extends LO[A] {
  def map[B](f: A => B): LO[B] = LO1[B](f(x))
}
final case class LO2[A](x: A, y: A, tail: LO[A])   extends LO[A] {
  def map[B](f: A => B): LO[B] = LO2[B](f(x), f(y), tail.map(f))
}
\end{lstlisting}
After these definitions, we may use values of type \lstinline!LO[_]!
in functor blocks:
\begin{lstlisting}
scala> val result = for {
         x <- LO2("a", "quick", LO2("brown", "fox", LO1("jumped")))
         y = x.capitalize
         z = y + "/"
       } yield (z, z.length)
result: LO[(String, Int)] = LO2((A/,2),(Quick/,6),LO2((Brown/,6),(Fox/,4),LO1((Jumped/,7))))
\end{lstlisting}


\paragraph{Functor blocks and functor laws}

There is an important connection between the functor laws and the
properties of code in functor blocks. Consider the following code,
\begin{lstlisting}
def f(x: Int) = x * x    // Some computations.
def g(x: Int) = x - 1    // More computations.

scala> for {
  x <- List(10, 20, 30)
  y = x
  z = f(y)   // Perform computations.
} yield g(z)
res0: List[Int] = List(99, 399, 899)
\end{lstlisting}
The code says that \lstinline!y = x!, so it appears reasonable to
eliminate \lstinline!y! and simplify this code into
\begin{lstlisting}
scala> for {
  x <- List(10, 20, 30)    // Eliminated `y` from the code.
  z = f(x)   // Perform computations.
} yield g(z)
res1: List[Int] = List(99, 399, 899)
\end{lstlisting}
Another example of refactoring that appears reasonable is to combine
transformations:
\begin{lstlisting}
scala> for {
  x <- List(10, 20, 30)
  y = x + 1
  z = f(y)   // Perform computations.
} yield g(z)
res2: List[Int] = List(120, 440, 960) 
\end{lstlisting}
The code says that \lstinline!y = x + 1!, so we may want to replace
\lstinline!f(y)! by \lstinline!f(x + 1)!:
\begin{lstlisting}
scala> for {
  x <- List(10, 20, 30)
  z = f(x + 1)   // Eliminated `y` from the code.
} yield g(z)
res3: List[Int] = List(120, 440, 960) 
\end{lstlisting}
Looking at these code changes, we expect that the computed results
will remain the same. Indeed, when the code directly states that \lstinline!y = x!,
it would be confusing and counter-intuitive if the result value changed
after replacing \lstinline!y! by \lstinline!x!. When the code says
that \lstinline!y = x + 1!, ordinary mathematical reasoning suggests
that \lstinline!f(y)! can be replaced by \lstinline!f(x + 1)! without
affecting the results.

\begin{table}
\begin{centering}
\begin{tabular}{|>{\centering}p{0.45\textwidth}|>{\centering}p{0.45\textwidth}|}
\hline 
\textbf{\small{}Functor block syntax} & \textbf{\small{}Chains of }\lstinline!map!\textbf{\small{} methods}\tabularnewline
\hline 
\hline 
\hspace*{-0.0278\linewidth}%
\begin{minipage}[t]{1.06\linewidth}%
\vspace{-0.86\baselineskip}
\begin{lstlisting}
for {  // Code fragment 1a.
  x <- List(10, 20, 30)
  y = x
  z = f(y)
} yield g(z)
\end{lstlisting}
\vspace{-0.25\baselineskip}
%
\end{minipage} & \hspace*{-0.0278\linewidth}%
\begin{minipage}[t]{1.06\linewidth}%
\vspace{-0.86\baselineskip}
\begin{lstlisting}
List(10, 20, 30)  // Code fragment 1b.
  .map(x =>
   x).map(y =>
   f(y) ).map(z =>
   g(z) )
\end{lstlisting}
\vspace{-0.25\baselineskip}
%
\end{minipage}\tabularnewline
\hline 
\hspace*{-0.0278\linewidth}%
\begin{minipage}[t]{1.06\linewidth}%
\vspace{-0.86\baselineskip}
\begin{lstlisting}
for {  // Code fragment 2a.
  x <- List(10, 20, 30)
  z = f(x)
} yield g(z)
\end{lstlisting}
\vspace{-0.25\baselineskip}
%
\end{minipage} & \hspace*{-0.0278\linewidth}%
\begin{minipage}[t]{1.06\linewidth}%
\vspace{-0.86\baselineskip}
\begin{lstlisting}
List(10, 20, 30)  // Code fragment 2b.
  .map(x =>
   f(x) ).map(z =>
   g(z) )
\end{lstlisting}
\vspace{-0.25\baselineskip}
%
\end{minipage}\tabularnewline
\hline 
\hspace*{-0.0278\linewidth}%
\begin{minipage}[t]{1.06\linewidth}%
\vspace{-0.86\baselineskip}
\begin{lstlisting}
for {  // Code fragment 3a.
  x <- List(10, 20, 30)
  y = x + 1
  z = f(y)
} yield g(z)
\end{lstlisting}
\vspace{-0.25\baselineskip}
%
\end{minipage} & \hspace*{-0.0278\linewidth}%
\begin{minipage}[t]{1.06\linewidth}%
\vspace{-0.86\baselineskip}
\begin{lstlisting}
List(10, 20, 30)  // Code fragment 3b.
  .map(x =>
   x + 1).map(y =>
   f(y) ).map(z =>
   g(z) )
\end{lstlisting}
\vspace{-0.25\baselineskip}
%
\end{minipage}\tabularnewline
\hline 
\hspace*{-0.0278\linewidth}%
\begin{minipage}[t]{1.06\linewidth}%
\vspace{-0.86\baselineskip}
\begin{lstlisting}
for {  // Code fragment 4a.
  x <- List(10, 20, 30)
  z = f(x + 1)
} yield g(z)
\end{lstlisting}
\vspace{-0.1\baselineskip}
%
\end{minipage} & \hspace*{-0.0278\linewidth}%
\begin{minipage}[t]{1.06\linewidth}%
\vspace{-0.86\baselineskip}
\begin{lstlisting}
List(10, 20, 30)  // Code fragment 4b.
  .map(x =>
   f(x + 1) ).map(z =>
   g(z) )
\end{lstlisting}
\vspace{-0.1\baselineskip}
%
\end{minipage}\tabularnewline
\hline 
\end{tabular}
\par\end{centering}
\caption{Example translations of functor blocks into \lstinline!map! methods.\label{tab:Example-translations-of-functor-blocks-into-map-methods}}
\end{table}

To see the connection with the functor laws, we translate the functor
block syntax line by line into chains of \lstinline!map! methods.
The resulting code fragments are shown in Table~\ref{tab:Example-translations-of-functor-blocks-into-map-methods}.
The fragments using \lstinline!map! methods were split into lines
to emphasize their close correspondence to functor blocks.

We find that code fragments \lstinline!1b! and \lstinline!2b! are
equal only if \lstinline!.map(x => x)! does not modify the list to
which it applies. This holds if the \lstinline!map! method obeys
the functor identity law, \lstinline!p.map(identity) == p!, for all
\lstinline!p! of the appropriate type. We also find that code fragments
\lstinline!3b! and \lstinline!4b! are equal if we can replace \lstinline!.map(x => x + 1).map(f)!
by \lstinline!.map(x => f(x + 1))!. This replacement is justified
as long as the \lstinline!map! method obeys the functor composition
law, 
\begin{lstlisting}
p.map(h).map(f) == p.map(x => f(h(x)))
\end{lstlisting}
for all \lstinline!p! and functions \lstinline!h! and \lstinline!f!
of appropriate types.

Functor laws guarantee that we can correctly understand and modify
code written in functor blocks, reasoning about transformations of
values as we do in mathematics.
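
These laws can be spot-checked directly in Scala (the values and functions below are arbitrary illustrations):
\begin{lstlisting}
val p = List(10, 20, 30)
val h: Int => Int = _ + 1
val f: Int => Int = x => x * x

// Identity law: mapping with the identity function changes nothing.
val identityLawHolds = p.map(identity) == p

// Composition law: mapping with `h` and then with `f` gives the same
// result as mapping once with the composed function.
val compositionLawHolds = p.map(h).map(f) == p.map(x => f(h(x)))
\end{lstlisting}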

\subsection{Examples of non-functors\label{subsec:Examples-of-non-functors}}

What properties of a data type make it a functor? To build an intuition,
it is helpful to see examples of data types that are \emph{not} functors.

There are several ways in which a type constructor can fail to be a
functor:
\begin{itemize}
\item A \lstinline!map! function\textsf{'}s type signature cannot be implemented
at all.
\item A \lstinline!map! function can be implemented but cannot satisfy
the functor laws.
\item A given \lstinline!map! function is incorrect (does not satisfy the
laws), although the error could be fixed: a different implementation
of \lstinline!map! satisfies the laws.
\item A given \lstinline!map[A, B]! function satisfies the laws for most
types \lstinline!A! and \lstinline!B!, but violates the laws for
certain specially chosen types.
\end{itemize}
We will now look at examples illustrating these possibilities.

\paragraph{Cannot implement \texttt{map}\textsf{'}s type signature}

Consider the type constructor $H^{\bullet}$ defined by
\[
H^{A}\triangleq A\rightarrow\text{Int}\quad.
\]
Scala code for this type notation can be
\begin{lstlisting}
final case class H[A](r: A => Int)
\end{lstlisting}
The data type \lstinline!H[A]! does not wrap data of type $A$; instead,
it is a function that \emph{consumes} data of type $A$. One cannot
implement a fully parametric \lstinline!map! function with the required
type signature 
\[
\text{map}^{A,B}:\left(A\rightarrow\text{Int}\right)\rightarrow\left(A\rightarrow B\right)\rightarrow\left(B\rightarrow\text{Int}\right)\quad.
\]
To see this, recall that a \index{fully parametric!function}fully
parametric function needs to treat all types as type parameters, including
the primitive type \lstinline!Int!. So the code
\begin{lstlisting}
def map[A, B]: H[A] => (A => B) => H[B] = { r => f => H(_ => 123) }
\end{lstlisting}
satisfies the type signature of \lstinline!map! but is not fully
parametric because it returns a specific value \lstinline!123! of
type \lstinline!Int!, which is not allowed. Replacing the type \lstinline!Int!
by a new type parameter $N$, we obtain the type signature
\[
\text{map}^{A,B,N}:\left(A\rightarrow N\right)\rightarrow\left(A\rightarrow B\right)\rightarrow B\rightarrow N\quad.
\]
We have seen in Example~\ref{subsec:ch-solvedExample-6} that this
type signature is not implementable. So, the type constructor $H$
is not a functor.

Another important class of type constructors for which the \lstinline!map!\textsf{'}s
type signature cannot be implemented consists of certain type constructors
called \index{generalized algebraic data types}\textbf{generalized
algebraic data types} (GADTs\index{GADT}). In this book, they are
called \textbf{unfunctors}\index{unfunctor} for short. An unfunctor
is a type constructor having special values when its type parameter
is set to certain specific types. An example of an unfunctor is
\begin{lstlisting}
sealed trait ServerAction[R]
final case class GetResult[R](r: String => R) extends ServerAction[R]
final case class StoreId(x: Long, y: String)  extends ServerAction[Boolean]
final case class StoreName(name: String)      extends ServerAction[Int]
\end{lstlisting}
We see that some parts of the disjunctive type \lstinline!ServerAction[R]!
do not carry the type parameter \lstinline!R! but instead set \lstinline!R!
to specific types, \lstinline!R = Boolean! and \lstinline!R = Int!.
As a consequence, e.g., the case class \lstinline!StoreName! has
no type parameters and can only represent values of type \lstinline!ServerAction[Int]!
but not, say, \lstinline!ServerAction[String]!. For this reason,
\lstinline!ServerAction[A]! cannot have a fully parametric \lstinline!map!
function, 
\begin{lstlisting}
def map[A, B]: ServerAction[A] => (A => B) => ServerAction[B]
\end{lstlisting}
To implement \lstinline!map!, we are required to support any choice
of the type parameters \lstinline!A! and \lstinline!B!. For example,
with \lstinline!A = Int!, we must be able to transform the value
\lstinline!StoreName("abc")! of type \lstinline!ServerAction[Int]!
to a value of type \lstinline!ServerAction[B]! with any given \lstinline!B!.
However, the only way of creating a value of type \lstinline!ServerAction[B]!
with an arbitrary type \lstinline!B! is to use the case class \lstinline!GetResult[B]!.
That requires us to create a function of type \lstinline!String => B!.
It is impossible for us to produce such a function out of \lstinline!StoreName("abc")!
and a function \lstinline!f: Int => B! because the type \lstinline!B!
is unknown, and no fully parametric code could compute any values
of type \lstinline!Int! or of type \lstinline!B! from the given
value \lstinline!StoreName("abc")!. 

We are prevented from implementing \lstinline!map! because some type
parameters are already set in the definition of \lstinline!ServerAction[R]!.
One can say that the unfunctor \lstinline!ServerAction[_]! fails
to be fully parametric \emph{in its type definition}. This behavior
of unfunctors is intentional; unfunctors are only used in situations
where the lack of \lstinline!map! does not lead to problems (see
Chapter~\ref{chap:Free-type-constructions}).
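
Although unfunctors have no lawful \lstinline!map!, their values can still be consumed: a typical use is an interpreter function that pattern-matches on the case classes, since inside each \lstinline!case! the compiler knows the specific type assigned to \lstinline!R!. Here is a hypothetical sketch of such an interpreter (the function \lstinline!run! and its behavior are illustrative assumptions, not part of any real server API):
\begin{lstlisting}
sealed trait ServerAction[R]
final case class GetResult[R](r: String => R) extends ServerAction[R]
final case class StoreId(x: Long, y: String)  extends ServerAction[Boolean]
final case class StoreName(name: String)      extends ServerAction[Int]

// A hypothetical interpreter: each `case` fixes the type R.
def run[R](action: ServerAction[R]): R = action match {
  case GetResult(r)  => r("response")   // R remains arbitrary here.
  case StoreId(x, y) => true            // Here the compiler knows R = Boolean.
  case StoreName(n)  => n.length        // Here the compiler knows R = Int.
}
\end{lstlisting}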

\paragraph{Cannot implement a lawful \texttt{map}}

An example of a non-functor of the second kind is 
\[
Q^{A}\triangleq\left(A\rightarrow\text{Int}\right)\times A\quad.
\]
Scala code for this type constructor is
\begin{lstlisting}
final case class Q[A](q: A => Int, a: A)
\end{lstlisting}
A fully parametric \lstinline!map! function with the correct type
signature \emph{can} be implemented (and there is only one such implementation):
\[
\text{map}^{A,B}\triangleq q^{:A\rightarrow\text{Int}}\times a^{:A}\rightarrow f^{:A\rightarrow B}\rightarrow(\_\rightarrow q(a))^{:B\rightarrow\text{Int}}\times f(a)\quad.
\]
The corresponding Scala code is

\begin{wrapfigure}{l}{0.55\columnwidth}%
\vspace{-0.6\baselineskip}

\begin{lstlisting}
def map[A, B]: Q[A] => (A => B) => Q[B] = { qa => f =>
  Q[B](_ => qa.q(qa.a), f(qa.a)) 
}
\end{lstlisting}
\vspace{-0.8\baselineskip}
\end{wrapfigure}%

\noindent This \lstinline!map! function is fully parametric (since
it treats the type \lstinline!Int! as a type parameter) and has the
right type signature, but the functor laws do not hold. To show that
the identity law fails, we consider an arbitrary value $q^{:A\rightarrow\text{Int}}\times a^{:A}$
and compute:
\begin{align*}
{\color{greenunder}\text{expect to equal }q\times a:}\quad & \text{map}\,(q\times a)(\text{id})\\
{\color{greenunder}\text{definition of }\text{map}:}\quad & =(\_\rightarrow q(a))\times\gunderline{\text{id}\,(a)}\\
{\color{greenunder}\text{definition of }\text{id}:}\quad & =(\_\rightarrow q(a))\times a\\
{\color{greenunder}\text{expanded function, }q=\left(x\rightarrow q(x)\right):}\quad & \quad\neq q\times a=(x\rightarrow q(x))\times a\quad.
\end{align*}
The law must hold for arbitrary functions $q^{:A\rightarrow\text{Int}}$,
but the function $\left(\_\rightarrow q(a)\right)$ always returns
the same value $q(a)$ and thus is not equal to the original function
$q$. So, the result of evaluating the expression $\text{map}(q\times a)(\text{id})$
is not always equal to the original value $q\times a$. 

Since this \lstinline!map! function is the only available implementation
of the required type signature, we conclude that $Q^{\bullet}$ is
not a functor (we cannot implement \lstinline!map! that satisfies
the laws).
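
We can also observe this law violation at runtime: after mapping with the identity function, the stored function becomes a constant function, which a test can detect (the sample values below are chosen for illustration):
\begin{lstlisting}
final case class Q[A](q: A => Int, a: A)

def map[A, B]: Q[A] => (A => B) => Q[B] = { qa => f =>
  Q[B](_ => qa.q(qa.a), f(qa.a))
}

val original = Q[Int](q = x => x * 10, a = 3)
val mapped = map[Int, Int](original)(identity)

// The wrapped value is preserved, but the stored function now ignores
// its argument and always returns q(a) = 30.
val brokenIdentityLaw = original.q(100) == 1000 && mapped.q(100) == 30
\end{lstlisting}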

\paragraph{Mistakes in implementing \texttt{map}}

Non-functors of the third kind are type constructors with an incorrectly
implemented \lstinline!map!. An example is a type constructor $P^{A}\triangleq A\times A$
with the \lstinline!map! function
\[
\text{map}\triangleq x^{:A}\times y^{:A}\rightarrow f^{:A\rightarrow B}\rightarrow f(y)\times f(x)\quad.
\]
Here is the Scala code corresponding to this code notation:
\begin{lstlisting}
def map[A, B](p: (A, A))(f: A => B): (B, B) = p match { case (x, y) => (f(y), f(x)) }
\end{lstlisting}
This code swaps the values in the pair \lstinline!(x, y)!; we could
say that it fails to preserve information about the order of those
values. The functor identity law does not hold:
\begin{align*}
{\color{greenunder}\text{expect to equal }x\times y:}\quad & \text{map}\,(x^{:A}\times y^{:A})(\text{id}^{A})\\
{\color{greenunder}\text{definition of }\text{map}:}\quad & =\gunderline{\text{id}\,(y)}\times\gunderline{\text{id}\,(x)}\\
{\color{greenunder}\text{definition of }\text{id}:}\quad & =y\times x\neq x\times y\quad.
\end{align*}
We should not have swapped the values in the pair. The correct implementation
of \lstinline!map!,
\[
\text{map}\triangleq x^{:A}\times y^{:A}\rightarrow f^{:A\rightarrow B}\rightarrow f(x)\times f(y)\quad,
\]
preserves information and satisfies the functor laws.
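
A minimal check distinguishes the incorrect implementation from the correct one (values chosen for illustration):
\begin{lstlisting}
// Incorrect: swaps the values in the pair.
def mapBad[A, B](p: (A, A))(f: A => B): (B, B) = p match { case (x, y) => (f(y), f(x)) }
// Correct: preserves the order of the values.
def mapGood[A, B](p: (A, A))(f: A => B): (B, B) = p match { case (x, y) => (f(x), f(y)) }

val pair = (1, 2)
val badResult = mapBad(pair)(identity[Int])    // (2, 1): identity law fails.
val goodResult = mapGood(pair)(identity[Int])  // (1, 2): identity law holds.
\end{lstlisting}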

Example \ref{subsec:f-Example-A-A-A} shows the type constructor $\text{Vec}_{3}{}^{\bullet}$
with an incorrect implementation of \lstinline!map! that reorders
some parts of a tuple and duplicates other parts. The correct implementation
preserves the order of parts in a tuple and does not duplicate or
omit any parts.

Another case of an incorrect implementation is the following \lstinline!map!
function for \lstinline!Option[_]!:
\begin{lstlisting}
def map_bad[A, B]: Option[A] => (A => B) => Option[B] = { _ => _ => None }
\end{lstlisting}
This function always returns \lstinline!None!, losing information
and violating the identity law. However, we have already seen that
\lstinline!Option[_]! has a different implementation of \lstinline!map!
that satisfies the functor laws.

Similarly, one could define \lstinline!map! for the \lstinline!List[_]!
type constructor to always return an empty list:
\begin{lstlisting}
def map_bad[A, B]: List[A] => (A => B) => List[B] = { _ => _ => List() }
\end{lstlisting}
This implementation loses information and violates the functor laws.
Of course, the Scala library provides a correct implementation of
\lstinline!map! for \lstinline!List[_]!.

Example~\ref{subsec:f-Example-Int-x-A} is another situation where
an incorrectly implemented \lstinline!map! violates functor laws.

Functor laws will also be violated when \lstinline!map! is not fully
parametric. For instance, consider an implementation of \lstinline!fmap[A, B](f)!
that checks whether the two type parameters $A$ and $B$ are equal
to each other \emph{as types}, and if so, applies the function argument
\lstinline!f! twice. We need to use special features of Scala (run-time
\index{type reflection}type reflection and \lstinline!TypeTag!)
for comparing two type parameters as types:
\begin{lstlisting}
import scala.reflect.runtime.universe._
def getType[T: TypeTag]: Type = weakTypeOf[T]
def equalTypes[A: TypeTag, B: TypeTag]: Boolean = getType[A] =:= getType[B]

def fmap_bad[A: TypeTag, B: TypeTag](f: A => B)(oa: Option[A]): Option[B] = oa match {
  case None      =>   None
  case Some(x)   =>                   // If A = B, compute f(f(x)), else compute f(x).
    val z: B = if (equalTypes[A, B]) f(f(x).asInstanceOf[A]) else f(x)
    Some(z)
}
\end{lstlisting}
Testing shows that this function works as designed:
\begin{lstlisting}
scala> fmap_bad[Int, String](_ + " a")(Some(123))       // Appends " a" once.
res0: Option[String] = Some(123 a)

scala> fmap_bad[String, String](_ + " a")(Some("123"))  // Appends " a" twice.
res1: Option[String] = Some(123 a a)
\end{lstlisting}
The function \lstinline!fmap_bad[A, B]! satisfies the identity law
but violates the composition law when \lstinline!A = B!:
\begin{lstlisting}
scala> fmap_bad[String, String](_ + " b")(Some("123 a a"))
res2: Option[String] = Some(123 a a b b)

scala> fmap_bad[String, String](_ + " a b")(Some("123"))
res3: Option[String] = Some(123 a b a b)
\end{lstlisting}

In all these examples, we \emph{could} implement a \lstinline!map!
function that would obey the laws. It is not precise to say that,
e.g., the type constructor \lstinline!Vec3[_]! is \emph{by itself}
a functor: being a functor depends on having a lawful \lstinline!map!
function. Keeping that in mind, we will say that the type constructor
\lstinline!Vec3[_]! \textsf{``}is\textsf{''} a functor, meaning that a suitable lawful
implementation of \lstinline!map! is known.

\paragraph{Laws hold for some types but not for others}

The Scala standard library contains \lstinline!map! methods for the
type constructors \lstinline!Set! (transforming the values in a set)
and \lstinline!Map! (transforming both the keys and values in a dictionary).
However, \lstinline!Set[K]! and \lstinline!Map[K, V]! fail to be
lawful functors with respect to the type parameter \lstinline!K!.
The reason for this failure is subtle. A value of type \lstinline!Set[K]!
represents a set of zero or more values of type \lstinline!K!, and
it is enforced that all values in the set are distinct. So, the correct
functionality of \lstinline!Set! requires us to be able to check
whether two values of type \lstinline!K! are equal. A standard
way of comparing values for equality is the \lstinline!equals! method
defined in the Scala library:
\begin{lstlisting}
scala>  List(1, 2, 3).equals(List(1, 2, 3))
res0: Boolean = true

scala>  List(1, 2, 3).equals(List(1, 2, 3, 4))
res1: Boolean = false
\end{lstlisting}
However, an \lstinline!equals! operation will work as expected only
if it obeys the \label{par:label-equality-laws}laws of \textbf{identity}\index{identity laws!of equality|textit}
(if $x=y$ then $f(x)=f(y)$ for any $f$), \textbf{symmetry} (if
$x=y$ then $y=x$)\index{symmetry law of equality}, \textbf{reflexivity}\index{reflexivity law|textit}
($x=x$ for any $x$), and \textbf{transitivity}\index{transitivity law of equality}
(if $x=y$ and $y=z$ then $x=z$). In most practical applications,
the required type \lstinline!K! (such as \lstinline!String! or \lstinline!Int!)
will have a lawful \lstinline!equals! method. In some cases, however,
data types could redefine their \lstinline!equals! method for application-specific
purposes and violate some of the required laws.

Here are two examples of law-breaking (but potentially useful) code
for \lstinline!equals!. The first example\footnote{This example is based on a comment by \index{Pawel@Pawe\l{} Szulc}Pawe\l{}
Szulc at \texttt{\href{https://gist.github.com/tpolecat/7401433}{https://gist.github.com/tpolecat/7401433}}} is a disjunctive type $A+B$ whose \lstinline!equals! method allows
only values of type $A+\bbnum 0$ to be equal:
\begin{lstlisting}
final case class OnlyA[A, B](eab: Either[A, B]) {
  override def equals(y: Any): Boolean = (eab, y) match {
    case (Left(a1), OnlyA(Left(a2)))   => a1 == a2  // Values Left(a1) and Left(a2) might be equal.
    case _                             => false     // Never equal unless both are `Left`.
  }
}
\end{lstlisting}
This implementation of \lstinline!equals! is mathematically invalid:
it violates the reflexivity law ($\forall x.\,x=x$) because values
of the form \lstinline!OnlyA! are never equal to each other:
\begin{lstlisting}
scala> OnlyA(Right(0)) equals OnlyA(Right(0))
res2: Boolean = false
\end{lstlisting}
As a result, the library code of \lstinline!Set[OnlyA]! will fail
to detect that, e.g., several values \lstinline!OnlyA(Right(0))!
are equal. The composition law of functors will fail when intermediate
values of that type are used:
\begin{lstlisting}
val f: OnlyA[Int, Int] => Int = { case OnlyA(Left(a)) => a; case OnlyA(Right(a)) => a }
val g: Int => OnlyA[Int, Int] = { a => OnlyA(Right(a)) }
val xs = Seq(0, 0, 0).map(g).toSet

scala> xs.map(f andThen g)  // `Set` fails to detect identical values.
res3: Set[OnlyA[Int,Int]] = Set(OnlyA(Right(0)), OnlyA(Right(0)), OnlyA(Right(0)))

scala> xs.map(f).map(g)     // `Set` detects identical values.
res4: Set[OnlyA[Int,Int]] = Set(OnlyA(Right(0)))
\end{lstlisting}

The second example is a product type $A\times B$ whose \lstinline!equals!
method ignores the part of type $B$:
\begin{lstlisting}
final case class IgnoreB[A, B](a: A, b: B) {
  override def equals(y: Any): Boolean = y match {
    case IgnoreB(a2, b2)   => a == a2  // Equal as long as the parts of type A are equal.
    case _                 => false    // Never equal to a value of another type (not IgnoreB).
  }
}

scala> IgnoreB(123, "abc") == IgnoreB(123, "def")
res5: Boolean = true
\end{lstlisting}
As a result, Scala\textsf{'}s library code of \lstinline!Set[IgnoreB]! will
fail to detect that some values are different. This violates the functor
composition law:
\begin{lstlisting}[mathescape=true]
val f: IgnoreB[Int, Int] => IgnoreB[Int, Int] = { case IgnoreB(x, y) => IgnoreB(y, x) }  //  ${\color{dkgreen} f \bef f = \textrm{id} }$
val xs = Set(IgnoreB(0, 0), IgnoreB(1, 0))

scala> xs.map(f andThen f)    // This is equal to `xs`.
res6: Set[IgnoreB[Int,Int]] = Set(IgnoreB(0,0), IgnoreB(1,0))

scala> xs.map(f).map(f)       // This is not equal to `xs`.
res7: Set[IgnoreB[Int,Int]] = Set(IgnoreB(0,0))
\end{lstlisting}

The functor laws for a type constructor $L^{\bullet}$ do not require
that the types $A,B$ used in the function
\[
\text{fmap}_{L}:\left(A\rightarrow B\right)\rightarrow L^{A}\rightarrow L^{B}
\]
should have a mathematically lawful definition of the \lstinline!equals!
method (or of any other operation). The \lstinline!map! method of
a functor $L^{\bullet}$ must be \textbf{lawful}\index{lawful functor}\index{functor!laws of},
i.e., must satisfy the functor laws~(\ref{eq:f-identity-law-functor-fmap})\textendash (\ref{eq:f-composition-law-functor-fmap})
for all types $A,B$. The functor laws must hold even if a type $A$\textsf{'}s
implementation of some operations violates some other laws. For this
reason, \lstinline!Set[_]! cannot be considered a functor in a rigorous
sense.

The \lstinline!map! method for dictionaries has a similar problem:
the keys of a dictionary must be distinct and will be compared using
the \lstinline!equals! method. So, the \lstinline!map! method for
\lstinline!Map[K, V]! will violate the functor laws unless the type
\lstinline!K! has a lawful \lstinline!equals! method.

The Scala standard library still provides the \lstinline!map! and
\lstinline!flatMap! methods for sets \lstinline!Set[K]! and dictionaries
\lstinline!Map[K, V]! because most applications will use types \lstinline!K!
that have lawful \lstinline!equals! operations, and the functor laws
will hold.

\subsection{Contrafunctors\label{subsec:Contrafunctors}}

As we have seen in Section~\ref{subsec:Examples-of-non-functors},
the type constructor $H^{\bullet}$ defined by $H^{A}\triangleq A\rightarrow\text{Int}$
is not a functor because it is impossible to implement the type signature
of \lstinline!map! as a fully parametric function,
\[
\text{map}^{A,B}:\left(A\rightarrow\text{Int}\right)\rightarrow\left(A\rightarrow B\right)\rightarrow B\rightarrow\text{Int}\quad.
\]
To see why, begin writing the code with a typed hole, 
\[
\text{map}\,(h^{:A\rightarrow\text{Int}})(f^{:A\rightarrow B})(b^{:B})=\text{???}^{:\text{Int}}\quad.
\]
The only way of returning an \lstinline!Int! in fully parametric
code is by applying the function $h^{:A\rightarrow\text{Int}}$. Since
$h$ consumes (rather than wraps) values of type $A$, we have no
values of type $A$ and cannot apply the function $h^{:A\rightarrow\text{Int}}$.
However, it would be possible to apply a function of type $B\rightarrow A$
since a value of type $B$ is given as one of the curried arguments,
$b^{:B}$. So, we can implement a function called \lstinline!contramap!
with a different type signature where the function type is $B\rightarrow A$
instead of $A\rightarrow B$: 
\[
\text{contramap}^{A,B}:\left(A\rightarrow\text{Int}\right)\rightarrow\left(B\rightarrow A\right)\rightarrow B\rightarrow\text{Int}\quad.
\]
The implementation of this function is written in the code notation
as
\[
\text{contramap}\triangleq h^{:A\rightarrow\text{Int}}\rightarrow f^{:B\rightarrow A}\rightarrow\left(f\bef h\right)^{:B\rightarrow\text{Int}}\quad,
\]
and the corresponding Scala code is
\begin{lstlisting}
def contramap[A, B](h: H[A])(f: B => A): H[B] = { f andThen h }
\end{lstlisting}
Flipping the order of the curried arguments in \lstinline!contramap!,
we define \lstinline!cmap! as
\begin{align}
\text{cmap}^{A,B} & :\left(B\rightarrow A\right)\rightarrow H^{A}\rightarrow H^{B}\quad,\nonumber \\
\text{cmap} & \triangleq f^{:B\rightarrow A}\rightarrow h^{:A\rightarrow\text{Int}}\rightarrow\left(f\bef h\right)^{:B\rightarrow\text{Int}}\quad.\label{eq:f-example-1-contrafmap}
\end{align}
The type signature of \lstinline!cmap! has the form of a \textsf{``}reverse
lifting\textsf{''}: functions of type \lstinline!B => A! are lifted into
the type \lstinline!H[A] => H[B]!. The Scala code for \lstinline!cmap!
is
\begin{lstlisting}
def cmap[A, B](f: B => A): H[A] => H[B] = { h => f andThen h } 
\end{lstlisting}
We can check that this \lstinline!cmap! satisfies two laws analogous
to the functor laws\index{composition law!of contrafunctors}\index{identity laws!of contrafunctors}:
\begin{align*}
{\color{greenunder}\text{identity law}:}\quad & \text{cmap}^{A,A}(\text{id}^{:A\rightarrow A})=\text{id}^{:H^{A}\rightarrow H^{A}}\quad,\\
{\color{greenunder}\text{composition law}:}\quad & \text{cmap}^{A,B}(f^{:B\rightarrow A})\bef\text{cmap}^{B,C}(g^{:C\rightarrow B})=\text{cmap}^{A,C}(g\bef f)\quad.
\end{align*}

\begin{wrapfigure}{l}{0.4\columnwidth}%
\vspace{-2\baselineskip}
\[
\xymatrix{\xyScaleY{1.5pc}\xyScaleX{3pc} & H^{B}\ar[rd]\sp(0.6){\ ~\text{cmap}_{H}(g^{:C\rightarrow B})}\\
H^{A}\ar[ru]\sp(0.4){\text{cmap}_{H}(f^{:B\rightarrow A})\ }\ar[rr]\sb(0.5){\text{cmap}_{H}(g^{:C\rightarrow B}\bef f^{:B\rightarrow A})\ ~} &  & H^{C}
}
\]

\vspace{-2\baselineskip}
\end{wrapfigure}%

\noindent Since the function argument $f^{:B\rightarrow A}$ has the
reverse order of types, the composition law reverses the order of
composition $\left(g\bef f\right)$ on one side; in this way, all
types match. To verify the identity law:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{cmap}\left(\text{id}\right)\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-example-1-contrafmap})}:}\quad & =h\rightarrow\gunderline{(\text{id}\bef h)}\\
{\color{greenunder}\text{definition of }\text{id}:}\quad & =\left(h\rightarrow h\right)=\text{id}\quad.
\end{align*}
To verify the composition law:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{cmap}\left(g\bef f\right):}\quad & \text{cmap}\left(f\right)\bef\text{cmap}\left(g\right)\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-example-1-contrafmap})}:}\quad & =\left(h\rightarrow(f\bef h)\right)\bef(\gunderline h\rightarrow(g\bef\gunderline h))\\
{\color{greenunder}\text{rename }h\text{ to }k\text{ for clarity}:}\quad & =\left(h\rightarrow(f\bef h)\right)\bef\left(k\rightarrow(g\bef k)\right)\\
{\color{greenunder}\text{compute composition}:}\quad & =\left(h\rightarrow g\bef f\bef h\right)\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-example-1-contrafmap})}:}\quad & =\text{cmap}\left(g\bef f\right)\quad.
\end{align*}

A type constructor with a fully parametric \lstinline!cmap! is called
a \textbf{contrafunctor}\index{contrafunctor} if the identity and
the composition laws are satisfied.
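These laws can also be spot-checked by running the code on sample values.
The following sketch uses the \lstinline!cmap! defined above for
$H^{A}\triangleq A\rightarrow\text{Int}$; the test functions are chosen
arbitrarily:
\begin{lstlisting}
type H[A] = A => Int
def cmap[A, B](f: B => A): H[A] => H[B] = { h => f andThen h }

val h: H[String] = _.length
val f: Boolean => String = b => if (b) "yes" else "no"
val g: Int => Boolean = _ > 0

// Identity law, checked at a sample point.
assert(cmap(identity[String])(h)("abc") == h("abc"))
// Composition law: cmap(f) andThen cmap(g) equals cmap(g andThen f).
assert((cmap(f) andThen cmap(g))(h)(5) == cmap(g andThen f)(h)(5))
\end{lstlisting}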

\subsubsection{Example \label{subsec:f-Example-contrafunctor}\ref{subsec:f-Example-contrafunctor}}

Show that the type constructor $D^{A}\triangleq A\rightarrow A\rightarrow\text{Int}$
is a contrafunctor.

\subparagraph{Solution}

The required type signature for \lstinline!contramap! is
\begin{lstlisting}
def contramap[A, B](d: A => A => Int)(f: B => A): B => B => Int = ???
\end{lstlisting}
We begin implementing \lstinline!contramap! by writing code with
a typed hole:
\[
\text{contramap}^{A,B}\triangleq d^{:A\rightarrow A\rightarrow\text{Int}}\rightarrow f^{:B\rightarrow A}\rightarrow b_{1}^{:B}\rightarrow b_{2}^{:B}\rightarrow\text{???}^{:\text{Int}}\quad.
\]
To fill the typed hole, we need to compute a value of type \lstinline!Int!.
The only possibility is to apply $d$ to two curried arguments of
type $A$. We have two curried arguments of type $B$. So we apply
$f^{:B\rightarrow A}$ to those arguments, obtaining two values of
type $A$. To avoid information loss, we need to preserve the order
of the curried arguments. So the resulting expression is
\[
\text{contramap}^{A,B}\triangleq d^{:A\rightarrow A\rightarrow\text{Int}}\rightarrow f^{:B\rightarrow A}\rightarrow b_{1}^{:B}\rightarrow b_{2}^{:B}\rightarrow d\left(f(b_{1})\right)\left(f(b_{2})\right)\quad.
\]
The corresponding Scala code is 
\begin{lstlisting}
def contramap[A, B](d: A => A => Int)(f: B => A): B => B => Int = { b1 => b2 => d(f(b1))(f(b2)) }
\end{lstlisting}
To verify the laws, it is easier to use the equivalent \lstinline!cmap!
defined by
\begin{equation}
\text{cmap}^{A,B}(f^{:B\rightarrow A})\triangleq d^{:A\rightarrow A\rightarrow\text{Int}}\rightarrow b_{1}^{:B}\rightarrow b_{2}^{:B}\rightarrow d\left(f(b_{1})\right)\left(f(b_{2})\right)\quad.\label{eq:f-example-2-contrafmap}
\end{equation}
To verify the identity law:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{cmap}\left(\text{id}\right)\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-example-2-contrafmap})}:}\quad & =d\rightarrow b_{1}\rightarrow b_{2}\rightarrow d\gunderline{\left(\text{id}\,(b_{1})\right)}\gunderline{\left(\text{id}\,(b_{2})\right)}\\
{\color{greenunder}\text{definition of }\text{id}:}\quad & =d\rightarrow\gunderline{b_{1}\rightarrow b_{2}\rightarrow d(b_{1})(b_{2})}\\
{\color{greenunder}\text{simplify curried function}:}\quad & =\left(d\rightarrow d\right)=\text{id}\quad.
\end{align*}
To verify the composition law, we rewrite its left-hand side into
the right-hand side:
\begin{align*}
 & \text{cmap}\,(f)\bef\text{cmap}\,(g)\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-example-2-contrafmap})}:}\quad & =\left(d\rightarrow b_{1}\rightarrow b_{2}\rightarrow d\left(f(b_{1})\right)\left(f(b_{2})\right)\right)\bef(\gunderline d\rightarrow b_{1}\rightarrow b_{2}\rightarrow\gunderline d\left(g(b_{1})\right)\left(g(b_{2})\right))\\
{\color{greenunder}\text{rename }d\text{ to }e:}\quad & =\left(d\rightarrow b_{1}\rightarrow b_{2}\rightarrow d\left(f(b_{1})\right)\left(f(b_{2})\right)\right)\bef\left(e\rightarrow b_{1}\rightarrow b_{2}\rightarrow e\left(g(b_{1})\right)\left(g(b_{2})\right)\right)\\
{\color{greenunder}\text{compute composition}:}\quad & =d\rightarrow b_{1}\rightarrow b_{2}\rightarrow d\left(f(g(b_{1}))\right)\left(f(g(b_{2}))\right)\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-example-2-contrafmap})}:}\quad & =\text{cmap}\,(b\rightarrow f(g(b)))\\
{\color{greenunder}\text{definition of }\left(g\bef f\right):}\quad & =\text{cmap}\,(g\bef f)\quad.
\end{align*}
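The symbolic derivations can be spot-checked by running the code on sample
values (the test functions below are chosen arbitrarily):
\begin{lstlisting}
type D[A] = A => A => Int
def contramap[A, B](d: D[A])(f: B => A): D[B] = { b1 => b2 => d(f(b1))(f(b2)) }

val d: D[String] = s1 => s2 => s1.length - s2.length
val f: Int => String = n => "x" * n
val g: Boolean => Int = b => if (b) 3 else 1

// Identity law at a sample point.
assert(contramap(d)(identity[String])("ab")("abcd") == d("ab")("abcd"))
// Composition law: contramap twice equals contramap of the composed function.
val lhs: D[Boolean] = contramap(contramap(d)(f))(g)
val rhs: D[Boolean] = contramap(d)(g andThen f)
assert(lhs(true)(false) == rhs(true)(false))
\end{lstlisting}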

The type $H^{A}$ represents a function that consumes a value of type
$A$ to produce an integer; the type $D^{A}$ represents a curried
function consuming \emph{two} values of type $A$. These examples
suggest the heuristic view that contrafunctors \textsf{``}consume\textsf{''} data
while functors \textsf{``}wrap\textsf{''} data. By looking at the position of a given
type parameter in a type expression such as $A\times\text{Int}$ or
$A\rightarrow A\rightarrow\text{Int}$, we can see whether the type
parameter is \textsf{``}consumed\textsf{''} or \textsf{``}wrapped\textsf{''}: A type parameter to
the left of a function arrow is being \textsf{``}consumed\textsf{''}; a type parameter
to the right of a function arrow (or used without a function arrow)
is being \textsf{``}wrapped\textsf{''}. We will make this intuition precise in Section~\ref{sec:f-Laws-and-structure}.

\paragraph{Type constructors that are not contrafunctors }

A type constructor that both consumes \emph{and} wraps data is neither
a functor nor a contrafunctor. An example of such a type constructor
is
\[
N^{A}\triangleq\left(A\rightarrow\text{Int}\right)\times\left(\bbnum 1+A\right)\quad.
\]
We can implement neither \lstinline!map! nor \lstinline!contramap!
for $N^{\bullet}$. Intuitively, the type parameter $A$ is used both
to the left of a function arrow (being \textsf{``}consumed\textsf{''}) and outside
of a function (being \textsf{``}wrapped\textsf{''}).
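In Scala, this type constructor could be written as follows (the field names
are chosen arbitrarily):
\begin{lstlisting}
// N[A] both consumes A (in `consume`) and wraps A (in `wrapped`).
final case class N[A](consume: A => Int, wrapped: Option[A])

// Creating values of N[A] is easy, but neither `map` nor `contramap`
// has a fully parametric implementation.
val n: N[Int] = N(_ + 1, Some(5))
\end{lstlisting}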

Unfunctors\index{unfunctor} (type constructors that lack full parametricity)
also cannot be contrafunctors because the required type signature
for \lstinline!contramap! cannot be implemented by a fully parametric
function. To show that \lstinline!ServerAction[_]! cannot be a contrafunctor,
we can straightforwardly adapt the reasoning used in Section~\ref{subsec:Examples-of-non-functors}
when we showed that \lstinline!ServerAction[_]! cannot be a functor.

\subsection{Subtyping, covariance, and contravariance}

A type $P$ is called a \textbf{subtype}\index{types!subtype of}
of a type $Q$ if there exists a designated\index{type conversion function}
\textbf{type conversion} function of type $P\rightarrow Q$ that the
compiler will automatically use whenever necessary to match types.
For instance, applying a function of type $Q\rightarrow Z$ to a value
of type $P$ is ordinarily a type error,
\begin{lstlisting}
val h: Q => Z = ???
val p: P = ???
h(p) // Type error: the argument of h must be of type Q, not P.
\end{lstlisting}
However, this code will work when $P$ is a subtype of $Q$ because
the compiler will automatically use the type conversion $P\rightarrow Q$
before applying the function \lstinline!h!.

Different programming languages define subtyping differently because
they make different choices of the type conversion functions and of
types $P$, $Q$ to which type conversions apply. Most often, the
language designers choose the type conversion functions to be \emph{identity}
functions that merely reassign the types. Let us look at some examples
of type conversion functions of that kind. 

Within the focus of this book, the main example of subtyping involves
disjunctive types. Consider this definition,
\begin{lstlisting}
sealed trait AtMostTwo
final case class Zero()               extends AtMostTwo
final case class One(x: Int)          extends AtMostTwo
final case class Two(x: Int, y: Int)  extends AtMostTwo
\end{lstlisting}
The corresponding type notation can be written as
\[
\text{AtMostTwo}\triangleq\bbnum 1+\text{Int}+\text{Int}\times\text{Int}\quad.
\]
Each of the case classes (\lstinline!Zero!, \lstinline!One!, and
\lstinline!Two!) defines a type that is a subtype of \lstinline!AtMostTwo!.
To see that, we need to implement type conversion functions from each
of the three case classes to \lstinline!AtMostTwo!. The required
functions reassign the types but perform no transformations on the
data:
\begin{lstlisting}
def f0: Zero => AtMostTwo  = { case Zero()    => Zero()    }
def f1: One  => AtMostTwo  = { case One(x)    => One(x)    }
def f2: Two  => AtMostTwo  = { case Two(x, y) => Two(x, y) }
\end{lstlisting}
The implementation of these type conversion functions looks like the
code of \emph{identity} functions. In the matrix notation, we can
write
\begin{align*}
f_{0} & \triangleq\,\begin{array}{|c||ccc|}
 & \text{Zero} & \text{One} & \text{Two}\\
\hline \text{Zero} & \text{id} & \bbnum 0 & \bbnum 0
\end{array}\quad,\quad\quad f_{0}(\bbnum 1^{:\text{Zero}})\triangleq\bbnum 1^{:\text{Zero}}+\bbnum 0^{:\text{One}}+\bbnum 0^{:\text{Two}}\quad,\\
f_{1} & \triangleq\,\,\begin{array}{|c||ccc|}
 & \text{Zero} & \text{One} & \text{Two}\\
\hline \text{One} & \bbnum 0 & \text{id} & \bbnum 0
\end{array}\quad,\quad\quad f_{1}(x^{:\text{Int}})\triangleq\bbnum 0^{:\text{Zero}}+x^{:\text{One}}+\bbnum 0^{:\text{Two}}\quad,\\
f_{2} & \triangleq\,\begin{array}{|c||ccc|}
 & \text{Zero} & \text{One} & \text{Two}\\
\hline \text{Two} & \bbnum 0 & \bbnum 0 & \text{id}
\end{array}\quad,\quad\quad f_{2}(x^{:\text{Int}}\times y^{:\text{Int}})\triangleq\bbnum 0^{:\text{Zero}}+\bbnum 0^{:\text{One}}+(x\times y)^{:\text{Two}}\quad.
\end{align*}
This notation emphasizes that the code consists of identity functions
with reassigned types.

Another example is a subtyping relation between function types. Consider
the types
\begin{lstlisting}
type P = (AtMostTwo => Int)
type Q = (Two => Int)
\end{lstlisting}
We can convert a function $f$ of type $P$ into a function $g$ of
type $Q$ because $f$ includes all the information necessary to define
$g$. The Scala code for that type conversion is
\begin{lstlisting}
def p2q(f: P): Q = { t: Two => f(t) }
\end{lstlisting}
This is written in the code notation as
\[
\text{p2q}\,(f^{:\text{AtMostTwo}\rightarrow\text{Int}})\triangleq t^{:\text{Two}}\rightarrow f(t)\quad.
\]
Note that $t^{:\text{Two}}\rightarrow f(t)$ is the same function
as $f$, except applied to a subtype \lstinline!Two! of \lstinline!AtMostTwo!.
So, the implementation of \lstinline!p2q(f)! is just \lstinline!f!
composed with an identity function with reassigned types.

In these cases, it would be useful if the compiler inserted the appropriate
conversion functions automatically whenever necessary. Any function
that consumes an argument of type $Q$ could then be automatically
applied to arguments of type $P$. The compiler could also remove
the identity functions from the code, since they do not perform any
data transformations. In this way, code involving subtypes becomes
more concise with no decrease in performance.

To achieve this, we need to declare to the Scala compiler that certain
types are in a subtyping relation. This can be done in one of three
ways depending on the situation at hand:
\begin{enumerate}
\item Declaring a class that \lstinline!extends! another class (as we have
just seen).
\item Declaring type parameters with a \textsf{``}variance annotation\textsf{''} such as
\lstinline!L[+A]! or \lstinline!L[-B]!.
\item Declaring type parameters with a \textsf{``}subtyping annotation\textsf{''} (\lstinline!A <: B!).
\end{enumerate}

\paragraph{Subtyping for disjunctive types }

A function with argument of type \lstinline!AtMostTwo! can be applied
to a value of type \lstinline!Two! with no additional code written
by the programmer:

\begin{wrapfigure}{l}{0.4\columnwidth}%
\vspace{-0.5\baselineskip}
\begin{lstlisting}
def head: AtMostTwo => Option[Int] = {
  case Zero()      => None
  case One(x)      => Some(x)
  case Two(x, y)   => Some(x)
}

scala> head(Two(10, 20))
res0: Option[Int] = Some(10)
\end{lstlisting}

\vspace{-1\baselineskip}
\end{wrapfigure}%
We may imagine that the compiler automatically used the type conversion
function \lstinline!f2! shown above to convert a value of the type
\lstinline!Two! into a value of the type \lstinline!AtMostTwo!.
Since the code of \lstinline!f2! is equivalent to an identity function,
the type conversion does not change any data and only reassigns the
types of the given values. So the compiler does not need to insert
any additional code, and the type conversion does not lead to any
decrease in performance.

\paragraph{Subtyping for type constructors}

If a type constructor $L^{A}$ is a functor, we can use its $\text{fmap}_{L}$
method to lift a type conversion function $f:P\rightarrow Q$ into
\[
\text{fmap}_{L}(f):L^{P}\rightarrow L^{Q}\quad,
\]
which gives a type conversion function from $L^{P}$ to $L^{Q}$.
This gives a subtyping relation between the types $L^{P}$ and $L^{Q}$
because the code of the lifted function $\text{fmap}_{L}(f)$ is an
identity function, due to functor $L$\textsf{'}s identity law, $\text{fmap}_{L}(\text{id})=\text{id}$. 

If a type constructor $H^{A}$ is a contrafunctor, a type conversion
function $f^{:P\rightarrow Q}$ is lifted to 
\[
\text{cmap}_{H}(f):H^{Q}\rightarrow H^{P}\quad,
\]
showing that $H^{Q}$ is a subtype of $H^{P}$. The identity law of
the contrafunctor $H$, 
\[
\text{cmap}_{H}(\text{id})=\text{id}\quad,
\]
shows that the lifted conversion function is an identity function
with reassigned types.

A type constructor $F$ is called \textbf{covariant}\index{type constructor!covariant}
if $F^{A}$ is a subtype of $F^{B}$ whenever $A$ is a subtype of
$B$. A \textbf{contravariant}\index{type constructor!contravariant}
type constructor $H$ has the subtype relation in the opposite direction:
$H^{B}$ is a subtype of $H^{A}$. In principle, all functors could
be declared as covariant type constructors, and all contrafunctors
as contravariant type constructors.\footnote{The name \textsf{``}\textbf{contrafunctor}\textsf{''}\index{contrafunctor} is used
in this book as a shortened form of \textsf{``}\index{contravariant functor!see \textsf{``}contrafunctor\textsf{''}}contravariant
functor\textsf{''}.} However, the Scala compiler does not automatically determine whether
a given type constructor \lstinline!F[A]! is covariant with respect
to a given type parameter \lstinline!A!. To indicate the covariance
property, the programmer needs to use a \index{variance annotation}\textbf{variance
annotation}, which looks like \lstinline!F[+A]!, on the relevant
type parameters. For example, the type constructor \lstinline!Counted[A]!
defined in Section~\ref{subsec:Functors:-definition-and-examples}
is a functor and so is covariant in its type parameter \lstinline!A!.
If we use the variance annotation \lstinline!Counted[+A]! in the
definition, Scala will automatically consider the type \lstinline!Counted[Two]!
as a subtype of \lstinline!Counted[AtMostTwo]!. Then we will be able
to apply any function to a value of type \lstinline!Counted[Two]!
as if it had type \lstinline!Counted[AtMostTwo]!:
\begin{lstlisting}
final case class Counted[+A](n: Int, a: A)

def total(c: Counted[AtMostTwo]): Int = c match {
  case Counted(n, Zero())      => 0
  case Counted(n, One(_))      => n
  case Counted(n, Two(_, _))   => n * 2
}

scala> total(Counted(2, Two(10, 20)))
res1: Int = 4
\end{lstlisting}

The contravariance property for contrafunctors can be annotated using
the syntax \lstinline!F[-A]!.
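For example, consider a hypothetical contrafunctor \lstinline!Printer[-A]!
that wraps a consumer of values of type \lstinline!A!. Since \lstinline!Two!
is a subtype of \lstinline!AtMostTwo!, the contravariance annotation makes
\lstinline!Printer[AtMostTwo]! a subtype of \lstinline!Printer[Two]!. A
sketch (repeating the definition of \lstinline!AtMostTwo! for completeness):
\begin{lstlisting}
sealed trait AtMostTwo
final case class Zero()               extends AtMostTwo
final case class One(x: Int)          extends AtMostTwo
final case class Two(x: Int, y: Int)  extends AtMostTwo

// A contravariant type constructor: it consumes values of type A.
final case class Printer[-A](show: A => String)

val printAll: Printer[AtMostTwo] = Printer {
  case Zero()      => "zero"
  case One(x)      => s"one: $x"
  case Two(x, y)   => s"two: $x, $y"
}
// Accepted by the compiler because Printer[AtMostTwo] <: Printer[Two].
val printTwo: Printer[Two] = printAll
\end{lstlisting}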

A given type constructor may have several type parameters and may
be covariant with respect to some of them and contravariant with respect
to others. As we have seen, the position of a type parameter in a
type expression indicates whether the value is \textsf{``}wrapped\textsf{''} (used
in a \textbf{covariant position}\index{covariant position}) or \textsf{``}consumed\textsf{''}
(used in a \textbf{contravariant position}\index{contravariant position}).
Covariant positions are to the right of function arrows, or outside
function arrows; contravariant positions are to the left of a function
arrow. The next examples confirm this intuition, which will be made
rigorous in Section~\ref{sec:f-Laws-and-structure}.

\subsection{Solved examples: implementation of functors and contrafunctors\index{solved examples}}

\subsubsection{Example \label{subsec:f-Example-functors}\ref{subsec:f-Example-functors}}

Consider this implementation of \lstinline!map! for the type constructor
\lstinline!Option[_]!:

\begin{wrapfigure}{l}{0.66\columnwidth}%
\vspace{-0.85\baselineskip}
\begin{lstlisting}
def map[A, B](oa: Option[A])(f: A => B): Option[B] = oa match {
  case None           => None
  case Some(x: Int)   => Some(f((x+1).asInstanceOf[A]))
  case Some(x)        => Some(f(x))
}
\end{lstlisting}

\vspace{-1\baselineskip}
\end{wrapfigure}%

\noindent This code performs a non-standard computation if the type
parameter \lstinline!A! is set to \lstinline!Int!. Show that this
implementation of \lstinline!map! violates the functor laws.

\subparagraph{Solution}

If the type parameter \lstinline!A! is not \lstinline!Int!, or if
the argument \lstinline!oa! is \lstinline!None!, the given code
is the same as the standard (correct) implementation of \lstinline!map!
for \lstinline!Option!. The function does something non-standard
when, e.g., \lstinline!oa == Some(123)!. Substitute this value of
\lstinline!oa! into the identity law, \lstinline!map(oa)(identity) == oa!,
and compute symbolically (using Scala syntax)
\begin{lstlisting}
map(oa)(identity) == Some(identity((123+1).asInstanceOf[Int])) == Some(124) != oa
\end{lstlisting}
This shows a violation of the functor identity law.
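We can confirm the violation by running the code (the type pattern
\lstinline!Some(x: Int)! compiles with an \textsf{``}unchecked\textsf{''} warning due
to type erasure):
\begin{lstlisting}
def map[A, B](oa: Option[A])(f: A => B): Option[B] = oa match {
  case None           => None
  case Some(x: Int)   => Some(f((x + 1).asInstanceOf[A]))  // Non-standard special case.
  case Some(x)        => Some(f(x))
}

map(Some(123))(identity[Int])  // Returns Some(124); the identity law requires Some(123).
\end{lstlisting}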

\subsubsection{Example \label{subsec:f-Example-functors-1}\ref{subsec:f-Example-functors-1}}

Define case classes and implement \lstinline!fmap! for the given
type constructors:

\textbf{(a)} $\text{Data}^{A}\triangleq\text{String}+A\times\text{Int}+A\times A\times A\quad.$

\textbf{(b)} $\text{Data}^{A}\triangleq\bbnum 1+A\times(\text{Int}\times\text{String}+A)\quad.$

\textbf{(c)} $\text{Data}^{A}\triangleq(\text{String}\rightarrow\text{Int}\rightarrow A)\times A+(\text{Boolean}\rightarrow\text{Double}\rightarrow A)\times A\quad.$

\subparagraph{Solution}

\textbf{(a)} Begin by defining a case class for each part of the disjunctive
type:
\begin{lstlisting}
sealed trait Data[A] 
final case class Message[A](message: String)  extends Data[A]
final case class Have1[A](x: A, n: Int)       extends Data[A]  
final case class Have3[A](x: A, y: A, z: A)   extends Data[A]
\end{lstlisting}
The names \lstinline!Message!, \lstinline!Have1!, \lstinline!Have3!,
\lstinline!n!, \lstinline!x!, \lstinline!y!, \lstinline!z! are
chosen arbitrarily. 

The function \lstinline!fmap! must have the type signature
\[
\text{fmap}^{A,B}:f^{:A\rightarrow B}\rightarrow\text{Data}^{A}\rightarrow\text{Data}^{B}\quad.
\]
To implement \lstinline!fmap! correctly, we need to transform each
part of the disjunctive type \lstinline!Data[A]! into the corresponding
part of \lstinline!Data[B]! without loss of information. To clarify
where the transformation $f^{:A\rightarrow B}$ needs to be applied,
let us write the type notation for $\text{Data}^{A}$ and $\text{Data}^{B}$
side by side:
\begin{align*}
\text{Data}^{A} & \triangleq\text{String}+A\times\text{Int}+A\times A\times A\quad,\\
\text{Data}^{B} & \triangleq\text{String}+B\times\text{Int}+B\times B\times B\quad.
\end{align*}
Now it is clear that we need to apply $f$ to each value of type $A$
present in $\text{Data}^{A}$, preserving the order of values. The
Scala code is
\begin{lstlisting}
def fmap[A, B](f: A => B): Data[A] => Data[B] = {
  case Message(message)   => Message(message)
  case Have1(x, n)        => Have1(f(x), n)
  case Have3(x, y, z)     => Have3(f(x), f(y), f(z))
}
\end{lstlisting}
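A quick usage check of this \lstinline!fmap! on sample values (repeating the
definitions above to keep the sketch self-contained):
\begin{lstlisting}
sealed trait Data[A]
final case class Message[A](message: String)  extends Data[A]
final case class Have1[A](x: A, n: Int)       extends Data[A]
final case class Have3[A](x: A, y: A, z: A)   extends Data[A]

def fmap[A, B](f: A => B): Data[A] => Data[B] = {
  case Message(message)   => Message(message)
  case Have1(x, n)        => Have1(f(x), n)
  case Have3(x, y, z)     => Have3(f(x), f(y), f(z))
}

fmap((x: Int) => x.toString)(Have3(1, 2, 3))  // Have3("1", "2", "3")
\end{lstlisting}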

\textbf{(b)} It is convenient to define the disjunctive type $\text{Int}\times\text{String}+A$
separately as $P^{A}$:
\begin{lstlisting}
sealed trait P[A]
final case class Message[A](code: Int, message: String)  extends P[A]
final case class Value[A](x: A)                          extends P[A]
\end{lstlisting}
Now we notice that the type expression $\left(\bbnum 1+...\right)$
can be encoded via the standard \lstinline!Option! type. So, the
Scala code for $\text{Data}^{A}$ is
\begin{lstlisting}
final case class Data[A](d: Option[(A, P[A])])
\end{lstlisting}
To help us implement \lstinline!fmap! correctly, we write out the
type expressions 
\begin{align*}
\text{Data}^{A} & \triangleq\bbnum 1+A\times(\text{Int}\times\text{String}+A)\quad,\\
\text{Data}^{B} & \triangleq\bbnum 1+B\times(\text{Int}\times\text{String}+B)\quad,
\end{align*}
and transform $\text{Data}^{A}$ into $\text{Data}^{B}$ by applying
$f^{:A\rightarrow B}$ at the correct places:
\begin{lstlisting}
def fmap[A, B](f: A => B): Data[A] => Data[B] = {
  case Data(None)                                => Data(None)
  case Data(Some((x, Message(code, message))))   => Data(Some((f(x), Message(code, message))))
  case Data(Some((x, Value(y))))                 => Data(Some((f(x), Value(f(y)))))
}
\end{lstlisting}
When deeply nested patterns become hard to read, we may handle the
nested structure separately:
\begin{lstlisting}
def fmap[A, B](f: A => B): Data[A] => Data[B] = {
  case Data(None)           => Data(None)
  case Data(Some((x, p)))   =>
      val newP: P[B] = p match {
        case Message(code, message)   => Message(code, message)
        case Value(y)                 => Value(f(y))
      }
      Data(Some((f(x), newP)))
}
\end{lstlisting}

\textbf{(c)} Since the type structures $(\text{String}\rightarrow\text{Int}\rightarrow A)\times A$
and $(\text{Boolean}\rightarrow\text{Double}\rightarrow A)\times A$
have a similar pattern, let us define a parameterized type
\[
Q^{X,Y,A}\triangleq\left(X\rightarrow Y\rightarrow A\right)\times A\quad,
\]
and express the given type expression as 
\[
\text{Data}^{A}\triangleq Q^{\text{String},\text{Int},A}+Q^{\text{Boolean},\text{Double},A}\quad.
\]
It is then convenient to define \lstinline!Data[A]! using the standard
disjunctive type \lstinline!Either!: 
\begin{lstlisting}
type Q[X, Y, A] = (X => Y => A, A)
type Data[A] = Either[Q[String, Int, A], Q[Boolean, Double, A]]
\end{lstlisting}
To make the code clearer, we will implement \lstinline!fmap! separately
for $Q^{\bullet}$ and $\text{Data}^{\bullet}$.

To derive the code of \lstinline!fmap! for $Q^{\bullet}$, we begin
with the type signature
\[
\text{fmap}_{Q}^{A,B}:\left(A\rightarrow B\right)\rightarrow\left(X\rightarrow Y\rightarrow A\right)\times A\rightarrow\left(X\rightarrow Y\rightarrow B\right)\times B
\]
and start writing the code using typed holes,
\[
\text{fmap}_{Q}(f^{:A\rightarrow B})\triangleq g^{:X\rightarrow Y\rightarrow A}\times a^{:A}\rightarrow\text{???}^{:X\rightarrow Y\rightarrow B}\times\text{???}^{:B}\quad.
\]
The typed hole $\text{???}^{:B}$ is filled by $f(a)$. To fill the
remaining type hole, we write
\begin{align*}
 & \text{???}^{:X\rightarrow Y\rightarrow B}\\
 & =x^{:X}\rightarrow y^{:Y}\rightarrow\gunderline{\text{???}^{:B}}\\
 & =x^{:X}\rightarrow y^{:Y}\rightarrow f(\text{???}^{:A})\quad.
\end{align*}
It would be wrong to fill the typed hole $\text{???}^{:A}$ by $a^{:A}$
because, to preserve information, a value of type $X\rightarrow Y\rightarrow B$
should be computed using the given data $g$ of type $X\rightarrow Y\rightarrow A$.
So we write
\[
\text{???}^{:X\rightarrow Y\rightarrow B}=x^{:X}\rightarrow y^{:Y}\rightarrow f(g(x)(y))\quad.
\]
The corresponding Scala code is
\begin{lstlisting}
def fmap_Q[A, B, X, Y](f: A => B): Q[X, Y, A] => Q[X, Y, B] = {
  case (g, a) => (x => y => f(g(x)(y)), f(a))
  // Could also write the code as
  //   case (g, a) => (x => g(x) andThen f, f(a))
}
\end{lstlisting}
Finally, we can write the code for $\text{fmap}_{\text{Data}}$:
\begin{lstlisting}
def fmap_Data[A, B](f: A => B): Data[A] => Data[B] = {
  case Left(q)    => Left(fmap_Q(f)(q))
  case Right(q)   => Right(fmap_Q(f)(q))
}
\end{lstlisting}
The Scala compiler will automatically infer the type parameters required
by \lstinline!fmap_Q! and check that all types match. With all inferred
types written out, the code above would be
\begin{lstlisting}
def fmap_Data[A, B](f: A => B): Data[A] => Data[B] = {
  case Left(q: Q[String, Int, A])            =>
         Left[Q[String, Int, B]](fmap_Q[A, B, String, Int](f)(q))
  case Right(q: Q[Boolean, Double, A])       =>
         Right[Q[Boolean, Double, B]](fmap_Q[A, B, Boolean, Double](f)(q))
}
\end{lstlisting}
When types become complicated, it may help to write out some of the
type parameters in the code.
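A spot check of the combined code on sample values (repeating the definitions
for a self-contained sketch; the test values are arbitrary):
\begin{lstlisting}
type Q[X, Y, A] = (X => Y => A, A)
type Data[A] = Either[Q[String, Int, A], Q[Boolean, Double, A]]

def fmap_Q[A, B, X, Y](f: A => B): Q[X, Y, A] => Q[X, Y, B] = {
  case (g, a) => (x => y => f(g(x)(y)), f(a))
}
def fmap_Data[A, B](f: A => B): Data[A] => Data[B] = {
  case Left(q)    => Left(fmap_Q(f)(q))
  case Right(q)   => Right(fmap_Q(f)(q))
}

val d: Data[Int] = Left((s => n => s.length + n, 7))
val d2: Data[String] = fmap_Data((n: Int) => n.toString)(d)
// In d2, the function returns "13" for arguments ("abc", 10),
// and the second element of the tuple is "7".
\end{lstlisting}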

\subsubsection{Example \label{subsec:f-Example-functors-4}\ref{subsec:f-Example-functors-4}}

Decide whether these types are functors or contrafunctors, and implement
\lstinline!fmap! or \lstinline!cmap! as appropriate:

\textbf{(a)} $\text{Data}^{A}\triangleq\left(\bbnum 1+A\rightarrow\text{Int}\right)+(A\rightarrow A\times A\rightarrow\text{String})\quad.$ 

\textbf{(b)} $\text{Data}^{A,B}\triangleq\left(A+B\right)\times\left(\left(A\rightarrow\text{Int}\right)\rightarrow B\right)\quad.$

\subparagraph{Solution}

\textbf{(a)} The type constructor $\text{Data}^{A}$ is defined in
Scala as
\begin{lstlisting}
type Data[A] = Either[Option[A] => Int, A => ((A, A)) => String]
\end{lstlisting}
The type parameter $A$ is always located to the left of function
arrows. So, $\text{Data}^{A}$ \emph{consumes} values of type $A$,
and we expect that $\text{Data}^{A}$ is a contrafunctor. Indeed,
we can implement \lstinline!cmap!:
\begin{lstlisting}
def cmap[A, B](f: B => A): Data[A] => Data[B] = {
  case Left(oa2Int)        => Left(b => oa2Int(b.map(f)))
  case Right(a2aa2Str)     => Right( b1 => { case (b2, b3) => a2aa2Str(f(b1))((f(b2), f(b3))) } )
}
\end{lstlisting}
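A spot check of this \lstinline!cmap! (repeating the type definition; the
sample values are arbitrary):
\begin{lstlisting}
type Data[A] = Either[Option[A] => Int, A => ((A, A)) => String]

def cmap[A, B](f: B => A): Data[A] => Data[B] = {
  case Left(oa2Int)        => Left(b => oa2Int(b.map(f)))
  case Right(a2aa2Str)     => Right( b1 => { case (b2, b3) => a2aa2Str(f(b1))((f(b2), f(b3))) } )
}

val d: Data[Int] = Left(oa => oa.getOrElse(0))
val d2: Data[String] = cmap((s: String) => s.length)(d)
// The new consumer in d2 measures string lengths: Some("abcd") gives 4.
\end{lstlisting}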

\textbf{(b)} The type constructor $\text{Data}^{A,B}$ has two type
parameters, and so we need to answer the question separately for each
of them. Write the Scala type definition as
\begin{lstlisting}
type Data[A, B] = (Either[A, B], (A => Int) => B)
\end{lstlisting}

Begin with the type parameter $A$ and notice that a value of type
$\text{Data}^{A,B}$ possibly contains a value of type $A$ within
\lstinline!Either[A, B]!. In other words, $A$ is \textsf{``}wrapped\textsf{''},
i.e., it is in a covariant position within the first part of the tuple.
It remains to check the second part of the tuple, which is a higher-order
function of type $\left(A\rightarrow\text{Int}\right)\rightarrow B$.
That function consumes a function of type $A\rightarrow\text{Int}$,
which in turn consumes a value of type $A$. Consumers of $A$ are
contravariant in $A$, but it turns out that a \textsf{``}consumer of a consumer
of $A$\textsf{''} is \emph{covariant} in $A$. So we expect to be able to
implement \lstinline!fmap! that applies to the type parameter $A$
of $\text{Data}^{A,B}$. Renaming the type parameter $B$ to $Z$
for clarity, we write the type signature for \lstinline!fmap! like
this,
\[
\text{fmap}^{A,C,Z}:\left(A\rightarrow C\right)\rightarrow\left(A+Z\right)\times\left(\left(A\rightarrow\text{Int}\right)\rightarrow Z\right)\rightarrow\left(C+Z\right)\times\left(\left(C\rightarrow\text{Int}\right)\rightarrow Z\right)\quad.
\]
We need to transform each part of the tuple separately. Transforming
$A+Z$ into $C+Z$ is straightforward via the function
\[
\begin{array}{|c||cc|}
 & C & Z\\
\hline A & f & \bbnum 0\\
Z & \bbnum 0 & \text{id}
\end{array}\quad.
\]
This code notation corresponds to the following Scala code:
\begin{lstlisting}
{
  case Left(x)    => Left(f(x))
  case Right(z)   => Right(z)
}
\end{lstlisting}
To derive code transforming $\left(A\rightarrow\text{Int}\right)\rightarrow Z$
into $\left(C\rightarrow\text{Int}\right)\rightarrow Z$, we use typed
holes:
\begin{align*}
 & f^{:A\rightarrow C}\rightarrow g^{:\left(A\rightarrow\text{Int}\right)\rightarrow Z}\rightarrow\gunderline{\text{???}^{:\left(C\rightarrow\text{Int}\right)\rightarrow Z}}\\
{\color{greenunder}\text{nameless function}:}\quad & =f^{:A\rightarrow C}\rightarrow g^{:\left(A\rightarrow\text{Int}\right)\rightarrow Z}\rightarrow p^{:C\rightarrow\text{Int}}\rightarrow\gunderline{\text{???}^{:Z}}\\
{\color{greenunder}\text{get a }Z\text{ by applying }g:}\quad & =f^{:A\rightarrow C}\rightarrow g^{:\left(A\rightarrow\text{Int}\right)\rightarrow Z}\rightarrow p^{:C\rightarrow\text{Int}}\rightarrow g(\gunderline{\text{???}^{:A\rightarrow\text{Int}}})\\
{\color{greenunder}\text{nameless function}:}\quad & =f^{:A\rightarrow C}\rightarrow g^{:\left(A\rightarrow\text{Int}\right)\rightarrow Z}\rightarrow p^{:C\rightarrow\text{Int}}\rightarrow g(a^{:A}\rightarrow\gunderline{\text{???}^{:\text{Int}}})\\
{\color{greenunder}\text{get an Int by applying }p:}\quad & =f^{:A\rightarrow C}\rightarrow g^{:\left(A\rightarrow\text{Int}\right)\rightarrow Z}\rightarrow p^{:C\rightarrow\text{Int}}\rightarrow g(a^{:A}\rightarrow p(\gunderline{\text{???}^{:C}}))\\
{\color{greenunder}\text{get a }C\text{ by applying }f:}\quad & =f^{:A\rightarrow C}\rightarrow g^{:\left(A\rightarrow\text{Int}\right)\rightarrow Z}\rightarrow p^{:C\rightarrow\text{Int}}\rightarrow g(a^{:A}\rightarrow p(f(\gunderline{\text{???}^{:A}})))\\
{\color{greenunder}\text{use argument }a^{:A}:}\quad & =f\rightarrow g\rightarrow p\rightarrow g(a\rightarrow p(f(a)))\quad.
\end{align*}
In the resulting Scala code for \lstinline!fmap!, we write out some
types for clarity:
\begin{lstlisting}
def fmapA[A, Z, C](f: A => C): Data[A, Z] => Data[C, Z] = {
  case (e: Either[A, Z], g: ((A => Int) => Z)) =>
    val newE: Either[C, Z] = e match {
      case Left(x)    => Left(f(x))
      case Right(z)   => Right(z)
    }
    val newG: (C => Int) => Z = { p => g(a => p(f(a))) }
    (newE, newG) // This has type Data[C, Z].
}
\end{lstlisting}
This suggests that $\text{Data}^{A,Z}$ is covariant with respect
to the type parameter $A$. The results of Section~\ref{sec:f-Laws-and-structure}
will show rigorously that the functor laws hold for this implementation
of \lstinline!fmap!.
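
To see this \lstinline!fmap! at work, here is a small runnable sketch;
it assumes a type alias \lstinline!Data[A, B]! corresponding to the
type notation $\text{Data}^{A,B}$ and repeats \lstinline!fmapA! so
that the example is self-contained:
\begin{lstlisting}
// Assumed alias matching the type notation from the text.
type Data[A, B] = (Either[A, B], (A => Int) => B)

def fmapA[A, Z, C](f: A => C): Data[A, Z] => Data[C, Z] = {
  case (e, g) =>
    val newE: Either[C, Z] = e match {
      case Left(x)  => Left(f(x))
      case Right(z) => Right(z)
    }
    // Transform the "consumer of a consumer of A" covariantly.
    val newG: (C => Int) => Z = { p => g(a => p(f(a))) }
    (newE, newG)
}

val d: Data[Int, String] = (Left(10), h => h(3).toString)
val e: Data[Boolean, String] = fmapA((n: Int) => n > 5)(d)
// e._1 == Left(true), and e._2 passes `false` (since 3 > 5 is false)
// to its consumer argument: e._2(b => if (b) 1 else 0) == "0"
\end{lstlisting}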

The analysis is simpler for the type parameter $B$ because it is
only used in covariant positions, never to the left of function arrows.
So we expect $\text{Data}^{A,B}$ to be a functor with respect to
$B$. Implementing the corresponding \lstinline!fmap! is straightforward:
\begin{lstlisting}
def fmapB[Z, B, C](f: B => C): Data[Z, B] => Data[Z, C] = {
  case (e: Either[Z, B], g: ((Z => Int) => B)) =>
    val newE: Either[Z, C] = e match {
      case Left(z)    => Left(z)
      case Right(b)   => Right(f(b))
    }
    val newG: (Z => Int) => C = { p => f(g(p)) }
    (newE, newG) // This has type Data[Z, C].
}
\end{lstlisting}

The code indicates that $\text{Data}^{A,B}$ is a functor with respect
to both $A$ and $B$.

\subsubsection{Example \label{subsec:f-Example-functors-6}\ref{subsec:f-Example-functors-6}}

Rewrite the following code in the type notation; identify covariant
and contravariant type usages; verify with the Scala compiler that
the variance annotations are correct:
\begin{lstlisting}
sealed trait Coi[A, B]
final case class Pa[A, B](b: (A, B), c: B => Int)     extends Coi[A, B]
final case class Re[A, B](d: A, e: B, c: Int)         extends Coi[A, B]
final case class Ci[A, B](f: String => A, g: B => A)  extends Coi[A, B]
\end{lstlisting}


\subparagraph{Solution}

The type notation puts all parts of the disjunctive type into a single
type expression:
\[
\text{Coi}^{A,B}\triangleq A\times B\times(B\rightarrow\text{Int})+A\times B\times\text{Int}+(\text{String}\rightarrow A)\times(B\rightarrow A)\quad.
\]
Now find which types are wrapped and which are consumed in this type
expression. The type parameter $A$ is wrapped and never consumed,
but $B$ is both wrapped and consumed (in $B\rightarrow A$). So,
the type constructor \lstinline!Coi! is covariant in $A$ but neither
covariant nor contravariant in $B$. We can check this by compiling
the corresponding Scala code with variance annotations:
\begin{lstlisting}
sealed trait Coi[+A, B]
case class Pa[+A, B](b: (A, B), c: B => Int)     extends Coi[A, B]
case class Re[+A, B](d: A, e: B, c: Int)         extends Coi[A, B]
case class Ci[+A, B](f: String => A, g: B => A)  extends Coi[A, B]
\end{lstlisting}
We could also replace the fixed types \lstinline!Int! and \lstinline!String!
by type parameters \lstinline!N! and \lstinline!S!. A similar analysis
shows that \lstinline!N! is in covariant positions while \lstinline!S!
is in a contravariant position. We can then check that the Scala compiler
accepts the following type definition with variance annotations:
\begin{lstlisting}
sealed trait Coi2[+A, B, +N, -S]
case class Pa2[+A, B, +N, -S](b: (A, B), c: B => N)  extends Coi2[A, B, N, S]
case class Re2[+A, B, +N, -S](d: A, e: B, c: N)      extends Coi2[A, B, N, S]
case class Ci2[+A, B, +N, -S](f: S => A, g: B => A)  extends Coi2[A, B, N, S]
\end{lstlisting}
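
Since $A$ is covariant, an \lstinline!fmap! with respect to $A$ can
be implemented. The following sketch (using a hypothetical name \lstinline!fmapA!)
applies the given function at every position where $A$ is wrapped:
\begin{lstlisting}
sealed trait Coi[A, B]
final case class Pa[A, B](b: (A, B), c: B => Int)     extends Coi[A, B]
final case class Re[A, B](d: A, e: B, c: Int)         extends Coi[A, B]
final case class Ci[A, B](f: String => A, g: B => A)  extends Coi[A, B]

def fmapA[A, B, C](h: A => C): Coi[A, B] => Coi[C, B] = {
  case Pa((a, b), c) => Pa((h(a), b), c)
  case Re(d, e, c)   => Re(h(d), e, c)
  case Ci(f, g)      => Ci(f andThen h, g andThen h) // A occurs in result positions
}

assert(fmapA((n: Int) => n.toString)(Re(1, "x", 0)) == Re("1", "x", 0))
\end{lstlisting}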


\subsection{Exercises: implementation of functors and contrafunctors\index{exercises}}

\subsubsection{Exercise \label{subsec:f-Exercise-functors}\ref{subsec:f-Exercise-functors}}

An implementation of \lstinline!fmap! for the type constructor \lstinline!Either[A, A]!
is given as
\begin{lstlisting}
def fmap[A, B](f: A => B): Either[A, A] => Either[B, B] = {
  case Left(a)         => Right(f(a))
  case Right(a: Int)   => Left(f(a + 1))
  case Right(a)        => Left(f(a))
}
\end{lstlisting}
Show that this implementation of \lstinline!fmap! violates the functor
laws. Implement \lstinline!fmap! correctly for this type constructor
and the given type signature.

\subsubsection{Exercise \label{subsec:f-Exercise-functors-1}\ref{subsec:f-Exercise-functors-1}}

Define these type constructors in Scala, decide whether they are covariant
or contravariant, and implement \lstinline!fmap! or \lstinline!cmap!
as appropriate:

\textbf{(a)} $\text{Data}^{A}\triangleq\left(\bbnum 1+A\right)\times\left(\bbnum 1+A\right)\times\text{String}\quad.$

\textbf{(b)} $\text{Data}^{A}\triangleq(A\rightarrow\text{Boolean})\rightarrow A\times\left(\text{Int}+A\right)\quad.$

\textbf{(c)} $\text{Data}^{A,B}\triangleq(A\rightarrow\text{Boolean})\times\left(A+B\rightarrow\text{Int}\right)\quad.$

\textbf{(d)} $\text{Data}^{A}\triangleq(\bbnum 1+(A\rightarrow\text{Boolean}))\rightarrow\bbnum 1+(A\rightarrow\text{Int})\rightarrow\text{Int}\quad.$

\textbf{(e)} $\text{Data}^{B}\triangleq(B+(\text{Int}\rightarrow B))\times(B+(\text{String}\rightarrow B))\quad.$

\subsubsection{Exercise \label{subsec:f-Exercise-functors-2}\ref{subsec:f-Exercise-functors-2}}

Rewrite the following code in the type notation; find covariant and
contravariant positions of type parameters; add variance annotations
and verify that the revised code compiles:

\begin{lstlisting}
sealed trait S[A, B]
final case class P[A, B](a: A, b: B, c: Int)        extends S[A, B]
final case class Q[A, B](d: Int => A, e: Int => B)  extends S[A, B]
final case class R[A, B](f: A => A, g: A => B)      extends S[A, B]
\end{lstlisting}


\section{Laws and structure\label{sec:f-Laws-and-structure}}

A type constructor is a functor if it admits a lawful \lstinline!map!
function. How can we recognize quickly that a given type constructor
is a functor or perhaps a contrafunctor? For example, consider the
type constructor $Z^{A,R}$ defined by 
\begin{equation}
Z^{A,R}\triangleq\left(\left(A\rightarrow A\rightarrow R\right)\rightarrow R\right)\times A+\left(\bbnum 1+R\rightarrow A+\text{Int}\right)+A\times A\times\text{Int}\times\text{Int}\quad.\label{eq:f-example-complicated-z}
\end{equation}
Is $Z^{A,R}$ a functor with respect to $A$, or perhaps with respect
to $R$? To answer these questions, we will systematically build up
various type expressions for which the functor or contrafunctor laws
hold. 

\subsection{Reformulations of laws}

We begin by introducing a more convenient notation for the functor
laws. The laws~(\ref{eq:f-identity-law-functor-fmap})\textendash (\ref{eq:f-composition-law-functor-fmap})
were defined in terms of the function \lstinline!fmap!. When written
in terms of the curried function \lstinline!map!, the structure of
the laws becomes less clear:
\begin{align*}
 & \text{map}_{L}(x^{:L^{A}})(\text{id}^{:A\rightarrow A})=x\quad,\\
 & \text{map}_{L}(x^{:L^{A}})(f^{:A\rightarrow B}\bef g^{:B\rightarrow C})=\text{map}_{L}\big(\text{map}_{L}(x)(f)\big)(g)\quad.
\end{align*}
The laws again look clearer when using \lstinline!map! as a class
method:

\begin{wrapfigure}{l}{0.4\columnwidth}%
\vspace{-0.8\baselineskip}
\begin{lstlisting}
x.map(identity) == x
x.map(f).map(g) == x.map(f andThen g)
\end{lstlisting}

\vspace{-0.5\baselineskip}
\end{wrapfigure}%
To take advantage of this syntax, we can use the \index{pipe notation}pipe
notation where $x\triangleright\text{fmap}(f)$ means \lstinline!x.map(f)!,
and write the functor laws as
\begin{align*}
 & x\triangleright\text{fmap}_{L}(\text{id})=x\quad,\\
 & x\triangleright\text{fmap}_{L}(f)\triangleright\text{fmap}_{L}(g)=x\triangleright\text{fmap}_{L}(f\bef g)\quad.
\end{align*}

In later chapters of this book, we will find that the \lstinline!map!
methods (equivalently, the \lstinline!fmap! function) are used so
often in different contexts that the notation $\text{fmap}_{L}(f)$
becomes too verbose. To make code expressions visually easy to manipulate,
we need a shorter notation. At the same time, it is important to show
clearly the relevant type constructor $L$. Dropping the symbol $L$
can lead to errors, since it will sometimes be unclear which type constructors
are involved in an expression such as \lstinline!x.map(f).map(g)!
and whether we are justified in replacing that expression with \lstinline!x.map(f andThen g)!.

For these reasons, we introduce the superscript notation $^{\uparrow L}$
(pronounced \textsf{``}lifted to $L$\textsf{''}) defined, for any function $f$,
by
\[
(f^{:A\rightarrow B})^{\uparrow L}:L^{A}\rightarrow L^{B}\quad,\quad\quad f^{\uparrow L}\triangleq\text{fmap}_{L}(f)\quad.
\]
Now we can choose the notation according to convenience and write
\[
\text{map}_{L}(x)(f)=\text{fmap}_{L}(f)(x)=x\triangleright\text{fmap}_{L}(f)=x\triangleright f^{\uparrow L}=f^{\uparrow L}(x)\quad.
\]
In this notation, the identity and composition laws for a functor
$L$ are especially easy to use:
\[
\text{id}^{\uparrow L}=\text{id}\quad,\quad\quad\left(f\bef g\right)^{\uparrow L}=f^{\uparrow L}\bef g^{\uparrow L}\quad.
\]
Applying a composition of lifted functions to a value looks like this,
\[
x\triangleright\left(f\bef g\right)^{\uparrow L}=x\triangleright f^{\uparrow L}\bef g^{\uparrow L}=x\triangleright f^{\uparrow L}\triangleright g^{\uparrow L}\quad.
\]
This equation directly represents the Scala code syntax
\begin{lstlisting}
x.map(f andThen g) == ((_.map(f)) andThen (_.map(g)))(x) == x.map(f).map(g)
\end{lstlisting}
since the piping symbol $\left(\triangleright\right)$ groups weaker
than the composition symbol $\left(\bef\right)$.
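
As a quick sanity check (on sample values, not a proof), both laws
can be verified in Scala for the standard \lstinline!List! functor:
\begin{lstlisting}
val xs = List(1, 2, 3)
val f: Int => Int = _ * 10
val g: Int => String = _.toString

assert(xs.map(identity) == xs)                   // identity law
assert(xs.map(f).map(g) == xs.map(f andThen g))  // composition law
\end{lstlisting}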

Written in the \emph{backward} notation ($f\circ g$), the functor
composition law is
\[
\left(g\circ f\right)^{\uparrow L}=g^{\uparrow L}\circ f^{\uparrow L}\quad.
\]

The analogous notation for a contrafunctor $C^{\bullet}$ is
\[
f^{\downarrow C}\triangleq\text{cmap}_{C}(f)\quad.
\]
The contrafunctor laws are then written as
\[
\text{id}^{\downarrow C}=\text{id}\quad,\quad\quad\left(f\bef g\right)^{\downarrow C}=g^{\downarrow C}\bef f^{\downarrow C}\quad,\quad\quad\left(g\circ f\right)^{\downarrow C}=f^{\downarrow C}\circ g^{\downarrow C}\quad.
\]
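A minimal Scala sketch of a contrafunctor is the type of consumers
$C^{A}\triangleq A\rightarrow\text{Int}$, whose \lstinline!cmap! composes
the given function on the other side:
\begin{lstlisting}
type C[A] = A => Int

// cmap lifts f: B => A to C[A] => C[B], reversing the arrow's direction.
def cmap[A, B](f: B => A): C[A] => C[B] = { c => f andThen c }

val strLen: C[String] = _.length
val intLen: C[Int] = cmap((n: Int) => n.toString)(strLen)
assert(intLen(1234) == 4)  // counts the digits of 1234
\end{lstlisting}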

We will mostly use the forward composition $f\bef g$ in this book,
keeping in mind that one can straightforwardly and mechanically translate
between forward and backward notations via 
\[
f\bef g=g\circ f\quad,\quad\quad x\triangleright f=f(x)\quad.
\]


\subsection{Bifunctors\label{subsec:Bifunctors}}

A type constructor can be a functor with respect to several type parameters.
A \textbf{bifunctor}\index{bifunctor} is a type constructor with
\emph{two} type parameters that satisfies the functor laws with respect
to both type parameters at once.

As an example, consider the type constructor $F$ defined by
\[
F^{A,B}\triangleq A\times B\times B\quad.
\]
If we fix the type parameter $B$ but let the parameter $A$ vary,
we get a type constructor that we can denote as $F^{\bullet,B}$.
We see that the type constructor $F^{\bullet,B}$ is a functor, with
the corresponding \lstinline!fmap! function
\[
\text{fmap}_{F^{\bullet,B}}(f^{:A\rightarrow C})\triangleq a^{:A}\times b_{1}^{:B}\times b_{2}^{:B}\rightarrow f(a)\times b_{1}\times b_{2}\quad.
\]
Instead of saying that $F^{\bullet,B}$ is a functor, we can also
say more verbosely that $F^{A,B}$ is a functor with respect to $A$. 

If we now fix the type parameter $A$, we find that the type constructor
$F^{A,\bullet}$ is a functor, with the \lstinline!fmap! function
\[
\text{fmap}_{F^{A,\bullet}}(g^{:B\rightarrow D})\triangleq a^{:A}\times b_{1}^{:B}\times b_{2}^{:B}\rightarrow a\times g(b_{1})\times g(b_{2})\quad.
\]

Since the bifunctor $F^{\bullet,\bullet}$ is a functor with respect
to each type parameter separately, we can transform a value of type
$F^{A,B}$ to a value of type $F^{C,D}$ by applying the two \lstinline!fmap!
functions one after another. It is convenient to denote this transformation
by a single operation called \lstinline!bimap! that uses two functions
$f^{:A\rightarrow C}$ and $g^{:B\rightarrow D}$ as arguments:
\begin{align}
\text{bimap}_{F}(f^{:A\rightarrow C})(g^{:B\rightarrow D}) & :F^{A,B}\rightarrow F^{C,D}\quad,\nonumber \\
\text{bimap}_{F}(f^{:A\rightarrow C})(g^{:B\rightarrow D}) & \triangleq\text{fmap}_{F^{\bullet,B}}(f^{:A\rightarrow C})\bef\text{fmap}_{F^{C,\bullet}}(g^{:B\rightarrow D})\quad.\label{eq:f-definition-of-bimap}
\end{align}
In the condensed notation, this is written as
\[
\text{bimap}_{F}(f^{:A\rightarrow C})(g^{:B\rightarrow D})\triangleq f^{\uparrow F^{\bullet,B}}\bef g^{\uparrow F^{C,\bullet}}\quad,
\]
but in this case the longer notation in Eq.~(\ref{eq:f-definition-of-bimap})
is easier to reason about. 

What if we apply the two \lstinline!fmap! functions in the opposite
order? Since these functions work with different type parameters,
it is reasonable to expect that the transformation $F^{A,B}\rightarrow F^{C,D}$
should be independent of the order of application:
\begin{equation}
\text{fmap}_{F^{\bullet,B}}(f^{:A\rightarrow C})\bef\text{fmap}_{F^{C,\bullet}}(g^{:B\rightarrow D})=\text{fmap}_{F^{A,\bullet}}(g^{:B\rightarrow D})\bef\text{fmap}_{F^{\bullet,D}}(f^{:A\rightarrow C})\quad.\label{eq:f-fmap-fmap-bifunctor-commutativity}
\end{equation}
This equation is illustrated by the type diagram below.

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-1.5\baselineskip}
\[
\xymatrix{\xyScaleY{2.0pc}\xyScaleX{5.0pc} & F^{C,B}\ar[rd]\sp(0.6){\ ~~\text{fmap}_{F^{C,\bullet}}(g^{:B\rightarrow D})}\\
F^{A,B}\ar[ru]\sp(0.4){\text{fmap}_{F^{\bullet,B}}(f^{:A\rightarrow C})~~~}\ar[rd]\sb(0.5){\text{fmap}_{F^{A,\bullet}}(g^{:B\rightarrow D})~~\ }\ar[rr]\sp(0.5){\text{bimap}_{F}(f^{:A\rightarrow C})(g^{:B\rightarrow D})\ } &  & F^{C,D}\\
 & F^{A,D}\ar[ru]\sb(0.6){~~~~\text{fmap}_{F^{\bullet,D}}(f^{:A\rightarrow C})}
}
\]

\vspace{-2.2\baselineskip}
\end{wrapfigure}%
Different paths in this diagram give the same results if they arrive
at the same vertex (as mathematicians say, \textsf{``}the diagram commutes\textsf{''}).
In this way, the diagram illustrates at once the commutativity law~(\ref{eq:f-fmap-fmap-bifunctor-commutativity})
and the definition~(\ref{eq:f-definition-of-bimap}) of $\text{bimap}_{F}$.

Let us verify the commutativity law for the bifunctor\index{commutativity law!of bifunctors}
$F^{A,B}\triangleq A\times B\times B$:
\begin{align*}
{\color{greenunder}\text{left-hand side}:}\quad & \text{fmap}_{F^{\bullet,B}}(f^{:A\rightarrow C})\bef\text{fmap}_{F^{C,\bullet}}(g^{:B\rightarrow D})\\
{\color{greenunder}\text{definitions of }\text{fmap}_{F^{\bullet,\bullet}}:}\quad & \quad=(a^{:A}\times b_{1}^{:B}\times b_{2}^{:B}\rightarrow f(a)\times b_{1}\times b_{2})\bef(c^{:C}\times b_{1}^{:B}\times b_{2}^{:B}\rightarrow c\times g(b_{1})\times g(b_{2}))\\
{\color{greenunder}\text{compute composition}:}\quad & \quad=a^{:A}\times b_{1}^{:B}\times b_{2}^{:B}\rightarrow f(a)\times g(b_{1})\times g(b_{2})\quad,\\
{\color{greenunder}\text{right-hand side}:}\quad & \text{fmap}_{F^{A,\bullet}}(g^{:B\rightarrow D})\bef\text{fmap}_{F^{\bullet,D}}(f^{:A\rightarrow C})\\
{\color{greenunder}\text{definitions of }\text{fmap}_{F^{\bullet,\bullet}}:}\quad & \quad=(a^{:A}\times b_{1}^{:B}\times b_{2}^{:B}\rightarrow a\times g(b_{1})\times g(b_{2}))\bef(a^{:A}\times d_{1}^{:D}\times d_{2}^{:D}\rightarrow f(a)\times d_{1}\times d_{2})\\
{\color{greenunder}\text{compute composition}:}\quad & \quad=a^{:A}\times b_{1}^{:B}\times b_{2}^{:B}\rightarrow f(a)\times g(b_{1})\times g(b_{2})\quad.
\end{align*}
Both sides of the law are equal.
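
The same commutativity check can be run on sample values in Scala,
modeling $F^{A,B}\triangleq A\times B\times B$ as a tuple type (a
sketch with hypothetical names \lstinline!fmapA! and \lstinline!fmapB!):
\begin{lstlisting}
type F[A, B] = (A, B, B)

def fmapA[A, B, C](f: A => C): F[A, B] => F[C, B] = {
  case (a, b1, b2) => (f(a), b1, b2)
}
def fmapB[A, B, D](g: B => D): F[A, B] => F[A, D] = {
  case (a, b1, b2) => (a, g(b1), g(b2))
}

val v: F[Int, String] = (1, "a", "bb")
val f: Int => Boolean = _ > 0
val g: String => Int = _.length

// Applying the two fmaps in either order gives the same value.
val order1 = fmapB[Boolean, String, Int](g)(fmapA[Int, String, Boolean](f)(v))
val order2 = fmapA[Int, Int, Boolean](f)(fmapB[Int, String, Int](g)(v))
assert(order1 == order2)  // both equal (true, 1, 2)
\end{lstlisting}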

The commutativity law~(\ref{eq:f-fmap-fmap-bifunctor-commutativity})
leads to the composition law of \lstinline!bimap!,
\begin{equation}
\text{bimap}_{F}(f_{1}^{:A\rightarrow C})(g_{1}^{:B\rightarrow D})\bef\text{bimap}_{F}(f_{2}^{:C\rightarrow E})(g_{2}^{:D\rightarrow G})=\text{bimap}_{F}(f_{1}\bef f_{2})(g_{1}\bef g_{2})\quad.\label{eq:f-bimap-composition-law}
\end{equation}
The following type diagram shows the relationships between various
\lstinline!bimap! and \lstinline!fmap! functions:
\[
\xymatrix{\xyScaleY{3.0pc}\xyScaleX{12.0pc}F^{A,B}\ar[rd]\sp(0.6){~~~\text{bimap}_{F}(f_{1})(g_{1})}\ar[r]\sp(0.4){\text{fmap}_{F^{\bullet,B}}(f_{1}^{:A\rightarrow C})}\ar[d]\sp(0.5){\text{fmap}_{F^{A,\bullet}}(g_{1}^{:B\rightarrow D})} & F^{C,B}\ar[rd]\sp(0.6){~~~\text{bimap}_{F}(f_{2})(g_{1})}\ar[r]\sp(0.4){~\text{fmap}_{F^{\bullet,B}}(f_{2}^{:C\rightarrow E})}\ar[d]\sp(0.5){\text{fmap}_{F^{C,\bullet}}(g_{1}^{:B\rightarrow D})~~~} & F^{E,B}\ar[d]\sp(0.5){\text{fmap}_{F^{E,\bullet}}(g_{1}^{:B\rightarrow D})}\\
F^{A,D}\ar[rd]\sp(0.6){~~~\text{bimap}_{F}(f_{1})(g_{2})}\ar[r]\sp(0.4){\text{fmap}_{F^{\bullet,D}}(f_{1}^{:A\rightarrow C})}\ar[d]\sp(0.5){\text{fmap}_{F^{A,\bullet}}(g_{2}^{:D\rightarrow G})} & F^{C,D}\ar[rd]\sp(0.6){~~~\text{bimap}_{F}(f_{2})(g_{2})}\ar[r]\sp(0.4){~\text{fmap}_{F^{\bullet,D}}(f_{2}^{:C\rightarrow E})}\ar[d]\sp(0.5){\text{fmap}_{F^{C,\bullet}}(g_{2}^{:D\rightarrow G})~~~} & F^{E,D}\ar[d]\sp(0.5){\text{fmap}_{F^{E,\bullet}}(g_{2}^{:D\rightarrow G})}\\
F^{A,G}\ar[r]\sp(0.4){~\text{fmap}_{F^{\bullet,G}}(f_{1}^{:A\rightarrow C})} & F^{C,G}\ar[r]\sp(0.4){~\text{fmap}_{F^{\bullet,G}}(f_{2}^{:C\rightarrow E})} & F^{E,G}
}
\]

To derive the composition law from Eq.~(\ref{eq:f-fmap-fmap-bifunctor-commutativity}),
write
\begin{align*}
 & \text{bimap}_{F}(f_{1})(g_{1})\bef\text{bimap}_{F}(f_{2})(g_{2})\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-definition-of-bimap})}:}\quad & =\text{fmap}_{F^{\bullet,B}}(f_{1})\bef\gunderline{\text{fmap}_{F^{C,\bullet}}(g_{1})\bef\text{fmap}_{F^{\bullet,D}}(f_{2})}\bef\text{fmap}_{F^{E,\bullet}}(g_{2})\\
{\color{greenunder}\text{commutativity law~(\ref{eq:f-fmap-fmap-bifunctor-commutativity})}:}\quad & =\text{fmap}_{F^{\bullet,B}}(f_{1})\gunderline{\,\bef\,}\text{fmap}_{F^{\bullet,B}}(f_{2})\bef\text{fmap}_{F^{E,\bullet}}(g_{1})\gunderline{\,\bef\,}\text{fmap}_{F^{E,\bullet}}(g_{2})\\
{\color{greenunder}\text{composition laws}:}\quad & =\text{fmap}_{F^{\bullet,B}}(f_{1}\bef f_{2})\bef\text{fmap}_{F^{E,\bullet}}(g_{1}\bef g_{2})\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-definition-of-bimap})}:}\quad & =\text{bimap}_{F}(f_{1}\bef f_{2})(g_{1}\bef g_{2})\quad.
\end{align*}
Conversely, we can derive Eq.~(\ref{eq:f-fmap-fmap-bifunctor-commutativity})
from the composition law~(\ref{eq:f-bimap-composition-law}). We
write the composition law with specially chosen functions:
\begin{equation}
\text{bimap}_{F}(f^{:A\rightarrow C})(g^{:B\rightarrow D})=\text{bimap}_{F}(\text{id}^{:A\rightarrow A})(g^{:B\rightarrow D})\bef\text{bimap}_{F}(f^{:A\rightarrow C})(\text{id}^{:D\rightarrow D})\quad.\label{eq:f-bimap-id-f-g-id}
\end{equation}
Using Eq.~(\ref{eq:f-definition-of-bimap}), we find
\begin{align*}
{\color{greenunder}\text{expect }\text{fmap}_{F^{A,\bullet}}(g)\bef\text{fmap}_{F^{\bullet,D}}(f):}\quad & \text{fmap}_{F^{\bullet,B}}(f^{:A\rightarrow C})\bef\text{fmap}_{F^{C,\bullet}}(g^{:B\rightarrow D})\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-definition-of-bimap})}:}\quad & =\text{bimap}_{F}(f^{:A\rightarrow C})(g^{:B\rightarrow D})\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-bimap-id-f-g-id})}:}\quad & =\text{bimap}_{F}(\text{id}^{:A\rightarrow A})(g^{:B\rightarrow D})\bef\text{bimap}_{F}(f^{:A\rightarrow C})(\text{id}^{:D\rightarrow D})\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-definition-of-bimap})}:}\quad & =\gunderline{\text{fmap}_{F^{\bullet,B}}(\text{id})}\bef\text{fmap}_{F^{A,\bullet}}(g)\bef\text{fmap}_{F^{\bullet,D}}(f)\bef\gunderline{\text{fmap}_{F^{C,\bullet}}(\text{id})}\\
{\color{greenunder}\text{identity laws for }F:}\quad & =\text{fmap}_{F^{A,\bullet}}(g)\bef\text{fmap}_{F^{\bullet,D}}(f)\quad.
\end{align*}

The identity law of \lstinline!bimap! holds as well,
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{bimap}_{F}(\text{id}^{:A\rightarrow A})(\text{id}^{:B\rightarrow B})\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-definition-of-bimap})}:}\quad & =\gunderline{\text{fmap}_{F^{\bullet,B}}(\text{id})}\bef\gunderline{\text{fmap}_{F^{C,\bullet}}(\text{id})}\\
{\color{greenunder}\text{identity laws for }F:}\quad & =\text{id}\bef\text{id}=\text{id}\quad.
\end{align*}

If $F^{A,B}$ is known to be a functor separately with respect to
$A$ and $B$, will the commutativity law~(\ref{eq:f-fmap-fmap-bifunctor-commutativity})
always hold? The calculation for the example $F^{A,B}\triangleq A\times B\times B$
shows that the two \lstinline!fmap! functions commute because they
work on different parts of the data structure $F^{A,B}$. This turns
out\footnote{Proving that statement is beyond the scope of this chapter. See Section~\ref{sec:Commutativity-laws-for-type-constructors}
in Appendix~\ref{app:Proofs-of-naturality-parametricity}.} to be true in general: the commutativity law follows from the parametricity
of the \lstinline!fmap! functions. Because of that, we do not need
to verify the \lstinline!bimap! laws as long as $F^{\bullet,B}$
and $F^{A,\bullet}$ are lawful functors.

Type constructors with more than two type parameters have similar
properties. It is sufficient to check the functor laws with respect
to each type parameter separately.

In general, a type constructor may be a functor with respect to some
type parameters and a contrafunctor with respect to others. Below
we will see examples of such type constructors.

\subsection{Constructions of functors\label{subsec:f-Functor-constructions}}

What type expressions will produce a functor? Functional programming
languages support the six standard type constructions (see Section~\ref{subsec:Type-notation-and-standard-type-constructions}).
This section will check whether each construction produces a new type
that obeys the functor laws. The results are summarized in Table~\ref{tab:f-Functor-constructions}.

\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline 
\textbf{\small{}Construction} & \textbf{\small{}Type notation} & \textbf{\small{}Comment}\tabularnewline
\hline 
\hline 
{\footnotesize{}type parameter} & {\footnotesize{}$L^{A}\triangleq A$} & {\footnotesize{}the identity functor}\tabularnewline
\hline 
{\footnotesize{}product type} & {\footnotesize{}$L^{A}\triangleq P^{A}\times Q^{A}$} & {\footnotesize{}the functor product; $P$ and $Q$ must be functors}\tabularnewline
\hline 
{\footnotesize{}disjunctive type} & {\footnotesize{}$L^{A}\triangleq P^{A}+Q^{A}$} & {\footnotesize{}the functor co-product; $P$ and $Q$ must be functors}\tabularnewline
\hline 
{\footnotesize{}function type} & {\footnotesize{}$L^{A}\triangleq C^{A}\rightarrow P^{A}$} & {\footnotesize{}the functor exponential; $P$ is a functor and $C$
a contrafunctor}\tabularnewline
\hline 
{\footnotesize{}type parameter} & {\footnotesize{}$L^{A}\triangleq Z$} & {\footnotesize{}the constant functor; $Z$ is a fixed type}\tabularnewline
\hline 
{\footnotesize{}type constructor} & {\footnotesize{}$L^{A}\triangleq P^{Q^{A}}$} & {\footnotesize{}functor composition; $P$ and $Q$ are both functors
or both contrafunctors}\tabularnewline
\hline 
{\footnotesize{}recursive type} & {\footnotesize{}$L^{A}\triangleq S^{A,L^{A}}$} & {\footnotesize{}recursive functor; $S^{A,B}$ must be a functor w.r.t.~both
$A$ and $B$}\tabularnewline
\hline 
\end{tabular}
\par\end{centering}
\caption{Type constructions defining a functor $L^{A}$.\label{tab:f-Functor-constructions}}
\end{table}

In each of these constructions, the \lstinline!fmap! function for
a new functor is defined either from scratch or by using the known
\lstinline!fmap! functions for previously defined type constructors.
The rest of this section will derive the code for these constructions
and prove their laws. We will use the code notation for brevity, occasionally
showing the translation into the Scala syntax.

\subsubsection{Statement \label{subsec:f-Statement-identity-functor}\ref{subsec:f-Statement-identity-functor}}

The type constructor $\text{Id}^{A}\triangleq A$ is a lawful functor
(the \textbf{identity functor}\index{identity functor}).

\subparagraph{Proof}

The \lstinline!fmap! function is defined by
\begin{align*}
 & \text{fmap}_{\text{Id}}:\left(A\rightarrow B\right)\rightarrow\text{Id}^{A}\rightarrow\text{Id}^{B}\cong\left(A\rightarrow B\right)\rightarrow A\rightarrow B\quad,\\
 & \text{fmap}_{\text{Id}}\triangleq(f^{:A\rightarrow B}\rightarrow f)=\text{id}^{:(A\rightarrow B)\rightarrow A\rightarrow B}\quad.
\end{align*}
The identity function is the only fully parametric implementation
of the type signature $\left(A\rightarrow B\right)\rightarrow A\rightarrow B$.
Since the code of \lstinline!fmap! is the identity function, the
laws are satisfied automatically:
\begin{align*}
{\color{greenunder}\text{identity law}:}\quad & \text{fmap}_{\text{Id}}(\text{id})=\text{id}(\text{id})=\text{id}\quad,\\
{\color{greenunder}\text{composition law}:}\quad & \text{\ensuremath{\text{fmap}_{\text{Id}}}}(f\bef g)=f\bef g=\text{fmap}_{\text{Id}}(f)\bef\text{fmap}_{\text{Id}}(g)\quad.
\end{align*}
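
In Scala, this reads (a minimal sketch):
\begin{lstlisting}
type Id[A] = A

// fmap for the identity functor returns the function unchanged.
def fmapId[A, B](f: A => B): Id[A] => Id[B] = f

assert(fmapId((n: Int) => n + 1)(41) == 42)
\end{lstlisting}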


\subsubsection{Statement \label{subsec:f-Statement-constant-functor}\ref{subsec:f-Statement-constant-functor}}

The type\index{constant functor} constructor $\text{Const}^{Z,A}\triangleq Z$
is a lawful functor (a \textbf{constant functor}) with respect to
the type parameter $A$.

\subparagraph{Proof}

The \lstinline!fmap! function is defined by
\begin{align*}
\text{fmap}_{\text{Const}} & :\left(A\rightarrow B\right)\rightarrow\text{Const}^{Z,A}\rightarrow\text{Const}^{Z,B}\cong\left(A\rightarrow B\right)\rightarrow Z\rightarrow Z\quad,\\
\text{fmap}_{\text{Const}}(f^{:A\rightarrow B}) & \triangleq(z^{:Z}\rightarrow z)=\text{id}^{:Z\rightarrow Z}\quad.
\end{align*}
It is a constant function that ignores $f$ and returns the identity
$\text{id}^{:Z\rightarrow Z}$. The laws are satisfied:
\begin{align*}
{\color{greenunder}\text{identity law}:}\quad & \text{fmap}_{\text{Const}}(\text{id})=\text{id}\quad,\\
{\color{greenunder}\text{composition law}:}\quad & \text{\ensuremath{\text{fmap}_{\text{Const}}}}(f\bef g)=\text{id}=\text{fmap}_{\text{Const}}(f)\bef\text{fmap}_{\text{Const}}(g)=\text{id}\bef\text{id}\quad.
\end{align*}
The corresponding Scala code is
\begin{lstlisting}
type Const[Z, A] = Z
def fmap[Z, A, B](f: A => B): Const[Z, A] => Const[Z, B] = identity[Z]
\end{lstlisting}

The identity functor $\text{Id}^{\bullet}$ and the constant functor
$\text{Const}^{Z,\bullet}$ are not often used: their \lstinline!fmap!
implementations are identity functions, and so they rarely provide
useful functionality. 

We have seen that type constructors with product types, such as $L^{A}\triangleq A\times A\times A$,
are functors. The next construction (the \index{functor product}\textbf{functor
product}) explains why.

\subsubsection{Statement \label{subsec:functor-Statement-functor-product}\ref{subsec:functor-Statement-functor-product}}

If $L^{\bullet}$ and $M^{\bullet}$ are two functors then the product\index{functor product}
$P^{A}\triangleq L^{A}\times M^{A}$ is also a functor.

\subparagraph{Proof}

The \lstinline!fmap! function for $P$ is defined by

\begin{wrapfigure}{l}{0.595\columnwidth}%
\vspace{-0.9\baselineskip}
\begin{lstlisting}
def fmap[A, B](f: A => B): (L[A], M[A]) => (L[B], M[B]) = {
  case (la, ma) => (la.map(f), ma.map(f))
}
\end{lstlisting}

\vspace{-1.5\baselineskip}
\end{wrapfigure}%

\noindent The corresponding code notation is
\[
\negthickspace\negthickspace f^{\uparrow P}\triangleq l^{:L^{A}}\times m^{:M^{A}}\rightarrow f^{\uparrow L}(l)\times f^{\uparrow M}(m)\quad.
\]
Writing this code using the pipe ($\triangleright$) operation makes
it somewhat closer to the Scala syntax:
\begin{equation}
(l^{:L^{A}}\times m^{:M^{A}})\triangleright f^{\uparrow P}\triangleq(l\triangleright f^{\uparrow L})\times(m\triangleright f^{\uparrow M})\quad.\label{eq:f-def-of-functor-product-lift}
\end{equation}
An alternative notation uses the \index{pair product of functions}\textbf{pair
product} symbol $\boxtimes$ defined by
\begin{align*}
 & p^{:A\rightarrow B}\boxtimes q^{:C\rightarrow D}:A\times C\rightarrow B\times D\quad,\\
 & p\boxtimes q\triangleq a\times c\rightarrow p(a)\times q(c)\quad,\\
 & (a\times c)\triangleright\left(p\boxtimes q\right)=\left(a\triangleright p\right)\times\left(c\triangleright q\right)\quad.
\end{align*}
In this notation, the lifting for $P$ is defined more concisely:
\begin{equation}
f^{\uparrow P}=f^{\uparrow L\times M}\triangleq f^{\uparrow L}\boxtimes f^{\uparrow M}\quad.\label{eq:def-of-functor-product-fmap}
\end{equation}
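
The functor product can be spot-checked with concrete functors, e.g.
$L=\text{Option}$ and $M=\text{List}$ (a sketch; the pair's \lstinline!map!
is written as a standalone function here):
\begin{lstlisting}
type P[A] = (Option[A], List[A])

def fmapP[A, B](f: A => B): P[A] => P[B] = {
  case (la, ma) => (la.map(f), ma.map(f))
}

val p: P[Int] = (Some(1), List(2, 3))
assert(fmapP((n: Int) => n * 10)(p) == ((Some(10), List(20, 30))))

// Composition law on this sample:
val f: Int => Int = _ + 1
val g: Int => String = _.toString
assert(fmapP(g)(fmapP(f)(p)) == fmapP(f andThen g)(p))
\end{lstlisting}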

We need to verify the identity law and the composition law.

To verify the identity law of $P$, pipe an arbitrary value of type
$L^{A}\times M^{A}$ into both sides of the law:
\begin{align*}
{\color{greenunder}\text{expect to equal }l\times m:}\quad & (l^{:L^{A}}\times m^{:M^{A}})\triangleright\text{id}^{\uparrow P}\\
{\color{greenunder}\text{definition of }f^{\uparrow P}:}\quad & =(l\triangleright\gunderline{\text{id}^{\uparrow L}})\times(m\triangleright\gunderline{\text{id}^{\uparrow M}})\\
{\color{greenunder}\text{identity laws of }L\text{ and }M:}\quad & =(\gunderline{l\triangleright\text{id}})\times(\gunderline{m\triangleright\text{id}})\\
{\color{greenunder}\text{definition of }\text{id}:}\quad & =l\times m\quad.
\end{align*}
To verify the composition law of $P$, we need to show that
\[
f^{\uparrow P}\bef g^{\uparrow P}=\left(f\bef g\right)^{\uparrow P}\quad.
\]
Apply both sides of this equation to an arbitrary value of type $L^{A}\times M^{A}$:
\begin{align*}
{\color{greenunder}\text{expect to equal }(l\times m)\triangleright(f\bef g)^{\uparrow P}:}\quad & (l^{:L^{A}}\times m^{:M^{A}})\triangleright f^{\uparrow P}\gunderline{\,\bef\,}g^{\uparrow P}\\
{\color{greenunder}\triangleright\text{ notation}:}\quad & =\gunderline{(}l^{:L^{A}}\times m^{:M^{A}}\gunderline{)\triangleright f^{\uparrow P}}\triangleright g^{\uparrow P}\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-def-of-functor-product-lift})}:}\quad & =\gunderline{\big(}(l\triangleright f^{\uparrow L})\times(m\triangleright f^{\uparrow M})\gunderline{\big)\triangleright g^{\uparrow P}}\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-def-of-functor-product-lift})}:}\quad & =(l\triangleright f^{\uparrow L}\gunderline{\,\triangleright\,}g^{\uparrow L})\times(m\triangleright f^{\uparrow M}\gunderline{\,\triangleright\,}g^{\uparrow M})\\
{\color{greenunder}\triangleright\text{ notation}:}\quad & =(l\triangleright\gunderline{f^{\uparrow L}\bef g^{\uparrow L}})\times(m\triangleright\gunderline{f^{\uparrow M}\bef g^{\uparrow M}})\\
{\color{greenunder}\text{composition laws of }L\text{ and }M:}\quad & =(l\triangleright(f\bef g)^{\uparrow L})\times(m\triangleright(f\bef g)^{\uparrow M})\\
{\color{greenunder}\text{use Eq.~(\ref{eq:f-def-of-functor-product-lift})}:}\quad & =(l\times m)\triangleright(f\bef g)^{\uparrow P}\quad.
\end{align*}
The calculations are shorter if we use the pair product operation:
\begin{align*}
{\color{greenunder}\text{expect to equal }(f\bef g)^{\uparrow P}:}\quad & f^{\uparrow P}\bef g^{\uparrow P}=(f^{\uparrow L}\boxtimes f^{\uparrow M})\bef(g^{\uparrow L}\boxtimes g^{\uparrow M})\\
{\color{greenunder}\text{composition of functions under }\boxtimes:}\quad & =(\gunderline{f^{\uparrow L}\bef g^{\uparrow L}})\boxtimes(\gunderline{f^{\uparrow M}\bef g^{\uparrow M}})\\
{\color{greenunder}\text{composition laws of }L\text{ and }M:}\quad & =(f\bef g)^{\uparrow L}\boxtimes(f\bef g)^{\uparrow M}=(f\bef g)^{\uparrow P}\quad.
\end{align*}
For comparison, the same derivation using the Scala code syntax looks
like this,
\begin{lstlisting}
(( l, m )).map(f).map(g) == (( l.map(f), m.map(f) )).map(g)
  == (( l.map(f).map(g), m.map(f).map(g) ))
  == (( l.map(f andThen g), m.map(f andThen g) )) 
\end{lstlisting}
assuming that the \lstinline!map! method is defined on pairs by Eq.~(\ref{eq:f-def-of-functor-product-lift}),
\begin{lstlisting}
(( l, m )).map(f) == (( l.map(f), m.map(f) ))
\end{lstlisting}
The proof written in the Scala syntax does not show the type constructors
whose \lstinline!map! methods are used in each expression. For instance,
it is not indicated that the two \lstinline!map! methods used in
the expression \lstinline!m.map(f).map(g)! belong to the \emph{same}
type constructor $M$ and thus obey $M$\textsf{'}s composition law. The code
notation shows this more concisely and clearly, helping us reason:
\[
m\triangleright f^{\uparrow M}\triangleright g^{\uparrow M}=m\triangleright f^{\uparrow M}\bef g^{\uparrow M}=m\triangleright(f\bef g)^{\uparrow M}\quad.
\]
By the convention of the pipe notation, it groups to the left, so
we have
\[
\left(x\triangleright f\right)\triangleright g=x\triangleright f\triangleright g=x\triangleright f\bef g=x\triangleright(f\bef g)=(f\bef g)(x)=g(f(x))\quad.
\]
We will often use this notation in code derivations. (Chapter~\ref{chap:Reasoning-about-code}
gives an overview of the derivation techniques, including some more
details about the pipe notation.)
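As a concrete sanity check of the composition law for the pair functor,
we may choose $L^{A}\triangleq\text{List}^{A}$ and $M^{A}\triangleq\text{Option}^{A}$
and run a short script (the helper name \lstinline!mapPair! is our choice):
\begin{lstlisting}
// Sketch: fmap for the pair functor P[A] = (List[A], Option[A]), defined component-wise.
def mapPair[A, B](f: A => B): ((List[A], Option[A])) => (List[B], Option[B]) = {
  case (l, m) => (l.map(f), m.map(f))
}
val p = (List(1, 2, 3), Option(10))
val f = (x: Int) => x * 2
val g = (x: Int) => x + 1
assert(mapPair(g)(mapPair(f)(p)) == mapPair(f andThen g)(p))  // The composition law holds.
\end{lstlisting}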

\subsubsection{Statement \label{subsec:functor-Statement-functor-coproduct}\ref{subsec:functor-Statement-functor-coproduct}}

If $P^{A}$ and $Q^{A}$ are functors then $L^{A}\triangleq P^{A}+Q^{A}$
is a functor, with \lstinline!fmap! defined by
\begin{lstlisting}
def fmap[A, B](f: A => B): Either[P[A], Q[A]] => Either[P[B], Q[B]] = {
  case Left(pa)    => Left(fmap_P(f)(pa))   // Use fmap for P.
  case Right(qa)   => Right(fmap_Q(f)(qa))  // Use fmap for Q.
}
\end{lstlisting}
The functor $L^{\bullet}$ is the \textbf{functor co-product}\index{functor co-product}
of $P^{\bullet}$ and $Q^{\bullet}$. The code notation for the \lstinline!fmap!
function is
\[
\text{fmap}_{L}(f^{:A\rightarrow B})=f^{\uparrow L}\triangleq\,\begin{array}{|c||cc|}
 & P^{B} & Q^{B}\\
\hline P^{A} & f^{\uparrow P} & \bbnum 0\\
Q^{A} & \bbnum 0 & f^{\uparrow Q}
\end{array}\quad.
\]
Here we assume that lawful \lstinline!fmap! functions are given for
the functors $P$ and $Q$.

\subparagraph{Proof}

Omitting the type annotations, we write the code of $\text{fmap}_{L}(f)$
as
\begin{equation}
\text{fmap}_{L}(f)=f^{\uparrow L}=\,\begin{array}{||cc|}
f^{\uparrow P} & \bbnum 0\\
\bbnum 0 & f^{\uparrow Q}
\end{array}\quad.\label{eq:f-coproduct-functor-def-fmap}
\end{equation}
To verify the identity law, use Eq.~(\ref{eq:f-coproduct-functor-def-fmap})
and the identity laws for $P$ and $Q$:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{id}^{\uparrow L}=\,\begin{array}{||cc|}
\text{id}^{\uparrow P} & \bbnum 0\\
\bbnum 0 & \text{id}^{\uparrow Q}
\end{array}\,=\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & \text{id}
\end{array}\\
{\color{greenunder}\text{identity function in matrix notation}:}\quad & =\text{id}\quad.
\end{align*}
To verify the composition law:
\begin{align*}
{\color{greenunder}\text{expect to equal }(f\bef g)^{\uparrow L}:}\quad & f^{\uparrow L}\bef g^{\uparrow L}=\,\begin{array}{||cc|}
f^{\uparrow P} & \bbnum 0\\
\bbnum 0 & f^{\uparrow Q}
\end{array}\,\bef\,\begin{array}{||cc|}
g^{\uparrow P} & \bbnum 0\\
\bbnum 0 & g^{\uparrow Q}
\end{array}\\
{\color{greenunder}\text{matrix composition}:}\quad & =\,\begin{array}{||cc|}
\gunderline{f^{\uparrow P}\bef g^{\uparrow P}} & \bbnum 0\\
\bbnum 0 & \gunderline{f^{\uparrow Q}\bef g^{\uparrow Q}}
\end{array}\\
{\color{greenunder}\text{composition laws of }P\text{ and }Q:}\quad & =\,\,\begin{array}{||cc|}
(f\bef g)^{\uparrow P} & \bbnum 0\\
\bbnum 0 & (f\bef g)^{\uparrow Q}
\end{array}\,=(f\bef g)^{\uparrow L}\quad.
\end{align*}
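To illustrate this statement, we may set $P^{A}\triangleq\text{Option}^{A}$
and $Q^{A}\triangleq\text{List}^{A}$ (a choice made for this example)
and check the lifting on sample values:
\begin{lstlisting}
// Sketch: the functor co-product L[A] = Either[Option[A], List[A]].
def fmap_L[A, B](f: A => B): Either[Option[A], List[A]] => Either[Option[B], List[B]] = {
  case Left(pa)  => Left(pa.map(f))   // Use Option's map.
  case Right(qa) => Right(qa.map(f))  // Use List's map.
}
val f = (x: Int) => x + 1
assert(fmap_L(f)(Left(Some(1))) == Left(Some(2)))
assert(fmap_L(f)(Right(List(1, 2))) == Right(List(2, 3)))
\end{lstlisting}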

The last two statements show that any type constructor built up via
primitive types, type parameters, products and co-products, such as
$L^{A}\triangleq\bbnum 1+(\text{String}+A)\times A\times\text{Int}+A$,
is a functor. Functors of this kind are called \index{polynomial functor}\textbf{polynomial
functors} because of the analogy to ordinary arithmetic polynomial
functions of a variable $A$. The type notation with its symbols ($+$,
$\times$) makes this analogy visually clear. 

Implementing \lstinline!fmap! for a polynomial functor is straightforward:
\lstinline!fmap! replaces each occurrence of a value of type
$A$ by the corresponding value of type $B$, leaving constant types
unchanged and keeping the order of parts in all products and disjunctive
types. Previously, our implementations of \lstinline!fmap! for various
type constructors (such as shown in Example~\ref{subsec:f-Example-functors-1})
were guided by the idea of preserving information. Statements~\ref{subsec:functor-Statement-functor-product}\textendash \ref{subsec:functor-Statement-functor-coproduct}
explain why those implementations of \lstinline!fmap! are correct
(i.e., obey the functor laws).

The next construction shows when a function type is a functor: the
argument of the function must be a contrafunctor.

\subsubsection{Statement \label{subsec:functor-Statement-functor-exponential}\ref{subsec:functor-Statement-functor-exponential}}

If $C$ is a contrafunctor and $P$ is a functor then $L^{A}\triangleq C^{A}\rightarrow P^{A}$
is a functor, called a \index{functor exponential}\textbf{functor
exponential}, with \lstinline!fmap! defined by
\begin{align}
 & \text{fmap}_{L}^{A,B}(f^{:A\rightarrow B}):(C^{A}\rightarrow P^{A})\rightarrow C^{B}\rightarrow P^{B}\quad,\nonumber \\
 & \text{fmap}_{L}(f^{:A\rightarrow B})=f^{\uparrow L}\triangleq h^{:C^{A}\rightarrow P^{A}}\rightarrow f^{\downarrow C}\bef h\bef f^{\uparrow P}\quad.\label{eq:f-functor-exponential-def-of-fmap}
\end{align}
The corresponding Scala code is
\begin{lstlisting}
def fmap_L[A, B](f: A => B)(h: C[A] => P[A]): C[B] => P[B] = {
  cmap_C(f) andThen h andThen fmap_P(f)
}
\end{lstlisting}
A type diagram for $\text{fmap}_{L}$ can be drawn as
\[
\xymatrix{\xyScaleY{1.5pc}\xyScaleX{2.5pc} & C^{A}\ar[r]\sp(0.5){h} & P^{A}\ar[rd]\sp(0.6){\ \text{fmap}_{P}(f^{:A\rightarrow B})\ }\\
C^{B}\ar[ru]\sp(0.4){\text{cmap}_{C}(f^{:A\rightarrow B})\ ~~}\ar[rrr]\sb(0.5){\text{fmap}_{L}(f^{:A\rightarrow B})(h^{:C^{A}\rightarrow P^{A}})\ } &  &  & P^{B}
}
\]


\subparagraph{Proof}

Since the types are already checked, we can use Eq.~(\ref{eq:f-functor-exponential-def-of-fmap})
without type annotations,
\begin{equation}
h\triangleright f^{\uparrow L}=f^{\downarrow C}\bef h\bef f^{\uparrow P}\quad.\label{eq:f-functor-exponential-def-fmap-f-h}
\end{equation}
To verify the identity law of $L$, show that $\text{id}^{\uparrow L}(h)=h$:
\begin{align*}
{\color{greenunder}\text{expect to equal }h:}\quad & h\triangleright\text{id}^{\uparrow L}\\
{\color{greenunder}\text{definition (\ref{eq:f-functor-exponential-def-fmap-f-h}) of }^{\uparrow L}:}\quad & =\gunderline{\text{id}^{\downarrow C}}\bef h\bef\gunderline{\text{id}^{\uparrow P}}\\
{\color{greenunder}\text{identity laws of }C\text{ and }P:}\quad & =\gunderline{\text{id}}\bef h\bef\gunderline{\text{id}}\\
{\color{greenunder}\text{definition of }\text{id}:}\quad & =h\quad.
\end{align*}
To verify the composition law of $L$, it helps to apply both sides
of the law to an arbitrary $h^{:L^{A}}$:
\begin{align*}
{\color{greenunder}\text{expect to equal }h\triangleright f^{\uparrow L}\bef g^{\uparrow L}:}\quad & h\triangleright(f\bef g)^{\uparrow L}\\
{\color{greenunder}\text{definition (\ref{eq:f-functor-exponential-def-fmap-f-h}) of }^{\uparrow L}:}\quad & =(\gunderline{f\bef g})^{\downarrow C}\bef h\bef(\gunderline{f\bef g})^{\uparrow P}\\
{\color{greenunder}\text{composition laws of }C\text{ and }P:}\quad & =g^{\downarrow C}\bef\gunderline{f^{\downarrow C}}\bef h\bef\gunderline{f^{\uparrow P}}\bef g^{\uparrow P}\\
{\color{greenunder}\text{definition (\ref{eq:f-functor-exponential-def-fmap-f-h}) of }^{\uparrow L}:}\quad & =\gunderline{g^{\downarrow C}}\bef(h\triangleright f^{\uparrow L})\bef\gunderline{g^{\uparrow P}}\\
{\color{greenunder}\text{definition (\ref{eq:f-functor-exponential-def-fmap-f-h}) of }^{\uparrow L}:}\quad & =(h\triangleright f^{\uparrow L})\triangleright g^{\uparrow L}=h\triangleright f^{\uparrow L}\bef g^{\uparrow L}\quad.
\end{align*}

It is important for this proof that the order of function compositions
is reversed when lifting to a contrafunctor $C$: $(f\bef g)^{\downarrow C}=g^{\downarrow C}\bef f^{\downarrow C}$.
If $C$ were a functor, the proof would not work because we would
have obtained $f^{\uparrow C}\bef g^{\uparrow C}$ instead of $g^{\downarrow C}\bef f^{\downarrow C}$.
The order of composition cannot be permuted for arbitrary functions
$f$, $g$. So, we would not be able to group $f^{\downarrow C}\bef h\bef f^{\uparrow P}$
together.

Examples of functors obtained via the exponential\index{functor exponential}
construction are $L^{A}\triangleq Z\rightarrow A$ (with the contrafunctor
$C^{A}$ chosen as the constant contrafunctor $Z$, where $Z$ is
a fixed type) and $L^{A}\triangleq\left(A\rightarrow Z\right)\rightarrow A$
(with the contrafunctor $C^{A}\triangleq A\rightarrow Z$). Statement~\ref{subsec:functor-Statement-functor-exponential}
generalizes those examples to arbitrary contrafunctors $C^{A}$ used
as arguments of function types.
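For the second example, Eq.~(\ref{eq:f-functor-exponential-def-of-fmap})
can be written out directly. A minimal sketch with $Z\triangleq\text{Int}$
(a choice made only for this illustration):
\begin{lstlisting}
// Sketch: fmap for L[A] = (A => Int) => A, i.e., C[A] = A => Int and P[A] = A.
// Here cmap_C(f) = (cb => f andThen cb) and fmap_P(f) = f.
def fmap_L[A, B](f: A => B)(h: (A => Int) => A): (B => Int) => B =
  (cb: B => Int) => f(h(f andThen cb))

val h: (Int => Int) => Int = c => c(10)   // An example value of type L[Int].
assert(fmap_L((x: Int) => x + 1)(h)((y: Int) => y * 2) == 23)  // f(h(x => (x + 1) * 2)).
\end{lstlisting}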

Similarly, one can prove that $P^{A}\rightarrow C^{A}$ is a contrafunctor
(Exercise~\ref{subsec:functor-Exercise-functor-laws}). Together
with Statements~\ref{subsec:functor-Statement-functor-product}\textendash \ref{subsec:functor-Statement-functor-exponential},
this gives us the rules of reasoning about covariance and contravariance
of type parameters in arbitrary type expressions. Every function arrow
($\rightarrow$) flips the variance from covariant to contravariant
and back. For instance, the identity functor $L^{A}\triangleq A$
is covariant in $A$, while $A\rightarrow Z$ is contravariant in
$A$, and $\left(A\rightarrow Z\right)\rightarrow Z$ is again covariant
in $A$. As we have seen, $A\rightarrow A\rightarrow Z$ is contravariant
in $A$, so any number of curried arrows counts as a single arrow in
this reasoning (and, in any case, $A\rightarrow A\rightarrow Z\cong A\times A\rightarrow Z$).
Products and disjunctions do not change variance, so $\left(A\rightarrow Z_{1}\right)\times\left(A\rightarrow Z_{2}\right)+\left(A\rightarrow Z_{3}\right)$
is contravariant in $A$. This is shown in more detail in Section~\ref{subsec:Solved-examples:-How-to-recognize-functors}.
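For instance, a \lstinline!cmap! function for that last contrafunctor
composes with $f$ in each contravariant position. A sketch, setting
$Z_{1}=Z_{2}=Z_{3}=\text{Int}$ for brevity:
\begin{lstlisting}
// Sketch: cmap for C[A] = (A => Int) x (A => Int) + (A => Int).
def cmap[A, B](f: B => A)
  : Either[(A => Int, A => Int), A => Int] => Either[(B => Int, B => Int), B => Int] = {
  case Left((p, q)) => Left((f andThen p, f andThen q))  // Each function arrow flips variance.
  case Right(r)     => Right(f andThen r)
}
val c: Either[(Int => Int, Int => Int), Int => Int] = Right(x => x * 2)
val Right(r) = cmap((s: String) => s.length)(c)
assert(r("abcd") == 8)   // "abcd".length * 2
\end{lstlisting}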

The remaining constructions set a type parameter to another type.
The \textbf{functor composition}\index{functor composition} $P^{Q^{A}}$,
written in Scala as \lstinline!P[Q[A]]!, is analogous to a function
composition such as $f(g(x))$ except for using type constructors.
Viewed in this way, type constructors are \textbf{type-level functions}\index{type-level function}
(i.e., mappings of types). So, functor composition may be denoted
by $P\circ Q$, like the function composition $f\circ g$.

An example of functor composition in Scala is \lstinline!List[Option[A]]!.
Since both \lstinline!List! and \lstinline!Option! have a \lstinline!map!
method, we may write code such as
\begin{lstlisting}
val p: List[Option[Int]] = List(Some(1), None, Some(2), None, Some(3))

scala> p.map(_.map(x => x + 10))
res0: List[Option[Int]] = List(Some(11), None, Some(12), None, Some(13)) 
\end{lstlisting}
The code \lstinline!p.map(_.map(f))! lifts an $f^{:A\rightarrow B}$
into a function of type \lstinline!List[Option[A]] => List[Option[B]]!.
In this way, we may perform the \lstinline!map! operation on the
nested data type \lstinline!List[Option[_]]!. 

The next statement shows that this code always produces a lawful \lstinline!map!
function. In other words, the composition\index{functor composition}
of functors is always a functor.

\subsubsection{Statement \label{subsec:functor-Statement-functor-composition-1}\ref{subsec:functor-Statement-functor-composition-1}}

If $P^{A}$ and $Q^{A}$ are functors then $L^{A}\triangleq P^{Q^{A}}$
is also a functor, with \lstinline!fmap! defined by
\begin{lstlisting}
def fmap_L[A, B](f: A => B): P[Q[A]] => P[Q[B]] = fmap_P(fmap_Q(f))
\end{lstlisting}
Here we assume that the functions $\text{fmap}_{P}$ and $\text{fmap}_{Q}$
are known and satisfy the functor laws.

In the code notation, $\text{fmap}_{L}$ is written equivalently as
\begin{align}
{\color{greenunder}\text{type signature}:}\quad & \text{fmap}_{L}:f^{:A\rightarrow B}\rightarrow P^{Q^{A}}\rightarrow P^{Q^{B}}\quad,\nonumber \\
{\color{greenunder}\text{implementation}:}\quad & \text{fmap}_{L}(f)\triangleq\text{fmap}_{P}(\text{fmap}_{Q}(f))\quad,\nonumber \\
{\color{greenunder}\text{equivalent code}:}\quad & \text{fmap}_{L}\triangleq\text{fmap}_{Q}\bef\text{fmap}_{P}\quad,\nonumber \\
{\color{greenunder}\text{in a shorter notation}:}\quad & f^{\uparrow L}\triangleq(f^{\uparrow Q})^{\uparrow P}\triangleq f^{\uparrow Q\uparrow P}\quad.\label{eq:def-functor-composition-fmap}
\end{align}


\subparagraph{Proof}

To verify the identity law of $L$, use the identity laws for $P$
and $Q$:
\[
\text{id}^{\uparrow L}=(\gunderline{\text{id}^{\uparrow Q}})^{\uparrow P}=\gunderline{\text{id}^{\uparrow P}}=\text{id}\quad.
\]
To verify the composition law of $L$, use the composition laws for
$P$ and $Q$:
\[
(f\bef g)^{\uparrow L}=\big((\gunderline{f\bef g})^{\uparrow Q}\big)^{\uparrow P}=\big(\gunderline{f^{\uparrow Q}\bef g^{\uparrow Q}}\big)^{\uparrow P}=f^{\uparrow Q\uparrow P}\bef g^{\uparrow Q\uparrow P}\quad.
\]

Finally, we consider recursive data types such as lists and trees
(Section~\ref{sec:Lists-and-trees:recursive-disjunctive-types}).
It is helpful to use the type notation for reasoning about those types.
The list type,
\begin{lstlisting}
sealed trait List[A]
final case class Empty[A]()                      extends List[A]
final case class Head[A](head: A, tail: List[A]) extends List[A]
\end{lstlisting}
is written in type notation as
\[
\text{List}^{A}\triangleq\bbnum 1+A\times\text{List}^{A}\quad.
\]
The binary tree type,
\begin{lstlisting}
sealed trait Tree2[A]
final case class Leaf[A](a: A)                       extends Tree2[A]
final case class Branch[A](x: Tree2[A], y: Tree2[A]) extends Tree2[A]
\end{lstlisting}
is defined by $\text{Tree}_{2}^{A}\triangleq A+\text{Tree}_{2}^{A}\times\text{Tree}_{2}^{A}$.
Such definitions of recursive types look like \textsf{``}type equations\textsf{''}.
We can generalize these examples to a recursive definition
\begin{equation}
L^{A}\triangleq S^{A,L^{A}}\quad,\label{eq:f-def-recursive-functor}
\end{equation}
where $S^{A,R}$ is a suitably chosen type constructor with two type
parameters $A,R$. If the type constructor $S^{\bullet,\bullet}$
is given, the Scala code defining $L^{\bullet}$ can be written as
\begin{lstlisting}
type S[A, R] = ... // Must be defined previously as a type alias, class, or trait.
final case class L[A](x: S[A, L[A]])
\end{lstlisting}
We must use a case class to define \lstinline!L! because Scala does
not support recursive type aliases:
\begin{lstlisting}
scala> type L[A] = Either[A, L[A]]
<console>:14: error: illegal cyclic reference involving type L
       type L[A] = Either[A, L[A]]
                             ^

scala> final case class L[A](x: Either[A, L[A]])
defined class L
\end{lstlisting}

Table~\ref{tab:Examples-of-recursive-disjunctive-type-equations}
summarizes our previous examples of recursive disjunctive types and
shows the relevant choices of $S^{A,R}$, which always turns out to
be a bifunctor. For abstract syntax trees, the functors $P^{\bullet}$
and $Q^{\bullet}$ must be given; they specify the available shapes
of leaves and branches respectively. 

\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline 
\textbf{\small{}Description} & \textbf{\small{}Type definition} & \textbf{\small{}Bifunctor $S^{A,R}$}\tabularnewline
\hline 
\hline 
{\small{}list} & {\small{}$L^{A}\triangleq\bbnum 1+A\times L^{A}$} & {\small{}$S^{A,R}\triangleq\bbnum 1+A\times R$}\tabularnewline
\hline 
{\small{}non-empty list} & {\small{}$\text{NEL}^{A}\triangleq A+A\times\text{NEL}^{A}$} & {\small{}$S^{A,R}\triangleq A+A\times R$}\tabularnewline
\hline 
{\small{}list of odd length} & {\small{}$L^{A}\triangleq A+A\times A\times L^{A}$} & {\small{}$S^{A,R}\triangleq A+A\times A\times R$}\tabularnewline
\hline 
{\small{}binary tree} & {\small{}$L^{A}\triangleq A+L^{A}\times L^{A}$} & {\small{}$S^{A,R}\triangleq A+R\times R$}\tabularnewline
\hline 
{\small{}rose tree} & {\small{}$L^{A}\triangleq A+\text{NEL}^{L^{A}}$} & {\small{}$S^{A,R}\triangleq A+\text{NEL}^{R}$}\tabularnewline
\hline 
{\small{}regular-shaped binary tree} & {\small{}$L^{A}\triangleq A+L^{A\times A}$} & {\small{}not possible}\tabularnewline
\hline 
{\small{}abstract syntax tree} & {\small{}$L^{A}\triangleq P^{A}+Q^{L^{A}}$} & {\small{}$S^{A,R}\triangleq P^{A}+Q^{R}$}\tabularnewline
\hline 
\end{tabular}
\par\end{centering}
\caption{Recursive disjunctive types defined using type equations.\label{tab:Examples-of-recursive-disjunctive-type-equations}}
\end{table}

We will now prove that Eq.~(\ref{eq:f-def-recursive-functor}) always
defines a functor when $S^{\bullet,\bullet}$ is a bifunctor.

\subsubsection{Statement \label{subsec:functor-Statement-functor-recursive}\ref{subsec:functor-Statement-functor-recursive}}

If $S^{A,B}$ is a bifunctor (a functor with respect to both type
parameters $A$ and $B$) then the recursively defined type constructor
$L^{A}$ is a functor,
\[
L^{A}\triangleq S^{A,L^{A}}\quad.
\]
The \lstinline!fmap! method for $L$ is a recursive function implemented
as
\begin{equation}
\text{fmap}_{L}(f^{:A\rightarrow B})\triangleq\text{bimap}_{S}(f)(\text{fmap}_{L}(f))\quad.\label{eq:def-recursive-functor-fmap}
\end{equation}
The corresponding Scala code is
\begin{lstlisting}
final case class L[A](x: S[A, L[A]]) // The type constructor S[_, _] must be defined previously.

def bimap_S[A, B, C, D](f: A => C)(g: B => D): S[A, B] => S[C, D] = ???      // Must be defined.

def fmap_L[A, B](f: A => B): L[A] => L[B] = { case L(x) =>
    val newX: S[B, L[B]] = bimap_S(f)(fmap_L(f))(x)                 // Recursive call to fmap_L.
    L(newX)            // Need to wrap the value of type S[B, L[B]] into the type constructor L.
}
\end{lstlisting}


\subparagraph{Proof}

Usually, laws for a \index{recursive function!proving laws for}recursive
function (such as $\text{fmap}_{L}$) must be proved by induction.
In the recursive implementation of $\text{fmap}_{L}$, its code calls
itself in some cases but returns without recursive calls in other
cases. So, the base case of induction corresponds to the non-recursive
evaluations in the code of $\text{fmap}_{L}$, and we need to prove
that the law is then satisfied. The inductive step must prove that
the code of $\text{fmap}_{L}$ obeys the law under the inductive assumption
that all recursive calls to $\text{fmap}_{L}$ already obey that law.
In the proof, we do not need to separate the base case from the inductive
step; we just derive the law using the inductive assumption whenever
needed.

For clarity, we add an overline to recursive calls in the code formula:
\[
\text{fmap}_{L}(f)\triangleq\text{bimap}_{S}(f)(\overline{\text{fmap}_{L}}(f))\quad.
\]

To prove the identity law:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{fmap}_{L}(\text{id})\\
{\color{greenunder}\text{definition of }\text{fmap}_{L}:}\quad & =\text{bimap}_{S}(\text{id})\gunderline{(\overline{\text{fmap}_{L}}(\text{id}))}\\
{\color{greenunder}\text{inductive assumption --- the law holds for }\overline{\text{fmap}_{L}}:}\quad & =\text{bimap}_{S}(\text{id})(\text{id})\\
{\color{greenunder}\text{identity law of }S:}\quad & =\text{id}\quad.
\end{align*}
To prove the composition law:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{fmap}_{L}(f\bef g):}\quad & \text{fmap}_{L}(f)\bef\text{fmap}_{L}(g)\\
{\color{greenunder}\text{definition of }\text{fmap}_{L}:}\quad & =\text{bimap}_{S}(f)(\overline{\text{fmap}_{L}}(f))\bef\text{bimap}_{S}(g)(\overline{\text{fmap}_{L}}(g))\\
{\color{greenunder}\text{composition law of }S:}\quad & =\text{bimap}_{S}(f\bef g)(\gunderline{\overline{\text{fmap}_{L}}(f)\bef\overline{\text{fmap}_{L}}(g)})\\
{\color{greenunder}\text{inductive assumption}:}\quad & =\text{bimap}_{S}(f\bef g)(\overline{\text{fmap}_{L}}(f\bef g))\\
{\color{greenunder}\text{definition of }\text{fmap}_{L}:}\quad & =\text{fmap}_{L}(f\bef g)\quad.
\end{align*}
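As a concrete instance of this construction, we may choose $S^{A,R}\triangleq\bbnum 1+A\times R$,
which yields the list functor. A runnable sketch (the helper \lstinline!cons! is ours):
\begin{lstlisting}
// Sketch: L[A] = 1 + A x L[A] via S[A, R] = Either[Unit, (A, R)].
type S[A, R] = Either[Unit, (A, R)]
final case class L[A](x: S[A, L[A]])

def bimap_S[A, B, C, D](f: A => C)(g: B => D): S[A, B] => S[C, D] = {
  case Left(())      => Left(())
  case Right((a, b)) => Right((f(a), g(b)))
}
def fmap_L[A, B](f: A => B): L[A] => L[B] = { case L(x) => L(bimap_S(f)(fmap_L(f))(x)) }

val nil: L[Int] = L(Left(()))
def cons(a: Int, t: L[Int]): L[Int] = L(Right((a, t)))
val list12 = cons(1, cons(2, nil))                  // The list [1, 2].
assert(fmap_L((x: Int) => x * 10)(list12) == cons(10, cons(20, nil)))
\end{lstlisting}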

For the regular-shaped binary tree, the construction~(\ref{eq:f-def-recursive-functor})
is insufficient: no choice of a bifunctor $S^{\bullet,\bullet}$ in
$S^{A,L^{A}}$ can replace the type argument $A$ in $L^{A}$ to obtain
$L^{A\times A}$. To see that,
consider that $S^{A,L^{A}}$ is an application of a type-level function
$S^{\bullet,\bullet}$ to its two type parameters, which are set to
$A$ and $L^{A}$. In Scala syntax, $S^{A,L^{A}}$ is written as \lstinline!S[A,L[A]]!.
No matter how we define the type constructor $S$, the resulting type
expression \lstinline!S[A,L[A]]! will always use the type constructor
\lstinline!L! as \lstinline!L[A]! and not as \lstinline!L[(A,A)]!. 

To describe regular-shaped trees, we need to modify the construction
by adding another arbitrary functor, $P^{\bullet}$, in the type argument
of $L^{\bullet}$:
\begin{equation}
L^{A}\triangleq S^{A}+L^{P^{A}}\quad.\label{eq:f-def-recursive-functor-2}
\end{equation}
Regular-shaped binary trees are defined by Eq.~(\ref{eq:f-def-recursive-functor-2})
with $S^{A}\triangleq A$ and $P^{A}\triangleq A\times A$. The Scala
code for these definitions is
\begin{lstlisting}
type S[A] = A                                    // The shape of a leaf.
type P[A] = (A, A)                               // The shape of a branch.
final case class L[A](s: Either[S[A], L[P[A]]])  // Or `case class L[A](s: Either[A, L[(A, A)]])`.
\end{lstlisting}
Different choices of $P$ will define regular-shaped trees with different
kinds of branching.
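The \lstinline!fmap! function for this functor needs \emph{polymorphic
recursion}: the recursive call uses \lstinline!fmap! at the different
type $A\times A$. A sketch of such an implementation (our code, not
covered by the preceding statement):
\begin{lstlisting}
// Sketch: fmap for the regular-shaped tree L[A] = Either[A, L[(A, A)]].
final case class L[A](s: Either[A, L[(A, A)]])

def fmap[A, B](f: A => B): L[A] => L[B] = {
  case L(Left(a))  => L(Left(f(a)))
  case L(Right(t)) =>   // Polymorphic recursion: fmap is used at type ((A, A)) => ((B, B)).
    L(Right(fmap[(A, A), (B, B)] { case (x, y) => (f(x), f(y)) }(t)))
}
val t: L[Int] = L(Right(L(Left((1, 2)))))   // A tree holding the two leaf values 1 and 2.
assert(fmap((x: Int) => x + 1)(t) == L(Right(L(Left((2, 3))))))
\end{lstlisting}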

\subsection{Constructions of contrafunctors\label{subsec:f-Contrafunctor-constructions}}

The previous section performed \textbf{structural analysis}\index{structural analysis|textit}
for functors: a systematic search for type constructions (product,
co-product, etc.) that create new functors. \emph{Mutatis mutandis},
similar constructions work for contrafunctors, as shown in Table~\ref{tab:f-Contrafunctor-constructions}.
One difference with respect to Table~\ref{tab:f-Functor-constructions}
is the absence of the identity type constructor, $L^{A}\triangleq A$
(it is a functor, not a contrafunctor). However, the constant type
constructor, $L^{A}\triangleq Z$, is a functor and a contrafunctor
at the same time.

\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline 
\textbf{\small{}Construction} & \textbf{\small{}Type notation} & \textbf{\small{}Comment}\tabularnewline
\hline 
\hline 
{\footnotesize{}tuple} & {\footnotesize{}$C^{A}\triangleq P^{A}\times Q^{A}$} & {\footnotesize{}the product contrafunctor; $P$ and $Q$ must be contrafunctors}\tabularnewline
\hline 
{\footnotesize{}disjunctive type} & {\footnotesize{}$C^{A}\triangleq P^{A}+Q^{A}$} & {\footnotesize{}the co-product contrafunctor; $P$ and $Q$ must be
contrafunctors}\tabularnewline
\hline 
{\footnotesize{}function type} & {\footnotesize{}$C^{A}\triangleq L^{A}\rightarrow H^{A}$} & {\footnotesize{}the exponential contrafunctor; $L$ is a functor and
$H$ a contrafunctor}\tabularnewline
\hline 
{\footnotesize{}type parameter} & {\footnotesize{}$C^{A}\triangleq Z$} & {\footnotesize{}the constant contrafunctor; $Z$ is a fixed type}\tabularnewline
\hline 
{\footnotesize{}type constructor} & {\footnotesize{}$C^{A}\triangleq P^{Q^{A}}$} & {\footnotesize{}the composition; $P$ is a functor and $Q$ a contrafunctor
(or vice versa)}\tabularnewline
\hline 
{\footnotesize{}recursive type} & {\footnotesize{}$C^{A}\triangleq S^{A,C^{A}}$} & {\footnotesize{}$S^{A,B}$ must be a contrafunctor w.r.t.~$A$ and
functor w.r.t.~$B$}\tabularnewline
\hline 
\end{tabular}
\par\end{centering}
\caption{Type constructions defining a contrafunctor $C^{A}$.\label{tab:f-Contrafunctor-constructions}}
\end{table}

Let us now prove the validity of some of these constructions.

\subsubsection{Statement \label{subsec:functor-Statement-contrafunctor-constant}\ref{subsec:functor-Statement-contrafunctor-constant}}

If $Z$ is any fixed type, the constant type constructor $C^{A}\triangleq Z$
is a contrafunctor (the \textbf{constant contrafunctor}\index{constant contrafunctor})
whose \lstinline!cmap! returns an identity function of type $Z\rightarrow Z$:
\begin{lstlisting}
type Const[Z, A] = Z
def cmap[Z, A, B](f: B => A): Const[Z, A] => Const[Z, B] = identity[Z] 
\end{lstlisting}


\subparagraph{Proof}

All laws hold because \lstinline!cmap! returns an identity function:
\begin{align*}
{\color{greenunder}\text{identity law}:}\quad & \text{cmap}\left(\text{id}\right)=\text{id}\quad,\\
{\color{greenunder}\text{composition law}:}\quad & \text{cmap}\left(f\right)\bef\text{cmap}\left(g\right)=\text{id}\bef\text{id}=\text{id}=\text{cmap}\left(g\bef f\right)\quad.
\end{align*}


\subsubsection{Statement \label{subsec:functor-Statement-contrafunctor-composition-1}\ref{subsec:functor-Statement-contrafunctor-composition-1}}

If $P^{A}$ is a functor and $Q^{A}$ is a contrafunctor then $L^{A}\triangleq P^{Q^{A}}$
is a contrafunctor with \lstinline!cmap! defined by
\begin{lstlisting}
def cmap[A, B](f: B => A): P[Q[A]] => P[Q[B]] = fmap_P(cmap_Q(f))
\end{lstlisting}
where lawful implementations of \lstinline!fmap!$_{P}$ and \lstinline!cmap!$_{Q}$
are assumed to be given.

\subparagraph{Proof}

Convert the Scala implementation of \lstinline!cmap!$_{L}$ into
the code notation:
\[
\text{cmap}_{L}(f^{:B\rightarrow A})\triangleq\text{fmap}_{P}(\text{cmap}_{Q}(f))\quad.
\]
It is easier to reason about this function if we rewrite it as
\[
f^{\downarrow L}\triangleq\big(f^{\downarrow Q}\big)^{\uparrow P}\quad.
\]
The contrafunctor laws for $L$ are then proved like this:
\begin{align*}
{\color{greenunder}\text{identity law}:}\quad & \text{id}^{\downarrow L}=(\text{id}^{\downarrow Q})^{\uparrow P}=\text{id}^{\uparrow P}=\text{id}\quad.\\
{\color{greenunder}\text{composition law}:}\quad & f^{\downarrow L}\bef g^{\downarrow L}=(f^{\downarrow Q})^{\uparrow P}\gunderline{\,\bef\,}(g^{\downarrow Q})^{\uparrow P}\\
{\color{greenunder}\text{use }P\text{\textsf{'}s composition law}:}\quad & \quad=\big(\gunderline{f^{\downarrow Q}\bef g^{\downarrow Q}}\big)^{\uparrow P}=\big((g\bef f\gunderline{)^{\downarrow Q}\big)^{\uparrow P}}=\left(g\bef f\right)^{\downarrow L}\quad.
\end{align*}
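For example, choosing $P^{A}\triangleq\text{List}^{A}$ and $Q^{A}\triangleq A\rightarrow\text{Int}$
(choices made for this illustration) gives the contrafunctor $L^{A}\triangleq\text{List}^{A\rightarrow\text{Int}}$:
\begin{lstlisting}
// Sketch: cmap for L[A] = List[A => Int], composing the functor List
// with the contrafunctor Q[A] = A => Int.
def cmap_L[A, B](f: B => A): List[A => Int] => List[B => Int] =
  _.map(q => f andThen q)   // fmap_P(cmap_Q(f)) with P = List.

val qs: List[Int => Int] = List(_ + 1, _ * 2)
val onStrings = cmap_L((s: String) => s.length)(qs)
assert(onStrings.map(q => q("abc")) == List(4, 6))   // length 3, then 3 + 1 and 3 * 2.
\end{lstlisting}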

Finally, the recursive construction works for contrafunctors, except
that the type constructor $S^{A,R}$ must be a contrafunctor in $A$
(but still a functor in $R$). An example of such a type constructor
is
\begin{equation}
S^{A,R}\triangleq\left(A\rightarrow\text{Int}\right)+R\times R\quad.\label{eq:f-example-contra-bifunctor}
\end{equation}
The type constructor $S^{\bullet,\bullet}$ is not a bifunctor because
it is contravariant in its first type parameter; so we cannot define
a \lstinline!bimap! function for it. However, we can define an analogous
function called \lstinline!xmap!, with the type signature
\begin{lstlisting}
def xmap[A, B, Q, R](f: B => A)(g: Q => R): S[A, Q] => S[B, R]
\end{lstlisting}
\begin{align*}
 & \text{xmap}_{S}:\left(B\rightarrow A\right)\rightarrow\left(Q\rightarrow R\right)\rightarrow S^{A,Q}\rightarrow S^{B,R}\quad,\\
 & \text{xmap}_{S}(f^{:B\rightarrow A})(g^{:Q\rightarrow R})\triangleq\text{fmap}_{S^{A,\bullet}}(g)\bef\text{cmap}_{S^{\bullet,R}}(f)\quad.
\end{align*}
The function \lstinline!xmap! should obey the laws of identity and
composition:
\begin{align}
{\color{greenunder}\text{identity law}:}\quad & \text{xmap}_{S}(\text{id})(\text{id})=\text{id}\quad,\label{eq:f-profunctor-identity-law}\\
{\color{greenunder}\text{composition law}:}\quad & \text{xmap}_{S}(f_{1})(g_{1})\bef\text{xmap}_{S}(f_{2})(g_{2})=\text{xmap}_{S}(f_{2}\bef f_{1})(g_{1}\bef g_{2})\quad.\label{eq:f-profunctor-composition-law}
\end{align}
These laws are similar to the identity and composition laws for bifunctors
(Section~\ref{subsec:Bifunctors}), except for inverting the order
of the composition $\left(f_{2}\bef f_{1}\right)$. The laws hold
automatically whenever the functor and contrafunctor methods for $S$
($\text{fmap}_{S^{A,\bullet}}$ and $\text{cmap}_{S^{\bullet,R}}$)
are fully parametric. We omit the details since they are quite similar
to what we saw in Section~\ref{subsec:Bifunctors} for bifunctors.
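For the type constructor~(\ref{eq:f-example-contra-bifunctor}), the
\lstinline!xmap! function may be implemented as follows (a sketch):
\begin{lstlisting}
// Sketch: xmap for S[A, R] = (A => Int) + R x R.
type S[A, R] = Either[A => Int, (R, R)]

def xmap[A, B, Q, R](f: B => A)(g: Q => R): S[A, Q] => S[B, R] = {
  case Left(c)       => Left(f andThen c)    // Contravariant in the first type parameter.
  case Right((x, y)) => Right((g(x), g(y)))  // Covariant in the second type parameter.
}
val s: S[String, Int] = Left(_.length)
val Left(c) = xmap[String, Boolean, Int, Int]((b: Boolean) => b.toString)(identity[Int])(s)
assert(c(true) == 4)   // "true".length
\end{lstlisting}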

If we define a type constructor $L^{\bullet}$ using the recursive
\textsf{``}type equation\textsf{''}
\[
L^{A}\triangleq S^{A,L^{A}}\triangleq\left(A\rightarrow\text{Int}\right)+L^{A}\times L^{A}\quad,
\]
we obtain a contrafunctor in the shape of a binary tree whose leaves
are functions of type $A\rightarrow\text{Int}$. The next statement
shows that recursive type equations of this kind always define contrafunctors.

\subsubsection{Statement \label{subsec:functor-Statement-contrafunctor-recursive-1}\ref{subsec:functor-Statement-contrafunctor-recursive-1}}

If $S^{A,R}$ is a contrafunctor with respect to $A$ and a functor
with respect to $R$ then the recursively defined type constructor
$C^{A}$ is a contrafunctor,
\[
C^{A}\triangleq S^{A,C^{A}}\quad.
\]
Given the functions \lstinline!cmap!$_{S^{\bullet,R}}$ and \lstinline!fmap!$_{S^{A,\bullet}}$
for $S$, we implement \lstinline!cmap!$_{C}$ as
\begin{align*}
\text{cmap}_{C}(f^{:B\rightarrow A}) & :C^{A}\rightarrow C^{B}\cong S^{A,C^{A}}\rightarrow S^{B,C^{B}}\quad,\\
\text{cmap}_{C}(f^{:B\rightarrow A}) & \triangleq\text{xmap}_{S}(f)(\overline{\text{cmap}_{C}}(f))\quad.
\end{align*}
The corresponding Scala code can be written as
\begin{lstlisting}
final case class C[A](x: S[A, C[A]]) // The type constructor S[_, _] must be defined previously.

def xmap_S[A,B,Q,R](f: B => A)(g: Q => R): S[A, Q] => S[B, R] = ???          // Must be defined.

def cmap_C[A, B](f: B => A): C[A] => C[B] = { case C(x) =>
  val sbcb: S[B, C[B]] = xmap_S(f)(cmap_C(f))(x)                    // Recursive call to cmap_C.
  C(sbcb)              // Need to wrap the value of type S[B, C[B]] into the type constructor C.
}
\end{lstlisting}


\subparagraph{Proof}

The code of \lstinline!cmap! is recursive, and the recursive call
is marked by an overline: 
\[
\text{cmap}_{C}(f)\triangleq f^{\downarrow C}\triangleq\text{xmap}_{S}(f)(\overline{\text{cmap}_{C}}(f))\quad.
\]
To verify the identity law:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{cmap}_{C}(\text{id})=\text{xmap}_{S}(\text{id})(\gunderline{\overline{\text{cmap}_{C}}(\text{id})})\\
{\color{greenunder}\text{inductive assumption}:}\quad & =\text{xmap}_{S}(\text{id})(\text{id})\\
{\color{greenunder}\text{identity law of }\text{xmap}_{S}:}\quad & =\text{id}\quad.
\end{align*}
To verify the composition law:
\begin{align*}
{\color{greenunder}\text{expect to equal }(g^{\downarrow C}\bef f^{\downarrow C}):}\quad & (f^{:D\rightarrow B}\bef g^{:B\rightarrow A})^{\downarrow C}=\text{xmap}_{S}(f\bef g)(\gunderline{\overline{\text{cmap}_{C}}(f\bef g)})\\
{\color{greenunder}\text{inductive assumption}:}\quad & =\text{xmap}_{S}(f\bef g)(\overline{\text{cmap}_{C}}(g)\bef\overline{\text{cmap}_{C}}(f))\\
{\color{greenunder}\text{composition law of }\text{xmap}_{S}:}\quad & =\text{xmap}_{S}(g)(\overline{\text{cmap}_{C}}(g))\bef\text{xmap}_{S}(f)(\overline{\text{cmap}_{C}}(f))\\
{\color{greenunder}\text{definition of }^{\downarrow C}:}\quad & =g^{\downarrow C}\bef f^{\downarrow C}\quad.
\end{align*}
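As a concrete illustration (the particular choice of $S$ here is hypothetical, not fixed by the construction), take $S^{A,R}\triangleq(A\rightarrow\text{Int})\times(\bbnum 1+R)$. Then $C^{A}$ is a non-empty list of functions of type $A\rightarrow\text{Int}$, and the recursive \lstinline!cmap!$_{C}$ precomposes $f$ with every function in the list:
\begin{lstlisting}
final case class S[A, R](f: A => Int, rest: Option[R])
final case class C[A](x: S[A, C[A]])

def xmap_S[A, B, Q, R](f: B => A)(g: Q => R): S[A, Q] => S[B, R] = {
  case S(h, rest) => S(f andThen h, rest.map(g))  // Contravariant in A, covariant in Q.
}

def cmap_C[A, B](f: B => A): C[A] => C[B] = { case C(x) =>
  C(xmap_S(f)(cmap_C(f))(x))                      // Recursive call, as in the general code.
}
\end{lstlisting}
For example, \lstinline!cmap_C((s: String) => s.length)! converts a \lstinline!C[Int]! of integer-consuming functions into a \lstinline!C[String]!.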


\subsection{Solved examples: How to recognize functors and contrafunctors\label{subsec:Solved-examples:-How-to-recognize-functors}}

Sections~\ref{subsec:f-Functor-constructions} and~\ref{subsec:f-Contrafunctor-constructions}
describe how functors and contrafunctors are built from other type
expressions. We can see from Tables~\ref{tab:f-Functor-constructions}
and~\ref{tab:f-Contrafunctor-constructions} that \emph{every} one
of the six basic type constructions\index{six type constructions}
(unit type, type parameters, product types, co-product types, function
types, recursive types) gives either a new functor or a new contrafunctor.
The six type constructions generate all \index{exponential-polynomial type}exponential-polynomial
types, including recursive ones. So, we should be able to decide whether
any given exponential-polynomial type expression is a functor or a
contrafunctor. The decision algorithm is based on the results shown
in Tables~\ref{tab:f-Functor-constructions} and~\ref{tab:f-Contrafunctor-constructions}:
\begin{itemize}
\item Primitive types $\bbnum 1$, \lstinline!Int!, \lstinline!String!,
etc., can be viewed both as constant functors and as constant contrafunctors
(since they do not contain type parameters).
\item Polynomial type expressions (not containing any function arrows) are
always functors\index{polynomial functor} with respect to every type
parameter. Equivalently, we may say that all polynomial type constructors
are covariant in every type parameter. For example, the type expression
$A\times B+\left(A+\bbnum 1+B\right)\times A\times C$ is covariant
in each of the type parameters $A$, $B$, $C$.
\item Type parameters to the right of a function arrow are in a covariant
position. For example, $\text{Int}\rightarrow A$ is covariant in
$A$.
\item Each time a type parameter is placed to the left of an \emph{uncurried}
function arrow $\rightarrow$, the variance is reversed: covariant
becomes contravariant and vice versa. For example,
\begin{align*}
{\color{greenunder}\text{this is covariant in }A:}\quad & \bbnum 1+A\times A\quad,\\
{\color{greenunder}\text{this is contravariant in }A:}\quad & \left(\bbnum 1+A\times A\right)\rightarrow\text{Int}\quad,\\
{\color{greenunder}\text{this is covariant in }A:}\quad & \left(\left(\bbnum 1+A\times A\right)\rightarrow\text{Int}\right)\rightarrow\text{Int}\quad,\\
{\color{greenunder}\text{this is contravariant in }A:}\quad & \left(\left(\left(\bbnum 1+A\times A\right)\rightarrow\text{Int}\right)\rightarrow\text{Int}\right)\rightarrow\text{Int}\quad.
\end{align*}
\item Repeated curried function arrows work as one arrow: $A\rightarrow\text{Int}$
is contravariant in $A$, and $A\rightarrow A\rightarrow A\rightarrow\text{Int}$
is still contravariant in $A$. This is  because the type $A\rightarrow A\rightarrow A\rightarrow\text{Int}$
is equivalent to $A\times A\times A\rightarrow\text{Int}$, which
is of the form $F^{A}\rightarrow\text{Int}$ with a type constructor
$F^{A}\triangleq A\times A\times A$. Exercise~\ref{subsec:functor-Exercise-contrafunctor-exponential}
will show that $F^{A}\rightarrow\text{Int}$ is contravariant in $A$.
\item Nested type constructors combine their variances: e.g., if we know
that $F^{A}$ is contravariant in $A$ then $F^{A\rightarrow\text{Int}}$
is covariant in $A$, while $F^{A\times A\times A}$ is contravariant
in $A$.
\end{itemize}
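These rules can be checked mechanically: Scala's declaration-site variance annotations (\lstinline!+A! for covariant, \lstinline!-A! for contravariant) make the compiler verify that a type parameter occurs only in positions of the declared variance. A sketch with hypothetical type names:
\begin{lstlisting}
// Polynomial type: A occurs only covariantly, so +A is accepted.
final case class P1[+A](value: Either[(A, A), Unit])
// A to the left of one function arrow: contravariant, so -A is accepted.
final case class P2[-A](run: Either[Unit, (A, A)] => Int)
// Two nested function arrows reverse the variance twice: covariant again.
final case class P3[+A](run: (Either[Unit, (A, A)] => Int) => Int)
\end{lstlisting}
Annotating any of these type parameters with the opposite variance is a compile-time error.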
For any exponential-polynomial type expression, such as Eq.~(\ref{eq:f-example-complicated-z}),
\[
Z^{A,R}\triangleq\left(\left(A\rightarrow A\rightarrow R\right)\rightarrow R\right)\times A+\left(\bbnum 1+R\rightarrow A+\text{Int}\right)+A\times A\times\text{Int}\times\text{Int}\quad,
\]
we mark the position of each type parameter as either covariant ($+$)
or contravariant ($-$), according to the number of nested function
arrows:
\[
\big(\big(\underset{+}{A}\rightarrow\underset{+}{A}\rightarrow\underset{-}{R}\big)\rightarrow\underset{+}{R}\big)\times\underset{+}{A}+\big(\bbnum 1+\underset{-}{R}\rightarrow\underset{+}{A}+\text{Int}\big)+\underset{+}{A}\times\underset{+}{A}\times\text{Int}\times\text{Int}\quad.
\]
We find that $A$ is always in covariant positions, while $R$ is
sometimes in covariant and sometimes in contravariant positions. So,
we expect that $Z^{A,R}$ is a functor with respect to $A$, but not
a functor (nor a contrafunctor) with respect to $R$.

To show that $Z^{A,R}$ is indeed a functor in the parameter $A$,
we need to implement a suitable \lstinline!map! method and verify
that the functor laws hold. To do that from scratch, we could use
the techniques explained in this and the previous chapters: starting
from the type signature 
\[
\text{map}_{Z}:Z^{A,R}\rightarrow\left(A\rightarrow B\right)\rightarrow Z^{B,R}\quad,
\]
we could derive a fully parametric, information-preserving implementation
of \lstinline!map!. We could then look for proofs of the identity
and composition laws for that \lstinline!map! function. This would
require a lot of work for a complicated type constructor such as $Z^{A,R}$.

However, that work can be avoided if we find a way of building up
$Z^{A,R}$ step by step via the known functor and contrafunctor constructions.
Each step automatically provides both a fragment of the code of \lstinline!map!
and a proof that the functor laws hold up to that step. In this way,
we will avoid the need to look for an implementation of \lstinline!map!
and proofs of laws for each new functor and contrafunctor. The next
examples illustrate the procedure for a simpler type constructor.

\subsubsection{Example \label{subsec:f-Example-recognize-type-variance-1}\ref{subsec:f-Example-recognize-type-variance-1}\index{solved examples}}

Rewrite this Scala definition in the type notation and decide whether
it is covariant or contravariant with respect to each type parameter:

\begin{lstlisting}
final case class G[A, Z](p: Either[Int, A], q: Option[Z => Int => Z => (Int, A)])
\end{lstlisting}


\subparagraph{Solution}

The type notation for $G$ is $G^{A,Z}\triangleq(\text{Int}+A)\times(\bbnum 1+(Z\rightarrow\text{Int}\rightarrow Z\rightarrow\text{Int}\times A))$.
Mark the covariant and the contravariant positions in this type expression:
\[
(\text{Int}+\underset{+}{A})\times(\bbnum 1+(\underset{-}{Z}\rightarrow\text{Int}\rightarrow\underset{-}{Z}\rightarrow\text{Int}\times\underset{+}{A}))\quad.
\]
All $Z$ positions in the sub-expression $Z\rightarrow\text{Int}\rightarrow Z\rightarrow\text{Int}\times A$
are contravariant since the function arrows are curried rather than
nested. We see that $A$ is always in covariant positions ($+$) while
$Z$ is always in contravariant positions ($-$). It follows that
$G^{A,Z}$ is covariant in $A$ and contravariant in $Z$. 
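This conclusion can be verified by the Scala compiler: the variance annotations \lstinline!+A! and \lstinline!-Z! are accepted in the definition of \lstinline!G! (the compiler would reject them if some occurrence had the wrong variance):
\begin{lstlisting}
final case class G[+A, -Z](p: Either[Int, A], q: Option[Z => Int => Z => (Int, A)])
\end{lstlisting}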

\subsubsection{Example \label{subsec:f-Example-recognize-type-variance-1-1}\ref{subsec:f-Example-recognize-type-variance-1-1}}

Use known functor constructions to implement the \lstinline!map!
method with respect to \lstinline!A! for the type \lstinline!G[A, Z]!
from Example~\ref{subsec:f-Example-recognize-type-variance-1}.

\subparagraph{Solution}

We need to build $G^{A,Z}$ via step-by-step constructions that start
from primitive types and type parameters. At the top level of its
type expression, $G^{A,Z}$ is a product type. So, we begin by using
the \textsf{``}functor product\textsf{''} construction (Statement~\ref{subsec:functor-Statement-functor-product}):
\begin{align*}
 & G^{A,Z}\cong G_{1}^{A}\times G_{2}^{A,Z}\quad,\\
\text{where } & G_{1}^{A}\triangleq\text{Int}+A\quad\text{ and }\quad G_{2}^{A,Z}\triangleq\bbnum 1+(Z\rightarrow\text{Int}\rightarrow Z\rightarrow\text{Int}\times A)\quad.
\end{align*}
We continue with $G_{1}^{A}$, which is a co-product of $\text{Int}$
(a constant functor) and $A$ (the identity functor). The constant
functor and the identity functor have lawful \lstinline!map! implementations
that are already known (Statements~\ref{subsec:f-Statement-identity-functor}\textendash \ref{subsec:f-Statement-constant-functor}).
Now, the \textsf{``}functor co-product\textsf{''} construction (Statement~\ref{subsec:functor-Statement-functor-product})
produces a \lstinline!map! implementation for $G_{1}^{A}$ together
with a proof that it satisfies the functor laws:
\[
\text{fmap}_{G_{1}}(f^{:A\rightarrow B})=f^{\uparrow G_{1}}\triangleq\,\begin{array}{|c||cc|}
 & \text{Int} & B\\
\hline \text{Int} & \text{id} & \bbnum 0\\
A & \bbnum 0 & f
\end{array}\quad.
\]
Turning our attention to $G_{2}^{A,Z}$, we find that it is a disjunctive
type containing a curried function type that ultimately returns the
product type $\text{Int}\times A$. This tells us to use the functor
constructions for \textsf{``}co-product\textsf{''}, \textsf{``}exponential\textsf{''}, and \textsf{``}product\textsf{''}.
Write down the functor constructions needed at each step as we decompose
$G_{2}^{A,Z}$:
\begin{align*}
 & G_{2}^{A,Z}\triangleq\bbnum 1+(Z\rightarrow\text{Int}\rightarrow Z\rightarrow\text{Int}\times A)\quad.\\
{\color{greenunder}\text{co-product}:}\quad & G_{2}^{A,Z}\cong\bbnum 1+G_{3}^{A,Z}\quad\text{ where }G_{3}^{A,Z}\triangleq Z\rightarrow\text{Int}\rightarrow Z\rightarrow\text{Int}\times A\quad.\\
{\color{greenunder}\text{exponential}:}\quad & G_{3}^{A,Z}\cong Z\rightarrow G_{4}^{A,Z}\quad\text{ where }G_{4}^{A,Z}\triangleq\text{Int}\rightarrow Z\rightarrow\text{Int}\times A\quad.\\
{\color{greenunder}\text{exponential}:}\quad & G_{4}^{A,Z}\cong\text{Int}\rightarrow G_{5}^{A,Z}\quad\text{ where }G_{5}^{A,Z}\triangleq Z\rightarrow\text{Int}\times A\quad.\\
{\color{greenunder}\text{exponential}:}\quad & G_{5}^{A,Z}\cong Z\rightarrow G_{6}^{A}\quad\text{ where }G_{6}^{A}\triangleq\text{Int}\times A\quad.\\
{\color{greenunder}\text{product}:}\quad & G_{6}^{A}\cong\text{Int}\times A\cong\text{Const}^{\text{Int},A}\times\text{Id}^{A}\quad.
\end{align*}
Each of the type constructors $G_{1}$, ..., $G_{6}$ is a functor
in $A$ because all of the functor constructions preserve functor
laws. Therefore, $G^{A,Z}$ is a functor in $A$. 

It remains to derive the code for the \lstinline!fmap! method of
$G$. Each of the functor constructions combines the \lstinline!fmap!
implementations from previously defined functors into a new \lstinline!map!
implementation, so we just need to combine the code fragments in the
order of constructions. For brevity, we will use the notations $f^{\uparrow L}\triangleq\text{fmap}_{L}(f)$
and $x\triangleright f^{\uparrow L}$ instead of the Scala code \lstinline!x.map(f)!
throughout the derivations:
\begin{align*}
{\color{greenunder}\text{product}:}\quad & G^{A,Z}\cong G_{1}^{A}\times G_{2}^{A,Z}\quad,\quad\quad(g_{1}\times g_{2})\triangleright f^{\uparrow G}=(g_{1}\triangleright f^{\uparrow G_{1}})\times(g_{2}\triangleright f^{\uparrow G_{2}})\quad.\\
{\color{greenunder}\text{co-product}:}\quad & G_{1}^{A}\triangleq\text{Int}+A\quad,\quad\quad f^{\uparrow G_{1}}=\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & f
\end{array}\quad.\\
{\color{greenunder}\text{co-product}:}\quad & G_{2}^{A,Z}\triangleq\bbnum 1+G_{3}^{A,Z}\quad,\quad\quad f^{\uparrow G_{2}}=\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & f^{\uparrow G_{3}}
\end{array}\quad.\\
{\color{greenunder}\text{exponential}:}\quad & G_{3}^{A,Z}\triangleq Z\rightarrow G_{4}^{A,Z}\quad,\quad\quad g_{3}\triangleright f^{\uparrow G_{3}}=g_{3}\bef f^{\uparrow G_{4}}=z^{:Z}\rightarrow z\triangleright g_{3}\triangleright f^{\uparrow G_{4}}\quad.
\end{align*}
The pipe symbol binds more tightly than the function arrow.\index{pipe notation!operator precedence}
So, $z\rightarrow z\triangleright g\triangleright h$ means $z\rightarrow(z\triangleright g\triangleright h)$,
or $z\rightarrow h(g(z))$. Applying the exponential construction
three times, we finally obtain
\begin{align*}
 & G_{3}^{A,Z}\triangleq Z\rightarrow\text{Int}\rightarrow Z\rightarrow G_{6}^{A}\quad,\quad\quad g_{3}\triangleright f^{\uparrow G_{3}}=z_{1}^{:Z}\rightarrow n^{:\text{Int}}\rightarrow z_{2}^{:Z}\rightarrow g_{3}(z_{1})(n)(z_{2})\triangleright f^{\uparrow G_{6}}\quad.\\
 & G_{6}^{A}\triangleq\text{Int}\times A\quad,\quad\quad(i\times a)\triangleright f^{\uparrow G_{6}}=i\times f(a)\quad.
\end{align*}
We can now write the corresponding Scala code for \lstinline!fmap!$_{G}$:
\begin{lstlisting}
def fmap_G[A, B, Z](f: A => B): G[A, Z] => G[B, Z] = { case G(p, q) =>
  val newP: Either[Int, B] = p.map(f)      // Use the standard map method for Either[Int, A].
  val newQ: Option[Z => Int => Z => (Int, B)]  = q.map { // Use the map method for Option[_].
    (g3: Z => Int => Z => (Int, A)) => 
       z1 => n => z2 => {          // The code of map for G_3.
         val (i, a) = g3(z1)(n)(z2)
         (i, f(a))                 // The code of map for G_6.
       }
    }
  G(newP, newQ)                    // The product construction combines the two parts.
}
\end{lstlisting}
In this way, the code of $\text{fmap}_{F}$ can be unambiguously \emph{derived}
for any functor $F$ from the type expression of $F^{A}$, and similarly
the code for $\text{cmap}_{C}$ for any contrafunctor $C$.
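As a quick sanity check, we may apply \lstinline!fmap!$_{G}$ to a sample value and confirm that it transforms the wrapped values of type \lstinline!A! while leaving the \lstinline!Int! and \lstinline!Z! parts unchanged (the definitions above are repeated here to make the fragment self-contained):
\begin{lstlisting}
final case class G[A, Z](p: Either[Int, A], q: Option[Z => Int => Z => (Int, A)])

def fmap_G[A, B, Z](f: A => B): G[A, Z] => G[B, Z] = { case G(p, q) =>
  val newP: Either[Int, B] = p.map(f)
  val newQ: Option[Z => Int => Z => (Int, B)] = q.map {
    (g3: Z => Int => Z => (Int, A)) =>
      z1 => n => z2 => {
        val (i, a) = g3(z1)(n)(z2)
        (i, f(a))
      }
  }
  G(newP, newQ)
}

val sample: G[Int, Boolean] = G(Right(10), Some(z1 => n => z2 => (n + 1, if (z1) 100 else 200)))
val result: G[String, Boolean] = fmap_G((x: Int) => "value " + x)(sample)
// result.p == Right("value 10"); result.q.get(true)(7)(false) == (8, "value 100")
\end{lstlisting}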

\section{Summary}

What tasks can we perform with the techniques of this chapter?
\begin{itemize}
\item Quickly decide if a given type constructor is a functor, a contrafunctor,
or neither.
\item Implement a \lstinline!fmap! or a \lstinline!cmap! function that
satisfies the appropriate laws.
\item Use constructions to derive the correct code of \lstinline!fmap!
or \lstinline!cmap! without trial and error.
\item Use functor blocks to manipulate data wrapped in functors with more
readable code.
\end{itemize}

\subsection{Exercises: Functor and contrafunctor constructions \index{exercises}}

\subsubsection{Exercise \label{subsec:functor-Exercise-contrafunctor-exponential}\ref{subsec:functor-Exercise-contrafunctor-exponential}}

If $H^{A}$ is a contrafunctor and $L^{A}$ is a functor, show that
$C\triangleq L^{A}\rightarrow H^{A}$ is a contrafunctor with the
\lstinline!cmap! method defined by the following code:
\begin{lstlisting}[mathescape=true]
def cmap[A, B](f: B => A)(c: L[A] => H[A]): L[B] => H[B] = {
  lb: L[B] => cmap_H(f)(c(fmap_L(f)(lb)))                    // Code notation: ${\color{dkgreen}\scriptstyle f^{\downarrow C}\triangleq\, c\rightarrow f^{\uparrow L}\bef c\bef f^{\downarrow H}}$
}
\end{lstlisting}
Here, \lstinline!cmap_H! and \lstinline!fmap_L! are the methods
already defined for $H$ and $L$. Prove that the laws hold.

\subsubsection{Exercise \label{subsec:functor-Exercise-functor-laws}\ref{subsec:functor-Exercise-functor-laws}}

Implement the required \lstinline!fmap! or \lstinline!cmap! function
for the given type constructors $L$ and prove that the appropriate
laws hold. Write the implementations both in Scala and in the code
notation. Assume that the given type constructors $F$ and $G$ already
satisfy their respective laws.

\textbf{(a)} $L^{A}\triangleq F^{A}\times G^{A}$ is a contrafunctor
if $F^{A}$ and $G^{A}$ are contrafunctors.

\textbf{(b)} $L^{A}\triangleq F^{A}+G^{A}$ is a contrafunctor if
$F^{A}$ and $G^{A}$ are contrafunctors.

\textbf{(c)} $L^{A}\triangleq F^{G^{A}}$ is a functor when both $F$
and $G$ are contrafunctors.

\textbf{(d)} $L^{A}\triangleq F^{G^{A}}$ is a contrafunctor when
$F$ is a contrafunctor and $G$ is a functor.

\subsubsection{Exercise \label{subsec:f-Exercise-recursive-functor-2}\ref{subsec:f-Exercise-recursive-functor-2}}

Show that the type constructor $L$ defined by Eq.~(\ref{eq:f-def-recursive-functor-2})
is a functor for any given bifunctor $S$ and functor $P$. 

\subsubsection{Exercise \label{subsec:functor-Exercise-functor-constructions-0}\ref{subsec:functor-Exercise-functor-constructions-0}}

Show that $L^{A}\triangleq F^{A}\rightarrow G^{A}$ is, in general,
neither a functor nor a contrafunctor when both $F^{A}$ and $G^{A}$
are functors or both are contrafunctors (an example of suitable $F^{A}$
and $G^{A}$ is sufficient).

\subsubsection{Exercise \label{subsec:functor-Exercise-functor-constructions-1}\ref{subsec:functor-Exercise-functor-constructions-1}}

For each of the Scala type constructors defined below, formulate the
definition in the type notation and decide whether the type constructors
are functors, contrafunctors, or neither.
\begin{lstlisting}
type F[A] = Int => (Option[A], Either[A, Int], Either[A, A])
type G[A] = ((Int, A)) => Either[Int, A]
type H[A] = Either[A, (A, Option[A])] => Int => Int
\end{lstlisting}


\subsubsection{Exercise \label{subsec:functor-Exercise-functor-constructions-2}\ref{subsec:functor-Exercise-functor-constructions-2}}

Using the known constructions, determine which of the following are
functors or contrafunctors (or neither) and implement \lstinline!fmap!
or \lstinline!cmap! if appropriate. Answer this question with respect
to each type parameter separately.

\textbf{(a)} $F^{A}\triangleq\text{Int}\times A\times A+(\text{String}\rightarrow A)\times A\quad.$

\textbf{(b)} $G^{A,B}\triangleq\left(A\rightarrow\text{Int}\rightarrow\bbnum 1+B\right)+\left(A\rightarrow\bbnum 1+A\rightarrow\text{Int}\right)\quad.$

\textbf{(c)} $H^{A,B,C}\triangleq\left(A\rightarrow A\rightarrow B\rightarrow C\right)\times C+\left(B\rightarrow A\right)\quad.$

\textbf{(d)} $P^{A,B}\triangleq\left(\left(\left(A\rightarrow B\right)\rightarrow A\right)\rightarrow B\right)\rightarrow A\quad.$

\subsubsection{Exercise \label{subsec:functor-Exercise-functor-constructions-3}\ref{subsec:functor-Exercise-functor-constructions-3}}

Show that the recursive type constructor $L^{\bullet}$ defined by
\[
L^{A}\triangleq\bbnum 1+A+L^{A}
\]
is a functor, and implement a \lstinline!map! or \lstinline!fmap!
function for $L$ in Scala.

\subsubsection{Exercise \label{subsec:functor-Exercise-functor-constructions-3-1}\ref{subsec:functor-Exercise-functor-constructions-3-1}}

Show that the regular-shaped tree $L^{\bullet}$ defined by
\[
L^{A}\triangleq A\times A+L^{A\times A\times A}
\]
is a functor, and implement a \lstinline!map! or \lstinline!fmap!
function for $L$ in Scala.

\section{Further developments}

\subsection{Profunctors\label{subsec:f-Profunctors}}

We have seen that some type constructors are neither functors nor
contrafunctors because their type parameters appear both in covariant
and contravariant positions. An example of such a type constructor
is
\[
P^{A}\triangleq A+\left(A\rightarrow\text{Int}\right)\quad.
\]
It is not possible to define either a \lstinline!map! or a \lstinline!cmap!
function for $P$: the required type signatures cannot be implemented.
However, we \emph{can} implement a function called \lstinline!xmap!,
with the type signature
\[
\text{xmap}_{P}:\left(B\rightarrow A\right)\rightarrow\left(A\rightarrow B\right)\rightarrow P^{A}\rightarrow P^{B}\quad.
\]
To see why, let us temporarily rename the contravariant occurrence
of $A$ to $Z$ and define a new type constructor $\tilde{P}$ by
\[
\tilde{P}^{Z,A}\triangleq A+\left(Z\rightarrow\text{Int}\right)\quad.
\]
The original type constructor $P^{A}$ is expressed as $P^{A}=\tilde{P}^{A,A}$.
Now, $\tilde{P}^{Z,A}$ is covariant in $A$ and contravariant in
$Z$. We can implement \lstinline!xmap!$_{\tilde{P}}$ as a composition
of \lstinline!fmap! with respect to $A$ and \lstinline!cmap! with
respect to $Z$, similarly to what we saw in the proof of Statement~\ref{subsec:functor-Statement-contrafunctor-recursive-1}.
The function \lstinline!xmap!$_{\tilde{P}}$ will satisfy the identity
and composition laws~(\ref{eq:f-profunctor-identity-law})\textendash (\ref{eq:f-profunctor-composition-law}).
Setting the type parameter $Z=A$, we will obtain the \lstinline!xmap!$_{P}$
function for $P$. The identity and composition laws for \lstinline!xmap!$_{P}$
will hold, since the laws of $\tilde{P}^{Z,A}$ hold for all type
parameters:
\begin{align*}
{\color{greenunder}P\text{\textsf{'}s identity law}:}\quad & \text{xmap}_{P}(\text{id}^{:A\rightarrow A})(\text{id}^{:A\rightarrow A})=\text{id}\quad,\\
{\color{greenunder}P\text{\textsf{'}s composition law}:}\quad & \text{xmap}_{P}(f_{1}^{:B\rightarrow A})(g_{1}^{:A\rightarrow B})\bef\text{xmap}_{P}(f_{2}^{:C\rightarrow B})(g_{2}^{:B\rightarrow C})=\text{xmap}_{P}(f_{2}\bef f_{1})(g_{1}\bef g_{2})\quad.
\end{align*}

A type constructor $P^{A}$ with these properties ($P^{A}\cong\tilde{P}^{A,A}$
where $\tilde{P}^{Z,A}$ has a lawful \lstinline!xmap!$_{\tilde{P}}$)
is called a \textbf{profunctor}\index{profunctor|textit}. Sometimes
the type constructor $\tilde{P}^{Z,A}$ is also called a profunctor.
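For the example $P^{A}\triangleq A+\left(A\rightarrow\text{Int}\right)$, the function \lstinline!xmap!$_{P}$ can be written directly (a sketch; the covariant occurrence of $A$ uses $g$, while the contravariant occurrence uses $f$):
\begin{lstlisting}
type P[A] = Either[A, A => Int]

def xmap_P[A, B](f: B => A)(g: A => B): P[A] => P[B] = {
  case Left(a)  => Left(g(a))          // A in a covariant position: apply g.
  case Right(h) => Right(f andThen h)  // A in a contravariant position: precompose f.
}
\end{lstlisting}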

Consider an exponential-polynomial type constructor $P^{A}$, no matter
how complicated, such as
\[
P^{A}\triangleq\left(\bbnum 1+A\times A\rightarrow A\right)\times A\rightarrow\bbnum 1+\left(A\rightarrow A+\text{Int}\right)\quad.
\]
Each copy of the type parameter $A$ will occur either in a covariant
or in a contravariant position, because no other possibility is available
in exponential-polynomial types. So, we can always rename all contravariant
occurrences of the type parameter $A$ to \textsf{``}$Z$\textsf{''} and so obtain
a new type constructor $\tilde{P}^{Z,A}$, which will be covariant
in $A$ and contravariant in $Z$. Since $\tilde{P}^{Z,A}$ is a functor
in $A$ and a contrafunctor in $Z$, we will be able to define a function
\lstinline!xmap!$_{\tilde{P}}$ satisfying the identity and composition
laws. Setting $Z=A$, we will obtain a lawful \lstinline!xmap!$_{P}$,
which makes $P$ a profunctor. Thus, \emph{every} exponential-polynomial
type constructor is a profunctor.

An unfunctor\index{unfunctor}, such as the disjunctive type \lstinline!ServerAction[R]!
shown in Section~\ref{subsec:Examples-of-non-functors}, cannot be
made into a profunctor. The type signature of \lstinline!xmap! cannot
be implemented for \lstinline!ServerAction[R]! because it is not
a fully parametric type constructor (and so is not exponential-polynomial).

Profunctors are not often used in practical coding. We will see profunctors
occasionally in later chapters where we need to reason about type
constructors of arbitrary variance.

\subsection{Subtyping with injective or surjective conversion functions}

In some cases, $P$ is a subtype of $Q$ when the set of values of
$P$ is a \emph{subset} of values of $Q$. In other words, the conversion
function $P\rightarrow Q$ is injective and embeds all information
from a value of type $P$ into a value of type $Q$. This kind of
subtyping works for parts of disjunctive types, such as \lstinline!Some[A] <: Option[A]!
(in the type notation, $\bbnum 0+A\lesssim\bbnum 1+A$). The set of
all values of type \lstinline!Some[A]! is a subset of the set of
values of type \lstinline!Option[A]!, and the conversion function
is injective because it is an identity function, $\bbnum 0+x^{:A}\rightarrow\bbnum 0+x$,
that merely reassigns types.

However, subtyping does not necessarily imply that the conversion
function is injective. An example of a subtyping relation with a \emph{surjective}
conversion function is between the function types $P\triangleq\bbnum 1+A\rightarrow\text{Int}$
and $Q\triangleq\bbnum 0+A\rightarrow\text{Int}$ (in Scala, \lstinline!P = Option[A] => Int!
and \lstinline!Q = Some[A] => Int!). We have $P\lesssim Q$ because
$P\cong C^{\bbnum 1+A}$ and $Q\cong C^{\bbnum 0+A}$, where $C^{X}\triangleq X\rightarrow\text{Int}$
is a contrafunctor. The conversion function $P\rightarrow Q$ is an
identity function that reassigns types,
\begin{lstlisting}
def p2q[A](p: Option[A] => Int): Some[A] => Int = { x: Some[A] => p(x) }
\end{lstlisting}
In the code notation, $p\rightarrow x\rightarrow p(x)$ is easily
seen to be the same as $p\rightarrow p$. 

Nevertheless, it is not true that all information from a value of
type $P$ is preserved in a value of type $Q$; the type $P$ describes
functions that also accept \lstinline!None! as an argument, while
functions of type $Q$ do not. So, there is strictly more information
in the type $P$ than in $Q$. The conversion function $\text{p2q}:P\rightarrow Q$
is surjective.
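This information loss can be observed directly: two functions of type $P$ that differ only at the \lstinline!None! argument are converted by \lstinline!p2q! to the same function of type $Q$ (repeating \lstinline!p2q! from above):
\begin{lstlisting}
def p2q[A](p: Option[A] => Int): Some[A] => Int = { x: Some[A] => p(x) }

val p1: Option[Int] => Int = { case Some(x) => x; case None => 0 }
val p2: Option[Int] => Int = { case Some(x) => x; case None => 100 }
// p1 and p2 differ at None, and yet p2q(p1) and p2q(p2) agree on all arguments.
\end{lstlisting}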

We have now seen examples of injective and surjective type conversions.
Suppose $P_{1}\lesssim Q_{1}$ and $P_{2}\lesssim Q_{2}$, and consider
the product types $P_{1}\times P_{2}$ and $Q_{1}\times Q_{2}$. Since
the product type is one of the functor constructions, the product
$A\times B$ is covariant in both type parameters. It follows that
$P_{1}\times P_{2}\lesssim Q_{1}\times Q_{2}$. If $r_{1}:P_{1}\rightarrow Q_{1}$
is injective but not surjective, while $r_{2}:P_{2}\rightarrow Q_{2}$ is surjective
but not injective, the pair product $r_{1}\boxtimes r_{2}:P_{1}\times P_{2}\rightarrow Q_{1}\times Q_{2}$
is neither injective nor surjective. So, type conversion functions
are not necessarily injective or surjective; they can also be anything
\textsf{``}in between\textsf{''}.

A property of functor liftings is that they preserve injectivity and
surjectivity: if a function $f^{:A\rightarrow B}$ is injective, it
is lifted to an injective function $f^{\uparrow L}:L^{A}\rightarrow L^{B}$;
and similarly for surjective functions $f$. Let us prove this property
for injective functions; the proof for surjective functions is quite
similar.

\subsubsection{Statement \label{subsec:f-Statement-functor-preserves-injective}\ref{subsec:f-Statement-functor-preserves-injective}}

If $L^{A}$ is a lawful functor and $f^{:A\rightarrow B}$ is an injective
function then $\text{fmap}_{L}(f)$ is also an injective function
of type $L^{A}\rightarrow L^{B}$. 

\subparagraph{Proof}

We begin by noting that an injective function $f^{:A\rightarrow B}$
must somehow embed all information from a value of type $A$ into
a value of type $B$. The \textbf{image} of $f$ (the subset of all
values of type $B$ that can be obtained as $f(a)$ for some $a^{:A}$)
thus contains a distinct value of type $B$ for each distinct value
of type $A$. So, there exists a function that maps any $b$ from
the image of $f$ back to a value $a^{:A}$ it came from; call that
function $g^{:B\rightarrow A}$. The function $g$ must satisfy 
\[
\forall a^{:A}.\,g(f(a))=a\quad,
\]
equivalently written as 
\[
g\circ f=\text{id}\quad.
\]
It is important that $g$ is a partial function\index{partial function}.
The function $g$ is partial because it is defined only for a subset
of values of type $B$, namely the values within the image of $f$.
Despite the equation $g\circ f=\text{id}$, the function $g$ is not
an inverse for $f$. An inverse\index{inverse function} function
for $f$ must be a \emph{total} (not a partial) function $h$ satisfying
$h\circ f=\text{id}$ and $f\circ h=\text{id}$. The function $g$
is called a \textbf{left inverse}\index{left inverse} for $f$ because
it satisfies only the left-sided equation $g\circ f=\text{id}$; the other
equation fails, $f\circ g\neq\text{id}$, since $f\circ g$ is only a partial function.

The fact that $f$ has a left inverse is \emph{equivalent} to the
assumption that $f$ is injective. Indeed, suppose a function $f$ has
a left inverse $g$; to show that $f$ is injective, assume some
$x$ and $y$ such that $f(x)=f(y)$; it remains to derive $x=y$.
Applying $g$ to both sides of $f(x)=f(y)$,
we get
\[
x=g(f(x))=g(f(y))=y\quad.
\]

Now we apply this trick to functions lifted into the functor $L$.
To prove that $\text{fmap}_{L}(f)$ is injective, we need to show
that it has a left inverse. We can lift both sides of the equation
$g\circ f=\text{id}$ to get
\begin{align*}
 & \text{fmap}_{L}(g)\circ\text{fmap}_{L}(f)\\
{\color{greenunder}\text{composition law of }L:}\quad & =\text{fmap}_{L}(g\circ f)\\
{\color{greenunder}\text{use }g\circ f=\text{id}:}\quad & =\text{fmap}_{L}(\text{id})\\
{\color{greenunder}\text{identity law of }L:}\quad & =\text{id}\quad.
\end{align*}
It follows that $\text{fmap}_{L}(g)\circ\text{fmap}_{L}(f)=\text{id}$,
i.e., $\text{fmap}_{L}(g)$ is a left inverse for $\text{fmap}_{L}(f)$.
Since $\text{fmap}_{L}(f)$ has a left inverse, it is injective.
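The proof can be observed in code by lifting an injective function and its left inverse to the \lstinline!Option! functor (an illustration; here \lstinline!g! happens to be defined on all strings of decimal digits, which covers the image of \lstinline!f!):
\begin{lstlisting}
val f: Int => String = _.toString     // An injective function.
val g: String => Int = _.toInt        // A left inverse: g(f(a)) == a for all a.

val fLifted: Option[Int] => Option[String] = _.map(f)
val gLifted: Option[String] => Option[Int] = _.map(g)  // A left inverse for fLifted.
\end{lstlisting}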

\begin{comment}
the common feature is what I call here the bare functionality of a
container it\textsf{'}s just the functionality that describes the idea of holding
in some way an item or perhaps several items of type T and holding
means you can manipulate this data inside the container that\textsf{'}s the
only way that we can interpret this if there is data inside the container
but we can never manipulate it in any way then that\textsf{'}s not reasonable
to call that a container 

so what does it mean to manipulate it means we can apply functions
to these values because in functional programming that\textsf{'}s all we do
we apply functions and get new values out of old ones so the idea
that the container holds items it means that we can apply a function
to these items and the new items will remain in the container so we're
not extracting items out of it were just transforming the items that
stay within the container 

all the values that were held in the container here have been transformed
through this function into values of type B but they remain in the
same container or in it is a new value of the container type but they
remain within the same kind of container of the same shape 

so for instance if it were a sequence then it will remain a sequence 

so this is the common pattern between sequence and future it\textsf{'}s a pattern
that allows us to transform data while keeping that data within the
same container

so that\textsf{'}s what I'll mean here by the functionality of their container
so for instance making a new container out of a given set of data
items is not part of that functionality or reading values out of the
container or adding more items or deleting items these are also not
part of the functionality of the bigger container so these are specific
containers that we will consider later that can do this but the most
basic and common among all containers is not is not this also not
waiting or getting notified when new items become available like in
the future container none of that is the basic functionality of a
container only this so if we have the map when we have a container
we can additionally have other things 

and of course any kind of useful container will have other methods
and we'll have other functionality it is unreasonable to just use
bare container you can do anything with it you can't even create it
or read values out of it so in any specific case you will have a bunch
of other methods for any specific container so for example you want
to create a future container it means you need to create some parallel
process that will be running and computing this value of type T while
you're still doing your computation so creating a container of this
type actually involves creating a parallel process or parallel thread
of computation creating a container of this type doesn't involve that
necessarily so these are going to be specific things that I'm not
going to talk about in this tutorial

I'm only going to talk about what is common to all containers which
is a bare container functionality which is the map function

How should we define the \lstinline!map! function so that it does not lose information, and so that it actually allows us to manipulate the wrapped values? Are there constraints on \lstinline!map! that we need to be aware of? These are the central questions of this chapter. A container must not lose information: we put data inside, and transforming that data should not destroy it, for losing data would defeat the whole idea of a container that holds information. We do not want to rely on intuition alone; intuition is important, but we will translate both requirements into precise formulas, which we call \textbf{laws}.

The type signature of \lstinline!fmap! describes \textbf{lifting} a function of type \lstinline!A => B! to a function of type \lstinline!Option[A] => Option[B]!, i.e.\ a transformation from one function type to another. The curried order of arguments in \lstinline!fmap! makes the lifting visually clear. For writing Scala code, however, the other order of arguments is often more convenient, because \lstinline!map! is usually defined as a method on a class: one writes \lstinline!x.map(f)!, which is also easier to read in code.

We can now give the definition of a \textbf{functor}. \textsf{``}Functor\textsf{''} is the term used in functional programming for the abstraction of the bare-container functionality: a type constructor having a lawful \lstinline!map!. The definition has three parts:
\begin{itemize}
\item A functor is first of all a data type with a type parameter, such as \lstinline!F[A]!. Without a type parameter, we cannot abstract the functionality of a container, because we must first abstract the type of the wrapped values into a type parameter. A \lstinline!Seq[Int]! is certainly a container, but it is not abstracted: we cannot reason about its properties as an abstract container, nor see what it has in common with a \lstinline!Seq[Boolean]! or a sequence of some other type, unless the wrapped type becomes a type parameter.
\item A function \lstinline!map! must be available for this data type, with the type signature
\[
\text{fmap}:(A\rightarrow B)\rightarrow F^{A}\rightarrow F^{B}\quad.
\]
The functions \lstinline!map! and \lstinline!fmap! are equivalent; they differ only in the order of the curried arguments.
\item These functions must satisfy the \textbf{identity law} and the \textbf{composition law}, which are shortest to write, and easiest to check and to remember, in terms of \lstinline!fmap!:
\[
\text{fmap}(\text{id})=\text{id}\quad,\qquad\text{fmap}(f\bef g)=\text{fmap}(f)\bef\text{fmap}(g)\quad.
\]
That is, \lstinline!fmap! applied to an identity function of type $A\rightarrow A$ gives an identity function of type $F^{A}\rightarrow F^{A}$, and \lstinline!fmap! applied to a composition of functions gives the composition of the lifted functions.
\end{itemize}
The word \textsf{``}functor\textsf{''} comes from category theory, although it will not be useful for us to go into category theory right now. Note that the word has other, unrelated usages in software engineering that do not come from category theory: C++ uses \textsf{``}functor\textsf{''} for function-like objects, and OCaml uses it for certain module-level functions. These are not the same concept; the category-theoretic usage is the one that has become dominant in functional programming. For us, however, the origin in category theory is not important, because we motivate the laws by the requirements of practical use. We want to manipulate data in a container in a predictable way, so that we can still understand our code when we return to it years later: there should be no surprises. The identity law says that manipulating the data by doing nothing must not change the container; if it did, the resulting bug would be very surprising and would not be found until every step is painfully debugged. The composition law says that functions are applied in the way we expect them to be applied. For example, if we discover that some function $g$ is the identity, we may omit it from the code and the code will keep working, which would not be the case if the composition law were violated.
Let us now verify the laws for \lstinline!Option! in actual code. The implementation of \lstinline!fmap! for \lstinline!Option! is simple: it takes a function \lstinline!f! as its argument and returns a function from \lstinline!Option[A]! to \lstinline!Option[B]!. Since \lstinline!Option! is a disjunction type (a sealed trait with case classes), the shortest Scala syntax for that returned function is a partial function written with \lstinline!case! clauses: if the \lstinline!Option[A]! is empty, return \lstinline!None!; if it is non-empty, return a non-empty \lstinline!Option[B]! holding the transformed value. The test code contains several versions: a \lstinline!map! written with two arguments (the \lstinline!Option[A]! and the function \lstinline!f!) using a \lstinline!match! expression; a deliberately bad implementation that does the same but always returns \lstinline!None!, which fails the laws; the curried \lstinline!fmap!, which differs from \lstinline!map! only in syntax; and a fourth implementation generated automatically by the \lstinline!curryhoward! library, used here as a reference check of whether these methods can be derived automatically from their type signatures.
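The lawful and the law-breaking implementations described above can be sketched as follows (a minimal standalone version; the name \lstinline!fmapBad! for the law-breaking variant is ours):

```scala
// fmap lifts a function f: A => B to a function Option[A] => Option[B].
def fmap[A, B](f: A => B): Option[A] => Option[B] = {
  case None    => None        // empty option: nothing to transform
  case Some(a) => Some(f(a))  // non-empty option: apply f to the value
}

// A law-breaking implementation for comparison: it discards the data,
// so lifting the identity function does not give the identity.
def fmapBad[A, B](f: A => B): Option[A] => Option[B] = _ => None
```

Note that \lstinline!fmap! never loses the wrapped value, while \lstinline!fmapBad! always does.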
We then verify the identity law for all these implementations. The laws are checked with the ScalaCheck library, which lets us write properties of the form \textsf{``}for all values \lstinline!opt! of type \lstinline!Option[Int]!, the following must hold\textsf{''}; ScalaCheck then generates a large number of random values of that type automatically. To check the identity law, we state that \lstinline!map! applied to an option and to the identity function returns the same option. This expresses that lifting the identity gives the identity, but we cannot compare functions directly in Scala: we must say that for all arguments \lstinline!opt!, the lifted function applied to \lstinline!opt! equals \lstinline!opt!. The check for \lstinline!fmap! is the same except for the inverted order of arguments. If we wrote the same test for the bad implementation, it would fail: the bad \lstinline!fmap! maps every option to \lstinline!None!, so as soon as we take an option that is not \lstinline!None!, the identity law is violated.

To check the composition law, we state a property \textsf{``}for all values \lstinline!x! and for all functions \lstinline!f! and \lstinline!g!\textsf{''}. Here we must choose some specific types, such as \lstinline!Int!, \lstinline!String!, and \lstinline!Long!: although the functions under test are fully parametric, ScalaCheck can only generate random values (including random functions) at specific types, so some types must be chosen arbitrarily. The standard Scala method \lstinline!andThen!, defined on functions, expresses the forward composition written as $f\bef g$ in our notation: \lstinline!f andThen g! applied to a value of type \lstinline!A! first applies \lstinline!f! (mapping it to \lstinline!B!) and then \lstinline!g! (mapping it to \lstinline!C!), which makes the code easier to read. The composition law then becomes: \lstinline!fmap(f andThen g)! applied to \lstinline!x! must equal \lstinline!(fmap(f) andThen fmap(g))! applied to the same \lstinline!x!. This is again an equality of functions, which in a test can only be checked by applying both sides to many generated values. Interestingly, the bad implementation \emph{does} satisfy the composition law: since it always returns \lstinline!None!, both sides of the law are \lstinline!None! no matter what we compose, and the law holds trivially, while the identity law does not hold.
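The two law checks can be sketched without ScalaCheck by testing a handful of sample values by hand (the names \lstinline!identityLawHolds! and \lstinline!compositionLawHolds! are ours; real tests should let ScalaCheck's \lstinline!forAll! generate the values):

```scala
def fmap[A, B](f: A => B): Option[A] => Option[B] = {
  case None    => None
  case Some(a) => Some(f(a))
}

// A few sample values standing in for ScalaCheck's random generation.
val samples: List[Option[Int]] = List(None, Some(0), Some(123))
val f: Int => String = _.toString
val g: String => Long = _.length.toLong

// Identity law: fmap(id) = id, checked pointwise on the samples.
val identityLawHolds =
  samples.forall(opt => fmap(identity[Int])(opt) == opt)

// Composition law: fmap(f andThen g) = fmap(f) andThen fmap(g).
val compositionLawHolds =
  samples.forall(opt => fmap(f andThen g)(opt) == (fmap(f) andThen fmap(g))(opt))
```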
Here are some examples of functors. As before, we are only concerned with the properties of \lstinline!map!; each specific example has many other methods as well (for instance, \lstinline!Option! has methods for putting values in and getting values out). Essentially, anything in the Scala standard library that has a \lstinline!map! method is a functor, except \lstinline!Set! and \lstinline!Map!, which are \emph{almost} functors: for certain ill-behaved types, the laws do not hold. Let us see how that happens.
The following example is due to Rob Norris. Imagine a type \lstinline!Bad! that holds an integer but whose \lstinline!equals! method is ill-behaved: it returns \lstinline!true! when compared with anything. This would be unreasonable and useless in practice, but suppose we needed this behavior for some reason. Define two functions in the obvious way: \lstinline!f! puts an integer into a \lstinline!Bad!, and \lstinline!g! gets the integer back out. (The type \lstinline!Bad! could itself be viewed as a container holding a single integer, but we are not using it as a container here; it serves as a data type held \emph{inside} another container.) Now take a set of integers and map it with the composition of these two functions. In Scala it is much easier to use the \lstinline!map! method than the curried \lstinline!fmap! because of the syntax. Since \lstinline!f andThen g! is the identity on integers (we put the integer in and take it back out, with no changes and no comparisons along the way), the set does not change. But if we first map with \lstinline!f! and then map with \lstinline!g!, which the composition law says should give the same result, we get a wrong answer. After mapping with \lstinline!f!, the set becomes a set of \lstinline!Bad! values, and a \lstinline!Set! eliminates duplicate elements by comparing them with \lstinline!equals!. Since our \lstinline!equals! always returns \lstinline!true!, the set considers all its elements equal and keeps only one of them; mapping back with \lstinline!g! then yields a one-element set. The composition law fails. Note that \lstinline!Set[Int]! itself is not at fault (integers have a well-behaved equality); the law is violated because composing the functions passes through the badly behaved type. The \lstinline!Map! collection is similar: its \lstinline!mapValues! method is lawful, but its \lstinline!map! method transforms both keys and values, and it behaves like a \lstinline!Set! with respect to keys, since duplicate keys are not allowed. Mapping a \lstinline!Map! with functions that go through a type with an ill-behaved \lstinline!equals! will violate the composition law, and code like that can contain bugs that are very difficult to find and to reason about. This is the real value of these laws.
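The failure of the composition law for \lstinline!Set! can be reproduced directly (a sketch; the broken \lstinline!hashCode! is added so that hash-based sets behave consistently with the broken \lstinline!equals!):

```scala
// A wrapper type whose equals is deliberately broken: everything is "equal".
final case class Bad(x: Int) {
  override def equals(other: Any): Boolean = true
  override def hashCode: Int = 0 // keep hashCode consistent with equals
}

val f: Int => Bad = Bad(_) // put an integer in
val g: Bad => Int = _.x    // take the integer out

val s = Set(1, 2, 3)
val mappedAtOnce  = s.map(f andThen g) // f andThen g is the identity on Int
val mappedInSteps = s.map(f).map(g)    // goes through Set[Bad], which deduplicates
```

Here \lstinline!mappedAtOnce! is still a three-element set, while \lstinline!mappedInSteps! collapses to a single element, violating the composition law.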
To build more intuition about functors and type constructors in general, rather than looking at the rather complicated types defined in the standard library, let us look at very simple types that we can define and work with ourselves, in order to understand how functors work and what it means to be a container. Here are three examples:
\begin{itemize}
\item A type \lstinline!QueryResult[A]!, parameterized by the type variable \lstinline!A!, holding a triple of a \lstinline!String!, an \lstinline!Int!, and an \lstinline!A!. The short notation for this type is $\text{String}\times\text{Int}\times A$. In Scala code, each part of the case class gets a name appropriate for a query result, say a name, a time stamp, and the data.
\item A vector in three dimensions with coordinates of type \lstinline!A! (which could be \lstinline!Double!, or a complex-number type, and so on). The short notation is $A\times A\times A$.
\item A disjunction type whose short notation is $\text{String}+\text{String}\times\text{Int}\times A$. In Scala this is a sealed trait with two case classes, which we may interpret as an error with a message and a success with a value of type \lstinline!A!. Note that the first part of the disjunction does not contain any values of type \lstinline!A!, while the second part does.
\end{itemize}
Let us look at the test code to see how we make each of these into a functor.
The first example is $\text{String}\times\text{Int}\times A$, and we need to define \lstinline!fmap! for it. (All these examples implement \lstinline!fmap! rather than \lstinline!map! because it is shorter to write down; \lstinline!map! would be equivalent.) Given a function \lstinline!f: A => B!, we must return a function that takes a \lstinline!QueryResult[A]! and returns a \lstinline!QueryResult[B]!. The easiest way to extract the parts of the case class is to \lstinline!match! on it: the \lstinline!case! clause names the three parts, and we return a \lstinline!QueryResult! with the same parts except that \lstinline!f! is applied to the data. Indeed, to transform $\text{String}\times\text{Int}\times A$ into $\text{String}\times\text{Int}\times B$ using a function from \lstinline!A! to \lstinline!B!, the only possibility is to apply that function to the value of type \lstinline!A! and leave the \lstinline!String! and the \lstinline!Int! unchanged. An equivalent and shorter way of writing this code uses the \lstinline!copy! method of case classes, which copies the entire value while replacing selected parts: we copy all the parts (the name and the time) and replace only \lstinline!data! by \lstinline!f! applied to the old \lstinline!data!. Since the code is so simple, the \lstinline!match! version is considerably more writing than the \lstinline!copy! version, but in every other respect they are equivalent. The implementation can also be derived automatically with the \lstinline!curryhoward! library, which gives the last of the implementations in the test code. The identity and composition laws are verified for all three implementations.
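The product-type example can be sketched as follows (the field names \lstinline!name!, \lstinline!time!, and \lstinline!data! are illustrative choices):

```scala
// QueryResult[A] = String × Int × A
final case class QueryResult[A](name: String, time: Int, data: A)

// fmap applies f to the data and leaves the other parts unchanged:
// the only implementation that satisfies the functor laws.
def fmap[A, B](f: A => B): QueryResult[A] => QueryResult[B] =
  qr => qr.copy(data = f(qr.data))
```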
The second example is the three-dimensional vector of \lstinline!A!'s, $A\times A\times A$. We do the same thing: a \lstinline!case! match on the vector, now applying the function \lstinline!f! to all three coordinates. The automatic implementation is instructive here, because there are several different ways of implementing this type signature: for example, the \lstinline!x! and \lstinline!y! coordinates could be interchanged and the types would still be correct. The automatic derivation looks only at the types and tries to find code of the required type, so it cannot know that we want the identity law to hold. And interchanging coordinates obviously breaks the identity law: if \lstinline!f! is the identity function, a vector with \lstinline!x! and \lstinline!y! exchanged is not an unchanged vector. (We will see later that the composition law fails as well.) Since the types are correct for all six permutations of the three coordinates, the \lstinline!curryhoward! library finds all six implementations and has no way to choose among them. As a workaround, one can request all implementations of the given type and take the first of them, which here happens to be the lawful one. The identity and composition laws are then checked, and all these tests pass.
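The vector example can be sketched as follows (the name \lstinline!Vector3! is an illustrative choice):

```scala
// Vector3[A] = A × A × A
final case class Vector3[A](x: A, y: A, z: A)

// fmap must apply f to each coordinate in place; swapping any two
// coordinates would still typecheck but would break the identity law.
def fmap[A, B](f: A => B): Vector3[A] => Vector3[B] =
  v => Vector3(f(v.x), f(v.y), f(v.z))
```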
The final example is to make the disjunction type $\text{String}+\text{String}\times\text{Int}\times A$ into a functor and to verify the laws. This is very similar to the first example: to transform this type into its \lstinline!B! version, we still just need to apply \lstinline!f! to the value of type \lstinline!A!, but now there are two cases to \lstinline!match! on. In the first case, the error, there are no values of type \lstinline!A!, just a \lstinline!String!, so we return the error with the message unchanged. In the second case, the success, we do what we did before: apply \lstinline!f! to the data and leave the other parts of the case class unchanged. The \lstinline!curryhoward! library can implement this automatically; there is no uncertainty about what to do, since only one lawful implementation exists.
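The disjunction example can be sketched as follows (the names \lstinline!QResult!, \lstinline!QError!, and \lstinline!QSuccess! are ours):

```scala
// QResult[A] = String + String × Int × A
sealed trait QResult[A]
final case class QError[A](message: String) extends QResult[A]
final case class QSuccess[A](name: String, time: Int, data: A) extends QResult[A]

// fmap leaves the error case unchanged and applies f to the data
// in the success case.
def fmap[A, B](f: A => B): QResult[A] => QResult[B] = {
  case QError(message)            => QError(message)
  case QSuccess(name, time, data) => QSuccess(name, time, f(data))
}
```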
In all three examples, we defined \lstinline!fmap! guided by the types and also guided by the laws. Sometimes the type signature admits several implementations, but the laws single out exactly one of them. This is the general situation for the type constructors we will be working with: the laws and the types together dictate the implementation of \lstinline!map! in only one way.
What are examples of type constructors that are \emph{not} functors, where \lstinline!map! cannot be implemented? One kind consists of types that cannot have any lawful \lstinline!map! function at all, due to type problems. An example is the \textsf{``}non-container\textsf{''} $(A\rightarrow\text{Int})\times A$: a case class holding a function from \lstinline!A! to \lstinline!Int! together with a value of type \lstinline!A!. Why is this not a container? We can try to implement \lstinline!fmap! for it, but the result will not satisfy the laws. Given \lstinline!f: A => B!, we can transform the stored value of type \lstinline!A! into a \lstinline!B!; but we also need to produce a function from \lstinline!B! to \lstinline!Int!, and there is no way to obtain one, except by applying the old function to the old value of type \lstinline!A!, obtaining an integer, and returning the constant function that always yields that integer. This is not a lawful implementation of \lstinline!fmap!: the test code checks that it fails the identity law.
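The failed attempt can be sketched as follows (the names \lstinline!NotContainer! and \lstinline!tryFmap! are illustrative):

```scala
// The non-container (A => Int) × A.
final case class NotContainer[A](f: A => Int, a: A)

// The only implementation that typechecks: it replaces the stored
// function by a constant function, losing information.
def tryFmap[A, B](g: A => B): NotContainer[A] => NotContainer[B] =
  nc => NotContainer[B](_ => nc.f(nc.a), g(nc.a))

// Lifting the identity function does NOT give the identity:
val nc     = NotContainer[Int](x => x * 10, 3)
val lifted = tryFmap(identity[Int])(nc)
// nc.f(7) == 70, but lifted.f(7) == nc.f(nc.a) == 30.
```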
Another example of a data type that cannot have a \lstinline!map! due to type problems is a disjunction whose cases use the type parameter in a strange, non-parametric way. Here is what that means. Take a sealed trait with a type parameter, say \lstinline!ServerAction[R]!, describing some kind of action with a result, and let the case classes that extend the trait be defined in a curious way: the first case class has a value and extends the trait with its own type parameter \lstinline!R!, but the second case class has no type parameter and extends \lstinline!ServerAction[Long]! with a fixed type, and a third case class extends \lstinline!ServerAction[String]!. These cases use type values that are not equal to the trait's type parameter. For ordinary parametric disjunctions such as \lstinline!Option! or \lstinline!Either!, each case class extends the trait with the same type parameter that it itself carries; here that is not the case, since some cases extend the trait with fixed types. Disjunctions of this kind are called \textbf{generalized algebraic data types} (GADTs). There is no useful short type notation for them (at this point it is not clear what the notation should be), and it is unclear how to reason about them in terms of data containers, because they are not data containers: they are very odd type constructors. They are certainly useful in a number of situations, but as containers they fail. One cannot even implement \lstinline!fmap! or \lstinline!map! for such a type: given a function from \lstinline!Long! to \lstinline!Int!, there is no way to transform a \lstinline!ServerAction[Long]! into a \lstinline!ServerAction[Int]!, because the case fixed at \lstinline!ServerAction[Long]! cannot produce a value of type \lstinline!ServerAction[Int]!. So no \lstinline!fmap! can be defined for this kind of type, because of type problems.
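A minimal sketch of such a GADT (the names \lstinline!ServerAction!, \lstinline!Normal!, \lstinline!Delete!, and \lstinline!Log! are illustrative assumptions):

```scala
sealed trait ServerAction[R]
// This case is parametric: it extends the trait with its own parameter R.
final case class Normal[R](result: R) extends ServerAction[R]
// These cases fix the type parameter, so no lawful fmap can exist:
// a function Long => Int cannot turn Delete into a ServerAction[Int].
final case class Delete(id: Long) extends ServerAction[Long]
final case class Log(message: String) extends ServerAction[String]

val action: ServerAction[Long] = Delete(123L)
```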
Another kind of non-functor consists of types that \emph{could} be functors but whose \lstinline!map! was not implemented correctly: to have a functor we need a well-behaved \lstinline!fmap!, and a number of things can go wrong when implementing it. One mistake is an \lstinline!fmap! that ignores its argument \lstinline!f!, for instance by always returning \lstinline!None! for an \lstinline!Option!; such an implementation will usually fail to satisfy the laws. Another mistake is an \lstinline!fmap! that reorders the data in the container: for instance, with a container holding two values of type \lstinline!A!, we could define an \lstinline!fmap! that applies \lstinline!f! but also swaps the two values (this is the example shown in the test code). This immediately violates the identity law, which the tests verify using a helper method that checks whether any lawful implementation exists. Yet another mistake is an \lstinline!fmap! that inspects the types at run time: using reflection, Scala code can detect which types it was applied at, which is a risky thing to do because it is easy to make mistakes and difficult to write code that always works. Such an \lstinline!fmap! could check whether \lstinline!A! and \lstinline!B! are the same type and, if so, return the identity function while ignoring \lstinline!f!, and otherwise apply \lstinline!f! in some way. This would satisfy the identity law but violate the composition law: take functions \lstinline!f! and \lstinline!g! whose composition goes from \lstinline!A! back to \lstinline!A! but is \emph{not} the identity; the type check would see equal types and return the identity, which is not equal to the composition of the lifted \lstinline!f! and \lstinline!g!. Similarly, performing special computations when the type equals some specific type such as \lstinline!Int!, or when the function equals some specific function, and doing the general thing otherwise, will not give a well-behaved \lstinline!fmap! that satisfies the laws.
An interesting example of a functor is a recursive type, for instance a \textsf{``}list of pairs\textsf{''} that we can define by hand as a type constructor \lstinline!LP[A]! satisfying the recursive type equation
\[
\text{LP}^{A}=\bbnum 1+A\times A\times\text{LP}^{A}\quad.
\]
Expanding this equation repeatedly, using the rules of the type/arithmetic correspondence, gives
\[
\text{LP}^{A}=\bbnum 1+A\times A+A\times A\times A\times A+...\quad,
\]
which lets us visualize what this type does: it is either empty, or holds two values of type \lstinline!A!, or four values, and so on. It is a list that can only have an even number of elements, so it is not really a \textsf{``}list of pairs\textsf{''} despite the name. The Scala definition is a sealed trait \lstinline!LP[A]! with a disjunction of two parts: an empty case (holding a \lstinline!Unit!), and a case holding \lstinline!x! and \lstinline!y! of type \lstinline!A! together with a tail, which is again of type \lstinline!LP[A]!. This kind of type recursion is allowed in Scala when using case classes. To implement \lstinline!fmap! (again preferred over \lstinline!map! for simplicity), we use recursion: \lstinline!match! on the \lstinline!LP[A]!, and in the empty case return the empty result, while in the pair case apply \lstinline!f! to \lstinline!x!, apply \lstinline!f! to \lstinline!y!, and transform the tail. The tail has the same type, so we apply to it the very \lstinline!fmap! that we are defining, as a recursive call. This works, and it is in fact the only implementation of \lstinline!fmap! that satisfies the functor laws. What would another implementation look like? We could always return the empty case; or we could match further on the tail and, say, return empty whenever the list has at least four elements and keep it otherwise. All of that would be wrong: an incorrect functor instance, as it is called, i.e.\ an incorrect implementation of \lstinline!map!. In the test code, the recursive type is defined in this way (without the \lstinline!Unit! field, which is unnecessary there), and the law-verifying code is exactly the same as before.
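The recursive functor can be sketched as follows (the case names \lstinline!Empty! and \lstinline!Pair! are illustrative):

```scala
// LP[A] = 1 + A × A × LP[A]: a list with an even number of elements.
sealed trait LP[A]
final case class Empty[A]() extends LP[A]
final case class Pair[A](x: A, y: A, tail: LP[A]) extends LP[A]

// fmap applies f to both elements of each pair and recurses into the
// tail, which is the only law-abiding implementation.
def fmap[A, B](f: A => B): LP[A] => LP[B] = {
  case Empty()          => Empty()
  case Pair(x, y, tail) => Pair(f(x), f(y), fmap(f)(tail))
}

val lp: LP[Int] = Pair(1, 2, Pair(3, 4, Empty()))
```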
Notice that the tests are able to generate arbitrary values of this recursive type. How is that done? A short digression: the tests use a library called \lstinline!scalacheck-shapeless!, written by Alexandre Archambault, which allows ScalaCheck's \lstinline!forAll! to range over values of any case class or sealed trait. No generator code needs to be written by hand; the generators are derived automatically by macros.
Another curious observation: a type such as $A\rightarrow\text{Int}$, a function from \lstinline!A! to \lstinline!Int!, is not a functor; one cannot implement \lstinline!map! for it. But one can implement something called \lstinline!contramap! (in curried form, \lstinline!contrafmap!), which is similar to \lstinline!fmap! except that the roles of \lstinline!A! and \lstinline!B! are interchanged: the given function goes from \lstinline!B! to \lstinline!A!, while the lifted function goes from $C^{A}$ to $C^{B}$. In other words, \lstinline!contramap! reverses the arrow between \lstinline!A! and \lstinline!B!. The contrafunctor laws are very similar to the functor laws, except that the order of functions in the composition is interchanged, reflecting the \textsf{``}contra\textsf{''} direction:
\[
\text{cmap}(\text{id})=\text{id}\quad,\qquad\text{cmap}(f\bef g)=\text{cmap}(g)\bef\text{cmap}(f)\quad.
\]
An interesting observation: in the type $A\rightarrow\text{Int}$, the type parameter \lstinline!A! stands to the \emph{left} of the function arrow, so a value of type \lstinline!A! is \emph{consumed} by the function. In all our examples of functors this was not the case: values of type \lstinline!A! were produced, or were simply present, but never consumed. Functors \emph{contain} data, while contrafunctors \emph{consume} data. A contrafunctor is not a functor and should not be thought of as a container; it is something that consumes data of type \lstinline!A!.
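The contrafunctor can be sketched as follows (the names \lstinline!C! and \lstinline!contraFmap! are illustrative):

```scala
// The contrafunctor C[A] = A => Int consumes values of type A.
type C[A] = A => Int

// contraFmap lifts f: B => A to C[A] => C[B]: note the reversed arrow.
// The lifted consumer first converts its B input to an A, then consumes it.
def contraFmap[A, B](f: B => A): C[A] => C[B] =
  ca => f andThen ca
```

For example, a consumer of \lstinline!String!s can be turned into a consumer of \lstinline!Int!s by first converting the \lstinline!Int! to a \lstinline!String!.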
The \textsf{``}non-container\textsf{''} $(A\rightarrow\text{Int})\times A$ that we saw before is neither a functor nor a contrafunctor. The test code tries to implement \lstinline!contrafmap! for this type, and finds that there are no possible implementations of that type signature at all, while there is exactly one possible implementation of \lstinline!fmap!, and it does not satisfy the laws. So this type can be thought of neither as a container nor as a consumer of data of type \lstinline!A!. It is a strange thing, perhaps useful and perhaps not, but in any case it is neither a functor nor a contrafunctor.
Now that we have seen functors and contrafunctors, the related concepts of \textbf{covariance} and \textbf{contravariance} come to mind. These concepts are relevant to \textbf{subtyping}. What is subtyping? An example arises whenever a class extends a trait. (In Scala these are traits and classes; the terminology is somewhat specific to Scala, and other languages may not use the word \textsf{``}trait\textsf{''}.) Consider a disjunction \lstinline!AtMostTwo! of zero, one, or two integers. Its cases \lstinline!Zero!, \lstinline!One!, and \lstinline!Two! are themselves types, and they are subtypes of \lstinline!AtMostTwo!; this means that a function taking an argument of type \lstinline!AtMostTwo! will accept a value of type \lstinline!Two!. This is in fact how we have been using disjunctions all along: Scala implements the cases of a disjunction as subtypes, and a subtype can be automatically converted to its supertype whenever needed. The conversion function from \lstinline!Two! to \lstinline!AtMostTwo! is an identity function that merely relabels the type; there is nothing to convert, since \lstinline!Two! is an instance (a subtype) of \lstinline!AtMostTwo!. But logically speaking these are different types, and we may imagine an automatic type conversion of type \lstinline!Two => AtMostTwo! being inserted wherever it is needed. This function is always available and never needs to be written out explicitly, although we could write it out if we wanted to, and we will in a moment.

What does it mean for a type constructor \lstinline!C[_]! to be \textbf{covariant}? It means that when the types are lifted into the container, they remain subtypes of each other: since \lstinline!Two! is a subtype of \lstinline!AtMostTwo!, the type \lstinline!C[Two]! is then also a subtype of \lstinline!C[AtMostTwo]!, and the conversion function \lstinline!C[Two] => C[AtMostTwo]! is automatically available whenever we need it. More generally, \lstinline!C! is covariant in its type argument when, whenever \lstinline!X! is a subtype of \lstinline!Y!, the conversion \lstinline!C[X] => C[Y]! is automatically available. Now, if we have a way of taking any function of type \lstinline!X => Y! and producing from it a function of type \lstinline!C[X] => C[Y]!, we are guaranteed to have that conversion; and this is exactly the type signature of \lstinline!fmap!. In other words, a type constructor that has a lawful \lstinline!fmap! is guaranteed to have the right type conversions and to be covariant: all functors are covariant. In Scala, writing a plus sign next to the type parameter when declaring the sealed trait tells the compiler that the type constructor is covariant in that argument. Similarly, contrafunctors are \textbf{contravariant}: the conversion arrow goes in the opposite direction, so if \lstinline!Two! is a subtype of \lstinline!AtMostTwo!, then a contrafunctor applied to \lstinline!AtMostTwo! is a subtype of the same contrafunctor applied to \lstinline!Two!. By this simple argument, writing out the implicit (automatic) type conversion functions explicitly, we see immediately that functors are automatically covariant and contrafunctors automatically contravariant. To make this explicit in Scala, if for any reason you need subtyping (an advanced topic in functional programming that we will not pursue here), you can mark the type parameter with a plus sign for covariance or a minus sign for contravariance, and the compiler will then check that the declared variance is actually correct. This correspondence is an interesting observation: talking about functors and contrafunctors is exactly parallel to talking about covariance and contravariance.
and contravariance but usually in object-oriented programming people
talk about so let\textsf{'}s go through some more examples with actual coding
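The variance rules above can be sketched in a few lines of Scala. This is a
minimal illustration, not code from the examples that follow; the names
\lstinline!AtMost2!, \lstinline!Wrap!, and \lstinline!D! are chosen here for
illustration only.

```scala
// A small subtyping hierarchy: AtMost2 is a disjunction of three cases.
sealed trait AtMost2
final case class Zero() extends AtMost2
final case class One(n: Int) extends AtMost2
final case class Two(n1: Int, n2: Int) extends AtMost2

// A covariant type constructor: C[Two] is then a subtype of C[AtMost2].
sealed trait C[+A]
final case class Wrap[+A](a: A) extends C[A]

// A contravariant type constructor: D[AtMost2] is a subtype of D[Two].
final case class D[-A](consume: A => Int)

val cTwo: C[Two] = Wrap(Two(1, 2))
val cAtMost2: C[AtMost2] = cTwo // automatic conversion, since C is covariant

val dAtMost2: D[AtMost2] = D {
  case Zero()      => 0
  case One(n)      => n
  case Two(n1, n2) => n1 + n2
}
val dTwo: D[Two] = dAtMost2 // automatic conversion, since D is contravariant
```

Both assignments compile only because of the `+A` and `-A` annotations; the
compiler rejects them if the annotations are removed.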
\subsection{Worked examples: implementing \lstinline!map! and \lstinline!contramap!}

In the following examples, we will decide whether a given data type is a
functor, a contrafunctor, or neither, and then implement a \lstinline!map! or a
\lstinline!contramap! method that satisfies the laws. To decide, we write the
data type in the short notation and check whether the type parameter ever
appears to the left of a function arrow (where a value is \emph{consumed}), or
only to the right of arrows or outside of any arrow (where a value is
\emph{produced} or simply stored). We then implement \lstinline!map! or
\lstinline!contramap!, guided by the types of the values we need to produce;
whenever there is a choice or an ambiguity, we check which choice satisfies the
laws.

The first example is the type
$\text{String}+A\times\text{Int}+A\times A\times A$. The task is to define
case classes for this type and to implement \lstinline!map!. (In all these
examples, we will call the type \lstinline!Data!, so that the code is easy to
cut and paste.) The disjunction has three parts: the first part is just a
\lstinline!String!; the second is a product of an \lstinline!A! and an
\lstinline!Int!; the third is a product $A\times A\times A$. The code of
\lstinline!map! performs a pattern match on the disjunction. If we have a
message, we return the same message unchanged: we could not do anything else at
this point, since we must return a value of type \lstinline!Data[B]! and the
message is the only data available. In the second case, we have a single value
of type \lstinline!A!, so we apply \lstinline!f! to it and we are done. In the
third case, we have three values of type \lstinline!A!, and we apply
\lstinline!f! to each of them without changing their order.
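A sketch of this implementation follows; the case-class names
\lstinline!Message!, \lstinline!Value!, and \lstinline!Triple! are chosen here
for illustration, since the original names are not reproduced.

```scala
// Short notation: String + A × Int + A × A × A.
sealed trait Data[A]
final case class Message[A](message: String) extends Data[A]
final case class Value[A](x: A, n: Int) extends Data[A]
final case class Triple[A](x1: A, x2: A, x3: A) extends Data[A]

def map[A, B](data: Data[A])(f: A => B): Data[B] = data match {
  case Message(m)         => Message(m)     // Nothing else we could return here.
  case Value(x, n)        => Value(f(x), n) // Apply f to the single A-value.
  case Triple(x1, x2, x3) => Triple(f(x1), f(x2), f(x3)) // Apply f to each A, keeping order.
}
```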
The second example is a type of the form $\bbnum 1+A\times(\ldots)$, where the
inner part $(\ldots)$ is itself a disjunction with a \textsf{``}message\textsf{''} case
and a \textsf{``}value\textsf{''} case. Since the type begins with $\bbnum 1+{}$, we do
not need to implement those two cases ourselves: Scala's standard
\lstinline!Option! type already provides them. So, we define the data type as
an \lstinline!Option! of a tuple of \lstinline!A! and the inner disjunction,
for which we declare a separate sealed trait \lstinline!Data2[A]! with the two
cases.

The code of \lstinline!map! is now a bit more complicated: the case class wraps
an \lstinline!Option!, and the \lstinline!Option! needs to be matched as well.
We can match on both in the same \lstinline!case! expression, which gives two
possibilities. If the option is empty, clearly all we can do is return an empty
option again; we could not possibly return any of the other cases, since no
suitable values are available. It remains to handle the non-empty case, which
holds a pair of a value of type \lstinline!A! and a value of type
\lstinline!Data2[A]!. We return a new pair, and to keep the code readable we
compute the new values in named \lstinline!val!s: the new \lstinline!A!-value
is just \lstinline!f! applied to the old one, and the new \lstinline!Data2!
value is computed by another pattern match.

To visualize what \lstinline!map! must do, imagine transforming the type
expression of \lstinline!Data[A]! into that of \lstinline!Data[B]!: exactly at
each place where an \lstinline!A! occurs, we must replace it by a
\lstinline!B!, and \textsf{``}replacing\textsf{''} means applying \lstinline!f! to the
value found in that place. Everything else (the integers, the strings, the
order of parts of tuples and disjunctions) must remain unchanged. With this
discipline, the laws will hold.
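The exact inner disjunction was shown on a slide not reproduced here; the
sketch below uses a plausible reconstruction,
$\bbnum 1+A\times(\text{String}+A)$, with hypothetical case names.

```scala
// Hypothetical reconstruction of the type: 1 + A × (String + A).
sealed trait Data2[A]
final case class Message[A](message: String) extends Data2[A]
final case class Value[A](x: A) extends Data2[A]

final case class Data[A](d: Option[(A, Data2[A])])

def map[A, B](data: Data[A])(f: A => B): Data[B] = data match {
  case Data(None) => Data(None) // Nothing else we could possibly return.
  case Data(Some((a, d2))) =>
    val newA: B = f(a) // Replace each A-value by applying f.
    val newD2: Data2[B] = d2 match {
      case Message(m) => Message(m) // The message is left unchanged.
      case Value(x)   => Value(f(x))
    }
    Data(Some((newA, newD2)))
}
```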
The third example is a bit more involved. Looking at the structure of the data
type, we notice that it is a disjunction of two quite similar parts: each part
is a tuple of a curried function returning an \lstinline!A! and another value
of type \lstinline!A!. The first part contains a function of type
\lstinline!String => Int => A!, the second a function of type
\lstinline!Boolean => Double => A!; except for the types \lstinline!String!,
\lstinline!Int! and \lstinline!Boolean!, \lstinline!Double!, the structure is
exactly the same. So, we parameterize that structure by two extra type
parameters \lstinline!X! and \lstinline!Y! and define a case class
\lstinline!Data2[X, Y, A]! with two parts: a function of type
\lstinline!X => Y => A! and a value of type \lstinline!A!. The full data type
sets \lstinline!X!, \lstinline!Y! to \lstinline!String!, \lstinline!Int! in one
part and to \lstinline!Boolean!, \lstinline!Double! in the other.

For the outer disjunction, we do not define a new case class: Scala's standard
\lstinline!Either! already provides a disjunction of two parts, so we define
\lstinline!Data[A]! as a type alias, which is somewhat less writing than in the
previous example. We could equally well have written the entire type as a
single expression made of \lstinline!Either! and tuples, or wrapped everything
in case classes; all these arrangements produce equivalent (isomorphic) types,
and it is not always clear which is best. A named case class for a repeating
structure is often easier to read in Scala, because the names of the data type
and of its fields can serve as documentation; in this tutorial example we just
use very short names.

We also need to define a function that compares values of this type. This is
necessary because, as we remember, functions cannot be compared directly in
Scala, and the tests that verify the laws must compare values of this type; for
example, to verify the identity law we need to compare a \lstinline!Data2!
value before and after applying the lifted identity function. The comparison is
implemented by pattern matching: certain parts must be equal to certain other
parts, with a special \lstinline!fail! method throwing an exception otherwise.

To implement \lstinline!map!, we first match on the \lstinline!Either! and
obtain the two cases, \lstinline!Left! and \lstinline!Right!; in each case we
compute a new \lstinline!Data2! value and rewrap it on the same side. The only
non-trivial part is what to do with the curried function: we need to transform
a function of type \lstinline!X => Y => A! into a function of type
\lstinline!X => Y => B!. How can we make such a function, given the old
function \lstinline!g! of type \lstinline!X => Y => A! and
\lstinline!f: A => B!? The new function takes an \lstinline!x!, returns a
function that takes a \lstinline!y!, and must then produce a \lstinline!B!:
we pass \lstinline!x! and \lstinline!y! to \lstinline!g!, obtaining a value of
type \lstinline!A!, and apply \lstinline!f! to it, obtaining the required
\lstinline!B!. Note this trick of first taking arguments out and then putting
them back in: it is necessary whenever the type contains a function that
produces values of the type parameter out of some other values.

Such code is so strongly constrained by the types that it can be generated
automatically. The \lstinline!curryhoward! library provides a macro,
\lstinline!ofType!, that generates an expression of a given type out of
expressions that are already available; we only need to import it and specify
which values (here \lstinline!f! and \lstinline!g!) it is allowed to use. The
combination \lstinline!x => y => f(g(x)(y))! is the only combination that has
the required type, so the library will find exactly this code. Of course, it is
important to understand how to write this code by hand as well; only then can
we use automatic tools correctly. (The macro performs a search through
combinations of the available expressions, so compilation can be slow.) Running
the tests, we verify that all the laws hold for this implementation.
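The implementation described above can be sketched as follows; the field names
\lstinline!g! and \lstinline!a! are chosen here for illustration.

```scala
// Each part of the disjunction has the shape (X => Y => A) × A.
final case class Data2[X, Y, A](g: X => Y => A, a: A)

// The full type: Data2[String, Int, A] + Data2[Boolean, Double, A].
type Data[A] = Either[Data2[String, Int, A], Data2[Boolean, Double, A]]

def map2[X, Y, A, B](d: Data2[X, Y, A])(f: A => B): Data2[X, Y, B] = {
  // Take the arguments x, y out, pass them to the old function, apply f to the result.
  val newG: X => Y => B = x => y => f(d.g(x)(y))
  Data2(newG, f(d.a))
}

def map[A, B](data: Data[A])(f: A => B): Data[B] = data match {
  case Left(d2)  => Left(map2(d2)(f))
  case Right(d2) => Right(map2(d2)(f))
}
```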
The second series of examples is to decide whether given types are functors or
contrafunctors, and then to implement \lstinline!map! or \lstinline!contramap!
as appropriate. The first example is a disjunction of two function types,
\lstinline!A => Int! and \lstinline!A => A => String!. In the first part,
\lstinline!A! is to the left of the arrow, so an \lstinline!A! is consumed:
this looks like a contrafunctor, certainly not a functor. Is the second part
consuming or producing an \lstinline!A!? The syntax may be confusing until one
gets used to it, but by convention the function arrow associates to the right:
\lstinline!A => A => String! means \lstinline!A => (A => String)!, a function
that consumes an \lstinline!A! and produces another function that again
consumes an \lstinline!A! and produces a \lstinline!String!. So this part
consumes two different values of type \lstinline!A!, and we may hope to
implement a contrafunctor here.

This time we take the easier route and define no case classes at all: the type
is a single disjunction, so we use the standard \lstinline!Either! and write
the type down directly, much as in the formula. We again define a custom
equality function for use in the tests, since the data type contains functions
and the law-checking tests must compare values for equality.

To implement \lstinline!contramap!, we use a trick very similar to what we did
before: we just replace the arguments of the wrapped functions by our own
arguments. In the \lstinline!Left! case, we have a function of type
\lstinline!A => Int! and must produce a function of type \lstinline!B => Int!:
it takes a \lstinline!b: B!; to return an \lstinline!Int!, we compute
\lstinline!f(b)!, obtaining an \lstinline!A!, and apply the old function to it.
The \lstinline!Right! case is done very similarly, except that the new function
takes two arguments, say \lstinline!b1! and \lstinline!b2!, and we apply
\lstinline!f! to each of them before passing them on.

An interesting question is the order of those two arguments: should we pass
\lstinline!f(b1)! then \lstinline!f(b2)!, or the other way around? Both
versions type-check, but the law-checking tests show which is correct: if we
interchange the arguments and run the tests again, the identity law fails. As
a rule of thumb, when we build a \lstinline!map! or \lstinline!contramap! for
some data type, we just change every instance of \lstinline!A! into an
instance of \lstinline!B! (or vice versa) and never change the order of
anything: interchanging different values of the same type would violate the
laws. Whenever such a choice arises, because there are several values or
several arguments of the same type, write the laws in testable form and run
the tests; they will immediately tell you which choice is correct. Recall also
the contrafunctor composition law: lifting the composition of \lstinline!f!
before \lstinline!g! must equal the composition of the lifted \lstinline!g!
before the lifted \lstinline!f!, with the order reversed.
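A sketch of this contrafunctor's \lstinline!contramap!:

```scala
// The type: (A => Int) + (A => A => String).
type Data[A] = Either[A => Int, A => A => String]

def contramap[A, B](data: Data[A])(f: B => A): Data[B] = data match {
  case Left(g) => Left((b: B) => g(f(b)))
  case Right(h) =>
    // Keep the order of the two arguments; interchanging them breaks the identity law.
    Right((b1: B) => (b2: B) => h(f(b1))(f(b2)))
}
```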
The second example is a type with two type parameters, \lstinline!A! and
\lstinline!B!: a case class containing a value of type \lstinline!Either[A, B]!
and a function of type \lstinline!(A => Int) => B!. Which parameter should we
use as the functor's type parameter? It is important to realize that we are
free to use either of the two. A functor is a type constructor with one type
parameter; if a type constructor has several type parameters, we must choose
one of them and say that the type constructor is a functor with respect to that
parameter. The \lstinline!map! method will then modify only that parameter and
leave the others fixed.

Let us examine the positions of \lstinline!A!. In the part
\lstinline!(A => Int) => B!, the parameter \lstinline!A! is behind an arrow,
which looks contravariant; but wait, there is another arrow: the entire
function \lstinline!A => Int! is itself to the left of an arrow, so the entire
function is consumed. We consume something that consumes an \lstinline!A!, and
that makes \lstinline!A! covariant again: as we will see, we can then
implement \lstinline!map! with respect to \lstinline!A!, not
\lstinline!contramap!. This may look like a strange container: it does not
actually hold any values of type \lstinline!A!; instead, it consumes something
that consumes a value of type \lstinline!A!. But indeed, not all containers
hold actual values of their type parameter. One example is \lstinline!Future!:
it is a functor with respect to its type parameter, but a
\lstinline!Future[A]! does not actually contain a value of type \lstinline!A!,
not yet in any case; it might have one in the future, or never. As for
\lstinline!B!, it occurs inside the disjunction and as the result type of the
function; producing a \lstinline!B! is a covariant position, so the type is
also a functor with respect to \lstinline!B!. Having a functor instance is the
same as having an implementation of \lstinline!map!, so let us implement both:
\lstinline!map! with respect to \lstinline!A! and \lstinline!map! with respect
to \lstinline!B!.

We start with \lstinline!B!, which is a little easier. We rename the first
type parameter to \lstinline!Z!, so that the method \lstinline!mapB! takes a
function \lstinline!f: B => C! and maps a \lstinline!Data[Z, B]! into a
\lstinline!Data[Z, C]!. (We chose the letter \lstinline!Z!, far from
\lstinline!B! and \lstinline!C!, to make it clear that \lstinline!Z! is not
changing while \lstinline!B! changes into \lstinline!C!.) There is no
disjunction at the top level of this case class; the disjunction is inside. So
we match on the \lstinline!Either! value and must return a new case class with
two new parts, and we structure the code with named \lstinline!val!s to make
that clear. The new \lstinline!Either[Z, C]! is computed by mapping over the
old one: the \lstinline!Left! case holds a \lstinline!Z! and is not changed,
while the \lstinline!Right! case holds a \lstinline!B! and is mapped with
\lstinline!f!. For the function part, we use the trick we just saw: this new
function takes an argument of type \lstinline!Z => Int!, passes it to the old
function, obtaining a \lstinline!B!, and applies \lstinline!f! to obtain the
required \lstinline!C!. As before, this part of the code is dictated by the
types and could be generated automatically.

The \lstinline!map! with respect to \lstinline!A! is implemented very
similarly; this time we rename the parameters so that \lstinline!mapA! takes
\lstinline!f: X => Y! and maps a \lstinline!Data[X, B]! into a
\lstinline!Data[Y, B]!, with \lstinline!B! remaining constant. The
\lstinline!Either! part is dealt with in the same way as before, except that
now the \lstinline!Left! value (of type \lstinline!X!) is mapped with
\lstinline!f! and the \lstinline!Right! value is unchanged. The non-trivial
part is the function: we have a value of type \lstinline!(X => Int) => B! and
need a value of type \lstinline!(Y => Int) => B!. We write the code directed
by the types: the new function takes an argument \lstinline!g! of type
\lstinline!Y => Int!, and to use the old function we must supply an argument
of type \lstinline!X => Int!, i.e., produce an \lstinline!Int! out of an
\lstinline!X!. The only way to produce an \lstinline!Int! is to apply
\lstinline!g! to some \lstinline!Y!, and the only way to obtain a
\lstinline!Y! is to apply \lstinline!f! to the \lstinline!X!. This code, too,
is uniquely determined by the types. (One could again try to generate it with
the \lstinline!curryhoward! macro; that library is a work in progress and
fails on some of these examples, but the important thing for us is to
understand how to write such code by hand.) In any case, the tests pass, and
this shows that types that consume something that consumes an \lstinline!A!
are in fact covariant in \lstinline!A!: they are functors with respect to
\lstinline!A!.
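A sketch of both \lstinline!map! methods; the field names \lstinline!ab! and
\lstinline!d! are chosen here for illustration.

```scala
// The type: Either[A, B] × ((A => Int) => B), with hypothetical field names.
final case class Data[A, B](ab: Either[A, B], d: (A => Int) => B)

// map with respect to B: the first parameter (renamed Z) stays fixed.
def mapB[Z, B, C](data: Data[Z, B])(f: B => C): Data[Z, C] = {
  val newAB: Either[Z, C] = data.ab match {
    case Left(z)  => Left(z)     // The Z-value is not changed.
    case Right(b) => Right(f(b)) // The B-value is mapped with f.
  }
  val newD: (Z => Int) => C = k => f(data.d(k))
  Data(newAB, newD)
}

// map with respect to A: rename the parameters so that X changes into Y.
def mapA[X, Y, B](data: Data[X, B])(f: X => Y): Data[Y, B] = {
  val newAB: Either[Y, B] = data.ab match {
    case Left(x)  => Left(f(x))
    case Right(b) => Right(b)
  }
  // The only way to produce an Int is to apply g to some Y, namely f(x).
  val newD: (Y => Int) => B = g => data.d((x: X) => g(f(x)))
  Data(newAB, newD)
}
```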
The last worked example of this part: given a Scala data type (a sealed trait
with several case classes), identify which type parameters are used covariantly
and which contravariantly, and verify that with variance annotations. The first
thing to do is to write the type in the short notation, since the Scala
definition is a lot of text with names. The first case class contains values of
types \lstinline!A! and \lstinline!B! together with a function of type
\lstinline!B => Int!; the second contains values of types \lstinline!A!,
\lstinline!B!, and \lstinline!Int!; the third contains two functions, of types
\lstinline!String => A! and \lstinline!B => A!.

Now we look at the short notation and determine where each type is used. The
parameter \lstinline!A! occurs only in covariant positions: it is stored
directly in the first two case classes and is produced by the two functions of
the third; covariant positions are to the right of the final function arrow, or
not under any arrow at all. The parameter \lstinline!B! is used both
covariantly (stored directly) and contravariantly (consumed by functions), so
that is hopeless: \lstinline!B! is neither covariant nor contravariant. There
are also fixed types here: \lstinline!Int! occurs only in covariant positions,
so if we parameterized the type by a parameter in place of \lstinline!Int!, the
type would be covariant (a functor) with respect to that parameter;
\lstinline!String! occurs contravariantly in its single position, so it would
be a candidate for a contrafunctor parameter. Let us do that: we change
\lstinline!Int! to a type parameter \lstinline!I! and \lstinline!String! to a
type parameter \lstinline!S!, and annotate the parameters as \lstinline!+A!,
\lstinline!+I!, and \lstinline!-S!, leaving \lstinline!B! unmarked since it is
neither covariant nor contravariant.

The compiler checks these annotations for us. If we erroneously put a plus
instead of a minus on \lstinline!S! and recompile, the compiler will detect
that \lstinline!S! is used in a contravariant position and report an error of
the form \textsf{``}covariant type S occurs in contravariant position\textsf{''}. (The
IDE may not highlight the error, but the compiler reports it.) This concludes
the worked examples for this part.
Some exercises of the same kind follow; after doing them, you will have a good
working understanding of functor types.

\subsection{Functor laws for polynomial types}

So far, we have been checking the laws of functors by hand, writing tests every
time. This is not quite satisfactory, because a main question still remains: is
it true that \emph{any} data type that uses a type parameter \lstinline!A! only
in covariant positions is a functor with respect to \lstinline!A!? We can write
down arbitrarily complicated types of this kind, and it would take time and
effort to check the laws in each case. Visually, we can trace the variance: if
an \lstinline!A! is consumed by a function that is itself consumed, that
occurrence of \lstinline!A! is covariant; if every occurrence is covariant, the
entire type should be a functor with respect to \lstinline!A!. For such a type,
we could write a \lstinline!map! function very easily, by mapping each value of
type \lstinline!A! to the corresponding value of type \lstinline!B! and keeping
everything else (the integers, the strings, the order of parts) unchanged. That
seems obvious, and we have just seen examples of all kinds of types implemented
as functors in this way. But do the laws hold, and do we need to write tests
every time?

The way to answer this question is to realize that these data types are built
up from parts, as from a Lego set. The parts are the constant types (such as
\lstinline!Int! or \lstinline!Unit!) and the type parameters; the operations
that combine parts into new types are the disjunction ($+$), the product
($\times$), the function arrow, and the composition of type constructors
(applying one type constructor to another, as we did when we wrapped our own
data types inside an \lstinline!Either!). Every type expression is produced
out of constant types and type parameters by these operations. We have noticed
that every time a type is moved to the left of a function arrow, its variance
is reversed: one arrow makes it contravariant, a second arrow makes it
covariant again, and so on. Does this intuition always work, or are there cases
where the resulting \lstinline!map! is wrong or the laws fail to hold?

Note also that if a type is built without using the function arrow at all,
every position is automatically covariant, and there is nothing to trace. Such
types are called \emph{polynomial} types, or polynomial type constructors. If
our intuition is correct, all polynomial type constructors are functors. To
answer these questions, we will build up \lstinline!map! incrementally,
following the construction of the type expression out of parts and operations:
at every step, when we combine parts into a new type constructor, we will
define the \lstinline!map! method for the new type constructor and check that
the laws still hold at that step. Once we have gone through all possible
construction steps, which are few in number, we are done: we will have proved
by induction that any type constructor built from these operations is a
functor. Let us see how that works.
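The variance-reversal rule under function arrows can be illustrated with a
minimal sketch (the names \lstinline!C1!, \lstinline!C2!, \lstinline!map2! are
chosen here for illustration):

```scala
// One arrow to the left of A: an A is consumed, so C1 is contravariant in A.
type C1[A] = A => Int

// Two arrows: a consumer of A is itself consumed,
// so the variance is reversed twice and C2 is covariant in A.
type C2[A] = (A => Int) => Int

def map2[A, B](c: C2[A])(f: A => B): C2[B] =
  (k: B => Int) => c((a: A) => k(f(a)))
```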
The building blocks for functors are constant functors and the identity functor. What does that mean? A constant functor is a type constructor that takes a type parameter A and always returns the same fixed type. For example, Int can be considered as a functor that is parameterized by some A but always has the type Int. Not a very interesting functor, of course, but a valid one nevertheless, if we take fmap to be always equal to the identity function, so that the value of type Int is never changed. Interestingly enough, this is a contrafunctor at the same time, with contramap also equal to identity, and all the laws hold trivially: since every fmap is an identity, all the compositions are identities, and there is not much to check.

The identity functor is the functor that takes a type parameter and returns that same type. Notice the terminology I have started to use: the functor ``takes a type and returns a type'', as if a functor were a function on types. That is indeed the case: we can view functors, and type constructors in general, as type-level functions, functions that take types and return other types. In Scala this is especially clear because we use square brackets to denote type application, so F[G[A]] looks almost exactly like a function f applied to the result of a function g applied to a, except at the type level. Thinking of type constructors as functions in the space of types is a good intuition. The fmap of the identity functor takes the function f and returns the same f: to transform A into B, we just apply f to the value of type A and get a value of type B. The laws hold in a very easy way: if f is identity, then fmap f is identity, and composition maps to composition, because nothing is changed.
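These two building blocks can be written out as a minimal Scala sketch (the names ConstInt, Id, fmapConst, fmapId are my own, hypothetical choices):

```scala
// Constant functor: ConstInt[A] = Int regardless of A.
// Its fmap ignores the function and returns the value unchanged.
type ConstInt[A] = Int
def fmapConst[A, B](f: A => B): ConstInt[A] => ConstInt[B] = c => c

// Identity functor: Id[A] = A. Its fmap applies f directly.
type Id[A] = A
def fmapId[A, B](f: A => B): Id[A] => Id[B] = f

// Both laws hold trivially: fmapConst is always the identity function,
// and fmapId(f) is just f itself, so compositions are preserved.
```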
The operations that create new functors out of previous ones are what we consider next. Imagine we have some functors F and G for which we already have fmap implementations and have already checked the laws. Now we need to build a new functor, for example the product F[A] × G[A]: we need to define a new fmap, show that the laws hold for it, and thereby show that the new type constructor is a functor. We do the same for the disjunction F[A] + G[A] and for the function type F[A] => G[A]. We can already see how this works, because we have experience implementing fmap for constructions like these. The product case is handled by pattern matching: we keep the left part on the left and the right part on the right. In the disjunction case we map each of the two possible values and again preserve the order. In the function-type case we substitute into the function's argument; remember that trick: g is mapped to a function that composes f with g in the appropriate way. An interesting point is that for the function-type construction to work, F must be a contrafunctor and G must be a functor: the contrafunctor F is then in the contravariant position, and the result is covariant in A in every place.
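As an illustration, here is a minimal sketch of the product construction (the names F1, F2, Data and the particular example instances are my own choices):

```scala
// Two example functors with known fmaps.
type F1[A] = (A, Int)    // a pair: A together with an Int
type F2[A] = Option[A]

def fmapF1[A, B](f: A => B): F1[A] => F1[B] = { case (a, n) => (f(a), n) }
def fmapF2[A, B](f: A => B): F2[A] => F2[B] = _.map(f)

// The product construction: Data[A] = F1[A] x F2[A].
type Data[A] = (F1[A], F2[A])

// fmap applies each functor's fmap to its own part, preserving order.
def fmapData[A, B](f: A => B): Data[A] => Data[B] = {
  case (p, q) => (fmapF1(f)(p), fmapF2(f)(q))
}
```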
Next, the composition of functors: when we compose two functors, we compose the two fmaps, applying the fmap of one inside the fmap of the other. The final case is an interesting one: type recursion. Type recursion means that we define a type constructor F by a recursive equation: given some R that is a functor in both of its type parameters, we write F[A] = R[A, F[A]]. Remember our earlier example of a recursive polynomial type, the list: we wrote the equation LP[A] = 1 + A × LP[A]. That equation can be written as LP[A] = S[A, LP[A]], where S is a type constructor built from the same kinds of operations we are considering here, so the equation can be parameterized in this way. It is a recursive equation because F[A] is used inside its own definition. We can then define fmap for it just as we did in that example: the fmap function will itself be recursive and will use the fmaps of the building blocks. That covers all the operations we need.
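Both the composition and the recursion cases can be sketched in Scala; this is a minimal sketch under my own naming (fmapComp, LP, fmapLP), with List and Option as example functors and LP mirroring the recursive list type mentioned above:

```scala
// Composition of functors: if F and G are functors, so is F[G[A]].
// Its fmap is fmap_F applied to fmap_G; here F = List, G = Option.
def fmapComp[A, B](f: A => B): List[Option[A]] => List[Option[B]] =
  _.map(_.map(f))

// Type recursion: LP[A] = 1 + A x LP[A], i.e. a list. Its fmap is
// itself recursive and reuses the fmaps of the building blocks.
sealed trait LP[A]
final case class LPEmpty[A]() extends LP[A]
final case class LPCons[A](head: A, tail: LP[A]) extends LP[A]

def fmapLP[A, B](f: A => B): LP[A] => LP[B] = {
  case LPEmpty()       => LPEmpty()
  case LPCons(h, tail) => LPCons(f(h), fmapLP(f)(tail))
}
```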
For contrafunctors, with appropriate changes, we have exactly the same constructions, except that in the function type the left side should be a functor and the right side a contrafunctor; the result is then a contrafunctor. What remains is to check that in each case the functor laws still hold after each operation. Let me check this for a few cases and leave the other cases as exercises. The first case is the disjunction of two functors. How does that work? Let me show you the code for this disjunction before we look at the short notation.
In this code, I define two functors in some arbitrary way: F1[A] is just a tuple of A and Int, and F2[A] is an Either of Int => A and A; you can see that A is covariant in both. I have also defined some helper classes in the tests to check the laws; they wrap the code of each fmap in a .code method, which is needed for type reasons in the tests but is not necessarily the best way of organizing functor constructions; it will be sufficient for now. What is essential is that we define an fmap of the correct type for each functor (here I can use the implement macro, because the implementation just follows from the types) and an equality function for F2, needed by the tests. Now, given F1 and F2, how do I define the disjunction? First I define the data type; I call it Data just to be consistent with the other examples, that is just a name. Data[A] is an Either of F1[A] and F2[A]: in this case we can use Either directly and do not need an extra case class. Then I define the fmap, using the two fmaps of the previous functors. The data is a disjunction of functor F1 and functor F2, so I need to match on the disjunction: when I am given a Left, I return a Left, and I return the right part for the right. Note that this code does not use any details about what these functors are; it only uses their fmaps. I have fmap1 and fmap2 for the functors F1 and F2, and I just call these fmaps on the data I have. This works: when I check the laws of identity and composition, the tests pass. But can I understand why these laws hold? The tests pass, but I want understanding and assurance that this is really correct for all functors, not just for the particular functors I have chosen.
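The disjunction code just described can be condensed into a self-contained sketch (I omit the test-helper classes; the names F1, F2, Data follow the description above):

```scala
// Two example functors: F1[A] = (A, Int), F2[A] = Either[Int => A, A].
type F1[A] = (A, Int)
type F2[A] = Either[Int => A, A]

def fmapF1[A, B](f: A => B): F1[A] => F1[B] = { case (a, n) => (f(a), n) }
def fmapF2[A, B](f: A => B): F2[A] => F2[B] = {
  case Left(g)  => Left(g andThen f)
  case Right(a) => Right(f(a))
}

// The disjunction: Data[A] = F1[A] + F2[A], using Either directly.
type Data[A] = Either[F1[A], F2[A]]

// fmap keeps the left part on the left and the right part on the
// right, calling the previous fmaps without using their details.
def fmapData[A, B](f: A => B): Data[A] => Data[B] = {
  case Left(p)  => Left(fmapF1(f)(p))
  case Right(q) => Right(fmapF2(f)(q))
}
```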
To that end, I reason about the code more generally, and I use the short notation to make it easier. Assume that F[A] and G[A] are type constructors for which we already have fmap_F and fmap_G, and we already know that the laws hold for them. Now we define the fmap for the functor F + G like this: the argument of the function is a disjunction of p and q, where p has type F[A] and q has type G[A] (this is our Either from the code), and the result is a disjunction whose left and right parts correspond to p and q, exactly what the code was doing, only written in the short notation. If p is given, we return fmap_F(f)(p) + 0; if q is given, we return 0 + fmap_G(f)(q); this corresponds in the code to returning a Left or a Right. That is just the short notation I will use to reason about the laws.

Let us check that the laws hold for fmap_{F + G}. The first law is the identity law: if f is the identity, then fmap_{F + G}(id) must be the identity. We assume that the identity law already holds for the fmaps of F and G, so this is a quick check. Take an arbitrary value p + q of type (F + G)[A]; by the formula, we apply fmap_F(id) to p, which is identity, and fmap_G(id) to q, which is again identity, so we just get p + q back. Taking p + q and returning p + q is obviously the identity function, so the identity law holds for fmap_{F + G}.

The composition law takes a little longer to check; but note that, as we noted in the code, we do not use the structure of the functors F and G at all, only the fact that they are functors and that their fmaps work correctly. To compute the composition of the two fmaps, we first apply fmap_{F + G}(f1), using its definition, and then apply fmap_{F + G}(f2); this gives the two compositions in the left part and in the right part, which we can simplify using the composition law of F in one and the composition law of G in the other. The result is exactly what we would get by applying fmap_{F + G} to the composition of the functions f1 and f2. I suggest you go through this computation yourself and check that for any value of p + q (there is only a p or only a q in the disjunction) you always get exactly this result, so the law holds. The law holds precisely because fmap_{F + G} was defined to work separately on p and q and to return the corresponding parts of the disjunction. If we mixed up these parts somehow, for instance if, when given p, we sometimes returned the right part of the disjunction, the law would not hold; it depends on keeping the two parts completely separate. That concludes the proof that this construction gives a functor whenever F and G are functors.
The next example is to show that if F is a contrafunctor and G is a functor, then the function type from F to G is itself a functor. Let me go to the code I have for this. I take a simple contrafunctor CF1 whose contramap is so simple that it can be implemented automatically; defining equality for it is needed only for the tests. Then I define the data type as a type alias, to avoid the extra complexity of a case class, exactly as in the formula for the type: Data[B] = CF1[B] => F2[B]. Now, how do I define the fmap? This seems to be a lot of code, but most of it is just type annotations for clarity. I have an argument f: A => B, an argument da of type Data[A], and an argument cf1b of type CF1[B]: since Data[B] is a function from CF1[B] to F2[B], I must return such a function, so cf1b is its argument, and whatever I return must have type F2[B]. How do I make this work? The only way to get a value of type F2[B] is to apply something to a value of type F2[A], because I cannot construct an F2[B] here from scratch: I have no idea what that type might be. So I need to use the data I am given, namely da and cf1b. Here is my plan: first I map cf1b from CF1[B] into CF1[A]; that is a contrafunctor map, a contramap, because I have f: A => B, so I can map the contrafunctor CF1[B] into CF1[A]. I then put that into da and get an F2[A], and finally I use fmap with the same f to go from F2[A] to F2[B]. That is the whole construction: I apply the contramap of CF1 to cf1b, obtaining a CF1[A]; I pass that as an argument to da, obtaining an F2[A]; and then I use the fmap of F2, again with the same f, on the result.
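A self-contained sketch of this construction (the particular instances C[A] = A => Int and F[A] = (A, A) are my own examples, chosen so the result can be tested):

```scala
// A simple contrafunctor C[A] = A => Int and a simple functor F[A] = (A, A).
type C[A] = A => Int
type F[A] = (A, A)

def contramapC[A, B](f: A => B): C[B] => C[A] = cb => a => cb(f(a))
def fmapF[A, B](f: A => B): F[A] => F[B] = { case (x, y) => (f(x), f(y)) }

// The function-type construction: Data[A] = C[A] => F[A] is a functor.
type Data[A] = C[A] => F[A]

// fmap: contramap the argument C[B] back to C[A], apply the given
// Data[A] to it, then fmap the resulting F[A] forward to F[B].
def fmapData[A, B](f: A => B): Data[A] => Data[B] =
  da => cb => fmapF(f)(da(contramapC(f)(cb)))
```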
Why do the laws hold? Let us prove them in general: in the tests we only have specific contrafunctors, specific functors, and specific types; we cannot run tests for a generic functor. So let us prove mathematically that this is true. The proof is very similar to the previous one, although the details are different, and some shorter notation will help here. Instead of fmap_G(f) I will just write γ f, and instead of contramap_F(f) I will write φ f: φ for F and γ for G, simply replacing Latin letters with Greek ones. In this notation, the code from our Scala example reads: p is mapped to the function q => γ f (p (φ f q)). The function first uses the contramap of F on q, puts the result into p, and then uses the map γ f on that.

To check the identity law, we just substitute these expressions. Take an arbitrary value p of the new functor's type; substituting, since γ of identity is identity and φ of identity is identity, we get q => p(q). Now, q => p(q) is the same function as p, because it takes an argument and applies p to that same argument, which is exactly what p would do by itself; so this expression has the same effect as p. That concludes the proof of the identity law: we have shown that fmap of identity, applied to some p, gives exactly the same p.

The composition law is checked like this. We assume that the composition law already holds for φ and γ; note the opposite order of composition for φ, because it belongs to a contrafunctor. We apply the definition: first we apply fmap(f1), which gives q => γ f1 (p (φ f1 q)); then we substitute that expression in place of p when applying fmap(f2). A curious thing happens in this computation: the γ's end up next to each other, and the φ's end up next to each other, because in place of p we have put that whole expression. Those are now compositions of γ's and compositions of φ's, so we can use the composition laws that already hold for γ and φ to replace them; note the order, which for γ is the same and for φ is reversed. The result is exactly what we would get by applying the new fmap to the composition of the functions f1 and f2, so we have proved the composition law. Note that the order of compositions must be reversed for φ; without that reversed order, the proof would not go through. This is why the construction does not work if F is a functor: F must be a contrafunctor, which is what we expected from our intuition. Whatever stands to the left of the function arrow has its variance reversed, so if F were a functor, being on the left would make the whole thing contravariant in A, which is wrong for a functor. Our intuition is therefore confirmed: a contrafunctor behind the arrow, that is, to the left of the arrow, becomes covariant, and vice versa.
Here are some exercises for you: check that this construction works with the variance swapped, that is, that a functor on the left of the arrow and a contrafunctor on the right give a contrafunctor; this is quite similar, but with the opposite order of compositions. Also show that the function type is neither a functor nor a contrafunctor when both sides are functors or both are contrafunctors; that is much easier, because you just give an example and show that the types do not match.
To conclude this tutorial, let me note that this kind of code is certainly not the best way of dealing with functor constructions. If you want to construct new functors like this in your own code, I suggest taking a look at the libraries that do this; there are two main ones, Scalaz and Cats. These libraries include functionality quite similar to what we are doing here, and they can deal with functors generically. The power of functional programming languages such as Scala and Haskell is that you can write code that takes an arbitrary functor and transforms it in some way: you can parameterize code not only by types, as we have done, but also by type constructors. You can have a function, which I did not show here because it is quite advanced, that is parameterized by these things, so that it works for any F1 and F2 with certain properties. That is the power of these type systems, which is not present in most programming languages. In Scala this is a little difficult to write, and quite abstract, if you try to write it from scratch; these libraries help, since they can define functors and assist you in writing code with functors. If you want code that does something for any functor, I suggest you try it yourself, but it will be a bit hard; explore these libraries. There is also a library called shapeless, which has some utilities for the automatic construction of functors. As you have noticed, these operations are quite mechanical: there is no choice involved, so all of this could be done automatically by some preprocessor or by the Scala compiler itself. There are libraries that let you implement fmap for your own types automatically, with almost no code to write, because all these operations are dictated by the mathematical properties of functors; there is no choice. The curryhoward library can help in certain cases, but it does not know that you are constructing a functor instance, and it does not check the laws. If that is your purpose, explore the libraries that automatically construct a functor implementation for your types. Any types of this kind should, I think, be supported; if not, make a pull request for them. This concludes the tutorial.
\end{comment}

