
\chapter{Reasoning about code. Techniques of symbolic derivation\label{chap:Reasoning-about-code}}

\global\long\def\gunderline#1{\mathunderline{greenunder}{#1}}%
\global\long\def\bef{\forwardcompose}%
\global\long\def\bbnum#1{\custombb{#1}}%

In previous chapters, we have performed symbolic derivations of some
laws. To make those derivations more manageable, we gradually developed
special notations and techniques of reasoning. This short chapter
is a summary of these notations and techniques.

\section{Mathematical code notation}

\subsection{The nine constructions of fully parametric code}

The eight basic constructions\index{eight code constructions} introduced
in Section~\ref{subsec:The-rules-of-proof}, together with recursion,
serve as a foundation for the \textbf{fully parametric} coding style.
All major techniques and design patterns of functional programming
can be implemented using only these constructions, i.e., by fully
parametric\index{fully parametric!code} code. We will now define
the code notation (summarized in Table~\ref{tab:Mathematical-notation-for-basic-code-constructions})
for each of the nine constructions.\index{nine code constructions}

\begin{table}
\begin{centering}
\begin{tabular}{|c|c|c|}
\hline 
\textbf{\small{}Constructions} & \textbf{\small{}Scala examples} & \textbf{\small{}Code notation}\tabularnewline
\hline 
\hline 
{\small{}use a constant} & {\small{}}\lstinline!()!{\small{} or }\lstinline!true!{\small{}
or }\lstinline!"abc"!{\small{} or }\lstinline!123! & {\small{}$1$, $\text{true}$, $\text{"abc"}$, $123$}\tabularnewline
\hline 
{\small{}use a given argument} & {\small{}}\lstinline!def f(x: A) = { ... x ... }! & {\small{}$f(x^{:A})\triangleq...~x~...$}\tabularnewline
\hline 
{\small{}create a function} & {\small{}}\lstinline!(x: A) => expr(x)! & {\small{}$x^{:A}\rightarrow\text{expr}\left(x\right)$}\tabularnewline
\hline 
{\small{}use a function} & {\small{}}\lstinline!f(x)!{\small{} or }\lstinline!x.pipe(f)!{\small{}
(Scala 2.13)} & {\small{}$f(x)$ or $x\triangleright f$}\tabularnewline
\hline 
{\small{}create a tuple} & {\small{}}\lstinline!val p: (A, B) = (a, b)! & {\small{}$p^{:A\times B}\triangleq a\times b$}\tabularnewline
\hline 
{\small{}use a tuple} & {\small{}}\lstinline!p._1!{\small{} or }\lstinline!p._2! & {\small{}$p\triangleright\pi_{1}$ or $p\triangleright\pi_{2}$}\tabularnewline
\hline 
{\small{}create a disjunctive value} & {\small{}}\lstinline!Left[A, B](x)!{\small{} or }\lstinline!Right[A, B](y)! & {\small{}$x^{:A}+\bbnum 0^{:B}$ or $\bbnum 0^{:A}+y^{:B}$}\tabularnewline
\hline 
{\small{}use a disjunctive value} & {\small{}\hspace*{-0.013\linewidth}}%
\begin{minipage}[c][1\totalheight][b]{0.33\columnwidth}%
{\small{}\vspace{0.14\baselineskip}
}
\begin{lstlisting}
val p: Either[A, B] = ... 
val q: C = p match {
    case Left(x)   => f(x)
    case Right(y)  => g(y)
}
\end{lstlisting}
{\small{}\vspace{-0.1\baselineskip}
}%
\end{minipage}{\small{} \hspace*{-0.009\linewidth}} & {\small{}$q^{:C}\triangleq p^{:A+B}\triangleright\begin{array}{|c||c|}
 & C\\
\hline A & x^{:A}\rightarrow f(x)\\
B & y^{:B}\rightarrow g(y)
\end{array}$}\tabularnewline
\hline 
{\small{}use a recursive call} & {\small{}}\lstinline!def f(x) = { ... f(y) ... }! & {\small{}$f(x)\triangleq...~\overline{f}(y)~...$}\tabularnewline
\hline 
\end{tabular}
\par\end{centering}
\caption{Mathematical notation for the nine basic code constructions.\label{tab:Mathematical-notation-for-basic-code-constructions}}
\end{table}


\paragraph{1) Use a constant}

At any place in the code, we may use a fixed constant value of a primitive
type, such as \lstinline!Int!, \lstinline!String!, or \lstinline!Unit!.
We may also use a \textsf{``}named unit\index{unit type!named}\textsf{''}, e.g.,
\lstinline!None! of type \lstinline!Option[A]! for any type \lstinline!A!.
All named unit values are denoted by $1$ and are viewed as having
type $\bbnum 1$. 

With this construction, we can create \index{constant function}\textbf{constant
functions} (functions that ignore their argument):

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
def c_1(x: String): Int = 123
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-0.35\baselineskip}
\[
c_{1}(x^{:\text{String}})\triangleq123\quad.
\]
\vspace{-0.85\baselineskip}


\paragraph{2) Use a given argument}

In any expression that has a bound variable (e.g., an argument within
a function\textsf{'}s body), we may use the bound variable at any place, as
many times as we need.

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
def c_2(x: String, y: Int): Int = 123 + y + y
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-0.35\baselineskip}
\[
c_{2}(x^{:\text{String}},y^{:\text{Int}})\triangleq123+y+y\quad.
\]
\vspace{-0.85\baselineskip}


\paragraph{3) Create a function}

We can always make a nameless function \lstinline!{ x => expr }!
out of a variable, say \lstinline!x!, and any expression \lstinline!expr!
that may use \lstinline!x! as a free variable\index{free variable}
(i.e., a variable that should be defined outside that expression).
E.g., the expression \lstinline!123 + x + x! uses \lstinline!x!
as a free variable because \lstinline!123 + x + x! only makes sense
if \lstinline!x! is already defined. So, we can create a nameless
function

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
{ x: Int => 123 + x + x }
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-0.35\baselineskip}
\[
x^{:\text{Int}}\rightarrow123+x+x\quad.
\]
\vspace{-0.85\baselineskip}

If the expression \lstinline!expr! already contains \lstinline!x!
as a bound variable, the function \lstinline!{ x => expr }! will
have a name clash. As an example, consider an expression \lstinline!expr == { x => x }!
that already contains a nameless function with bound variable \lstinline!x!.
If we want to make a function out of that expression, we could write
\lstinline!x => { x => x }!, but such code is confusing. It is helpful
to avoid the name clash by renaming the bound variables inside \lstinline!expr!,
e.g., \lstinline!expr == { z => z }!:

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
val f = { x: Int => { z: Int => z } }
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-0.35\baselineskip}
\[
f\triangleq x^{:\text{Int}}\rightarrow z^{:\text{Int}}\rightarrow z\quad.
\]
\vspace{-0.85\baselineskip}


\paragraph{4) Use a function}

If a function is already defined, we can use it by applying it to
an argument.

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
val f = { x: Int => 123 + x + x }
f(100)  // Evaluates to 323.
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-0.65\baselineskip}
\[
f\triangleq x^{:\text{Int}}\rightarrow123+x+x\quad,\quad\quad f(100)=323\quad.
\]
\vspace{-0.85\baselineskip}


\paragraph{5) Create a tuple}

Given two values \lstinline!a: A! and \lstinline!b: B!, we can create
the tuple \lstinline!(a, b)! as well as \lstinline!(b, a)!. In the
code notation, those tuples are written as $a\times b$ and $b\times a$.

\paragraph{6) Use a tuple}

Given a tuple \lstinline!p == (a, b)!, we can extract each of the
values via \lstinline!p._1! and \lstinline!p._2!. The corresponding
code notation is $p\triangleright\pi_{1}$ and $p\triangleright\pi_{2}$.
The auxiliary functions $\pi_{i}$ (where $i=1,2,...$) may be used
for tuples of any size. Example code defining these functions:

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.75\baselineskip}
\begin{lstlisting}
def pi_1[A, B]: ((A, B)) => A = {
    case (a, b) => a
} // Same as `_._1`
def pi_2[A, B]: ((A, B)) => B = {
    case (a, b) => b
} // Same as `_._2`
\end{lstlisting}

\vspace{-1.2\baselineskip}
\end{wrapfigure}%

~\vspace{-1\baselineskip}
\begin{align*}
\pi_{1}^{A,B} & \triangleq a^{:A}\times b^{:B}\rightarrow a\quad,\\
\pi_{2}^{A,B} & \triangleq a^{:A}\times b^{:B}\rightarrow b\quad.
\end{align*}
The notation $a\times b$ is used in an \emph{argument} of a function
to destructure a tuple.

\paragraph{7) Create a disjunctive value}

Once a disjunctive type such as $A+B+C$ has been defined in Scala,
its named \textsf{``}constructors\textsf{''} (i.e., case classes) are used to create
values of that type:

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.75\baselineskip}
\begin{lstlisting}
sealed trait S
final case class P(w: Int, x: Int)  extends S
final case class Q(y: String)       extends S
final case class R(z: Int)          extends S

val s: S = P(10, 20) // Create a value of type S.
val t: S = R(30)     // Another value of type S.
\end{lstlisting}

\vspace{-0\baselineskip}
\end{wrapfigure}%

~\vspace{0.35\baselineskip}
\[
S\triangleq\text{Int}\times\text{Int}+\text{String}+\text{Int}\quad,
\]
\begin{align*}
s^{:S} & \triangleq10\times20+\bbnum 0^{:\text{String}}+\bbnum 0^{:\text{Int}}\quad,\\
t^{:S} & \triangleq\bbnum 0^{:\text{Int}\times\text{Int}}+\bbnum 0^{:\text{String}}+30\quad.
\end{align*}
\vspace{-0.9\baselineskip}

The code notation for disjunctive values, e.g., $\bbnum 0+\bbnum 0+x$,
is more verbose than the Scala syntax such as \lstinline!R(x)!. The
advantage is that we may explicitly annotate all types and show clearly
the part of the disjunction that we are creating. Another advantage
is that the notation $\bbnum 0+\bbnum 0+x$ is similar to a row vector,
$\,\begin{array}{|ccc|}
\bbnum 0 & \bbnum 0 & x\end{array}$~, which is well adapted to the matrix notation for functions.

\paragraph{8) Use a disjunctive value}

Once created, disjunctive values can be used in a pattern matching
expression (Scala\textsf{'}s \lstinline!match!/\lstinline!case!). Recall
that functions that take a disjunctive value as an argument (\textsf{``}\index{disjunctive functions}\textbf{disjunctive
functions}\textsf{''}) may be written \emph{without} the \lstinline!match!
keyword:

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
val compute: Option[Int] => Option[Int] = {
    case None      => Some(100)
    case Some(x)   => Some(x / 2)
}
\end{lstlisting}

\vspace{-1.65\baselineskip}
\end{wrapfigure}%

~\vspace{-1.45\baselineskip}
\[
\text{compute}^{:\bbnum 1+\text{Int}\rightarrow\bbnum 1+\text{Int}}\triangleq\,\begin{array}{|c||cc|}
 & \bbnum 1 & \text{Int}\\
\hline \bbnum 1 & \bbnum 0 & 1\rightarrow100\\
\text{Int} & \bbnum 0 & x\rightarrow\frac{x}{2}
\end{array}\quad.
\]
\vspace{-0.9\baselineskip}

We will use this example to see how disjunctive functions are written
in the matrix notation\index{matrix notation}\index{disjunctive type!in matrix notation}.

Each row of a matrix corresponds to a part of the disjunctive type
matched by one of the \lstinline!case! expressions. In this example,
the disjunctive type \lstinline!Option[Int]! has two parts, the named
unit \lstinline!None! (denoted by $\bbnum 1$) and the case class
\lstinline!Some[Int]!, which is equivalent to the type \lstinline!Int!.
So, the matrix has two rows labeled $\bbnum 1$ and $\text{Int}$,
showing that the function\textsf{'}s argument type is $\bbnum 1+\text{Int}$.

The columns of the matrix correspond to the parts of the disjunctive
type \emph{returned} by the function. In this example, the return
type is also \lstinline!Option[Int]!, that is, $\bbnum 1+\text{Int}$,
so the matrix has two columns labeled $\bbnum 1$ and $\text{Int}$.
If the return type is not disjunctive, the matrix will have one column.

What are the matrix elements? The idea of the matrix notation is to
translate the \lstinline!case! expressions line by line from the
Scala code. Look at the first \lstinline!case! line as if it were
a standalone partial function,

\begin{wrapfigure}{l}{0.45\columnwidth}%
\vspace{-0.85\baselineskip}
\begin{lstlisting}
{ case None => Some(100) }
\end{lstlisting}

\vspace{-0.75\baselineskip}
\end{wrapfigure}%

\noindent Since \lstinline!None! is a named unit, this function is
written in the code notation as $1\rightarrow\bbnum 0^{:\bbnum 1}+100^{:\text{Int}}$. 

The second line is written in the form of a partial function as

\begin{wrapfigure}{l}{0.45\columnwidth}%
\vspace{-0.85\baselineskip}
\begin{lstlisting}
{ case Some(x) => Some(x / 2) }
\end{lstlisting}

\vspace{-0.75\baselineskip}
\end{wrapfigure}%

\noindent The pattern variable on the left side is \lstinline!x!,
so we can denote that function by $x^{:\text{Int}}\rightarrow\bbnum 0^{:\bbnum 1}+(x/2)^{:\text{Int}}$. 

To obtain the matrix notation, we may simply write the two partial
functions in the two rows:

\begin{wrapfigure}{l}{0.45\columnwidth}%
\vspace{-0.85\baselineskip}
\begin{lstlisting}
val compute: Option[Int] => Option[Int] = {
    case None      => Some(100)
    case Some(x)   => Some(x / 2)
}
\end{lstlisting}

\vspace{-0.75\baselineskip}
\end{wrapfigure}%

~\vspace{-1.35\baselineskip}
\[
\text{compute}^{:\bbnum 1+\text{Int}\rightarrow\bbnum 1+\text{Int}}\triangleq\,\begin{array}{|c||c|}
 & \bbnum 1+\text{Int}\\
\hline \bbnum 1 & 1\rightarrow\bbnum 0+100\\
\text{Int} & x\rightarrow\bbnum 0+\frac{x}{2}
\end{array}\quad.
\]
\vspace{-0.9\baselineskip}

This is already a valid matrix notation for the function $\text{compute}$.
So far, the matrix has two rows and one column. However, we notice
that each row\textsf{'}s return value is \emph{known} to be in a specific
part of the disjunctive type $\bbnum 1+\text{Int}$ (in this example,
both rows return values in the $\text{Int}$ part, i.e., values of
the form $\bbnum 0+x^{:\text{Int}}$). So, we can split the column
into two and obtain a clearer and more useful notation for this function:
\[
\text{compute}^{:\bbnum 1+\text{Int}\rightarrow\bbnum 1+\text{Int}}\triangleq\,\begin{array}{|c||cc|}
 & \bbnum 1 & \text{Int}\\
\hline \bbnum 1 & \bbnum 0 & 1\rightarrow100\\
\text{Int} & \bbnum 0 & x^{:\text{Int}}\rightarrow\frac{x}{2}
\end{array}\quad.
\]
The void type\index{void type!in matrix notation} $\bbnum 0$ is
written symbolically to indicate that the disjunctive part in that
column is not returned. In this way, the matrix shows the parts of
disjunctive types that are being returned. 

Partial functions are expressed in the matrix notation by writing
$\bbnum 0$ in the missing rows:

\begin{wrapfigure}{l}{0.45\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
def get[A]: Option[A] => A = {
    case Some(x) => x
} // Partial function; fails on `None`.
\end{lstlisting}

\vspace{-0.75\baselineskip}
\end{wrapfigure}%

~\vspace{-1.35\baselineskip}
\[
\text{get}^{:\bbnum 1+A\rightarrow A}\triangleq\,\begin{array}{|c||c|}
 & A\\
\hline \bbnum 1 & \bbnum 0\\
A & x^{:A}\rightarrow x
\end{array}\,=\,\begin{array}{|c||c|}
 & A\\
\hline \bbnum 1 & \bbnum 0\\
A & \text{id}
\end{array}\quad.
\]
\vspace{-0.9\baselineskip}
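To check this behavior quickly in Scala (a sketch; \lstinline!Try!
is used only to observe the failure on the missing row):
\begin{lstlisting}
import scala.util.Try

def get[A]: Option[A] => A = {
  case Some(x) => x
} // Partial function; fails on `None`.

val ok = get[Int](Some(5))                // Evaluates to 5.
val fails = Try(get[Int](None)).isFailure // true: a MatchError is thrown.
\end{lstlisting}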

Scala\textsf{'}s \lstinline!match! expression is equivalent to an application
of a disjunctive function:

\begin{wrapfigure}{l}{0.45\columnwidth}%
\vspace{-0.85\baselineskip}
\begin{lstlisting}
val p: Option[Int] = Some(64)
val q: Option[Int] = p match {
    case None      => Some(100)
    case Some(x)   => Some(x / 2)
}    // The value of q equals Some(32).
\end{lstlisting}

\vspace{-2.75\baselineskip}
\end{wrapfigure}%

~\vspace{-0.85\baselineskip}
\[
p\triangleq\bbnum 0^{:\bbnum 1}+64^{:\text{Int}}\quad,\quad q\triangleq p\triangleright\,\begin{array}{|c||cc|}
 & \bbnum 1 & \text{Int}\\
\hline \bbnum 1 & \bbnum 0 & 1\rightarrow100\\
\text{Int} & \bbnum 0 & x\rightarrow\frac{x}{2}
\end{array}\quad.
\]
\vspace{-0.1\baselineskip}
It is convenient to put the argument $p$ to the \emph{left} of the
disjunctive function, as in the Scala code.
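Running this code confirms the result of the computation (a quick
check rather than a derivation):
\begin{lstlisting}
val compute: Option[Int] => Option[Int] = {
  case None    => Some(100)
  case Some(x) => Some(x / 2)
}
val q = compute(Some(64)) // Some(32), as the matrix notation shows.
val r = compute(None)     // Some(100): the first row of the matrix.
\end{lstlisting}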

Because only one part of a disjunctive type can ever be returned,
a row can have at most one non-void value. That value will be in the
column corresponding to the part being returned. 

The matrix notation allows us to compute such function applications
directly. We view the disjunctive value $\bbnum 0+64^{:\text{Int}}$
as a \textsf{``}row vector\textsf{''} $\,\begin{array}{|cc|}
\bbnum 0 & 64\end{array}$~, written with a single left line to distinguish it from a function
matrix. Calculations use the standard rules of a vector-matrix product:
\[
(\bbnum 0+64)\triangleright\,\begin{array}{||cc|}
\bbnum 0 & 1\rightarrow100\\
\bbnum 0 & x\rightarrow\frac{x}{2}
\end{array}\,=\,\begin{array}{|cc|}
\bbnum 0 & 64\end{array}\,\triangleright\,\begin{array}{||cc|}
\bbnum 0 & 1\rightarrow100\\
\bbnum 0 & x\rightarrow\frac{x}{2}
\end{array}\,=\,\begin{array}{|cc|}
\bbnum 0 & 64\triangleright(x\rightarrow\frac{x}{2})\end{array}\,=\,\begin{array}{|cc|}
\bbnum 0 & 32\end{array}\,=(\bbnum 0+32)\quad.
\]
The pipe ($\triangleright$) operation plays the role of the \textsf{``}multiplication\textsf{''}
of matrix elements, and we drop any terms containing $\bbnum 0$.
We omitted type annotations since we already checked that the types
match.

\paragraph{9) Use a recursive call}

The last construction is to call a function recursively within its
own definition. This construction was not shown in Section~\ref{subsec:The-rules-of-proof}
because the constructive propositional logic (which was the main focus
in that chapter) cannot represent a recursively defined value. However,
this limitation of propositional logic means only that we do not have
an algorithm for \emph{automatic} derivation of recursive code. Similarly,
no algorithm can automatically derive code that involves type constructors
with known methods. Nevertheless, those derivations can be performed
by hand. 

Recursive code is used often, and we need to get some experience reasoning
about it. In derivations, this book denotes recursive calls by an
overline. For example, the standard \lstinline!fmap! method for the
\lstinline!List! functor is defined as
\[
\text{fmap}_{\text{List}}(f)=f^{\uparrow\text{List}}\triangleq\,\begin{array}{|c||cc|}
 & \bbnum 1 & A\times\text{List}^{A}\\
\hline \bbnum 1 & \text{id} & \bbnum 0\\
A\times\text{List}^{A} & \bbnum 0 & h^{:A}\times t^{:\text{List}^{A}}\rightarrow f(h)\times\big(t\triangleright\overline{\text{fmap}_{\text{List}}}(f)\big)
\end{array}\quad.
\]
The recursive call to $\text{fmap}_{\text{List}}$ is applied to a
list\textsf{'}s tail (the value $t$).
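For comparison, a direct Scala implementation of this recursive definition
might look like this (a sketch using the standard \lstinline!List!;
the name \lstinline!fmapList! is ours):
\begin{lstlisting}
// Recursive fmap for List: the empty list (the `1` part) maps to
// itself; otherwise apply f to the head and recurse into the tail.
def fmapList[A, B](f: A => B): List[A] => List[B] = {
  case Nil          => Nil
  case head :: tail => f(head) :: fmapList(f)(tail)
}
\end{lstlisting}
For example, \lstinline!fmapList((x: Int) => x * 10)(List(1, 2, 3))!
evaluates to \lstinline!List(10, 20, 30)!.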

In proofs of laws for recursive functions, it is necessary to use
induction on the number of recursive self-calls. However, the proof
does not need to separate the base case (no recursive calls) from
the inductive step. In the proof, we write a symbolic calculation
as usual, except that we may assume that the law already holds for
any recursive calls to the same function.

For example, a proof of the identity law of $\text{fmap}_{\text{List}}$,
which says $\text{fmap}_{\text{List}}(\text{id})=\text{id}$, may
proceed by replacing the recursive call $\overline{\text{fmap}_{\text{List}}}(\text{id})$
by $\text{id}$ during the calculations:
\begin{align*}
 & \text{fmap}_{\text{List}}(\text{id})=\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & h^{:A}\times t^{:\text{List}^{A}}\rightarrow\text{id}(h)\times\big(\gunderline{t\triangleright\overline{\text{fmap}_{\text{List}}}(\text{id})}\big)
\end{array}\\
{\color{greenunder}\text{inductive assumption}:}\quad & =\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & h\times t\rightarrow\gunderline{\text{id}(h)\times(t\triangleright\text{id})}
\end{array}\,=\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & \gunderline{h\times t\rightarrow h\times t}
\end{array}\\
{\color{greenunder}\text{identity matrix}:}\quad & =\,\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & \text{id}
\end{array}\,=\text{id}\quad.
\end{align*}


\subsection{Function composition and the pipe notation}

In addition to the basic code constructions, our derivations will
often need to work with function compositions and lifted functions.
It is often faster to perform calculations with functions when we
do not write all of their arguments explicitly; e.g., writing the
right identity law as $f\bef\text{id}=f$ instead of $\text{id}\left(f(x)\right)=f(x)$.
This is known as calculating in \index{point-free calculations}\textbf{point-free}
style (meaning \textsf{``}argument-free\textsf{''}). Many laws can be formulated and
used more easily in the point-free form. 

Calculations in point-free style almost always involve composing functions.
This book prefers to use the \emph{forward} function composition ($f\bef g$)
defined for arbitrary $f^{:A\rightarrow B}$ and $g^{:B\rightarrow C}$
by

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
f andThen g == { x => g(f(x)) }
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-0.35\baselineskip}
\[
f\bef g\triangleq x\rightarrow g(f(x))\quad.
\]
\vspace{-0.85\baselineskip}

A useful tool for calculations is the \textbf{pipe}\index{pipe notation}\index{\$@$\triangleright$-notation!see \textsf{``}pipe notation\textsf{''}}
operation, $x\triangleright f$, which places the argument ($x$)
to the \emph{left} of a function ($f$). It is then natural to apply
further functions at \emph{right}, for example $(x\triangleright f)\triangleright g$
meaning $g(f(x))$. In Scala, methods such as \lstinline!map! and
\lstinline!filter! are often combined in this way:

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
x.map(f).filter(p)
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-0.35\baselineskip}
\[
x\triangleright\text{fmap}\,(f)\triangleright\text{filt}\,(p)\quad.
\]
\vspace{-0.85\baselineskip}

To enable this common usage, the $\triangleright$ operation is defined
to group towards the left. So, the parentheses in $(x\triangleright f)\triangleright g$
are not needed, and we write $x\triangleright f\triangleright g$.\index{pipe notation!operator precedence}

Since $x\triangleright f\triangleright g=g(f(x))$ by definition,
it follows that the composition $f\bef g$ satisfies
\[
x\triangleright f\triangleright g=x\triangleright(f\bef g)\quad.
\]
Such formulas are needed often, so we follow the convention that the
pipe operation ($\triangleright$) groups weaker than the composition
operation ($\bef$).\index{pipe notation!operator precedence} We
can then omit parentheses: $x\triangleright(f\bef g)=x\triangleright f\bef g$. 

Another common simplification occurs with function compositions of
the form
\[
(x\rightarrow t\triangleright f)\bef g=x\rightarrow g(t\triangleright f)=x\rightarrow(t\triangleright f\triangleright g)=x\rightarrow t\triangleright f\bef g\quad.
\]
The function arrow groups weaker than the pipe operator: $x\rightarrow t\triangleright f\bef g=x\rightarrow(t\triangleright f\bef g)$.

How can we verify this and other similar computations where the operations
$\triangleright$ and $\bef$ are combined in some way? Instead of
memorizing a large set of identities, we can rely on knowing only
one rule that says how arguments are symbolically substituted as parameters
into functions, for example:
\begin{align*}
{\color{greenunder}\text{substitute }x\text{ instead of }a:}\quad & \gunderline x\triangleright(\gunderline a\rightarrow f(\gunderline a))=f(x)\quad.\\
{\color{greenunder}\text{substitute }f(x)\text{ instead of }y:}\quad & (x\rightarrow\gunderline{f(x)})\bef(\gunderline y\rightarrow g(\gunderline y))=x\rightarrow g(f(x))\quad.
\end{align*}
Whenever there is a doubt (is $x\triangleright(f\triangleright g)$
or $(x\bef f)\triangleright g$ the correct formula?), one can always
write functions in an expanded form, $x\rightarrow f(x)$ instead
of $f$, and perform calculations more verbosely. After getting some
experience with the $\triangleright$ and $\bef$ operations, the
reader will start using them more freely without writing functions
in expanded form.
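The equivalence $x\triangleright f\triangleright g=x\triangleright(f\bef g)$
can be checked directly in Scala 2.13, where \lstinline!pipe! plays
the role of $\triangleright$ and \lstinline!andThen! the role of $\bef$:
\begin{lstlisting}
import scala.util.chaining._ // Provides `.pipe` (Scala 2.13+).

val f: Int => Int = x => x + 1
val g: Int => Int = x => x * 2

val viaPipes   = 10.pipe(f).pipe(g)   // g(f(10)) == 22.
val viaCompose = 10.pipe(f andThen g) // Also 22.
\end{lstlisting}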

The matrix notation is adapted to the pipe operation and the forward
function composition. As an example, let us write the composition
of the functions \lstinline!compute! and \lstinline!get[Int]! shown
above: 
\[
\text{compute}\bef\text{get}=\,\begin{array}{|c||cc|}
 & \bbnum 1 & \text{Int}\\
\hline \bbnum 1 & \bbnum 0 & 1\rightarrow100\\
\text{Int} & \bbnum 0 & x\rightarrow\frac{x}{2}
\end{array}\,\bef\,\begin{array}{|c||c|}
 & \text{Int}\\
\hline \bbnum 1 & \bbnum 0\\
\text{Int} & \text{id}
\end{array}\,=\,\begin{array}{|c||c|}
 & \text{Int}\\
\hline \bbnum 1 & (1\rightarrow100)\bef\text{id}\\
\text{Int} & (x\rightarrow\frac{x}{2})\bef\text{id}
\end{array}=\,\begin{array}{|c||c|}
 & \text{Int}\\
\hline \bbnum 1 & 1\rightarrow100\\
\text{Int} & x\rightarrow\frac{x}{2}
\end{array}\quad.
\]
In this computation, we used the composition ($\bef$) instead of
the \textsf{``}multiplication\textsf{''} of matrix elements.

Why does the rule for matrix multiplication work for function compositions?
The reason is the equivalence $x\triangleright f\triangleright g=x\triangleright f\bef g$.
We have defined the matrix form of functions to work with the \textsf{``}row-vector\textsf{''}
form of disjunctive types, i.e., for the computation $x\triangleright f$
(where $x$ is a row vector representing a value of a disjunctive
type). The result of computing $x\triangleright f$ is again a row
vector, which we can pipe into another matrix $g$ as $x\triangleright f\triangleright g$.
The standard rules of matrix multiplication make it associative; so,
the result of $x\triangleright f\triangleright g$ is the same as
the result of piping $x$ into the matrix product of $f$ and $g$.
Therefore, the matrix product of $f$ and $g$ must yield the function
$f\bef g$.

A \textsf{``}non-disjunctive\textsf{''} function (i.e., one not taking or returning
disjunctive types) may be written as a $1\times1$ matrix, so its
composition with disjunctive functions can be computed via the same
rules. 
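The composed function can be checked by running Scala code (the definitions
of \lstinline!compute! and \lstinline!get! are repeated here so that
the snippet is self-contained):
\begin{lstlisting}
val compute: Option[Int] => Option[Int] = {
  case None    => Some(100)
  case Some(x) => Some(x / 2)
}
def get[A]: Option[A] => A = { case Some(x) => x }

val composed: Option[Int] => Int = compute andThen get[Int]
// composed(None) == 100 and composed(Some(64)) == 32,
// matching the matrix product computed above.
\end{lstlisting}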

\subsection{Functor and contrafunctor liftings}

Functions and function compositions lifted to a functor (or to a contrafunctor)
are used in derivations so often that we need shorter notation than
$x\triangleright\text{fmap}_{F}(f)$ or its Scala analog \lstinline!x.map(f)!.
This book uses the notation $x\triangleright f^{\uparrow F}$ for
functors $F$ and $x\triangleright f^{\downarrow C}$ for contrafunctors
$C$. This notation graphically emphasizes the function $f$ being
lifted and also shows the name of the relevant functor or contrafunctor.
Compositions of lifted functions are visually easy to recognize, for
example:
\[
f^{\downarrow H}\bef g^{\downarrow H}=\left(g\bef f\right)^{\downarrow H}\quad,\quad\quad f^{\uparrow L}\bef g^{\uparrow L}\bef h^{\uparrow L}=\left(f\bef g\bef h\right)^{\uparrow L}\quad.
\]
In these formulas, the labels $^{\downarrow H}$ and $^{\uparrow L}$
clearly indicate the possibility of pulling several functions under
a single lifting. We can also split a lifted composition into a composition
of liftings. 
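Both laws can be spot-checked in Scala (a pointwise check, not a proof):
\lstinline!map! implements the lifting to the \lstinline!List! functor,
and for a simple contrafunctor such as $H^{A}\triangleq A\rightarrow\text{Int}$
the lifting may be sketched by a helper \lstinline!contramap! (our
own name for illustration):
\begin{lstlisting}
val f: Int => Int = _ + 1
val g: Int => Int = _ * 2

// Functor lifting: xs.map(f).map(g) == xs.map(f andThen g).
val xs = List(1, 2, 3)
val lifted1 = xs.map(f).map(g)
val lifted2 = xs.map(f andThen g)

// Contrafunctor H[A] = A => Int; lifting reverses composition order.
type H[A] = A => Int
def contramap[A, B](f: B => A): H[A] => H[B] = h => f andThen h
val h: H[Int] = x => x * x
val c1 = contramap(g)(contramap(f)(h)) // Apply f's lifting, then g's.
val c2 = contramap(g andThen f)(h)     // The lifting of (g before f).
// c1(3) == c2(3) == 49.
\end{lstlisting}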

The lifting notation helps us recognize that these steps are possible
just by looking at the formula. Of course, we still need to find a
useful sequence of steps in a given derivation or proof.

\section{Derivation techniques}

\subsection{Auxiliary functions for handling products}

The functions denoted by $\pi_{1}$, $\pi_{2}$, $\Delta$, and $\boxtimes$
proved to be helpful in derivations that involve tuples. (However,
the last two functions are unlikely to be frequently used in practical
programming.) 

We already saw the definition and the implementation of the functions
$\pi_{1}$ and $\pi_{2}$. 

The \textsf{``}diagonal\textsf{''} function $\Delta$ is a right inverse for $\pi_{1}$
and $\pi_{2}$:

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
def delta[A]: A => (A, A) = { x => (x, x) }
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-1.15\baselineskip}
\[
\Delta^{A}:A\rightarrow A\times A\quad,\quad\quad\Delta\triangleq a^{:A}\rightarrow a\times a\quad.
\]
\vspace{-1.15\baselineskip}

It is clear that extracting either part of the pair \lstinline!delta(x) == (x, x)!
will give back the original \lstinline!x!. This property can be written
as an equation or a \textsf{``}law\textsf{''},

\begin{wrapfigure}{l}{0.5\columnwidth}%
\vspace{-0.65\baselineskip}
\begin{lstlisting}
delta(x)._1 == x
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-0.35\baselineskip}
\[
\pi_{1}(\Delta(x))=x\quad.
\]
\vspace{-0.85\baselineskip}

We can transform this law into a point-free equation by first using
the pipe notation,
\[
\pi_{1}(\Delta(x))=(\Delta(x))\triangleright\pi_{1}=x\triangleright\Delta\triangleright\pi_{1}=x\triangleright\Delta\bef\pi_{1}\quad,
\]
and then bringing the equation $x\triangleright\Delta\bef\pi_{1}=x=x\triangleright\text{id}$
to a point-free form: 
\begin{align}
{\color{greenunder}\Delta\text{ is a right inverse of }\pi_{1}:}\quad & \Delta\bef\pi_{1}=\text{id}\quad.\label{eq:pair-identity-law-left}
\end{align}
The same property holds for $\pi_{2}$.
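The point-free law can be spot-checked on sample values (a pointwise
check, not a proof):
\begin{lstlisting}
def delta[A]: A => (A, A) = x => (x, x)
def pi_1[A, B]: ((A, B)) => A = { case (a, b) => a }
def pi_2[A, B]: ((A, B)) => B = { case (a, b) => b }

// delta andThen pi_1 and delta andThen pi_2 act as the identity:
val law1 = (delta[Int] andThen pi_1[Int, Int])(42) == 42
val law2 = (delta[Int] andThen pi_2[Int, Int])(42) == 42
\end{lstlisting}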

The \index{pair product of functions}\textbf{pair product} operation
$f\boxtimes g$ is defined for any functions $f^{:A\rightarrow P}$
and $g^{:B\rightarrow Q}$ by
\begin{lstlisting}
def pairProduct[A,B,P,Q](f: A => P, g: B => Q): ((A, B)) => (P, Q) = {
    case (a, b) => (f(a), g(b))
}
\end{lstlisting}
\[
f\boxtimes g:A\times B\rightarrow P\times Q\quad,\quad\quad f\boxtimes g\triangleq a\times b\rightarrow f(a)\times g(b)\quad.
\]
Two properties of this operation follow directly from its definition:\index{composition law!of pair product}\index{identity laws!of pair product}
\begin{align}
{\color{greenunder}\text{composition law}:}\quad & (f^{:A\rightarrow P}\boxtimes g^{:B\rightarrow Q})\bef(m^{:P\rightarrow X}\boxtimes n^{:Q\rightarrow Y})=(f\bef m)\boxtimes(g\bef n)\quad,\label{eq:pair-product-composition-law}\\
{\color{greenunder}\text{left and right projection laws}:}\quad & (f^{:A\rightarrow P}\boxtimes g^{:B\rightarrow Q})\bef\pi_{1}=\pi_{1}\bef f\quad,\quad\quad(f\boxtimes g)\bef\pi_{2}=\pi_{2}\bef g\quad,\label{eq:pair-product-projection-laws}\\
{\color{greenunder}\text{identity law}:}\quad & \text{id}^{A}\boxtimes\text{id}^{B}=\text{id}^{A\times B}\quad.\nonumber 
\end{align}
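These laws can be spot-checked on sample values (a pointwise check,
not a proof; the functions \lstinline!f!, \lstinline!g!, \lstinline!m!,
\lstinline!n! are arbitrary examples):
\begin{lstlisting}
def pairProduct[A, B, P, Q](f: A => P, g: B => Q): ((A, B)) => (P, Q) = {
  case (a, b) => (f(a), g(b))
}
def pi_1[A, B]: ((A, B)) => A = { case (a, b) => a }

val f: Int => Int = _ + 1
val g: Int => Int = _ * 2
val m: Int => Int = _ - 3
val n: Int => Int = x => x * x
val p = (10, 20)

// Composition law: (f x g) andThen (m x n) == (f andThen m) x (g andThen n).
val comp1 = (pairProduct(f, g) andThen pairProduct(m, n))(p) // (8, 1600)
val comp2 = pairProduct(f andThen m, g andThen n)(p)         // (8, 1600)

// Left projection law: (f x g) andThen pi_1 == pi_1 andThen f.
val proj1 = (pairProduct(f, g) andThen pi_1[Int, Int])(p) // 11
val proj2 = (pi_1[Int, Int] andThen f)(p)                 // 11
\end{lstlisting}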
An equivalent way of defining $f\boxtimes g$ is via this Scala code,
\begin{lstlisting}
def pairProduct[A,B,P,Q](f: A => P, g: B => Q)(p: (A, B)): (P, Q)  =  (f(p._1), g(p._2))
\end{lstlisting}
\[
f\boxtimes g=p^{:A\times B}\rightarrow f(p\triangleright\pi_{1})\times g(p\triangleright\pi_{2})=p\rightarrow(p\triangleright\pi_{1}\triangleright f)\times(p\triangleright\pi_{2}\triangleright g)\quad.
\]

The pair product notation can shorten calculations with functors that
involve product types (tuples). For example, the lifting for the functor
$F^{A}\triangleq A\times A\times Z$ can be shortened to
\[
f^{\uparrow F}\triangleq\big(a_{1}^{:A}\times a_{2}^{:A}\times z^{:Z}\rightarrow f(a_{1})\times f(a_{2})\times z\big)=f\boxtimes f\boxtimes\text{id}\quad.
\]
The last formula is often more convenient in symbolic derivations. 
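In Scala, this functor and its lifting might be sketched as follows
(with, say, $Z=\text{String}$; the name \lstinline!fmapF! is ours):
\begin{lstlisting}
type Z = String
type F[A] = (A, A, Z) // The functor F[A] = A x A x Z.

// The lifting applies f to both values of type A and keeps
// the Z part unchanged, i.e., it acts as f x f x id.
def fmapF[A, B](f: A => B): F[A] => F[B] = {
  case (a1, a2, z) => (f(a1), f(a2), z)
}
\end{lstlisting}
For example, \lstinline!fmapF((x: Int) => x + 1)((1, 2, "z"))! evaluates
to \lstinline!(2, 3, "z")!.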

\subsection{Deriving laws for functions with known implementations}

The task is to prove a given law (an equation) for a function whose
code is known. An example of such an equation is the \index{naturality law!of the function Delta@of the function $\Delta$}naturality
law of $\Delta$, which states that for any function $f^{:A\rightarrow B}$
we have
\begin{equation}
f\bef\Delta=\Delta\bef(f\boxtimes f)\quad.\label{eq:naturality-law-of-Delta}
\end{equation}

Laws for fully parametric functions are often written without type
annotations. However, it is important to check that types match. So
we begin by finding suitable type parameters for Eq.~(\ref{eq:naturality-law-of-Delta}).

Since it is given that $f$ has type $A\rightarrow B$, the function
$\Delta$ in the left-hand side of Eq.~(\ref{eq:naturality-law-of-Delta})
must take arguments of type $B$ and thus return a value of type
$B\times B$. We see that the left-hand side must be a function of
type $A\rightarrow B\times B$. So, the $\Delta$ in the right-hand
side must take arguments of type $A$. It then returns a value of
type $A\times A$, which is consumed by $f\boxtimes f$. In this way,
we see that all types match. We can put the resulting types into a
type diagram and write the law with type annotations:

\begin{wrapfigure}{L}{0.25\columnwidth}%
\vspace{-2\baselineskip}
\[
\xymatrix{\xyScaleY{1.6pc}\xyScaleX{4.0pc}A\ar[d]\sb(0.45){f}\ar[r]\sb(0.45){\Delta^{A}} & A\times A\ar[d]\sp(0.45){f\boxtimes f}\\
B\ar[r]\sp(0.45){\Delta^{B}} & B\times B
}
\]
\vspace{-0.1\baselineskip}
\end{wrapfigure}%

~\vspace{-0.3\baselineskip}
\[
f^{:A\rightarrow B}\bef\Delta^{:B\rightarrow B\times B}=\Delta^{:A\rightarrow A\times A}\bef(f\boxtimes f)\quad.
\]

\noindent To prove the law, we need to use the known code of the function
$\Delta$. We substitute that code into both sides of the law, hoping
to transform the two expressions until they become the same.

We will now perform this computation in the Scala syntax and in the
code notation.

\begin{wrapfigure}{L}{0.54\columnwidth}%
\vspace{-0.6\baselineskip}
\begin{lstlisting}
x.pipe(f andThen delta)
  == (f(x)).pipe { a => (a, a) }
  == (f(x), f(x)) // Left-hand side.
x.pipe(delta andThen { case (a, b) => (f(a), f(b)) })
  == (x, x).pipe { case (a, b) => (f(a), f(b)) }
  == (f(x), f(x)) // Right-hand side.
\end{lstlisting}
\vspace{-3\baselineskip}
\end{wrapfigure}%

~\vspace{-1.4\baselineskip}
\begin{align*}
 & x\triangleright f\bef\Delta=f(x)\,\gunderline{\triangleright\,(b}\rightarrow b\times b)\\
 & \quad=f(x)\times f(x)\quad.\\
 & \gunderline{x\triangleright\Delta}\bef(f\boxtimes f)\\
 & \quad=(x\times x)\gunderline{\,\triangleright\,(a\times b}\rightarrow f(a)\times f(b))\\
 & \quad=f(x)\times f(x)\quad.
\end{align*}
\vspace{-1.5\baselineskip}

At each step of the derivation, there is typically only one symbolic
transformation we can perform. In the example above, each step either
substitutes the definition of a known function or applies some function
to an argument and computes the result. To help remember what was
done, we use a green underline as a hint indicating the sub-expression
to be modified in that step. 

We will prefer to derive laws in the code notation rather than in
Scala syntax. The code notation covers all fully parametric code,
i.e., all programs that use only the nine basic code constructions.
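The symbolic derivation above is already a complete proof; still, a quick (non-exhaustive) runtime check of the naturality law of $\Delta$ can be written in Scala, using a sample function $f$:
\begin{lstlisting}
def delta[A]: A => (A, A) = a => (a, a)
def pairProduct[A, B, P, Q](f: A => P, g: B => Q): ((A, B)) => (P, Q) = {
  case (a, b) => (f(a), g(b))
}
val f: Int => String = _.toString
val lhs: Int => (String, String) = f andThen delta[String]              // left-hand side
val rhs: Int => (String, String) = delta[Int] andThen pairProduct(f, f) // right-hand side
// Both sides give the same result, e.g. lhs(10) == rhs(10) == ("10", "10").
\end{lstlisting}
Such checks do not replace a proof (they test only some sample values), but they help catch mistakes in derivations.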

\subsection{Working with disjunctive types in matrix notation\label{subsec:Working-with-disjunctive-functions}}

The matrix notation provides a general way of performing symbolic
derivations with disjunctive types in point-free style (the matrix
elements are \emph{functions}). Writing all code matrices with type
annotations makes it easier to translate between matrices and Scala
code.

In many cases, the rules of matrix multiplication and function composition
are sufficient for calculating with disjunctive types. For example,
consider the following functions \lstinline!swap[A]! and \lstinline!merge[A]!:

\begin{wrapfigure}{L}{0.46\columnwidth}%
\vspace{-0.85\baselineskip}
\begin{lstlisting}
def swap[A]: Either[A, A] => Either[A, A] = {
    case Left(a)    => Right(a)
    case Right(a)   => Left(a)
}
def merge[A]: Either[A, A] => A = {
    case Left(a)    => a
    case Right(a)   => a
}
\end{lstlisting}

\vspace{-1\baselineskip}
\end{wrapfigure}%

~\vspace{-1.2\baselineskip}
\[
\text{swap}^{A}\triangleq\,\begin{array}{|c||cc|}
 & A & A\\
\hline A & \bbnum 0 & \text{id}\\
A & \text{id} & \bbnum 0
\end{array}\quad,~\quad\text{merge}^{A}\triangleq\,\begin{array}{|c||c|}
 & A\\
\hline A & \text{id}\\
A & \text{id}
\end{array}\quad.
\]
\vspace{-0.4\baselineskip}

We can quickly prove by matrix composition that $\text{swap}\bef\text{swap}=\text{id}$
and $\text{swap}\bef\text{merge}=\text{merge}$:\vspace{0\baselineskip}
\begin{align*}
 & \text{swap}\bef\text{swap}=\,\begin{array}{||cc|}
\bbnum 0 & \text{id}\\
\text{id} & \bbnum 0
\end{array}\,\bef\,\begin{array}{||cc|}
\bbnum 0 & \text{id}\\
\text{id} & \bbnum 0
\end{array}\,=\,\begin{array}{||cc|}
\text{id}\bef\text{id} & \bbnum 0\\
\bbnum 0 & \text{id}\bef\text{id}
\end{array}\,=\,\begin{array}{||cc|}
\text{id} & \bbnum 0\\
\bbnum 0 & \text{id}
\end{array}\,=\text{id}\quad,\\
 & \text{swap}\bef\text{merge}=\,\begin{array}{||cc|}
\bbnum 0 & \text{id}\\
\text{id} & \bbnum 0
\end{array}\,\bef\,\begin{array}{||c|}
\text{id}\\
\text{id}
\end{array}\,=\,\begin{array}{||c|}
\text{id}\bef\text{id}\\
\text{id}\bef\text{id}
\end{array}\,=\,\begin{array}{||c|}
\text{id}\\
\text{id}
\end{array}\,=\text{merge}\quad.
\end{align*}
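These two matrix proofs can be spot-checked by running the Scala code of \lstinline!swap! and \lstinline!merge! on both cases of \lstinline!Either[A, A]!:
\begin{lstlisting}
// Definitions repeated from above:
def swap[A]: Either[A, A] => Either[A, A] = { case Left(a) => Right(a); case Right(a) => Left(a) }
def merge[A]: Either[A, A] => A = { case Left(a) => a; case Right(a) => a }
// Check both laws on each of the two cases of Either[Int, Int]:
Seq[Either[Int, Int]](Left(1), Right(2)).foreach { x =>
  assert((swap[Int] andThen swap[Int])(x) == x)              // swap andThen swap == id
  assert((swap[Int] andThen merge[Int])(x) == merge[Int](x)) // swap andThen merge == merge
}
\end{lstlisting}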

The identity function for any disjunctive type, e.g., $A+B+C$, is
the \textsf{``}identity diagonal\textsf{''} matrix:
\[
\text{id}^{:A+B+C\rightarrow A+B+C}=\,\begin{array}{|c||ccc|}
 & A & B & C\\
\hline A & \text{id} & \bbnum 0 & \bbnum 0\\
B & \bbnum 0 & \text{id} & \bbnum 0\\
C & \bbnum 0 & \bbnum 0 & \text{id}
\end{array}\quad.
\]

As another example, consider the function \lstinline!fmap! for the
functor $E^{A}\triangleq A+A$:

\begin{wrapfigure}{L}{0.6\columnwidth}%
\vspace{-0.8\baselineskip}
\begin{lstlisting}
def fmap[A, B](f: A => B): Either[A, A] => Either[B, B] = {
    case Left(a)    => Left(f(a))
    case Right(a)   => Right(f(a))
}
\end{lstlisting}

\vspace{-1.65\baselineskip}
\end{wrapfigure}%

~\vspace{-1.45\baselineskip}
\[
(f^{:A\rightarrow B})^{\uparrow E}\triangleq\,\begin{array}{|c||cc|}
 & B & B\\
\hline A & f & \bbnum 0\\
A & \bbnum 0 & f
\end{array}\quad.
\]
\vspace{-0.7\baselineskip}

With this definition, we can formulate a law of \lstinline!merge!,
called the \textsf{``}naturality law\textsf{''}:

\begin{wrapfigure}{L}{0.3\columnwidth}%
\vspace{-2\baselineskip}
\[
\xymatrix{\xyScaleY{1.6pc}\xyScaleX{4.0pc}A+A\ar[d]\sb(0.45){f^{\uparrow E}}\ar[r]\sb(0.55){\text{merge}^{A}} & A\ar[d]\sp(0.45){f}\\
B+B\ar[r]\sp(0.55){\text{merge}^{B}} & B
}
\]
\vspace{-0.1\baselineskip}
\end{wrapfigure}%

~\vspace{-0.3\baselineskip}
\[
(f^{:A\rightarrow B})^{\uparrow E}\bef\text{merge}^{B}=\text{merge}^{A}\bef f^{:A\rightarrow B}\quad.
\]
Proving this law is a simple matrix calculation:
\begin{align*}
{\color{greenunder}\text{left-hand side}:}\quad & f^{\uparrow E}\bef\text{merge}=\,\begin{array}{||cc|}
f & \bbnum 0\\
\bbnum 0 & f
\end{array}\,\bef\,\begin{array}{||c|}
\text{id}\\
\text{id}
\end{array}\,=\,\begin{array}{||c|}
f\bef\text{id}\\
f\bef\text{id}
\end{array}\,=\,\begin{array}{||c|}
f\\
f
\end{array}\quad,\\
{\color{greenunder}\text{right-hand side}:}\quad & \text{merge}\bef f=\,\begin{array}{||c|}
\text{id}\\
\text{id}
\end{array}\,\bef\gunderline f=\,\begin{array}{||c|}
\text{id}\\
\text{id}
\end{array}\,\bef\,\begin{array}{||c|}
f\end{array}\,=\,\begin{array}{||c|}
\text{id}\bef f\\
\text{id}\bef f
\end{array}\,=\,\begin{array}{||c|}
f\\
f
\end{array}\quad.
\end{align*}
In the last line we replaced $f$ by a $1\times1$ matrix, $\,\begin{array}{||c|}
f\end{array}$~, in order to apply matrix composition.
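The naturality law of \lstinline!merge! can also be spot-checked in Scala on both cases of the disjunction, with a sample function $f$:
\begin{lstlisting}
// Definitions repeated from above:
def merge[A]: Either[A, A] => A = { case Left(a) => a; case Right(a) => a }
def fmap[A, B](f: A => B): Either[A, A] => Either[B, B] = {
  case Left(a)  => Left(f(a))
  case Right(a) => Right(f(a))
}
val f: Int => String = _.toString
// Naturality law: fmap(f) andThen merge == merge andThen f.
Seq[Either[Int, Int]](Left(1), Right(2)).foreach { x =>
  assert((fmap(f) andThen merge[String])(x) == (merge[Int] andThen f)(x))
}
\end{lstlisting}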

Matrix rows and columns can be split or merged when necessary to accommodate
various disjunctive types. As an example, let us verify the \textsf{``}associativity
law\textsf{''} of \lstinline!merge!,

\begin{wrapfigure}{L}{0.3\columnwidth}%
\vspace{-2\baselineskip}
\[
\xymatrix{\xyScaleY{1.6pc}\xyScaleX{4.0pc}E^{A+A}\ar[d]\sp(0.45){\text{merge}^{\uparrow E}}\ar[r]\sp(0.55){\text{merge}^{A+A}} & A+A\ar[d]\sb(0.5){\text{merge}^{A}}\\
E^{A}\ar[r]\sb(0.55){\text{merge}^{A}} & A
}
\]
\vspace{-0.1\baselineskip}
\end{wrapfigure}%

~\vspace{-0.3\baselineskip}
\[
(\text{merge}^{A})^{\uparrow E}\bef\text{merge}^{A}=\text{merge}^{A+A}\bef\text{merge}^{A}\quad.
\]
Both sides of this law are functions of type $A+A+A+A\rightarrow A$.
To transform the left-hand side, we use the definition of $^{\uparrow E}$
and write
\[
\text{merge}^{\uparrow E}\bef\text{merge}=\,\begin{array}{|c||cc|}
 & A & A\\
\hline A+A & \text{merge} & \bbnum 0\\
A+A & \bbnum 0 & \text{merge}
\end{array}\,\bef\,\begin{array}{|c||c|}
 & A\\
\hline A & \text{id}\\
A & \text{id}
\end{array}\,=\,\begin{array}{|c||c|}
 & A\\
\hline A+A & \text{merge}\\
A+A & \text{merge}
\end{array}\quad.
\]
However, we have not yet substituted the definition of \lstinline!merge!
into the matrix. To do that, we add more rows to the matrix in order
to accommodate the disjunctive type $(A+A)+(A+A)$:
\[
\text{merge}^{\uparrow E}\bef\text{merge}=\,\begin{array}{|c||c|}
 & A\\
\hline A+A & \text{merge}\\
A+A & \text{merge}
\end{array}\,=\,\begin{array}{|c||c|}
 & A\\
\hline A & \text{id}\\
A & \text{id}\\
A & \text{id}\\
A & \text{id}
\end{array}\quad.
\]
Now we compute the right-hand side of the law by substituting the
code of \lstinline!merge!:
\[
\text{merge}^{A+A}\bef\text{merge}^{A}=\,\begin{array}{|c||c|}
 & A+A\\
\hline A+A & \text{id}\\
A+A & \text{id}
\end{array}\,\bef\,\begin{array}{|c||c|}
 & A\\
\hline A & \text{id}\\
A & \text{id}
\end{array}\quad.
\]
We cannot proceed with matrix composition because the dimensions of
the matrices do not match. To compute further, we need to expand the
rows and the columns of the first matrix:
\[
\begin{array}{|c||c|}
 & A+A\\
\hline A+A & \text{id}\\
A+A & \text{id}
\end{array}\,\bef\,\begin{array}{|c||c|}
 & A\\
\hline A & \text{id}\\
A & \text{id}
\end{array}\,=\begin{array}{|c||cc|}
 & A & A\\
\hline A & \text{id} & \bbnum 0\\
A & \bbnum 0 & \text{id}\\
A & \text{id} & \bbnum 0\\
A & \bbnum 0 & \text{id}
\end{array}\,\bef\,\begin{array}{|c||c|}
 & A\\
\hline A & \text{id}\\
A & \text{id}
\end{array}\,=\,\begin{array}{|c||c|}
 & A\\
\hline A & \text{id}\\
A & \text{id}\\
A & \text{id}\\
A & \text{id}
\end{array}\quad.
\]
This proves the law (and also helps visualize how the transformations
work with various types).
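The associativity law can be spot-checked in Scala on all four cases of the type $(A+A)+(A+A)$:
\begin{lstlisting}
// Definitions repeated from earlier in this section:
def merge[A]: Either[A, A] => A = { case Left(a) => a; case Right(a) => a }
def fmap[A, B](f: A => B): Either[A, A] => Either[B, B] = {
  case Left(a)  => Left(f(a))
  case Right(a) => Right(f(a))
}
// Associativity law: fmap(merge) andThen merge == merge andThen merge.
val samples = Seq[Either[Either[Int, Int], Either[Int, Int]]](
  Left(Left(1)), Left(Right(2)), Right(Left(3)), Right(Right(4)))
samples.foreach { x =>
  assert((fmap(merge[Int]) andThen merge[Int])(x) ==
    (merge[Either[Int, Int]] andThen merge[Int])(x))
}
\end{lstlisting}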

In some cases, we cannot fully split the rows or the columns of a
matrix. For instance, if we are calculating with an arbitrary function
$f^{:\bbnum 1+A\rightarrow\bbnum 1+B}$, we cannot write this function
as a $2\times2$ matrix because we do not know which parts
of the disjunction will be returned (the code of the function $f$
is arbitrary and unknown). At most, we could split the \emph{rows}
by writing the function $f$ as a matrix of two arbitrary functions $g^{:\bbnum 1\rightarrow\bbnum 1+B}$
and $h^{:A\rightarrow\bbnum 1+B}$:
\[
f=\,\begin{array}{|c||c|}
 & \bbnum 1+B\\
\hline \bbnum 1 & g\\
A & h
\end{array}\quad.
\]
The single column of this matrix remains unsplit. Either that column
will remain unsplit throughout the derivation, or additional information
about $f$, $g$, or $h$ will allow us to split the column.

Finally, there are two tricks that complement the matrix intuition
and may sometimes simplify a disjunctive function.\footnote{These tricks are adapted from Section~2.8 of the book \textsf{``}Program
design by calculation\textsf{''} (draft version of October 2019), see \texttt{\href{http://www4.di.uminho.pt/~jno/ps/pdbc.pdf}{http://www4.di.uminho.pt/$\sim$jno/ps/pdbc.pdf}}}

\paragraph{Ignored arguments}

If all rows of the disjunctive function ignore their arguments and
always return the same results, we may collapse all rows into one,
as shown in this example:

\begin{wrapfigure}{L}{0.5\columnwidth}%
\vspace{-0.2\baselineskip}
\begin{lstlisting}
def same[A]: Either[A, Option[A]] => Option[A] = {
    case Left(a)          => None
    case Right(None)      => None
    case Right(Some(a))   => None
}
\end{lstlisting}
\vspace{-3\baselineskip}
\end{wrapfigure}%

~\vspace{-1.4\baselineskip}
\begin{align*}
 & \text{same}^{:A+\bbnum 1+A\rightarrow\bbnum 1+A}=\,\begin{array}{|c||cc|}
 & \bbnum 1 & A\\
\hline A & \_\rightarrow1 & \bbnum 0\\
\bbnum 1 & \_\rightarrow1 & \bbnum 0\\
A & \_\rightarrow1 & \bbnum 0
\end{array}\\
 & =\,\begin{array}{|c||cc|}
 & \bbnum 1 & A\\
\hline A+\bbnum 1+A & \_\rightarrow1 & \bbnum 0
\end{array}\quad.
\end{align*}
\vspace{-1\baselineskip}

A more general formula for arbitrary functions $f^{:X\rightarrow C}$
is
\[
x^{:X}\rightarrow p^{:A+B}\triangleright\,\begin{array}{|c||c|}
 & C\\
\hline A & \_\rightarrow f(x)\\
B & \_\rightarrow f(x)
\end{array}\,=x^{:X}\rightarrow f(x)=f\quad.
\]
In this case, we can completely collapse the matrix, getting an ordinary
(non-disjunctive) function.
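Since every row of \lstinline!same! ignores its argument and returns the same constant, the function is extensionally equal to the constant function \lstinline!_ => None!, which a quick check confirms:
\begin{lstlisting}
// Definition repeated from above:
def same[A]: Either[A, Option[A]] => Option[A] = {
    case Left(a)          => None
    case Right(None)      => None
    case Right(Some(a))   => None
}
// All three cases of the disjunctive type A + 1 + A give the same result:
Seq[Either[Int, Option[Int]]](Left(1), Right(None), Right(Some(2))).foreach { x =>
  assert(same[Int](x) == None)
}
\end{lstlisting}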

\paragraph{Simplification of diagonal pair products}

Consider the pair product of two disjunctive functions such as $f^{:A+B\rightarrow R}$
and $g^{:P+Q\rightarrow S}$. Computing $f\boxtimes g$ in the matrix
notation requires, in general, splitting the rows and the columns of
the matrices because the type of $f\boxtimes g$ is 
\begin{align*}
f\boxtimes g & :(A+B)\times(P+Q)\rightarrow R\times S\\
 & \cong A\times P+A\times Q+B\times P+B\times Q\rightarrow R\times S\quad.
\end{align*}
So, the pair product of two $2\times1$ matrices must be written \emph{in
general} as a $4\times1$ matrix:
\[
\text{for any }f\triangleq\,\begin{array}{|c||c|}
 & R\\
\hline A & f_{1}\\
B & f_{2}
\end{array}\quad\text{and}\quad g\triangleq\,\begin{array}{|c||c|}
 & S\\
\hline P & g_{1}\\
Q & g_{2}
\end{array}\quad,\quad\text{we have }\quad f\boxtimes g=\,\begin{array}{|c||c|}
 & R\times S\\
\hline A\times P & f_{1}\boxtimes g_{1}\\
A\times Q & f_{1}\boxtimes g_{2}\\
B\times P & f_{2}\boxtimes g_{1}\\
B\times Q & f_{2}\boxtimes g_{2}
\end{array}\quad.
\]

A simplification trick exists when the pair product is composed with
the diagonal function $\Delta$:
\[
\Delta\bef(f\boxtimes g)=\Delta^{:A+B\rightarrow(A+B)\times(A+B)}\bef(f^{:A+B\rightarrow R}\boxtimes g^{:A+B\rightarrow S})=p\rightarrow f(p)\times g(p)\quad.
\]
This \textsf{``}diagonal pair product\textsf{''} is well-typed only if $f$ and $g$
have the same argument types (so, $A=P$ and $B=Q$). It turns out
that the function $\Delta\bef(f\boxtimes g)$ can be written as a
$2\times1$ matrix, i.e., we do not need to split the rows:
\[
\text{for any }f\triangleq\,\begin{array}{|c||c|}
 & R\\
\hline A & f_{1}\\
B & f_{2}
\end{array}\quad\text{and}\quad g\triangleq\,\begin{array}{|c||c|}
 & S\\
\hline A & g_{1}\\
B & g_{2}
\end{array}\quad,\quad\text{we have }\quad\Delta\bef(f\boxtimes g)=\,\begin{array}{|c||c|}
 & R\times S\\
\hline A & \Delta\bef(f_{1}\boxtimes g_{1})\\
B & \Delta\bef(f_{2}\boxtimes g_{2})
\end{array}\quad.
\]
The rules of matrix multiplication do not help in deriving this law.
So, we use a more basic approach: show that both sides are equal when
applied to arbitrary values $p$ of type $A+B$,
\[
p^{:A+B}\triangleright\Delta\bef(f\boxtimes g)=f(p)\times g(p)\overset{?}{=}p\triangleright\,\begin{array}{|c||c|}
 & R\times S\\
\hline A & \Delta\bef(f_{1}\boxtimes g_{1})\\
B & \Delta\bef(f_{2}\boxtimes g_{2})
\end{array}\quad.
\]
The type $A+B$ has two cases. Applying the left-hand side to $p\triangleq a^{:A}+\bbnum 0^{:B}$,
we get
\begin{align*}
 & f(p)\times g(p)=\big((a^{:A}+\bbnum 0^{:B})\triangleright f\big)\times\big((a^{:A}+\bbnum 0^{:B})\triangleright g\big)\\
 & \quad=\big(\,\begin{array}{|cc|}
a & \bbnum 0\end{array}\,\triangleright\,\begin{array}{||c|}
f_{1}\\
f_{2}
\end{array}\,\big)\times\big(\,\begin{array}{|cc|}
a & \bbnum 0\end{array}\,\triangleright\,\begin{array}{||c|}
g_{1}\\
g_{2}
\end{array}\,\big)=\big(a\triangleright f_{1}\big)\times\big(a\triangleright g_{1}\big)=f_{1}(a)\times g_{1}(a)\quad.
\end{align*}
Applying the right-hand side to the same $p$, we find
\begin{align*}
{\color{greenunder}\text{expect to equal }f_{1}(a)\times g_{1}(a):}\quad & \gunderline p\triangleright\,\begin{array}{||c|}
\Delta\bef(f_{1}\boxtimes g_{1})\\
\Delta\bef(f_{2}\boxtimes g_{2})
\end{array}\,=\,\begin{array}{|cc|}
a & \bbnum 0\end{array}\,\triangleright\,\begin{array}{||c|}
\Delta\bef(f_{1}\boxtimes g_{1})\\
\Delta\bef(f_{2}\boxtimes g_{2})
\end{array}=\gunderline{a\triangleright\Delta}\bef(f_{1}\boxtimes g_{1})\\
{\color{greenunder}\text{definition of }\Delta:}\quad & \quad=(a\times a)\triangleright(f_{1}\boxtimes g_{1})=f_{1}(a)\times g_{1}(a)\quad.
\end{align*}
A similar calculation with $p\triangleq\bbnum 0^{:A}+b^{:B}$ shows
that both sides of the law are equal to $f_{2}(b)\times g_{2}(b)$.
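The simplified $2\times1$ form can be tested in Scala against the direct definition $\Delta\bef(f\boxtimes g)$. This sketch uses sample matrix rows $f_{1}$, $f_{2}$, $g_{1}$, $g_{2}$ chosen for illustration:
\begin{lstlisting}
def delta[A]: A => (A, A) = a => (a, a)
def pairProduct[A, B, P, Q](f: A => P, g: B => Q): ((A, B)) => (P, Q) = {
  case (a, b) => (f(a), g(b))
}
// Sample disjunctive functions built from matrix rows (f1, f2) and (g1, g2):
val f1: Int => String     = _.toString
val f2: Boolean => String = _.toString
val g1: Int => Int        = _ + 1
val g2: Boolean => Int    = b => if (b) 1 else 0
val f: Either[Int, Boolean] => String = { case Left(a) => f1(a); case Right(b) => f2(b) }
val g: Either[Int, Boolean] => Int    = { case Left(a) => g1(a); case Right(b) => g2(b) }
// Direct definition of the diagonal pair product:
val direct = delta[Either[Int, Boolean]] andThen pairProduct(f, g)
// Simplified 2x1 matrix: the row for A uses (f1, g1), the row for B uses (f2, g2).
val simplified: Either[Int, Boolean] => (String, Int) = {
  case Left(a)  => (delta[Int] andThen pairProduct(f1, g1))(a)
  case Right(b) => (delta[Boolean] andThen pairProduct(f2, g2))(b)
}
Seq[Either[Int, Boolean]](Left(3), Right(true)).foreach { x =>
  assert(direct(x) == simplified(x))
}
\end{lstlisting}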

\subsection{Derivations involving unknown functions with laws}

A more challenging task is to derive an equation that uses arbitrary
functions about which we only know that they satisfy certain given
laws. Such derivations usually proceed by trying to transform the
code until the given laws can be applied.

As an example, let us derive the property that $L^{A}\triangleq A\times F^{A}$
is a functor if $F^{\bullet}$ is known to be a functor. We are in
the situation where we only know that the function $\text{fmap}_{F}$
exists and satisfies the functor law, but we do not know the code
of $\text{fmap}_{F}$. Let us discover the derivation step by step.

First, we need to define $\text{fmap}_{L}$. We use the lifting notation
$^{\uparrow F}$ and write, for any $f^{:A\rightarrow B}$,
\begin{lstlisting}
def fmap_L[A, B](f: A => B): ((A, F[A])) => (B, F[B]) = { case (a, p) => (f(a), p.map(f)) }
\end{lstlisting}
\[
f^{\uparrow L}\triangleq a^{:A}\times p^{:F^{A}}\rightarrow f(a)\times(p\triangleright f^{\uparrow F})\quad.
\]
To verify the identity law of $L$:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{id}^{\uparrow L}=a^{:A}\times p^{:F^{A}}\rightarrow\text{id}\,(a)\times(p\triangleright\text{id}^{\uparrow F})=\text{???}
\end{align*}
At this point, the only things we can simplify are the identity functions
applied to arguments. We know that $F$ is a lawful functor; therefore,
$\text{id}^{\uparrow F}=\text{id}$. So we continue the derivation,
omitting types:
\begin{align*}
{\color{greenunder}\text{expect to equal }\text{id}:}\quad & \text{id}^{\uparrow L}=a\times p\rightarrow\gunderline{\text{id}\,(a)}\times(p\triangleright\gunderline{\text{id}^{\uparrow F}})\\
{\color{greenunder}\text{identity law of }F:}\quad & =a\times p\rightarrow a\times(\gunderline{p\triangleright\text{id}})\\
{\color{greenunder}\text{apply function}:}\quad & =a\times p\rightarrow a\times p=\text{id}\quad.
\end{align*}

To verify the composition law of $L$, we assume two arbitrary functions
$f^{:A\rightarrow B}$ and $g^{:B\rightarrow C}$:
\begin{align*}
{\color{greenunder}\text{expect to equal }(f\bef g)^{\uparrow L}:}\quad & f^{\uparrow L}\bef g^{\uparrow L}=\big(a\times p\rightarrow f(a)\times f^{\uparrow F}(p)\big)\bef\big(b\times q\rightarrow g(b)\times g^{\uparrow F}(q)\big)\quad.
\end{align*}
At this point, we pause and try to see how we might proceed. We do
not know anything about $f$ and $g$, so we cannot evaluate $f(a)$
or $f^{\uparrow F}(p)$. We also do not have the code of $^{\uparrow F}$
(i.e., of $\text{fmap}_{F}$). The only information we have about
these functions is that $F$\textsf{'}s composition law holds,
\begin{equation}
f^{\uparrow F}\bef g^{\uparrow F}=(f\bef g)^{\uparrow F}\quad.\label{eq:composition-law-F-derivation1}
\end{equation}
We could use this law only if we somehow bring $f^{\uparrow F}$ and
$g^{\uparrow F}$ together in the formula. The only way forward is
to compute the function composition of the two functions whose code
we \emph{do} have:
\begin{align*}
 & \big(a\times p\rightarrow f(a)\times f^{\uparrow F}(p)\big)\bef\big(b\times q\rightarrow g(b)\times g^{\uparrow F}(q)\big)\\
 & =a\times p\rightarrow g(f(a))\times g^{\uparrow F}(f^{\uparrow F}(p))\quad.
\end{align*}
In order to use the law~(\ref{eq:composition-law-F-derivation1}),
we need to rewrite this code via the composition $f\bef g$. We notice
that the formula contains exactly those function compositions:
\[
g(f(a))\times g^{\uparrow F}(f^{\uparrow F}(p))=(a\triangleright f\bef g)\times(p\triangleright f^{\uparrow F}\bef g^{\uparrow F})\quad.
\]
So, we can now apply the composition law of $F$ and write up the
complete derivation, adding hints:
\begin{align*}
{\color{greenunder}\text{expect to equal }(f\bef g)^{\uparrow L}:}\quad & f^{\uparrow L}\bef g^{\uparrow L}=\big(a\times p\rightarrow f(a)\times f^{\uparrow F}(p)\big)\bef\big(b\times q\rightarrow g(b)\times g^{\uparrow F}(q)\big)\\
{\color{greenunder}\text{compute composition}:}\quad & =a\times p\rightarrow\gunderline{g(f(a))}\times\gunderline{g^{\uparrow F}(f^{\uparrow F}(p))}\\
{\color{greenunder}\triangleright\text{-notation}:}\quad & =a\times p\rightarrow(a\triangleright f\bef g)\times\big(p\triangleright\gunderline{f^{\uparrow F}\bef g^{\uparrow F}}\big)\\
{\color{greenunder}\text{composition law of }F:}\quad & =a\times p\rightarrow(a\triangleright\gunderline{f\bef g})\times\big(p\triangleright(\gunderline{f\bef g})^{\uparrow F}\big)\\
{\color{greenunder}\text{definition of }^{\uparrow L}:}\quad & =(f\bef g)^{\uparrow L}\quad.
\end{align*}
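With $F$ fixed to a concrete functor (say, \lstinline!Option!, whose standard \lstinline!.map! plays the role of $\text{fmap}_{F}$), the composition law of $L$ can also be spot-checked at runtime:
\begin{lstlisting}
// fmap_L specialized to F = Option:
def fmap_L[A, B](f: A => B): ((A, Option[A])) => (B, Option[B]) = {
  case (a, p) => (f(a), p.map(f))
}
val f: Int => Int    = _ + 1
val g: Int => String = _.toString
// Composition law: fmap_L(f andThen g) == fmap_L(f) andThen fmap_L(g).
Seq[(Int, Option[Int])]((1, Some(2)), (3, None)).foreach { x =>
  assert(fmap_L(f andThen g)(x) == (fmap_L(f) andThen fmap_L(g))(x))
}
\end{lstlisting}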

The derivation becomes significantly shorter if we use the pair product
($\boxtimes$) to define $^{\uparrow L}$:
\[
f^{\uparrow L}\triangleq\text{id}\boxtimes f^{\uparrow F}\quad.
\]
For instance, verifying the identity law then looks like this:
\[
\text{id}^{\uparrow L}=\text{id}\boxtimes\text{id}^{\uparrow F}=\text{id}\boxtimes\text{id}=\text{id}\quad.
\]
This technique was used in the proof of Statement~\ref{subsec:functor-Statement-functor-product}.
The cost of having a shorter proof is the need to remember the properties
of the pair product ($\boxtimes$), which is not often used in derivations.

\subsection{Exercises\index{exercises}}

\subsubsection{Exercise \label{subsec:Exercise-reasoning-1-4-1}\ref{subsec:Exercise-reasoning-1-4-1}}

Assume functors $F$, $G$, $K$, $L$ and a natural transformation
$\phi:F^{A}\rightarrow G^{A}$.

\textbf{(a)} Prove that $\phi^{\uparrow K}:K^{F^{A}}\rightarrow K^{G^{A}}$
is also a natural transformation.

\textbf{(b)} Given another natural transformation $\psi:K^{A}\rightarrow L^{A}$,
prove that the pair product of $\phi$ and $\psi$, that is, $\phi\boxtimes\psi:F^{A}\times K^{A}\rightarrow G^{A}\times L^{A}$,
as well as the pair co-product $\phi\boxplus\psi:F^{A}+K^{A}\rightarrow G^{A}+L^{A}$,
are also natural transformations. The \textbf{pair co-product}\index{pair co-product of functions}
of two functions $\phi$ and $\psi$ is defined by
\[
(\phi\boxplus\psi):F^{A}+K^{A}\rightarrow G^{A}+L^{A}\quad,\quad\quad\phi\boxplus\psi\triangleq\begin{array}{|c||cc|}
 & G^{A} & L^{A}\\
\hline F^{A} & \phi & \bbnum 0\\
K^{A} & \bbnum 0 & \psi
\end{array}\quad.
\]


\subsubsection{Exercise \label{subsec:Exercise-reasoning-1-4}\ref{subsec:Exercise-reasoning-1-4}}

Show using matrix calculations that $\text{swap}\bef\text{swap}=\text{id}$,
where \lstinline!swap! is the function defined in Section~\ref{subsec:Working-with-disjunctive-functions}.

\subsubsection{Exercise \label{subsec:Exercise-reasoning-1-6}\ref{subsec:Exercise-reasoning-1-6}}

Now consider a different function \lstinline!swap[A, B]! defined
as

\begin{wrapfigure}{L}{0.63\columnwidth}%
\vspace{-0.9\baselineskip}
\begin{lstlisting}
def swap[A, B]: ((A, B)) => (B, A) = { case (a, b) => (b, a) }
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-1.25\baselineskip}
\[
\text{swap}^{A,B}\triangleq a^{:A}\times b^{:B}\rightarrow b\times a\quad.
\]
\vspace{-0.15\baselineskip}
Show that $\Delta\bef\text{swap}=\Delta$. Write out all types in
this law and draw a type diagram.

\subsubsection{Exercise \label{subsec:Exercise-reasoning-1-1}\ref{subsec:Exercise-reasoning-1-1}}

Given an arbitrary functor $F$, define the functor $L^{A}\triangleq F^{A}\times F^{A}$
and prove, for an arbitrary function $f^{:A\rightarrow B}$, the \textsf{``}lifted
naturality\textsf{''} law
\[
f^{\uparrow F}\bef\Delta=\Delta\bef f^{\uparrow L}\quad.
\]
Write out all types in this law and draw a type diagram.

\subsubsection{Exercise \label{subsec:Exercise-reasoning-1-5}\ref{subsec:Exercise-reasoning-1-5}}

Show that the types $(\bbnum 1+\bbnum 1)\times A$ and $A+A$ are
equivalent. One direction of this equivalence is given by a function
\lstinline!two[A]! with the type signature

\begin{wrapfigure}{L}{0.63\columnwidth}%
\vspace{-0.85\baselineskip}
\begin{lstlisting}
def two[A]: ((Either[Unit, Unit], A)) => Either[A, A] = ???
\end{lstlisting}

\vspace{-0.25\baselineskip}
\end{wrapfigure}%

~\vspace{-1.15\baselineskip}
\[
\text{two}^{A}:(\bbnum 1+\bbnum 1)\times A\rightarrow A+A\quad.
\]
\vspace{-0.35\baselineskip}
Implement that function and prove that it satisfies the \textsf{``}naturality
law\textsf{''}: for any $f^{:A\rightarrow B}$,
\[
(\text{id}\boxtimes f)\bef\text{two}=\text{two}\bef f^{\uparrow E}\quad,
\]
where $E^{A}\triangleq A+A$ is the functor whose lifting $^{\uparrow E}$
was defined in Section~\ref{subsec:Working-with-disjunctive-functions}.
Write out the types in this law and draw a type diagram. 

\subsubsection{Exercise \label{subsec:Exercise-reasoning-1}\ref{subsec:Exercise-reasoning-1}}

Prove that the following laws hold for arbitrary $f^{:A\rightarrow B}$
and $g^{:C\rightarrow D}$:
\begin{align*}
{\color{greenunder}\text{left projection law}:}\quad & (f\boxtimes g)\bef\pi_{1}=\pi_{1}\bef f\quad,\\
{\color{greenunder}\text{right projection law}:}\quad & (f\boxtimes g)\bef\pi_{2}=\pi_{2}\bef g\quad.
\end{align*}


\subsubsection{Exercise \label{subsec:Exercise-reasoning-1-2}\ref{subsec:Exercise-reasoning-1-2}}

Given arbitrary functors $F$ and $G$, define the functor $L^{A}\triangleq F^{A}\times G^{A}$
and prove that for arbitrary $f^{:A\rightarrow B}$,
\[
f^{\uparrow L}\bef\pi_{1}=\pi_{1}\bef f^{\uparrow F}\quad.
\]
Write out the types in this naturality law and draw a type diagram. 

\subsubsection{Exercise \label{subsec:Exercise-reasoning-1-3}\ref{subsec:Exercise-reasoning-1-3}}

Consider the functor $L^{A}$ defined as 
\[
L^{A}\triangleq\text{Int}\times\text{Int}+A\quad.
\]
Implement the functions \lstinline!fmap! and \lstinline!flatten!
(denoted $\text{ftn}_{L}$) and write their code in matrix notation:
\begin{align*}
(f^{:A\rightarrow B})^{\uparrow L} & :\text{Int}\times\text{Int}+A\rightarrow\text{Int}\times\text{Int}+B\quad,\\
\text{ftn}_{L} & :\text{Int}\times\text{Int}+\text{Int}\times\text{Int}+A\rightarrow\text{Int}\times\text{Int}+A\quad.
\end{align*}


\subsubsection{Exercise \label{subsec:Exercise-reasoning-1-3-1}\ref{subsec:Exercise-reasoning-1-3-1}{*}}

Show that \lstinline!flatten! (denoted $\text{ftn}_{L}$) from Exercise~\ref{subsec:Exercise-reasoning-1-3}
satisfies the naturality law: for any $f^{:A\rightarrow B}$, we have
$f^{\uparrow L\uparrow L}\bef\text{ftn}_{L}=\text{ftn}_{L}\bef f^{\uparrow L}$.
