\documentclass[preprint]{sigplanconf}

\input{preamble}

\begin{document}

\maketitle

\begin{abstract}
Strong typing presents the programmer with a trade-off between correctness and
code complexity: more exact types prevent errors but less exact types enable
reuse.
%
Current functional programming practice prefers general types over exact types
in large programs like compilers because of the reuse. Exact typing in these
programs would require numerous similar data types and conversions between
them.
%
We factor out a pattern in such conversions as a reusable Haskell function. We
extend existing generic programming techniques to define it and to use it
without introducing undue obfuscation.
%
Our reusable function eliminates the boilerplate for conversions between the
numerous exact types. It therefore delivers the benefits of exact types and
simulates the reusability of general types with lightweight generic
programming.
%
We demonstrate our function by using it to define a lambda-lifting function
with an exact range type that has no constructor for lambdas.
\end{abstract}

%\category{D.3.3}{Programming Languages}
%                {Language Constructs and Features}
%                [Data types and structures]

%\terms
%  Object-Oriented Programming, Philosophy
\keywords Datatypes, Generic Programming, Type Invariants, Type-Level
Programming

\section{Introduction}

In strongly-typed functional languages, programmers can declare types that
accurately capture the intended values and nothing else, thereby guaranteeing
the absence of many runtime errors. If one type subsumes another, we call the
smaller one the \emph{exact type}. Exact typing is ideal from an assurance
point-of-view, but
%
in practice has a serious disadvantage: it requires many additional,
special-purpose types. Each exact type is necessarily specific and therefore
useful at only a
%
few points in the program. Though the number of these types is not itself
detrimental, multiple functions must be defined for converting between
them. Worse still, these functions currently incur code
%
duplication.  In this paper, we eliminate a major source of this code
duplication by defining a
%very reusable 
function that abstracts a common
pattern in conversions between exact types.


%So much so that programmers
%often intentionally avoid exact typing, especially for data types with
%numerous constructors, as a legitimate engineering tradeoff. 

Because of exact types' disadvantages, the implementers of the Glasgow Haskell
compiler (GHC) occasionally prefer general types in the GHC source over
potentially more exact types. The source
%an archetypical example of a large Haskell program,
includes
many functions that immediately raise fatal run-time errors for some of the domain
types' constructors.
%
%Instead of redefining these functions over exact data
%types, 
The GHC implementers use type synonyms of the inexact types that
document the functions' intended pre- and post-conditions, use
indicative constructor names, and include comments that explain which
constructors are expected as input, output, or both.
%
%While their documentation helps them maintain the intended properties,
%they cannot \emph{enforce} those properties without exact typing. 
Thus code duplication is avoided at the cost of exact typing's
assurance. This is a reasonable engineering choice, because the duplicate code
would be untenable at the real-world scale.
%
%This is a reasonable choice, since tedious coding invites
%inattentive programming and obfuscates essential semantics: 
%Unfortunately, duplicate code is
%untenable at the real-world scale. 

%\newpage

Our approach factors out a major pattern in
the duplicated code, thereby insulating programmers from having to choose
between exact typing and reuse.
The pattern we factor out frequently occurs in the definition of
\emph{property-establishing} functions, those with similar but meaningfully
distinct domain and range types. Such functions are crucial to exact typing:
they guard the downstream use of other functions that rely on the property and
therefore use the property-establishing function's range as their domain. We
work from the premise that the majority of cases in real-world
property-establishing functions are \emph{homomorphic}; that is, they merely
recur structurally in order to map between constructors with compatible fields
and corresponding semantics. The explicit handling of these homomorphic cases
is the pattern we encapsulate. The premise is legitimate
because it characterizes the common practice of defining real-world programs,
such as compilers\todo{cite Sarkar's nanopasses?}, as a sequence of minimal transformations. In this context,  
an exact typing approach would insert exact data types between the
transformations, which would therefore become property-establishing functions.

The homomorphic pattern, when used to define a function with the same domain
%
and range type, is well understood and can be factored out with existing
techniques \cite{compos, multirec}. This approach, however, requires the domain
and range of the homomorphism to be equivalent: it only applies to degenerate
homogeneous homomorphisms, which are better known as \emph{compositional
functions}. It thus cannot be used to define property-establishing functions,
%
because their domain and range are distinct by definition.  Inspired by the
`compos` function defined by \citet{compos}, we enrich this existing approach
for compositional functions with support for the inherent heterogeneity of
homomorphisms.

%We define this pattern as a very reusable
%function named `hcompos`. 

%In introducing our approach to pragmatic exact types, 
Specifically, we make the following contributions.
\begin{itemize}
\item We define a reusable homomorphism, called `hcompos`, by using
  \emph{datatype-generic} programming techniques, and show how to use it to
  eliminate duplication when using exact typing. (\Sref{sec:hcompos-defn})
%
\item 
We demonstrate our use of `hcompos` by defining
lambda-lifting as a property-establishing function that maps from a term data
type with lambdas to a similar term data type with only top-level function
declarations. (\Sref{sec:lambda-lifting})
%In this definition, the `hcompos` function implicitly handles the
%homomorphic cases, which include every term construct unrelated to binding.
%\end{enumerate}

%\noindent Our secondary contributions extend existing generic programming
%techniques in order to enable the generic definition of `hcompos`.

%\begin{enumerate}[resume]
\item We use a \emph{delayed representation} of data types' structure so that
  `hcompos` can be invoked within a function definition without obfuscating
  that definition. (\Sref{sec:yoko})
\item We add \emph{constructor name reflection}, which augments the reflection
  of data types' structure in a straight-forward manner. This extension
  directly enables our novel support for the heterogeneity of
  homomorphisms. (\Sref{sec:yoko})
\end{itemize}

%\noindent Though these secondary contributions show promise for broader
%usefulness, the emphasis
%%
%of the current work is that `hcompos` reduces a major practical cost of exact
%typing.

\pagebreak

\section{Using Exact Types}
\label{sec:motivation}

This section motivates the technical developments required in order to
generically define our new function `hcompos` by demonstrating its utility on a simple
example. The same function will be defined three ways: first, with exact typing
and explicit homomorphic cases; second, with implicit homomorphic cases
but without exact typing; and third, with both exact typing and implicit
homomorphism via our new function `hcompos`.

The `exact_nnf` function declared in \tref{Figure}{fig:arith-nnf} transforms
from the `Exp` data type to the more exact `NNF` data type, thereby
establishing a negation normal form property. However, the distinct domain and
range types of `exact_nnf` prevent its definition from handling the tedious
cases implicitly; note that the `Plus` case must explicitly implement the
homomorphic behavior. Ultimately, the problem is that the Haskell language and
existing generic programming techniques \citep{syb, multirec,
  instant-generics, generic-deriving} are blind to the obvious correspondence
%
between the `Plus` and `PlusN` constructors, and so cannot be used to define a
reusable homomorphism.

On the other hand, less exactly typed functions like `compos_nnf` (also in
\tref{Figure}{fig:arith-nnf}) can leverage the reusable `compos` function for
compositional functions \cite{compos}, because `Plus` is trivially mapped to
itself. Moreover, a function like `compos` is commonly defined anyway since the
bottom-up traversal it implements is so broadly useful \cite{compos}. While the
`compos_nnf` function does actually establish the negation normal form, its range
type does not encode that property. The downstream functions requiring negation
normal form are therefore necessarily partial over the `Bad_NNF` type. Though
workarounds might add an exception continuation to be used for non-negation
normal form cases, this pollutes the types and merely forces every call site to
handle the unsupported cases. Defining the `NNF` type and a function like
`exact_nnf` is the only way to ensure both minimal and exact types. But, due to
its use of `compos`, the definition of `compos_nnf` is significantly more modular
than that of `exact_nnf`. The heterogeneous homomorphism `hcompos` provides both
exact types and modularity.

Having no choice but to define the `Plus` case of `exact_nnf` explicitly might
not itself seem burdensome because `Exp` and `NNF` have so few
constructors. But for real-world programs, there are multiple properties to be
encoded as data types like `NNF`. Lambda-lifting establishes the absence of
lambdas just as `exact_nnf` guarantees only variables are negated. Moreover,
real-world data types can have dozens of constructors, so each
property-establishing function like `exact_nnf` requires dozens of explicit
tedious cases.

The heterogeneous homomorphism `hcompos` defined in this paper is demonstrated
from the user's perspective in the definition of the `best_nnf` function in
\tref{Figure}{fig:arith-nnf-yoko}. The generic definition of `hcompos` relies
on our two generic programming extensions. Our first extension, delayed
representation, makes it possible to implicitly partition data types into
anonymous subsets of constructors without any ad-hoc boilerplate. This lets the
definition of `best_nnf` distinguish the interesting constructors modularly and
clearly. This is exemplified by the intuitive types of the `important` and
`tedious` variables: \emph{the \inlineHaskell{Var} or \inlineHaskell{Neg}
  constructor} and \emph{some other \inlineHaskell{Exp} constructor},
respectively. The second extension, constructor name reflection, makes it
possible to automatically identify corresponding constructors by their names,
so `hcompos` can infer that `Plus` should map to `PlusN`. This is how the call
to `hcompos` in the definition of `best_nnf` handles the tedious cases
implicitly. Only the `Var` and `Neg` cases --- essential to the transformation
--- are explicit.

The delayed representation extension
automatically generates a data type for each constructor in the original data
type. This generated data type has one constructor isomorphic to the original,
and both the type and the constructor are predictably named. We
%
have adopted the arbitrary convention of appending an underscore. These generated data
types are crucial to letting the programmer partition data types into subsets
of constructors. For example, the patterns of the `nnfVar` and `nnfNeg`
functions are exhaustive.
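For concreteness, the following self-contained sketch (ours, not generated output) shows the fields types that the underscore convention would produce for `Exp`, and why a match on a single fields type is exhaustive:

```haskell
-- Sketch (ours): fields types for Exp under the underscore convention.
-- In yoko these declarations are generated by Template Haskell; here we
-- write them by hand for illustration.
data Exp = Var String | Plus Exp Exp | Neg Exp

-- one single-constructor data type per Exp constructor
data Var_  = Var_  String
data Plus_ = Plus_ Exp Exp
data Neg_  = Neg_  Exp

-- unlike a match on Exp, a match on Var_ alone is exhaustive
varName :: Var_ -> String
varName (Var_ s) = s

main :: IO ()
main = putStrLn (varName (Var_ "x"))
```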

Generic programming, without our extensions, supports the modular and minimal
definition of functions with inexact types like `compos_nnf`. Our extensions
bring the benefits of generic programming to functions with distinct domain
and range types, especially property-establishing functions like
`exact_nnf`. In particular, existing techniques can generically define
`compos`, as shown in the next section.

\begin{figure}[t]
\begin{haskell}
data Exp = Var       String | Plus  Exp Exp | Neg Exp
data NNF = VarN Bool String | PlusN NNF NNF

exact_nnf :: Exp -> NNF
exact_nnf = w False where
  w n (Var s)      = VarN n s
  w n (Neg e)      = w (not n) e
  w n (Plus e1 e2) = PlusN (w n e1) (w n e2)

type Bad_NNF = Exp -- informal: only Neg (Var s)

compos_nnf :: Exp -> Bad_NNF
compos_nnf = w False where
  w n v@(Var _)    = if n then Neg v else v
  w n (Neg e)      = w (not n) e
  w n e            = compos (w n) e

class Compos a where -- an existing generics technique
  compos :: Applicative i => (a -> i a) -> a -> i a
\end{haskell}
\caption{\inlineHaskell{exact_nnf} is the ideal negation normal
  form-establishing function, though \inlineHaskell{compos_nnf} is an
  attractive alternative because it admits existing generic programming
  techniques.\label{fig:arith-nnf}}
\end{figure}

\begin{figure}[t]
\begin{haskell}
best_nnf :: Exp -> NNF
best_nnf e = w e False where
  w e = case Y.partition (Y.disband e) of
    Left  important -> (nnfVar Y..|. nnfNeg) important
    Right tedious   -> Y.hcompos w tedious

  -- these functions are total
  nnfVar (Var_ s) n = VarN n s
  nnfNeg (Neg_ e)   = w e . not

-- (hcompos from module Y, given for reference)
class HCompos a dcs b where
  hcompos :: Applicative i => (a -> i b) -> dcs -> i b
\end{haskell}
\caption{A preferable definition of \inlineHaskell{best_nnf} as enabled by
  techniques developed in this paper (imported here as \inlineHaskell{Y}; types
  of \inlineHaskell{disband}, \inlineHaskell{partition}, and
  \inlineHaskell{(.|.)} not yet given).
%  The underscores are intentional.
  \label{fig:arith-nnf-yoko}}
\end{figure}

\section{Background: Generic Programming}

This section summarizes the \ig\ approach \cite{instant-generics}, the
foundation of our generic programming.
%
Existing generic programming techniques, without our extensions, are sufficient
%
for generically defining the `compos` function of \citet{compos}. These same
techniques convey the majority of the reusability of our `hcompos` function;
our extensions just enable its
%
heterogeneity. 

\ig\ derives its genericity from two major Haskell language features: type
classes and type families \citep{type-families}. We demonstrate \ig\ with a
simple example type and two generically defined functions in order to set the
stage for our extensions. In doing so, we introduce our own vocabulary for the
concepts
%
underlying the \ig\ Haskell declarations.

%\pagebreak

The core \ig\ declarations are listed in
\tref{Figure}{fig:instant-generics}. In this approach to generic programming,
any value with a \emph{generic semantics} is defined as a method of a type
class, called a \emph{generic class}. That method's \emph{generic definition}
is a set of instances, one for each of a small set of \emph{representation types}:
`Dep` (called `Var` by \citet{instant-generics}), `Rec`, `U`, `:*:`, `C`, and
`:+:`. The representation types encode a
%
data type's structure as a sum of products of fields. A data type is associated
with its structure by the `Rep` type family, and a corresponding instance of the
%
`Representable` class converts between
%
a type and its `Rep` structure. Via this conversion, an instance of a generic
class for a representable data type can delegate to the generic definition by
invoking the method on the type's structure. Such instances are not
%
required to rely on the generic semantics: they can use it only partially or
ignore it completely.

\subsection{The Sum-of-Products Representation Types}

Each representation type models a particular structure in the declaration of a
data type. The `Rec` type represents occurrences of types in the same mutually
recursive family as the represented type (roughly, its binding group), and the
`Dep` type represents non-recursive occurrences of other types. Sums of
%
constructors are represented by nestings of the higher-order type `:+:`, and
products of fields are represented similarly by `:*:`. The `U` type is the
empty
%
product. There is no empty sum: it would represent a data type with no
constructors, for which there are no interesting functions to define
generically. The representation of
each constructor is annotated by means
%
of `C` to carry more reflective information in `C`'s
%
phantom type parameter. The `:+:`, `:*:`, and `C` types are all higher-order
representations in that they expect representations as arguments. If Haskell
supported subkinding \citep{promotion}, these parameters would be of a subkind
of `*` specific to representation types. The parameters of `Dep` and `Rec`,
in contrast, are not supposed to be representation types, so they would have
the standard `*` kind.

\begin{figure}[t]
\begin{haskell}
-- set of representation types
data Dep a = Dep a            data Rec a = Rec a
data U = U                    data a :*: b = a :*: b       
data C c a = C a              data a :+: b = L a | R b

-- maps a type to its sum-of-products structure
type family Rep a
class Representable a where
  to :: Rep a -> a
  from :: a -> Rep a

-- further reflection of constructors
class Constructor c where
  conName :: C c a -> String
\end{haskell}
\caption{The core \ig\ interface.\label{fig:instant-generics}}
\end{figure}

Consider a simple de Bruijn-indexed abstract syntax for the untyped lambda
calculus, declared as `ULC`.

\begin{haskellq}
data ULC = Var Int | Lam ULC | App ULC ULC
\end{haskellq}

\noindent An instance of the `Rep` type family maps `ULC` to its structure as
encoded in terms of the representation types.

\begin{haskellq}
type instance Rep ULC =
  C Var (Dep Int) :+: C Lam (Rec ULC) :+:
  C App (Rec ULC :*: Rec ULC)

data Var; data Lam; data App
instance Constructor Var where conName _ = "Var"
instance Constructor Lam where conName _ = "Lam"
instance Constructor App where conName _ = "App"
\end{haskellq}

The void `Var`, `Lam`, and `App` types are considered auxiliary by \ig. They
were added to the sum-of-products representation only to support another class
of generic values, such as `show` and `read`, that requires constructor
names. We call these types \emph{constructor
  types}. They are analogous to a primary component of our delayed
representation extension, and so will be referenced in
\tref{Section}{sec:delayed-representation}. Each
%
constructor type corresponds directly to a constructor from the represented
data type.

The `Var` constructor's field is represented with `Dep`, since `Int` is not a
recursive occurrence. The `ULC` occurrence in `Lam` and the two in `App` are
recursive, and so are represented with `Rec`. The entire `ULC` type is
represented as the sum of its constructors' representations---the products of
fields---with some further reflective information provided by the `C`
annotation. The `Representable` instance for `ULC` is almost entirely
determined by the types.

\begin{haskellq}
instance Representable ULC where
  from (Var n)     = L    (C (Dep n))
  from (Lam e)     = R (L (C (Rec e)))
  from (App e1 e2) = R (R (C (Rec e1 :*: Rec e2)))
  to (L    (C (Dep n)) )             = Var n
  to (R (L (C (Rec e))))             = Lam e
  to (R (R (C (Rec e1 :*: Rec e2)))) = App e1 e2
\end{haskellq}

\subsection{Two Generic Definitions}


The \ig\ approach can generically define the `compos` function from
\cite{compos}. The `Compos` class provides the compositional behavior
underlying bottom-up traversals.

\begin{haskellq}
class Compos a where
  compos :: Applicative i => (a -> i a) -> a -> i a
\end{haskellq}

\noindent The `compos` method also threads effects using an \emph{applicative
  functor} \cite{af}. The `pure` function corresponds to monadic `return`, and
the `<*>` function is a weaker version of `>>=`.

\begin{haskellq}
class Applicative i where
  pure :: a -> i a
  (<*>) :: i (a -> b) -> i a -> i b
\end{haskellq}

\noindent For example, the definition of `compos_nnf` in
\tref{Figure}{fig:arith-nnf} used the `(->)` `Bool` applicative functor to
track polarity.
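Base's `Applicative` instance for `(->) Bool` behaves exactly this way: `pure` discards the Boolean environment and `<*>` passes the same environment to both sides. The `polarity` function below is our own toy example, not from the paper:

```haskell
-- Toy example (ours): threading a Boolean environment with the
-- ((->) Bool) applicative functor from base, where
--   pure x      = \_ -> x       (discards the environment)
--   (f <*> g) n = f n (g n)     (shares the environment)
polarity :: Bool -> String
polarity = pure (++) <*> (\n -> if n then "neg " else "pos ") <*> pure "lit"

main :: IO ()
main = mapM_ (putStrLn . polarity) [True, False]
```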

The generic definition of the `compos` method extends its first argument by
applying it to the second argument's recursive occurrences. Because the
traversal proceeds over the representation types while the supplied function
keeps the represented type, we define the generic behavior with an auxiliary
two-parameter class: `Compos'` `r` `a` traverses the representation `a` with a
function on the recursive type `r`. The essential case of the generic
definition is for `Rec`. All other cases merely structurally recur. Note that
the `Dep` case always yields the `pure` function, since a `Dep` contains no
recursive occurrences.

\begin{haskellq}
class Compos' r a where
  compos' :: Applicative i => (r -> i r) -> a -> i a

instance Compos' r (Dep a) where compos' _ = pure
instance Compos' a (Rec a) where
  compos' f (Rec x) = pure Rec <*> f x
instance Compos' r U where compos' _ = pure
instance (Compos' r a, Compos' r b
         ) => Compos' r (a :*: b) where
  compos' f (x :*: y) =
    pure (:*:) <*> compos' f x <*> compos' f y
instance Compos' r a => Compos' r (C c a) where
  compos' f (C x) = pure C <*> compos' f x
instance (Compos' r a, Compos' r b
         ) => Compos' r (a :+: b) where
  compos' f (L x) = pure L <*> compos' f x
  compos' f (R x) = pure R <*> compos' f x
\end{haskellq}

For the `Rec` case, the original function is applied to the recursive field,
but `compos'` itself does not recur. As shown in the definition of `compos_nnf`
in \tref{Section}{sec:motivation}, the programmer, not `compos`, introduces the
recursion.
%
With this generic definition in place, the `Compos` instance for `ULC` is a
straight-forward delegation.

\begin{haskellq}
instance Compos ULC where
  compos f x = pure to <*> compos' f (from x)
\end{haskellq}
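This delegation can be exercised end-to-end. The self-contained sketch below (ours) reassembles the pieces, typing the representation-level traversal with a two-parameter helper class `Compos'` so the supplied function keeps the recursive type, renaming the constructor types to avoid clashes, and running a simple index-bumping traversal at base's `Identity` applicative:

```haskell
{-# LANGUAGE EmptyDataDecls, FlexibleInstances, MultiParamTypeClasses,
             TypeFamilies, TypeOperators #-}
-- Sketch (ours): generic compos for ULC, assembled and run.
import Data.Functor.Identity

data Dep a = Dep a
data Rec a = Rec a
data U = U
data a :*: b = a :*: b
data C c a = C a
data a :+: b = L a | R b

type family Rep a
class Representable a where
  to   :: Rep a -> a
  from :: a -> Rep a

class Compos a where
  compos :: Applicative i => (a -> i a) -> a -> i a

-- representation-level traversal; r is the recursive type
class Compos' r a where
  compos' :: Applicative i => (r -> i r) -> a -> i a

instance Compos' r (Dep a) where compos' _ = pure
instance Compos' a (Rec a) where compos' f (Rec x) = pure Rec <*> f x
instance Compos' r U where compos' _ = pure
instance (Compos' r a, Compos' r b) => Compos' r (a :*: b) where
  compos' f (x :*: y) = pure (:*:) <*> compos' f x <*> compos' f y
instance Compos' r a => Compos' r (C c a) where
  compos' f (C x) = pure C <*> compos' f x
instance (Compos' r a, Compos' r b) => Compos' r (a :+: b) where
  compos' f (L x) = pure L <*> compos' f x
  compos' f (R x) = pure R <*> compos' f x

data ULC = Var Int | Lam ULC | App ULC ULC deriving (Eq, Show)
data VarC; data LamC; data AppC   -- constructor types, renamed

type instance Rep ULC =
  C VarC (Dep Int) :+: (C LamC (Rec ULC) :+: C AppC (Rec ULC :*: Rec ULC))

instance Representable ULC where
  to (L (C (Dep n)))                 = Var n
  to (R (L (C (Rec e))))             = Lam e
  to (R (R (C (Rec e1 :*: Rec e2)))) = App e1 e2
  from (Var n)     = L (C (Dep n))
  from (Lam e)     = R (L (C (Rec e)))
  from (App e1 e2) = R (R (C (Rec e1 :*: Rec e2)))

instance Compos ULC where
  compos f x = pure to <*> compos' f (from x)

-- bump every de Bruijn index; the caller, not compos, introduces recursion
bump :: ULC -> ULC
bump (Var n) = Var (n + 1)
bump e       = runIdentity (compos (Identity . bump) e)

main :: IO ()
main = print (bump (App (Var 0) (Lam (Var 1))))
```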

\pagebreak
Further, we can generically define an equality test function. We reuse the `Eq`
class as the generic class.

\begin{haskellq}
instance Eq a => Eq (Dep a) where
  Dep x == Dep y   =   x == y
instance Eq a => Eq (Rec a) where
  Rec x == Rec y   =   x == y
instance Eq U where _ == _ = True
instance (Eq a, Eq b) => Eq (a :*: b) where
  x1 :*: x2 == y1 :*: y2   =   x1 == y1 && x2 == y2
instance Eq a => Eq (C c a) where
  C x == C y   =   x == y
instance (Eq a, Eq b) => Eq (a :+: b) where
  L x == L y   =   x == y
  R x == R y   =   x == y
  _   == _     =   False
\end{haskellq}

\noindent With these instance declarations, `Eq ULC` is immediate. As
\citet{is-easy} show, the GHC inliner can be compelled to optimize away
much of the representational overhead.

\begin{haskellq}
instance Eq ULC where x == y   =   from x == from y
\end{haskellq}

As demonstrated with `compos` and `==`, generic definitions --- \ie\ the
instances for representation types --- provide a default behavior that is easy
to invoke. If that behavior suffices for a representable type, then an instance
of the generic class at that type can simply convert with `to` and `from` in
order to invoke the same method at the type's representation. If a particular
type needs a distinct ad-hoc definition of the method, then that type's
instance can use its own specific method definitions, defaulting to the generic
definitions to a lesser degree or even not at all.
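As a sanity check, these pieces assemble into one runnable module. In this sketch (ours) the constructor types are renamed `VarC`, `LamC`, `AppC` to avoid clashing with `ULC`'s constructors, and only the `from` direction is needed:

```haskell
{-# LANGUAGE EmptyDataDecls, TypeFamilies, TypeOperators #-}
-- Sketch (ours): the generic Eq definition over the instant-generics
-- representation types, exercised at ULC.
data Dep a = Dep a
data Rec a = Rec a
data U = U
data a :*: b = a :*: b
data C c a = C a
data a :+: b = L a | R b

type family Rep a
class Representable a where from :: a -> Rep a

instance Eq a => Eq (Dep a) where Dep x == Dep y = x == y
instance Eq a => Eq (Rec a) where Rec x == Rec y = x == y
instance Eq U where _ == _ = True
instance (Eq a, Eq b) => Eq (a :*: b) where
  x1 :*: x2 == y1 :*: y2 = x1 == y1 && x2 == y2
instance Eq a => Eq (C c a) where C x == C y = x == y
instance (Eq a, Eq b) => Eq (a :+: b) where
  L x == L y = x == y
  R x == R y = x == y
  _   == _   = False

data ULC = Var Int | Lam ULC | App ULC ULC
data VarC; data LamC; data AppC   -- constructor types, renamed

type instance Rep ULC =
  C VarC (Dep Int) :+: (C LamC (Rec ULC) :+: C AppC (Rec ULC :*: Rec ULC))

instance Representable ULC where
  from (Var n)     = L (C (Dep n))
  from (Lam e)     = R (L (C (Rec e)))
  from (App e1 e2) = R (R (C (Rec e1 :*: Rec e2)))

-- delegation: equality at ULC is equality of representations
instance Eq ULC where x == y = from x == from y

main :: IO ()
main = print (Lam (Var 0) == Lam (Var 0), Lam (Var 0) == Lam (Var 1))
```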

The \ig\ approach cannot support the heterogeneity of the `hcompos`
function. We extend \ig\ in the next section so that `hcompos` can be defined
generically and used without introducing undue obfuscation.

\section{\texttt{yoko}: Our Generic Technique}
\label{sec:yoko}

We must extend \ig\ in order to generically define `hcompos`. Existing generic
programming techniques cannot in general be used to define functions with
distinct domain and range types, which is an essential quality of
`hcompos`. Existing techniques can only define functions with a range type that
is either
\begin{inparaenum}[(i)]
\item the same as the domain,
\item some type with monoidal properties, or
\item degenerate in the sense that its constructors are structurally-unique and
  also subsume the domain's constructors.\label{unique-structure}
\end{inparaenum}
In this section, we relax this restriction: the function need only have
a subset of homomorphic cases that map a domain constructor to a similar
constructor in
%
the range. The notion of similarity is based on constructor names; we define it
in \tref{Section}{sec:hcompos-defn} below. This improved restriction is enabled
by our two extensions to \ig.

Our first extension is the basis for clear and modular use of `hcompos`. It
emulates subsets of constructors. Thus the programmer can split a data type
into the relevant constructors and the rest, then explicitly handle the
relevant ones, and finally implicitly handle the rest with
`hcompos`. Specifically, this extension makes it possible for the programmer to
use individual constructors independently of their siblings from the data type
declaration. We therefore call the resulting generic programming approach
\yoko, a Japanese name that can mean \quotes{free child}. This extension is the
foundation for combining \quotes{freed} constructors into subsets and for
splitting data types into these subsets; both of these mechanisms are defined
in \tref{Section}{sec:partitioning} below.

Our second extension enables the generic definition of `hcompos` to
automatically identify the similar pairs of constructors in its domain and
range. Prior techniques can only identify the
%
corresponding constructors under the degenerate circumstances of
(\ref{unique-structure}) because they do not reflect enough information about
data types. Our extension reflects constructor names at the type-level, which
is how the generic definition of `hcompos` automatically identifies
corresponding constructors.

Both of our extensions, and the further developments in
\tref{Section}{sec:partitioning}, involve non-trivial type-level
programming. Thus we first introduce the newer Haskell features we use as well
as some conveniences we assume for the sake of presentation.

\subsection{Background: Type-level Programming in Haskell}
\label{sec:pretending}

The type-level programming necessary for our generic programming extensions
is only partially supported in Haskell. Our current implementation simulates
two desirable features in particular. For clarity of presentation, we assume
throughout this paper that these features are already available:

\begin{enumerate}
\item A type family implementing decidable type equality.
\item Direct promotion of strings to the type-level.
\end{enumerate}

\noindent The implementation simulates these features without exposing
%
them to the user. The user cannot observe type-level strings at all, and the
type equality test is only exposed as the occasional redundant-looking
constraint `Equal` `a` `a` on some type variable. The GHC implementers are currently discussing how to implement these
features. In particular, the decidable type
%
equality will likely be defined using \emph{closed} type families, with the
common fall-through matching semantics from value-level patterns.

\begin{haskellq}
type family Equal a b :: Bool where
  Equal a a = True
  Equal a b = False
\end{haskellq}
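For readers with a GHC that supports closed type families, this ideal `Equal` can be written directly. The sketch below (the `KnownBool` witness class and `equal` function are our own) reflects the type-level result back to the value level so it can be observed:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables, TypeFamilies #-}
-- Sketch (ours): decidable type equality as a closed type family, with a
-- value-level witness for testing.
import Data.Proxy (Proxy (..))

type family Equal a b :: Bool where
  Equal a a = 'True   -- first match wins, as with value-level patterns
  Equal a b = 'False

class KnownBool (b :: Bool) where boolVal :: Proxy b -> Bool
instance KnownBool 'True  where boolVal _ = True
instance KnownBool 'False where boolVal _ = False

-- reifies Equal a b at the value level (only for concrete a and b)
equal :: forall a b. KnownBool (Equal a b) => Proxy a -> Proxy b -> Bool
equal _ _ = boolVal (Proxy :: Proxy (Equal a b))

main :: IO ()
main = print ( equal (Proxy :: Proxy Int) (Proxy :: Proxy Int)
             , equal (Proxy :: Proxy Int) (Proxy :: Proxy Char) )
```

Note that, as with the simulation described above, `equal` is only usable at concrete types: at a polymorphic variable the family application is stuck.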

We simulate this definition of `Equal` in a way that requires
%
all potential arguments of `Equal` to be injectively mapped to an associated
globally unique type-level list of natural numbers. The \yoko\ library provides
an easy-to-use Template
%
Haskell function that derives such a mapping for a type according to its
globally unique name (\ie\ package, version, module, and name);
\citet[\texttt{\#TTypeable}]{oleg-typeeq} uses the same mapping.  This
simulation of `Equal` is undefined for some arguments for which the ideal
`Equal` is defined. One example is when both arguments are the same
polymorphic type variable. The simulation can only determine concrete types to be
equal; it is incapable of identifying two uses of the same type variable in its
arguments. Thus a `~` constraint implies that the ideal `Equal` is `True` but
not so for the simulated one. However, if each `~` constraint is accompanied
by a corresponding `Equal` constraint with result `True`, the simulation is
otherwise entirely transparent. Furthermore, the simulation is only defined for
concrete types that have been reflected with \yoko's bundled Template Haskell,
which we tacitly assume for all of our examples.

Promotion of data types to \emph{data kinds} is a recent extension of
%
GHC \cite{promotion}. Beyond the simulated promotion of strings, the
definitions in this paper use the genuine `Bool` data kind: `True`
%
and `False` are also types of kind `Bool`. We omit the straight-forward
declarations of the type-level conditional and disjunction as the `If` and `Or`
type families. Furthermore, the `Maybe` data kind is explicitly simulated,
since
%
`*->*` promotion is not yet fully supported.

\begin{haskellq}
data Nothing   ;   data Just a
type family   MaybePlus (a :: *) (b :: *)
type instance MaybePlus Nothing  b = b
type instance MaybePlus (Just a) b = Just a
\end{haskellq}

\noindent The `MaybePlus` family is used for type-level backtracking.

\pagebreak

\subsection{Delayed Representation}
\label{sec:delayed-representation}

Our first extension is the \emph{delayed representation} of data types. While
\ig\ maps a type directly to a sum of products of fields, \yoko\ maps a type to
a sum of its constructors, which can later be mapped to a product of their
fields if necessary. The intermediate stage of a data type's representation is
the anonymous set of all of its constructors, which the programmer can then
partition into the subsets of interest.

Delayed representation requires a type corresponding to each constructor,
called a \emph{fields type}. Fields types are similar to\linebreak \ig's
constructor
%
types. However, constructor types are void because they merely annotate a
constructor's representation, while a fields type is the
%
representation. Accordingly, each fields type has one constructor with exactly
the same fields as the constructor it represents. For example, the `ULC` data
type needs three fields types, one for each constructor.

\begin{haskellq}
data Var_ = Var_ Int
data Lam_ = Lam_ ULC
data App_ = App_ ULC ULC
\end{haskellq}

\noindent As will be demonstrated in \tref{Section}{sec:lambda-lifting},
programs using the `hcompos` approach use the fields types directly. Thus
fields types and
%
their constructors must be predictably-named. In this paper, we adopt the
convention of adding an underscore.

The \yoko\ interface for data type reflection is listed in
\tref{Figure}{fig:reflection}. It reuses the \ig\ representation types, except
`C` is replaced by `N`, which contains a fields type. The `DCs` type family
disbands a data type to a sum of its fields types; any subset of this sum is
called a
%
\emph{disbanded} data type. The `DCs` mapping is realized at the value-level by
the `DT` class. This family and class are the delayed representation analogs of
\ig's `Rep` family and the `from` method of its `Representable` class. The inverse
mapping, from a fields type back to its original type, is specified with the
`Range` family and `DC` class. The `ULC` data type is represented as follows.

\begin{haskellq}
type instance DCs ULC = N Var_ :+: N Lam_ :+: N App_
instance DT ULC where
  disband (Var i)     = L    (N (Var_ i))
  disband (Lam e)     = R (L (N (Lam_ e)))
  disband (App e0 e1) = R (R (N (App_ e0 e1)))

type instance Range Var_ = ULC
type instance Range Lam_ = ULC
type instance Range App_ = ULC

instance DC Var_ where rejoin (Var_ i)     = Var i
instance DC Lam_ where rejoin (Lam_ e)     = Lam e
instance DC App_ where rejoin (App_ e0 e1) = App e0 e1
\end{haskellq}

The `DC` class also requires that its parameter be a member of the
\ig\ `Representable` class. The instances for `Var_`, `Lam_`, and `App_` are
straight-forward and as expected. Note, though, that the `Rep` instances for
fields types never involve sums. Because every fields type has one constructor,
sums only occur in the `DCs` family.
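These declarations can be checked with a self-contained module. The sketch below (ours) omits `DC`'s superclasses and adds a hand-written `rejoinULC` to collapse the disbanded sum, confirming that `rejoin` undoes `disband`:

```haskell
{-# LANGUAGE TypeFamilies, TypeOperators #-}
-- Sketch (ours): the delayed representation of ULC, round-tripped.
newtype N dc = N dc
data a :+: b = L a | R b

type family DCs t
class DT t where disband :: t -> DCs t
type family Range dc
class DC dc where rejoin :: dc -> Range dc  -- superclasses omitted here

data ULC = Var Int | Lam ULC | App ULC ULC deriving Eq
data Var_ = Var_ Int
data Lam_ = Lam_ ULC
data App_ = App_ ULC ULC

type instance DCs ULC = N Var_ :+: (N Lam_ :+: N App_)
instance DT ULC where
  disband (Var i)     = L (N (Var_ i))
  disband (Lam e)     = R (L (N (Lam_ e)))
  disband (App e0 e1) = R (R (N (App_ e0 e1)))

type instance Range Var_ = ULC
type instance Range Lam_ = ULC
type instance Range App_ = ULC
instance DC Var_ where rejoin (Var_ i)     = Var i
instance DC Lam_ where rejoin (Lam_ e)     = Lam e
instance DC App_ where rejoin (App_ e0 e1) = App e0 e1

-- collapse the full disbanded sum back into ULC
rejoinULC :: DCs ULC -> ULC
rejoinULC (L (N c))     = rejoin c
rejoinULC (R (L (N c))) = rejoin c
rejoinULC (R (R (N c))) = rejoin c

main :: IO ()
main = print (rejoinULC (disband e) == e)
  where e = App (Var 0) (Lam (Var 0))
```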

\begin{figure}[t]
\begin{haskell}
-- analog of instant-generics' C representation type
newtype N dc = N dc

-- maps a type to a sum of its fields types
type family DCs t
class DT t where disband :: t -> DCs t

-- maps a fields type to its original type
type family Range dc
class (IG.Representable dc, DT (Range dc)) => DC dc where
  rejoin :: dc -> Range dc

-- maps a fields type to its tag
type family Tag dc :: String
\end{haskell}
\caption{The \yoko\ interface for data type reflection.  We presume type-level
  strings and reuse the \inlineHaskell{Generic} class from \ig\ (imported here
  as \inlineHaskell{IG}).\label{fig:reflection} }
\end{figure}

Because `DC` implies `Generic`, the delayed representation subsumes the
sum-of-products representation. In particular, the delay effected by fields
types is straight-forward to eliminate. The following instances of `Rep` and
`Generic` for `:+:` and `N` do just that.

\begin{haskellq}
type instance Rep (a :+: b) = Rep a :+: Rep b
instance (Generic a, Generic b
         ) => Generic (a :+: b) where
  to   (L x) = L (to x)
  to   (R x) = R (to x)
  from (L x) = L (from x)
  from (R x) = R (from x)
\end{haskellq}

\pagebreak

\begin{haskellq}
type instance Rep (N dc) = Rep dc
instance Generic dc => Generic (N dc) where
  to         = N . to
  from (N x) = from x
\end{haskellq}

\noindent With these
%
instances, applying `Rep` after `DCs` yields the corresponding
\ig\ representation, excluding the `C` type. This is mirrored on the term-level
by the `ig_from` function.

\begin{haskellq}
ig_from :: (DT t, Generic (DCs t)) => t -> Rep (DCs t)
ig_from = IG.from . disband
\end{haskellq}

\noindent The `C` types'
%
annotation could be recovered by introducing yet another type family mapping a
fields type to its analogous constructor type; we omit this for brevity. In
this way, the delayed representation could preserve the \ig\ structural
interpretation. In general, with an equivalence $\cong$ that ignores the `C`
type,

\begin{haskellq}
forall t. Rep t #$\cong$# Rep (DCs t)#.#
\end{haskellq}

\subsection{Type-level Reflection of Constructor Names}

The \ig\ approach cannot in general infer the correspondence of constructors
like `Plus` and `PlusN`, because it does not reflect constructor names on the
type-level. We define the `Tag` type family (bottom of
%
\tref{Figure}{fig:reflection}) for precisely this reason. This type family
supplants the `Constructor` type class from \ig; it provides exactly the same
information, only as a type-level string instead of a method yielding a
string. For example, instead of the previous section's `Constructor` instances
for the constructor types `Var`, `Lam`, and `App`, we declare the following
`Tag` instances for the corresponding fields types. The `Constructor` class's
`conName` method can be defined using an interface to the type-level strings
that supports \emph{demotion} to the value-level.

\begin{haskellq}
type instance Tag Var_ = "Var"
type instance Tag Lam_ = "Lam"
type instance Tag App_ = "App"
\end{haskellq}

\subsection{Summary}

Our first extension to \ig\ \emph{delays} the structural representation of a
data type by intermediately disbanding it into a sum of its constructors. Each
constructor is represented by a
%
\emph{fields type}, having exactly one constructor that imitates the syntax and
semantics of the original. Any sum of fields types is called a \emph{disbanded
  data type}. Our second extension maps each fields type to its original
constructor's name, reflected at the type-level.  These extensions of
\ig\ enable the genericity of `hcompos` and let the programmer use it modularly
and clearly.

\pagebreak

\section{The Generic Homomorphism}
\label{sec:hcompos-defn}

In this section, we more precisely explain how the `hcompos` function
semantically generalizes the `compos` function of \citet{compos}, and then
implement `hcompos` accordingly. We begin with a more rigorous definition of
homomorphism and specialize it to both `compos` and `hcompos`. This emphasizes
their shared semantics, and motivates the use of \yoko's reflection of
constructor names to add support for heterogeneity to `compos`.

A homomorphism maps between two mathematical objects while preserving their
common semantics. In Haskell, these objects are types. For example, the
`length` function is a homomorphism from lists to integers that preserves the
monoidal semantics of both types. These semantics are the monoid of `[]` with
`++` for list types and `0` with `+` for the `Int` type. In order
%
to generalize, we model the semantics to be preserved as a mapping between
semantically analogous values, \eg\ \{`[]`~$\mapsto$~`0`,
`++`~$\mapsto$~`+`\}. For some semantics-encoding mapping $S$, a function
`f___::___A___->___B` is a \emph{homomorphism with respect to $S$} if

\begin{haskellq}
forall g #$\in Domain(S)$#. forall #$\vec{\text{\small\ttfamily x}}$#. f (g #$\vec{\text{\small\ttfamily x}}$#) = (#$S$# g) (#$\vec{\text{\small\ttfamily f}}$# #$\vec{\text{\small\ttfamily x}}$#)#,#
\end{haskellq}

\noindent where $\vec{\inlineHaskell{f}}$ is the point-wise extension of `f`
that maps each `x` in the vector $\vec{\inlineHaskell{x}}$ to `f x`.
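For instance, with `f = length` and $S$ = \{`[]`~$\mapsto$~`0`, `++`~$\mapsto$~`+`\}, the law instantiates to the following checkable property (a sanity check of the definition, not part of \yoko):

```haskell
-- length is a homomorphism w.r.t. {[] |-> 0, (++) |-> (+)}:
-- it sends each list constructor to its integer analog
homLaw :: [Int] -> [Int] -> Bool
homLaw xs ys =
  length (xs ++ ys) == length xs + length ys  -- f (g x y) = (S g) (f x) (f y)
  && length ([] :: [Int]) == 0                -- f []      = S []
```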

If the `A` and `B` types are identical and the semantics $S$ maps each
constructor to itself, this equivalence characterizes the `compos`
function. Thus the `compos` function is a homomorphism with respect to a
semantics corresponding to the identity function on a data type's
constructors. This use of the identity function is why `compos` is a degenerate
homomorphism. On the other hand, for the heterogeneous `hcompos` homomorphism,
there is no such default semantics to be shared between the distinct `A` and
`B` types.

In the example from \tref{Section}{sec:motivation}, the `Plus` and `PlusN`
constructors correspond to one another because they have the same actual
%
semantics. This is the ideal semantics to preserve, but no algorithm is capable
of robustly inferring that semantics. With existing generic programming
techniques like \ig, the only available property is the structure of
constructors' fields. This property, though, is too abstract to be unambiguous:
it is common for a data type to have multiple constructors with the same
structure. Clearly `Plus` should map to `PlusN` instead of the hypothetical
`MultN`.

This insufficiency of field structure is the original motivation for reflecting
constructor names with the `Tag` family. We settle for approximating the actual
semantics by using the constructor names. The `hcompos` function is thus a
homomorphism with respect to the semantics that maps a constructor in one data
type to the most similarly named constructor in another data type. This remains
a crude approximation, but can be useful when combined with a convention to
give similar names to semantically similar constructors.

\subsection{Constructor Correspondences}
\label{sec:ctor-correspondence}

We determine constructor correspondence via an algorithm called
`FindDC`. This algorithm takes two parameters: a constructor
and the data type in which to find a corresponding constructor. Applied to a
constructor $C$ and a data type $T$, the algorithm finds the constructor of $T$
with the most similar name to $C$. A robust `FindDC` algorithm would implement
a conventional similarity measure for names that computes a scalar. It would
also use two empirically determined thresholds. The first is a minimum
required similarity: the most similar constructor's measure must exceed this
threshold. The second is a minimum difference between the similarities of the
best and second-best matches. If either threshold is violated, then a
type-error is raised; the algorithm cannot identify with confidence a best
correspondence. To avoid those errors, the programmer must not instantiate
`FindDC` in those cases. The role of our first extension, delayed
representation, is precisely to help the programmer to separate the
constructors needing
%
explicit handling from those that can be handled implicitly.
%
Unfortunately, a type-level definition of the robust implementation
described above is prohibitively complicated given GHC's immature support for
type-level computation. We instead implement a much simpler `FindDC` that only
identifies the corresponding
%
constructor if it has exactly the same name. This requires the programmer to
declare some data types in separate modules so that similar data types can have
constructors with the same name.

Because of this simpler algorithm, the definition of `best_nnf` in
\tref{Figure}{fig:arith-nnf-yoko} is actually ill-typed. Fixing it is
straight-forward, but distracting: the `NNF` data type must be declared in a
separate module, presumably also called `NNF`, with the `N` suffix removed from
its constructors. Thus `best_nnf` actually requires `FindDC` to map the `Plus`
constructor of `Exp` to the `Plus` constructor of `NNF`. Where before we wrote
`VarN` and `PlusN`, we would actually write `NNF.Var` and `NNF.Plus`. We look
forward to implementing the robust `FindDC` algorithm once GHC adds more
support for type-level programming.

The `FindDC` type family defined in \tref{Figure}{fig:find-dc} implements the
simple algorithm. An application of `FindDC` to a fields type, modeling the
constructor $C$, and a data type $T$ uses the auxiliary `FindDC_` family to
find a fields type in the `DCs` of $T$ with the same `Tag` as $C$. The
instances of `FindDC_` query $T$'s sum of constructors, using the type equality
predicate and the `Maybe` data kind discussed in
\tref{Section}{sec:pretending}. The result is either the type `Nothing` if no
matching fields type is found or an application of the type constructor `Just`
to a fields type of $T$ with the same name as $C$.
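The simple algorithm can be previewed at the value level. In the following illustrative sketch (the `findDC` name and the `(name, constructor)` list encoding are our own, standing in for the type-level sum), only an exact name match succeeds, and the first match wins, just as `MaybePlus` keeps the leftmost `Just`:

```haskell
-- value-level analog of FindDC_: scan a "sum" of tagged constructors,
-- keeping the first exact name match
findDC :: String -> [(String, dc)] -> Maybe (String, dc)
findDC s = foldr step Nothing
  where
    step c@(n, _) rest
      | n == s    = Just c  -- the N case: the tags are equal
      | otherwise = rest    -- the :+: case: try the remaining summands
```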

Because the `FindDC` algorithm uses only constructor names, the correspondence
of field structure must be enforced separately: `hcompos` constrains the `Rep`
of $C$ to match the `Rep` of the fields type returned by `FindDC`.

\begin{figure}[t]
\begin{haskell}
-- find a fields type with the same name
type FindDC dc dt = FindDC_ (Tag dc) (DCs dt)

type family FindDC_ s dcs
type instance FindDC_ s (N dc) =
  If (Equal s (Tag dc)) (Just dc) Nothing
type instance FindDC_ s (a :+: b) =
  MaybePlus (FindDC_ s a) (FindDC_ s b)
\end{haskell}
\caption{The \inlineHaskell{FindDC} type family.\label{fig:find-dc}}
\end{figure}

\subsection{The Generic Definition of \texttt{hcompos}}

\begin{figure}[t]
\begin{haskell}
-- convert dcs to b; dcs is sum of a's fields types;
-- uses the argument for recursive occurrences
class HCompos a dcs b where
  hcompos :: Applicative i => (a -> i b) -> dcs -> i b
\end{haskell}
\caption{The \inlineHaskell{hcompos} function, a generalization of
  \inlineHaskell{compos}, implements a homomorphism between similar
  types.\label{fig:hcompos}}
\end{figure}

The generic homomorphism is declared as the `hcompos` method in
\tref{Figure}{fig:hcompos}. To support heterogeneity, its type class adds the
`dcs` and `b` parameters to the original `a` parameter from the `Compos`
class. The `dcs` type
%
variable will be instantiated with the sum of fields types corresponding to the
subset of `a`'s constructors to which `hcompos` is applied. The `dcs` parameter
is necessary because, throughout the generic definition of `hcompos`, it
varies. The `b` type is simply the range of the conversion being defined. The
generic definition of `hcompos` relies on an auxiliary function. The `mapRs`
method extends a function by applying it to every recursive field in a product
of fields.

\pagebreak

\begin{haskellq}
-- q is p with the fields of type a mapped to b
class MapRs a b p q where
  mapRs :: Applicative i => (a -> i b) -> p -> i q

instance MapRs a b (Rec a) (Rec b) where
  mapRs f (Rec x) = pure Rec <*> f x
instance MapRs a b (Dep x) (Dep x) where
  mapRs _ = pure
instance MapRs a b U       U       where
  mapRs _ = pure
instance (MapRs a b aL bL, MapRs a b aR bR) =>
  MapRs a b (aL :*: aR) (bL :*: bR) where
  mapRs f (l :*: r) =
    pure (:*:) <*> mapRs f l <*> mapRs f r
\end{haskellq}
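Hand-specializing the `mapRs` pattern to one particular product shape, `Rec a :*: Dep Int`, makes the traversal concrete (a monomorphic sketch with simplified stand-ins for the representation types, not the class instances themselves):

```haskell
{-# LANGUAGE TypeOperators #-}

-- simplified stand-ins for the representation types
newtype Rec a = Rec a deriving (Eq, Show)  -- recursive field
newtype Dep a = Dep a deriving (Eq, Show)  -- non-recursive field
data a :*: b = a :*: b deriving (Eq, Show)

-- mapRs specialized to the product  Rec a :*: Dep Int:
-- apply the conversion to the recursive field, keep the rest intact
mapRsP :: Applicative i
       => (a -> i b) -> (Rec a :*: Dep Int) -> i (Rec b :*: Dep Int)
mapRsP f (Rec x :*: d) = pure (\y -> Rec y :*: d) <*> f x
```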

The `hcompos` function, like most generic functions, handles sums
directly. Note how the `dcs` parameter varies in the head and context of the
following instance; it hosts this type-level traversal.

\begin{haskellq}
instance (HCompos a l b, HCompos a r b
         ) => HCompos a (l :+: r) b where
  hcompos cnv (L x) = hcompos cnv x
  hcompos cnv (R x) = hcompos cnv x
\end{haskellq}

\noindent The instance for `N` uses the enhanced data type reflection of
\yoko. It converts a fields type `dc`, with possible recursive occurrences of
`a`, to the type `b` in three steps. First, it completes the representation of
`dc` by applying `from`, eliminating the delay of representation. Second, it
applies the `mapRs` extension of `cnv` in order to convert the recursive
fields. The result is a new product, where the recursive fields are of type
`b`. This use of `mapRs` is well-typed due to the second constraint in the
instance context. That constraint also requires that the corresponding
constructor `dc'`, as determined by `FindDC`, has the appropriate fields. Thus,
the final step is to convert the new product to this fields
%
type before embedding it in `b`. This requires the last two constraints in the
context.

\begin{haskellq}
instance (Generic dc, MapRs a b (Rep dc) (Rep dc'),
          Just dc' ~ FindDC dc b,
          DC dc', Range dc' ~ b
         ) => HCompos a (N dc) b where
  hcompos cnv (N x) = 
    fmap (rejoin . (id :: dc' -> dc') . to) $$
    mapRs cnv $$ from x
\end{haskellq}

We have now explained the \yoko\ generic programming approach and the `hcompos`
function present in the definition of `best_nnf` in
\tref{Figure}{fig:arith-nnf-yoko}. In the next section, we develop a mechanism
that automatically partitions a data type into two subsets of its constructors:
one requested by a programmer and the other that contains the remaining
constructors.

\section{Implicit Partitioning of Disbanded Data Types}
\label{sec:partitioning}

The generic definition of `hcompos` relies on our two extensions for constructor
name reflection and delayed representation. This section
%
develops the implicit partitioning of disbanded data types, which builds on the
delayed representation extension in order to enable modular and clear use of
`hcompos`. First, this mechanism lets the programmer clearly specify an
anonymous subset of a data type's constructors. Second, it automatically
partitions a data type into that subset of interest and the subset of remaining
constructors.

In \tref{Figure}{fig:arith-nnf-yoko}, the `Y.partition` function implicitly
partitions the constructors of the `Exp` type in the definition of `w`, the
essence of `best_nnf`. The following definition of `w` would be ideally
concise, but it is ill-typed.

\begin{haskellq}
-- NB speculative: ill-typed
w :: Exp -> Bool -> NNF
w (Var s) n = NNF.Var n s
w (Neg e) n = w e (not n)
w e       n = hcompos w (disband e) n
\end{haskellq}

\pagebreak
\noindent
The problem is that the type of `e` in the third case is `Exp`. Thus the type
of `disband e` is `DCs Exp`, which includes the `Var_` and `Neg_` fields
types. Because `hcompos` is applied to the disbanded `e`, the `FindDC`
algorithm
%
will fail to find a constructor in `NNF` corresponding to `Neg`. The crucial
insight motivating our entire approach is that
%
those fields types will never occur at run-time, as they are guarded by the
other cases of `w`. Thus, the type error can be avoided by encoding this
insight in the type system. Given the \yoko\ interface up to this point,
%
this can only be done by working directly with the fields types.

\begin{haskellq}
-- NB speculative: well-typed, but immodular
w :: Exp -> Bool -> NNF
w e n = case disband e of
  L    (Var_ s)  -> NNF.Var n s
  R (L (Neg_ e)) -> w e (not n)
  R (R e)        -> hcompos w e n
\end{haskellq}

This second definition is well-typed but unacceptably immodular because it
exposes extraneous details of `Exp`'s representation as a sum to the
programmer. Specifically, the `L` and `R` patterns depend on how the `:+:` type
was
%
nested in `Rep Exp`. Modularity can be recovered by automating the partitioning
of sums that is currently explicit in the `L` and `R` patterns. This automation
is the final \yoko\ capability: the implicit partitioning of
%
disbanded data types. This capability enables the original definition of `w`
from \tref{Figure}{fig:arith-nnf-yoko}.

Beyond motivating implicit partitioning of disbanded data types, the above
definition of `w` is also the original motivation for fields types. Indeed, the
`hcompos` function can be defined --- perhaps less
%
conveniently --- with a more conservative extension of the `C` representation
type that indexes the `Tag` type family by \ig's constructor types. The above
definition of `w` would still need to avoid the type-error by partitioning the
constructors. However, where the fields types' syntax conveniently imitates the
original constructors, the \ig\ `C` type would compound the immodularity and
even obfuscate the code by exposing the representation of fields. Worse still,
the current development of implicit partitioning would require further
%
obfuscation of this definition in order to indicate which summand is intended
by
%
a given product pattern, since two constructors might have the same product of
fields. It is the fields types' precise imitation of the represented
constructor that simultaneously encapsulates the representational details and
determines the intended summand.

\begin{figure}[t]
\begin{haskell}
-- embedding relation
class Embed sub sup where embed :: sub -> sup

-- partitioning relation (ternary)
class Partition sup subL subR where
  partition_ :: sup -> Either subL subR

-- set difference function
type family (:-:) sum sum2
partition :: Partition sup sub (sup :-: sub) =>
            sup -> Either sub (sup :-: sub)
partition = partition_

-- assembling fields type consumers
one   :: (dc -> a)            -> N dc        -> a
(|||) :: (l -> a) -> (r -> a) ->   l :+:   r -> a
(||.) :: (l -> a) -> (r -> a) ->   l :+: N r -> a
(.||) :: (l -> a) -> (r -> a) -> N l :+:   r -> a
(.|.) :: (l -> a) -> (r -> a) -> N l :+: N r -> a
\end{haskell}
\caption{Interface for implicitly partitioning disbanded data
  types.\label{fig:partitioning}}
\end{figure}

The implicit partitioning interface is declared in
\tref{Figure}{fig:partitioning}. Its implementation interprets sums as sets:
`N` constructs a singleton set, and `:+:` unions two sets. The `Embed` type
class models the subset relation, with elements identified by the `Equal` type
family from \tref{Section}{sec:pretending}. Similarly, the `Partition` type
class models partitioning a set into two subsets. Finally, set difference is
modeled by the `:-:` family, which determines the right-hand subset of a
partitioning from the left; it gives the value-level `partition` function a
correspondingly more specific type than the `partition_` method.

\begin{haskellq}
type instance (:-:) (N a) sum2 =
  If (Elem a sum2) Void (N a)
type instance (:-:) (a :+: b) sum2 =
  Combine (a :-: sum2) (b :-: sum2)
\end{haskellq}

\noindent The `Elem` family models the decidable membership relation, again
using the `Equal` type family for identifying elements.

\begin{haskellq}
type family Elem a sum :: Bool
type instance Elem a (N b) = Equal a b
type instance Elem a (s :+: t) =
  Or (Elem a s) (Elem a t)
\end{haskellq}

\noindent The `Combine` type family is only used to prevent the empty set,
modeled by `Void`, from being represented as a union of empty sets.

\begin{haskellq}
data Void
type family Combine sum sum2 where
  Combine Void a = a
  Combine a Void = a
  Combine a b    = a :+: b
\end{haskellq}

\noindent Because `Void` is used only in the definition of `:-:`, an empty
result of `:-:` leads to type-errors in the rest of the program. This is
reasonable since the empty set represents a data type with no~constructors.
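At the value level, `:-:`, `Elem`, and `Combine` together compute ordinary set difference over summands. The following illustrative sketch (with constructor names as strings standing in for fields types; the `diffSum` name is our own) captures that intent:

```haskell
-- value-level intent of (:-:): drop every summand of the first
-- "sum" that is an element of the second
diffSum :: [String] -> [String] -> [String]
diffSum sum1 sum2 = [n | n <- sum1, n `notElem` sum2]
```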

We omit the `Embed` and `Partition` instances for space. Because they only
treat the `N`
%
and `:+:` types, the programmer need not declare their own. They have the same
essential semantics as similar classes from existing work, such as that of
\citet{a-la-carte}. However, existing work defines these classes with
overlapping instances. We use type-level programming to avoid overlap for
reasons discussed by \citet[\#anti-over]{oleg-typeeq}. We use the decidable
`Equal` type family and its derivatives, such as `Elem`, to explicitly
distinguish between instances that would otherwise
overlap. \citet[\#without-over]{oleg-typeeq} develops a similar method.

Finally, functions that consume fields types can be assembled into larger
functions with the `one` function and the family of disjunctive
operators. Each such function's domain is a subset of constructors. The
`|||` operator is directly analogous to the `Prelude.either` function. Its
derivatives are defined by composition in one argument with `one`. The `.|.`
function is used in the definition of `best_nnf`.

When combined with the `disband` and `partition` functions, these operators
behave like an extension of the Haskell case expression with more exact
types. For example, the well-typed but immodular definition of `w` at the
beginning of this subsection could be directly transliterated to the following.

\begin{haskellq}
exact_case :: Partition (DCs t) dcs ... =>
  ((DCs t :-: dcs) -> a) -> t -> (dcs -> a) -> a
exact_case g x f =
  either f g $$ partition $$ disband x

w :: Exp -> Bool -> NNF
w e = exact_case (hcompos w) e $$
  (\(Var_ s) -> \n -> VarN n s)   .|.
  (\(Neg_ e) -> w e . not)
\end{haskellq}

The `|||` family of operators helps the programmer use disbanded data types in a
clear way. They resemble the `|` syntax used in many other functional
%
languages to separate the alternatives of a case expression. Their inclusion in
the \yoko\ interface for partitioning helps simulate an exactly-typed case
expression.

%% \begin{haskellq}
%% instance (Partition1 (Elem x subL) x subL subR
%%          ) => Partition (N x) subL subR where
%%   partition_ = partition_N (Proxy :: Proxy (Elem x subL))

%% instance (Partition a subL subR, Partition b subL subR
%%          ) => Partition (a :+: b) subL subR where
%%   partition_ = foldPlus partition_ partition_
%% \end{haskellq}

%% \begin{haskellq}
%% class Partition1 (isLeft :: Bool) x subL subR where
%%   partition1 :: Proxy isLeft -> N x -> Either subL subR

%% instance Embed (N x) subR =>
%%          Partition1 False x subL subR where
%%   partition1 _ = Right . embed
%% instance Embed (N x) subL =>
%%          Partition1 True  x subL subR where
%%   partition1 _ = Left  . embed
%% \end{haskellq}

\pagebreak

\section{Example: Lambda Lifting}
\label{sec:lambda-lifting}

We demonstrate the utility of fields types, `hcompos`, and implicit
partitioning with a real-world example. Lambda-lifting (also known as closure
conversion; see, \eg, [18]) makes for a compelling example because it is a
standard compiler pass and usually has many homomorphic cases. For example, if
`hcompos` were used to define a lambda-lifting function over the GHC 7.4 data
type for expressions, it would handle approximately 35 out of 40 constructors
implicitly. Lambda-lifting also involves more sophisticated effects than the
%
`(->) Bool` applicative functor from \tref{Section}{sec:motivation}, and
therefore demonstrates the kind of subtlety that can be necessary in order to
encapsulate effects with an applicative functor.

\subsection{The Types and The Homomorphic Cases}

The lambda-lifting function lifts lambdas out of untyped lambda calculus
terms. As the `ULC` data type has been a working example in the previous sections,
%
its instances for \yoko\ type families and classes are assumed in this section.

\begin{haskellq}
data ULC = Var Int | Lam ULC | App ULC ULC
\end{haskellq}

\noindent Lambda-lifting generates a top-level function declaration for each
sequence of lambdas found in a `ULC` term. Each declaration has two sets of
parameters: one for the formal arguments of the lambdas it replaces and one for
the variables that occur free in the original body. We call
%
the second kind of variable \emph{captive} variables, as the lambdas capture
them from the lexical environment.

The bodies of the generated declarations have no lambdas. Their absence is the
characteristic property that lambda-lifting
%
establishes. It is encoded in a derivative of `ULC`, the `TLF` data type, which
is used for the bodies of top-level functions.

\begin{haskellq}
data TLF = Top Int [Occ] | Occ Occ | App TLF TLF
data Occ = Par Int | Env Int
\end{haskellq}

\noindent Instead of modeling lambdas, `TLF` models occurrences of top-level
functions, invocations of which must always be immediately applied to the
relevant captives. It also distinguishes occurrences of formals and
captives. Like the `ULC` type, `TLF` is also assumed to have instances for the
\yoko\ type families and classes.

The range of lambda-lifting is the `Prog` data type, which pairs a telescope of
top-level function declarations with a main term.

\begin{haskellq}
data Prog = Prog [Dec] TLF
type Dec = (Int, Int, TLF) -- #\pound# captives, #\pound# formals
\end{haskellq}

\noindent Each function declaration specifies the sizes of its two sets of
variables: the number of captives and the number of formals.

Because both of the `ULC` and `TLF` data types have a constructor named `App`,
the definition of lambda-lifting can delegate the
%
case for applications to `hcompos`. If constructors for any other syntactic
constructs unrelated to binding were added to both `ULC` and `TLF`, the
definition of
%
lambda-lifting below would not need to be adjusted.

The `lambdaLift` function is defined by its cases for lambdas and
variables. (We define the monad `M` next.)

\begin{haskellq}
lambdaLift :: ULC -> Prog
lambdaLift ulc = Prog ds tlf where
  (tlf, ds) = runM (ll ulc) ((nFree, IntMap.empty), 0)
  nFree = IntSet.findMax $$ freeVars ulc

ll :: ULC -> M TLF
ll tm = exact_case (hcompos ll) tm $$ llVar .|. llLam

llVar :: Var_ -> M TLF   ;   llLam :: Lam_ -> M TLF
\end{haskellq}

\noindent This is the principal software engineering benefit of `hcompos`:
homomorphic cases are completely implicit. Furthermore, fields types make it
convenient to handle specific constructors separately.

The traversal implementing lambda-lifting must collect the top-level functions
as they are generated and also maintain a renaming between `ULC` variables and
`TLF` occurrences. The monad `M` declared in \tref{Figure}{fig:llm} provides
these effects. They are automatically threaded by the `hcompos` function: every
monad is also an applicative functor.

\begin{figure}[t]
\begin{haskell}
-- #\pound# of formal variables and the mapping for captives
type Rename = (Int, IntMap Int)
newtype M a = M {runM  :: (Rename, Int) -> (a, [Dec])}

instance Monad M where
  return a = M $$ \_        -> (a, [])
  m >>= k  = M $$ \(rn, sh) ->
    -- NB backwards state: a and w' are circular
    let (a, w)  = runM m     (rn, sh + length w')
        (b, w') = runM (k a) (rn, sh)
    in (b, w ++ w')

-- monadic environment
ask   :: M Rename
local :: Rename -> M a -> M a

-- monadic output and its abstraction as backward state
emit            :: Dec -> M ()
intermediates   :: M Int
ignoreEmissions :: M a -> M a
\end{haskell}
\caption{The monad for lambda-lifting.\label{fig:llm}}
\end{figure}

The `Rename` type includes the number of formal variables that are in scope and
a map from the captives to their new names. It is a standard monadic
environment, accessed with `ask` and updated with `local`. The list of
generated declarations is a standard monadic output, collected in left-to-right
order with the `[]`/`++` monoid, and generated via the `emit` function. The
final effect is the other `Int` in the input, queryable with `intermediates`
and reset to `0` by `ignoreEmissions`. It corresponds to the number of
emissions by \emph{subsequent} computations; note the circularity in the
definition of `>>=`. This \emph{backwards state} \cite[\S 2.8]{essence} is
crucial to maintaining a de Bruijn encoding for the occurrences of top-level
functions.
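The backwards flow can be isolated in a minimal, self-contained monad. The `BW` monad below is an illustrative reduction of `M` that drops the `Rename` environment and keeps only the circular threading of the emission count:

```haskell
-- minimal backwards-state monad: the Int state flows right-to-left,
-- so an action can observe emissions made by *later* actions
newtype BW a = BW { runBW :: Int -> (a, [String]) }

instance Functor BW where
  fmap f m = BW $ \sh -> let (a, w) = runBW m sh in (f a, w)
instance Applicative BW where
  pure a = BW $ \_ -> (a, [])
  mf <*> mx = BW $ \sh ->
    let (f, w)  = runBW mf (sh + length w')  -- lazily uses later output
        (x, w') = runBW mx sh
    in (f x, w ++ w')
instance Monad BW where
  m >>= k = BW $ \sh ->
    let (a, w)  = runBW m     (sh + length w')  -- circular, as in M's >>=
        (b, w') = runBW (k a) sh
    in (b, w ++ w')

emit :: String -> BW ()
emit s = BW $ \_ -> ((), [s])

later :: BW Int  -- the analog of intermediates
later = BW $ \sh -> (sh, [])
```

Here `later` plays the role of `intermediates`: run with initial count `0`, it yields exactly the number of emissions made by subsequent actions, which is well-defined because those emissions do not depend on its result.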

\subsection{The Interesting Cases}

Variables are straight-forward. Each original variable is a reference either to
a lambda's formal argument or to its captured lexical environment. The `lookupRN`
function uses the monadic effects to correspondingly generate either a `Par` or
an `Env` occurrence.

\begin{haskellq}
llVar (Var_ i) = pure (\rn -> lookupRN rn i) <*> ask

lookupRN :: Rename -> Int -> Occ
lookupRN (nLocals, m) i | i < nLocals = Par i
                        | otherwise =
  case IntMap.lookup (i - nLocals) m of
    Nothing -> error "free var"
    Just j  -> Env j
\end{haskellq}

\begin{figure}[t]
\begin{haskell}
llLam :: Lam_ -> M TLF
llLam lams@(Lam_ ulcTop) = do
  -- get the body; count formals; determine captives
  let (k, ulc) = peel ulcTop
  let nLocals = 1 + k
  let captives =
        IntSet.toAscList $$ freeVars $$ rejoin lams

  -- generate a top-level function from the body
  do let m = IntMap.fromDistinctAscList $$
             zip captives [0..]
     tlf <- ignoreEmissions $$
            local (const (nLocals, m)) $$ ll ulc
     emit (IntMap.size m, nLocals, tlf)

  -- replace lambdas with an invocation of tlf
  rn <- ask
  sh <- intermediates
  return $$
    Top sh $$ map (lookupRN rn) $$ reverse captives
\end{haskell}
\caption{The lambda-lifting case for lambdas.\label{fig:llLam}}
\end{figure}

The case for lambdas is defined in \tref{Figure}{fig:llLam}. It uses two
auxiliary functions: `freeVars` for computing free variables and `peel` for
peeling a sequence of lambdas off the top of a `ULC` term. We omit the
definition of `freeVars` because it uses only \ig.

\begin{haskellq}
freeVars :: ULC -> IntSet

peel :: ULC -> (Int, ULC)
peel = w 0 where
  w acc (Lam tm) = w (1 + acc) tm
  w acc tm       = (acc, tm)
\end{haskellq}

\noindent
Though `peel` is a property-establishing function---the outermost constructor
of the resulting `ULC` is never `Lam`---we do not encode the property as a data
type because we do not actually depend on it.
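For example (restating the `ULC` declaration so the snippet stands alone), `peel` strips the two leading lambdas from the de Bruijn term $\lambda.\lambda.\,(0\;1)$, leaving its body and a count of `2`; lambdas in subterms are untouched:

```haskell
data ULC = Var Int | Lam ULC | App ULC ULC deriving (Eq, Show)

-- peel, as defined above: count and strip the leading lambdas
peel :: ULC -> (Int, ULC)
peel = w 0 where
  w acc (Lam tm) = w (1 + acc) tm
  w acc tm       = (acc, tm)
```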

The `llLam` function uses `peel` to determine the sequence of lambdas' length
and its body, called `nLocals` and `ulc` respectively. The sequence of lambdas'
captives are precisely its free variables. The body is lambda-lifted by a
recursive call to `ll` with locally modified monadic effects. Its
%
result, called `tlf`, is the body of the top-level function declaration
generated by the subsequent call to `emit`. Since `tlf` corresponds to the
original lambdas, an invocation of it replaces them. This invocation explicitly
passes the captives as the first set of actual arguments.

The de Bruijn name used to invoke the newly generated top-level function is
determined by the `intermediates` monadic effect. The name cannot simply be `0`
because sibling terms to the right of this sequence of lambdas also generate
top-level declarations, and the left-to-right collection of monadic output
places them between this `Top` reference and the intended top-level
function. This is the reason for the backwards state in `M`. The corresponding
circularity in the definition of `>>=` is guarded here because `tlf` is emitted
without regard to the number of subsequent emissions, though its value does
depend on that number.

The recursive call to `ll` must use a new monadic environment for renaming that
maps occurrences of captive variables to the corresponding parameters of the new
top-level function; `local` provides the necessary environment to the
subcomputation. The recursive call must also ignore the emissions of
computations occurring after this invocation of `llLam`,
%
since those computations correspond to siblings of the sequence of lambdas, not
to siblings of the lambda body---the body of a lambda is an only child. This is
an appropriate semantics for `intermediates` because the emissions of those
ignored computations do not end up between `tlf` and the other top-level
functions it invokes.

% \footnote{\Ie\ in some function using this computation's result via
% \inlineHaskell{>>=}.}

\subsection{Summary}

The generic homomorphism `hcompos` delivers a software engineering benefit in
real programs. In
%
this example, `hcompos` implicitly handled only the case for applications, but
the same code would support any other syntactic construct unrelated to
%
binding. Just as in the motivating `best_nnf` example, the \yoko\ approach,
with its constructor name reflection, delayed representation of fields, and
implicit partitioning, enables a natural definition of `lambdaLift` that
requires an explicit treatment only for the inherently
%
relevant cases. This implicit handling of homomorphic cases mitigates the code
duplication inherent to property-establishing functions, a major practical cost
of exact typing.

\pagebreak

\section{Disbanding Real-World Types}

Mutual recursion is common in real-world data types, but the example data types
in the previous sections are not mutually recursive. In particular, the
definition of `hcompos` from \tref{Section}{sec:hcompos-defn} only works with
singly recursive data types. That definition also assumes that the `Rec`
representation type is only applied to recursive occurrences, but the
\ig\ semantics of `Rec` permit it to be applied to a composite type that
contains recursive occurrences. In this section, we redefine `hcompos` to
support mutual recursion and improve the representation of composite recursive
fields.

The previous declaration of `hcompos` cannot support mutual recursion because
of the type of its first argument. This type constrains the function argument to
convert only one data
%
type. For mutually recursive data types, this function must instead convert
each type in the mutually recursive family, and so its type must be polymorphic
over those types. There are at least two ways to generalize the first argument of
`hcompos` to support mutually recursive data types.

\subsection{Encoding Relations of Types with GADTs}

The first generalization of `hcompos` supports mutually recursive families
using the same technique as \citet{compos}. This approach uses
\emph{generalised algebraic data types} (GADTs) \cite{gadts} and higher-order
polymorphism in order to express the
%
type of polymorphic functions that can be instantiated for precisely the data
types comprising a given mutually recursive family. Its essential ideas are
separated into more elementary components in the \texttt{multirec} generic
programming technique \cite{multirec}. The \texttt{multirec} approach uses
GADTs to encode sets of types. For example, a pair of mutually recursive data
types for even and odd Peano numbers could be encoded as the set \{`Odd`,
`Even`\} by the `OddEven` GADT.

\begin{haskellq}
data Odd = Odd Even
data Even = Nil | Even Odd

data OddEven :: * -> * where
  OddT  :: OddEven Odd
  EvenT :: OddEven Even
\end{haskellq}

\noindent The index of `OddEven` can only ever be `Odd` or `Even`. It thus
emulates a type-level set inhabited by just those types (\ie\ a
\emph{subkind}), and a function with type `forall s. OddEven s -> s -> a` can
consume precisely either of those types. The \texttt{multirec} approach also
provides a class that automatically selects the correct GADT constructor to
witness the membership of a given type in a given set.

\begin{haskellq}
class Member set a where memb :: set a

instance Member OddEven Odd  where memb = OddT
instance Member OddEven Even where memb = EvenT
\end{haskellq}

\noindent The `Member` class is used throughout the \texttt{multirec} approach
for those generic definitions with value-level parameters that must be
instantiable at any type in the mutually recursive family.
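As a small illustration of this use of `Member` (our own sketch, not code from
\texttt{multirec}; it assumes existential quantification is enabled), `memb`
lets a value of any family member be paired with its GADT witness
automatically, without the caller naming the witness.

\begin{haskellq}
-- hypothetical helper: pack a family member with its witness
data Some set = forall a. Some (set a) a

intoFamily :: Member set a => a -> Some set
intoFamily = Some memb
\end{haskellq}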

GADTs can similarly encode the relation between types that is required for
`hcompos`. The `HCompos` class can be parameterized over this relation as
follows.

\begin{haskellq}
class HCompos rel rep b where
  hcompos :: Applicative i =>
    (forall a b. rel a b -> a -> i b) -> rep -> i b
\end{haskellq}

\noindent Where the first argument of `hcompos` was formerly a function from
`a` to `b`, it is now a function between any two types that are related by the
GADT-encoded `rel` relation. The former type of the first argument constrained
the (implicit) relation to exactly \{`a` $\mapsto$ `b`\} and nothing else. The
new type removes this excessive constraint.

This variant of `hcompos` is defined with essentially the same instances as
declared in \tref{Section}{sec:hcompos-defn}: the case for the `:+:` type folds
through the sum and the case for `N` invokes `FindDC` and the auxiliary `MapRs`
class. The `MapRs` class is still used to apply the conversion function to the
recursive fields, but it requires an adjustment in order to provide the
conversion function
%
with its `rel` argument. The `Rec` instance must now use the `Related` type
class, analogous to the `Member` class, in order to determine the first
argument to the conversion function.

\begin{haskellq}
class Related rel a b where rel :: rel a b

instance Related rel a b =>
  MapRs rel (Rec a) (Rec b) where
  mapRs cnv (Rec x) = pure Rec <*> cnv rel x
\end{haskellq}

This encoding of relations requires the user to define the GADT. Otherwise, it
behaves just like the `hcompos` for singly
%
recursive types. For example, a user might need to convert the `Odd` and `Even`
data types to a list-like data type that uses an index to encode its parity.
`Peano` is such a data type with a `Bool` index, where `True` indicates even
parity and `False` odd. We assume the obvious `Not` type family.

\begin{haskellq}
data Peano :: Bool -> * where
  Zero ::                  Peano True
  Succ :: Peano (Not p) -> Peano p
\end{haskellq}
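For concreteness, the assumed `Not` type family can be spelled out as follows;
this is our rendering of the \quotes{obvious} definition, using the same
promoted `Bool` kind as the `Peano` index.

\begin{haskellq}
type family Not (b :: Bool) :: Bool
type instance Not True  = False
type instance Not False = True
\end{haskellq}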

The relation at the core of this conversion is
%
\{`Odd`~$\mapsto$~`Peano False`, `Even`~$\mapsto$ `Peano True`\}, which is
encoded with the `Rel` GADT.

\begin{haskellq}
data Rel :: * -> * -> * where
  RelOdd  :: Rel Odd  (Peano False)
  RelEven :: Rel Even (Peano True)

instance Related Rel Even (Peano True)  where
  rel = RelEven
instance Related Rel Odd  (Peano False) where
  rel = RelOdd
\end{haskellq}

The conversion function between the two type families can use `hcompos` with
the `Rel` relation for any shared constructors. In the `w` function below, the
case for `Nil` is implicit.

\begin{haskellq}
newtype Id a = Id {unId :: a}

w :: Rel oe pl -> oe -> Id pl
w RelOdd  (Odd e) = pure Succ <*> w rel e
w RelEven e       = exact_case (hcompos w) $$
  \(Even_ o) -> pure Succ <*> w rel o

oe2p :: Related Rel oe (Peano p) => oe -> Peano p
oe2p = unId . w rel
\end{haskellq}
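For instance, converting a small even number exercises both explicit cases and
the implicit `Nil` case:

\begin{haskellq}
-- oe2p (Even (Odd Nil)) evaluates to
--   Succ (Succ Zero) :: Peano True
\end{haskellq}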

\subsection{Encoding Relations of Types with Type Classes}

The second generalization of `hcompos` uses the established
\linebreak\ig\ techniques for
%
mutually recursive data types. The crucial insight is that multiparameter type
classes encode type-level relations. The only distinction is that type classes
encode open relations while GADTs encode closed relations. Since the `hcompos`
type relation need not be closed, the type class approach
suffices.

The type relation between types in the domain family and their target types in
the range family must still be specified by the user. Instead of being defined
with a GADT, however, that relation is defined with instances of `HCompos` for
a given conversion, modeled by the `cnv` type parameter.

\begin{haskellq}
class HCompos cnv a b where
  hcompos :: Applicative i => cnv i -> a -> i b
\end{haskellq}

\noindent This declaration interprets `hcompos` as a mapping from some `*->*`
type `cnv` to a conversion between `a` and `b`. The instances for a given `cnv`
type define not only `hcompos` for that conversion but also the type relation
characterizing that conversion. As a result, that relation is defined
simultaneously with the conversion itself: this `HCompos` class subsumes the
`Related` class. Thus the `MapRs` instance calls `hcompos` instead of using
`rel`.

\begin{haskellq}
instance HCompos cnv a b =>
  MapRs cnv (Rec a) (Rec b) where
  mapRs cnv (Rec x) = pure Rec <*> hcompos cnv x
\end{haskellq}

The conversion from `Odd`/`Even` to `Peano` is expressed as follows.

\begin{haskellq}
data OE2P i = OE2P

instance HCompos OE2P Odd  (Peano False) where
  hcompos cnv (Odd e) = pure Succ <*> hcompos cnv e
instance HCompos OE2P Even (Peano True)  where
  hcompos cnv = exact_case (hcompos cnv) $$
    \(Even_ o) -> pure Succ <*> hcompos cnv o

oe2p :: HCompos OE2P oe (Peano p) => oe -> Peano p
oe2p = unId . hcompos OE2P
\end{haskellq}

The `OE2P` value in this definition subsumes both the `w` function and the
`Rel` data type from the GADT-based definition of `oe2p`. This combination
makes the type class encoding of relations more concise.

\subsection{Composite Recursive Fields}

All three definitions of `hcompos`, for singly and mutually recursive data
types, require that `Rec` is only applied to recursive occurrences. Thus
`hcompos` cannot be directly used for data types with recursive composite
fields---those that apply a higher-kinded type, such as `[]`, `(,)`, or
`((,)___Int)` to recursive occurrences. Since real-world data types have such
fields, we show how to represent them without applying `Rec` to a
composite field.

For `*->*` types, the composite recursive field problem can be solved with a
general instance for `HCompos` using overlapping instances. This instance
converts `Rec (f a)` to `Rec (f b)` and constrains `f` to be a `Functor` and
`a` to be convertible to `b`. This approach, though, fails once a similar
instance is also defined for `*->*->*` types, such as the very common
`(,)`. That instance would introduce a `BiFunctor` constraint and a `HCompos`
constraint for each type argument. The problem is that the `*->*` type
`((,)___Int)` would match the `*->*->*` instances, and therefore require an
`HCompos` constraint that can convert `Int` to an `Int`. Ultimately the user
may have to declare ad-hoc instances of `HCompos`, which is very undesirable.

The overlapping instance approach is an insufficient workaround. The problem is
the imprecision of the \ig\ representation. We instead declare additional
representation types for applications of type constructors. It is
straightforward to add representation types because \ig\ intentionally uses
only open features: type families and classes.

\begin{haskellq}
newtype Arg1 t a   = Arg1 (t a)
newtype Arg2 t a b = Arg2 (t a b) -- etc, as needed
\end{haskellq}

\noindent In the `Arg1` and `Arg2` representation types, the `a` and `b` type
parameters must be representation types and the `t` parameter must not
be. These types enable a more precise representation of composite fields in
which `Rec` is only applied to recursive type occurrences. The formerly
overlapping instances are now distinguished by having these representation
types in their instance head. Data type representations would use the
unambiguous `Arg1 ((,) Int)`.
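Concretely, composite fields of a hypothetical `Rose` data type would be
represented as follows; this is schematic, following the `Arg1 ((,) Int)`
example above.

\begin{haskellq}
-- a field of type [Rose]:
--   Arg1 []        (Rec Rose)
-- a field of type (Int, Rose):
--   Arg1 ((,) Int) (Rec Rose)
\end{haskellq}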

This section explains how the generic homomorphism `hcompos` can be applied to
real-world data types with mutual recursion and composite recursive fields. The
handling of mutually recursive data types is consistent with \ig. Robust
support for composite recursive fields, on the other hand, requires a new
variety of representation types. This is our final refinement to \ig, in
addition to the extensions and techniques discussed in
Sections~\Nref{sec:yoko}~and~\Nref{sec:partitioning}.

\pagebreak

\section{Related Work}


The work most related to ours is that of \citet{compos}, which defines and
demonstrates the `compos` function. The `compos` function can be used much like
`hcompos` to handle the homomorphic cases in the definition of a function, but it
is only applicable when the domain and range are the same type. Many existing
generic programming techniques can generically define `compos`, so programmers
can use it \quotes{for free} to improve the modularity of their definitions. We
add heterogeneity to `compos` in order to make its benefits available when
defining the property-establishing functions that are pervasive in exactly
typed programs.

We divide the other related work into methods for exact typing and generic
programming techniques.

\paragraph{Exact Typing}

Exact typing is essential for Turner's \emph{total functional programming}
\cite{total-fp}, which requires every function to be total: all patterns must
be exhaustive and all recursion well-founded. Without termination guarantees,
exhaustive pattern matching is trivially achieved by just diverging in what
would be the omitted cases for non-exhaustive patterns. We adopt the stance of
\citet{fast-and-loose} and consider the challenges of exact typing without
concern for termination. Exact typing makes it straightforward for patterns to
be exhaustive without having to pass around handlers for the error cases, since
such cases are eliminated upstream by property-establishing functions.

While the most exact possible types require dependent types, we are interested
in the best approximation supported by mainstream non-dependent typing. Even
so, we believe our approach is also applicable in dependently typed languages
like {\sc Agda}; in fact, it should be far simpler to implement with
first-class type-level functions. A significant advance in the typing of
functional programs was the recent adoption of GADTs \cite{gadts} from type theory. We have
experimented with using GADTs to model the closed set of fields types instead
of enumerating the elements of the set with `:+:` and `N`, but have found it
detrimental to the syntax of fields types and thus a burden on the user without
clear benefit for the examples in this paper.

We have found two languages with explicit support for subsets of constructors,
the Common Algebraic Specification Language (\casl) \cite{casl} and the OCaml
programming language \cite{ocaml}. \casl\ supports
named declaration of
%
constructor subsets, by declaring data types as implicit unions of smaller data
types \cite[\S 4]{casl}. In a hypothetical \casl-Haskell pidgin, this
corresponds to data type declarations like the following.

\begin{haskellq}
data List a = Nil | data (NeList a)
data NeList a = Cons a (List a)
\end{haskellq}

\noindent This approach requires the subsets of data types to be identified
\emph{a priori} and invasively incorporated into the declaration of the data
type itself. Modularity concerns are less important for specification languages
like \casl, but highly relevant to real-world programming. For example, the
\casl\ approach cannot be used to characterize subsets of data types defined in
libraries, since their declaration cannot be changed. Our approach is
%
applicable to library data types, because constructor subsets are anonymous and
independent of the containing data type declaration. This is made possible by
the `Tag`, `Range`, `DC`, and `DT` type families, and it is made practical by
generating those instances as well as the related fields types with Template
Haskell, which is a common and accepted dependency for Haskell generic
programming.

In OCaml, polymorphic variants allow any name, called a \emph{variant}, to
occur as if it were a constructor \cite{pv}. Both polymorphic variants and
\yoko's disbanded data types provide anonymous subsets of
constructors. However, polymorphic variants, as a widely applicable language
feature, model subsets with purposefully less exact types. In particular, an occurrence
of a variant is polymorphic in its range: it constructs any data type that has
a constructor with the same name and arity. Fields types, on the other hand,
are associated with an original data type via the `Range` type family.

The type-indexed coproducts of \citet[\S C]{hlist} also simulate some aspects
of polymorphic variants. Type-indexed coproducts are a more restricted version
of \yoko's sums and provide a capability similar to implicit
partitioning. Where `:+:` models union of sums in \yoko, the operator of the
same name in the work of \citet{hlist} specifically models a type-level cons
and is therefore not associative.

The recent work on data kinds \cite{promotion} promotes constructors to
types that are reminiscent of fields types. These type-level constructors of
data kinds, though, already have a notion of corresponding term-level value,
the \emph{singleton types} \cite[\S 3.6]{singleton-types}. Moreover, the
details of a type-level constructor inheriting its data type's type parameters,
as fields types must, are unclear. There may yet be an interesting correlation.

\paragraph{Generic Programming}

Generic programming techniques are broadly characterized by the \emph{universe}
of representable types. \yoko\ has the same universe as \ig. We believe \yoko's
enhancements are orthogonal to other extensions of \ig\ and will investigate
integration. Other major generic programming techniques can be similarly
extended.

The most exact typing supported by Haskell requires sophisticated data types
not in the \ig\ universe. In particular, nested recursion \cite{nested} and
GADTs are crucial to encoding many interesting properties, such as well-scoped
and/or well-typed term representations. While some uses of these features can
be forced into the \ig\ representation types, current research, like that of
\citet{gp-indexed}, is investigating more natural representations. One
derivative of \ig\ that is better suited for these features is \gd
\cite{generic-deriving}, which was
%
recently integrated with GHC. Most of the \gd\ representation types are
promotions of the \ig\ types from the kind `*` to the kind
`*->*`. \gd\ represents type constructors directly, with the clever
interpretation of `*` types as `*->*` types that do not use their type
parameter. Since type
%
parameters are a prerequisite for nested recursion and GADTs, a representation
so designed handles such sophisticated data types more naturally. We plan to add
the \yoko\ extensions to \gd.

\section{Conclusion}

The generic homomorphism, defined in this paper as `hcompos`, factors out a
major pattern in the definition of functions that convert between data types
with analogous constructors. Our approach makes exact typing and its assurance
benefits more affordable in real programs. We used `hcompos` to modularly
define a lambda-lifting function that encodes the absence of lambdas with its
exact range type. Future work will investigate whether the existence of
analogous constructors is too strong a prerequisite and, if so, attempt to
relax it.

Existing generic programming techniques are inapplicable to the `hcompos`
function specifically because of its distinct domain and range types. In order
to support this heterogeneity, we extend the \ig\ approach with type-level
reflection of constructor names and a delayed representation. These generic
programming extensions may be more broadly useful, as they provide fundamental
capabilities that enable the genericity of `hcompos`. In particular, the delayed
representation lets the programmer use anonymous subsets of a data type's
constructors without having to explicitly define them as a new type. Our
extension of \ig\ and a definition of `hcompos` are available at
\url{http://hackage.haskell.org/package/yoko}.

\bibliographystyle{abbrvnat}
\bibliography{yoko}

\end{document}
