\documentclass[11pt,twoside]{article}
\usepackage{makeidx,multicol}
\input{psinput}
\input{psfig}
\usepackage{epsf}
\usepackage{epsfig}
\usepackage{graphics,color}
 



\newcommand{\om}{$\Omega$mega}
\newcommand{\plus}[2]{\{plus {#1} {#2}\}}
\newcommand{\existsH}[3]{\begin{tabular}[t]{l}
                          {\small exists} {\small {#1}} . \\
                          ({\small {#2}} \\
                          ,{\small {#3}}) \\
                       \end{tabular}}

\setlength{\textheight}{8.5in}
\setlength{\textwidth}{6.4in}
\setlength{\oddsidemargin}{-.2in}
\setlength{\evensidemargin}{-.2in}
\setlength{\topmargin}{-0.25in}

\renewcommand{\textfraction}{0.1}
\renewcommand{\topfraction}{.80}

%\setcounter   {topnumber}{3}
%\renewcommand {\topfraction}{1.0}
%\setcounter   {totalnumber}{4}
%\renewcommand {\textfraction}{0.2}
%\renewcommand {\floatpagefraction}{0.99} 
%\renewcommand {\floatsep}{0.1}
 


%\setlength{\linewidth}{0.2cm}

\begin{document}



% The first Title Page
\pagestyle{empty}
~
\begin{center}
{\sf
%\hrule
\vspace{0.5cm}
{\bf {\Large \om~ Users' Guide}}\\
\vspace{0.5cm}
\input{version}
%\hrule
{\large Tim Sheard}\\
Computer Science Department\\
Maseeh College of Engineering and Computer Science\\
Portland State University\\
}
\end{center}

\tableofcontents

\newpage
This manual corresponds to the \om\ implementation with the
following version information.

\begin{center}
\input{version}
\end{center}

\input{license}


\newpage
\pagestyle{plain}

\setcounter{page}{1}


\title{\om~ Users' Guide}
\author{
Tim Sheard\\
Computer Science Department\\
Maseeh College of Engineering and Computer Science\\
Portland State University\\
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\maketitle

\section{Introduction}\label{intro}

The \om\ interpreter is styled after the Hugs Haskell Interpreter. In fact,
users of Hugs will find many similarities, and this is intentional. Like Hugs,
the \om\ interpreter is a read-typecheck-eval-print loop. It can load whole
files, and the top-level loop allows the user to experiment by typing
expressions to be evaluated, and by querying the program environment.

The \om\ syntax is based upon the syntax of Haskell. If you're unsure
of what syntax to use, a best first approximation is to use Haskell
syntax. It works most of the time. While clearly descended from Haskell,
\om\ has several important syntactic and semantic differences. The
purpose of this manual is to identify the differences and give
examples of the differences so users can quickly learn them. These
differences include:

\begin{itemize}

\item \om\ supports the introduction of Generalized Algebraic Datatypes.
It does this by generalizing the {\tt data} declaration to
include explicit types for every constructor (Section \ref{gadt}).

\item \om\ adds the ability to introduce new types, and new kinds to
classify these types (Section \ref{kinds}).

\item \om\ allows users to write functions at the type level, and to use these
functions in the type of functions at the value level (Section \ref{typefun}).

\item \om\ is strict while Haskell is lazy. But \om\ has an experimental
explicit laziness annotation (Section \ref{lazy}).

\item \om\ does not support Haskell's class system. Because of this
\om\ supports the {\tt do} syntax using another mechanism (Section \ref{monads}).

\item \om\ has only a fixed set of infix operators, which cannot be extended (Section \ref{op}).

\item \om\ only supports fully applied type synonyms (Section \ref{syn}).

\item \om\ has a very primitive module system (Section \ref{module}).

\item \om\ supports a notion of anonymous existential types (Section \ref{exists}).

\item \om\ has an experimental implementation of Pitts and Gabbay's freshness
mechanism (Section \ref{fresh}).

\end{itemize}


\include{kinds}
%\include{types}

\section{Primitive and Predefined Types and Values} \label{predefined}

\om\ supports a number of built-in types. In Figure \ref{types} they are listed
along with their kinds. \om\ also implements a number of primitive
functions and values over the predefined types. In Figure \ref{values} these are
listed along with their types. These figures are generated when a
distribution (including this manual) is built, and should be up to date. Compare
the version and build date of your \om\ interpreter with the information on the
license page of this manual to see if this manual is up to date. In a future
version of \om\ we hope to have a more comprehensive set of primitives.

The following types are predefined. All other types listed in Figure \ref{types}
are primitive, abstract, builtin types. The
predefined types behave as if they were defined as follows:

\input{predef}

\begin{verbatim}
data [] a = [] | (:) a [a]
\end{verbatim}

\section{Infix Operators} \label{op}
In \om\ there are a fixed number of infix operators. The current implementation
precludes dynamically adding new infix operators. The infix operators can be
found in Figure \ref{infix}.
\input{infix}

\section{Simple Module System} \label{module}
The module system in \om\ is very primitive. An import declaration
specifies a string naming an \om\ file, followed
by an optional parenthesized list of import items. The file is loaded, and
if an import list is present, only the listed items
are imported into the current context. An import item is either a name
or a syntax item. A syntax item names one of the extensible syntactic extensions
(see Section \ref{synext}). A syntax
item has the form:
\begin{verbatim}
"syntax" ("Num" | "List" | "Record" | "Pair") "(" id ")"
\end{verbatim}
Some examples are illustrated below.
\begin{verbatim}
import "LangPrelude.prg" 
  (head,tail,lookup,member,fst,snd,Monad,maybeM)
\end{verbatim}
imports only the names listed from the {\tt LangPrelude.prg} file.

If the parenthesized import list is not present, all the names and syntax
extensions defined in the file are imported.
An \om\ program can contain multiple imports. 
\begin{verbatim}
import "fileAll"

import "generic.prg" (name, syntax List(i), syntax Num(n))
\end{verbatim}

Like Haskell, \om\ has
two name spaces: one for values and another for types and kinds. One drawback
of the module system is that if a name is listed in the import list, then
that name is imported into both name spaces if it exists. There is currently
no way to be selective.


\section{Type Synonyms} \label{syn}
\om\ supports both parameterized and non-parameterized type synonyms.
For example:
\begin{verbatim}
type String = [Char]
type Env t = [(String,t)]
\end{verbatim}
Every use of a type synonym must be fully applied to all
its type arguments. At type checking time a type synonym
is fully expanded.
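For illustration, a sketch of what is and is not accepted (the function {\tt extend}
is hypothetical, written against the {\tt Env} synonym above):
\begin{verbatim}
-- Legal: every occurrence of Env is fully applied.
extend :: String -> t -> Env t -> Env t
extend name v env = (name,v) : env

-- Illegal: Env appears without its type argument,
-- so a declaration like the following is rejected.
-- data Wrap = Wrap Env
\end{verbatim}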

\section{Monads and the {\tt do} Syntax} \label{monads}
Since \om\ does not have a class system, it uses an alternate mechanism
to support the {\tt do} syntax for monads. Typing a {\tt do} expression
involves computing the type of the variables {\tt return}, {\tt fail},
and {\tt bind} in the scope where the {\tt do} expression occurs.
The typing rule is:
\vspace*{.1in}

$
\begin{array}{l}
 \Gamma(x,d) \vdash {\tt e_2} :: m\;\; c \\
 \Gamma \vdash {\tt e_1} :: m\;\; d \\
 \Gamma \vdash {\tt fail} ::  \forall\, a\, .\;String \rightarrow m\;\; a\\
 \Gamma \vdash {\tt bind} :: \forall\, a\, b\, .\; m\;\;a \rightarrow (a \rightarrow m\;\; b) \rightarrow m\;\; b\\
 \Gamma \vdash {\tt return} :: \forall\, a\, .\; a \rightarrow m\;\; a \\ \hline
 \Gamma \vdash \;{\tt do}\; \{ x  \leftarrow e_1; \; e_2 \} :: m\;\;c \
 \end{array}
$

\vspace*{.2in}
A Monad in \om\ is just an ordinary algebraic datatype with constructor {\tt
Monad} and three polymorphic functions as components, {\tt return}, {\tt bind},
and {\tt fail}. Its definition can be found in Section \ref{predefined} as one
of the predefined types. Some sample monad declarations are:

\begin{multicols}{2}
{\small
\begin{verbatim}
maybeM =  (Monad return bind fail)
  where return x = Just x
        fail s = Nothing
        bind Nothing g = Nothing
        bind (Just x) g = g x         
        
listM =  (Monad unit bind fail)
  where unit x = [x]
        fail s = []
        bind [] f = []
        bind (x:xs) f = f x ++ bind xs f

ioM = Monad returnIO bindIO failIO

data St st a = St (st -> (a,st)) 

stateM =  (Monad unit bind fail)
  where unit a = St(\ st -> (a,st))
        bind (St f) g = St h
          where h st = case f st of
                        (a,st') -> 
                            case g a of
                             (St j) -> j st'
        fail s = error ("Fail in state monad: "++s)

runstate n (St f) = f n
getstate = St f  where f s = (s,s)
setstate n = St f   where f s = ((),n)
\end{verbatim}}
\end{multicols}



Given any expression {\tt ({\it m}::Monad m)}, the monad declaration
({\tt monad {\it m}}) is syntactic sugar for the pattern-based
declaration {\tt (Monad return bind fail) = {\it m}}, which
introduces bindings for the monad functions
({\tt return}, {\tt bind}, and {\tt fail}) into the local scope.
To use the {\tt do} syntax, import the names {\tt return},
{\tt bind}, and {\tt fail} into the current context, either by
defining them directly or by using the {\tt monad} declaration.
\begin{verbatim}
d1 = do { x <- Just 4; return(x+1) }
  where monad maybeM

d2 = runstate 0 (do { setstate 4; x <- getstate; return(x+1) })
  where (Monad return bind fail) = stateM

d3 = runstate 5 (do { a <- getstate; setstate 3; x <- getstate; return(x+a) })
  where monad stateM
\end{verbatim}
To import a monad into the scope of a multi-clause function definition without
polluting the global scope, define the function using a {\tt where} clause.
Place a helper function and the {\tt monad} declaration in the same {\tt where}
clause, and equate the function you really want to define to the helper function.

\begin{verbatim}
data Rep:: *0 ~> *0 where
  Int:: Rep Int
  Char:: Rep Char
  Prod:: Rep a -> Rep b -> Rep (a,b)
  Arr:: Rep a -> Rep b -> Rep (a -> b)
  List:: Rep a -> Rep [a]

test = help where    
  help :: Rep a -> Rep b -> Maybe(Equal a b)
  help Int Int = Just Eq
  help Char Char = Just Eq
  help (Prod a b) (Prod m n) =
    do { Eq <- help a m
       ; Eq <- help b n
       ; return Eq } 
  help (Arr a b) (Arr m n) =
    do { Eq <- help a m
       ; Eq <- help b n
       ; return Eq }     
  help (List a) (List b) = 
    do { Eq <- help a b; return Eq } 
  help _ _ = Nothing
  monad maybeM 
\end{verbatim}

\section{Generalized Algebraic Datatypes} \label{gadt}

We assume the reader has a certain familiarity with Algebraic Datatypes
(ADTs). In particular that values of ADTs are constructed by {\em constructors}
and that they are taken apart by the use of {\em pattern matching}. Consider

\begin{verbatim}
data Tree a 
   = Fork (Tree a) (Tree a) 
   | Node a 
   | Tip
\end{verbatim}
This declaration defines the polymorphic {\tt Tree} type constructor. Example tree
types include {\tt (Tree Int)} and {\tt (Tree Bool)}. 
Note how the constructor functions ({\tt Fork},\ {\tt Node}) and constants ({\tt Tip})
are given polymorphic types.\vspace*{0.1in} \\
\noindent
{\tt Fork :: forall a . Tree a -> Tree a -> \underline{Tree a}}\\
{\tt Node :: forall a . a -> \underline{Tree a}}\\
{\tt Tip :: forall a . \underline{Tree a}} \vspace*{0.1in}

When we define a parameterized algebraic
datatype, the formation rules enforce the following restriction. The
range of every constructor function, and the type of every constructor
constant must be a polymorphic instance of the new type constructor being
defined. Notice how the constructors for {\tt Tree} all have range
({\tt Tree {\it a}}) with a polymorphic type variable {\it a}. 
Generalized Algebraic Datatypes (GADTs) remove this restriction.
Since the range of the constructors of an ADT is only implicitly
given (as the type to the left of the equal sign in the ADT definition),
a new syntax is necessary to remove the range restriction. In \om\
a new form of declaring new datatypes is supported. We call this form
the GADT (or explicit) form.
\begin{enumerate}

\item An explicit form of a {\tt data} definition is supported in which
the type being defined is given an explicit kind, and every constructor is
given an explicit type.

\end{enumerate}

\noindent
We illustrate the new form below:

\begin{verbatim}
-- A GADT declaration using explicit classification. Note the range
-- of the constructor 'Pair' is not 'Term' applied to a type variable.
data Term :: *0 ~> *0 where
  Const :: a -> Term a
  Pair :: Term a -> Term b -> Term (a,b)
  App :: Term(a -> b) -> Term a -> Term b

-- The ADT, Tree, using the explicit GADT form (allowed but not necessary).
data Tree:: *0 ~> *0 where
  Fork:: Tree a -> Tree a -> Tree a
  Node:: a -> Tree a
  Tip:: Tree a
\end{verbatim}

Like an ADT style declaration, the GADT style declaration introduces a new
type constructor ({\tt Term}) which is classified by a kind (\verb+*0 ~> *0+). Here
the kind is made explicit. This means that {\tt Term} takes types to types.
In the GADT form no restriction is placed on the types of the constructor
functions except that the range of each constructor must be a fully applied
instance of the type being defined, and that the type of the constructor as
a whole must be classified by \verb+*+$n$, for some $n$. See the rest of this
manual, and many of the papers about \om\
listed in Section \ref{examples}, for many more examples.

Note that all algebraic data types, like {\tt Tree}, can be expressed using
the explicit GADT form. We retain the ADT form for backward compatibility.
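The payoff of lifting the range restriction is that pattern matching refines the
type index. For example, here is a sketch of a tagless evaluator over {\tt Term}
(the function {\tt eval} is illustrative, not part of the distribution):
\begin{verbatim}
eval :: Term a -> a
eval (Const x) = x                  -- here a is arbitrary
eval (Pair x y) = (eval x, eval y)  -- here a = (b,c)
eval (App f x) = (eval f) (eval x)  -- f :: Term(b -> a)
\end{verbatim}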

\subsection{The Off-side Rule in GADT Declaration}
Note also that the ADT form separates the declarations of the constructor
functions with the vertical bar ( \verb+|+ ), while the GADT form uses
the offside rule: the declarations of all the constructor functions must
begin in the same column.

\subsection{Equality-constrained Types are Deprecated}
Some early versions of \om\ supported an additional form of declaration
using equality constraints in {\tt where} clauses. For example:
\begin{verbatim}
-- Old style "equality-constraint" style.
data Term a 
  = Const a
  | exists x y . Pair (Term x)(Term y) where a =(x,y)
  | exists b . App (Term(b -> a)) (Term b)
\end{verbatim}
This form has been deprecated starting with \om\ version 1.2.3. It is no longer
allowed. In order to help users convert old \om\ programs, the system
prints an error message and suggests a GADT style definition. For example,
given the old-style declaration above the system suggests:
\begin{verbatim}
The implicit form of data decls (using where clauses) has been deprecated.
Please use the explicit form instead. Rather than writing:

data Term a = Const a
            | forall x y . Pair (Term x) (Term y)
                                 where a = (x, y)
            | forall b . App (Term (b -> a)) (Term b)

One should write something like:
(One may need to adjust the kind of 'Term')

data Term:: (*? ~> *0) where
   Const:: forall a . a -> Term a
   Pair:: forall x y a . Term x -> Term y -> Term (x, y)
   App:: forall b a . Term (b -> a) -> Term b -> Term a
\end{verbatim}   
Sometimes the classification of the type constructor is not
correct (see the \verb+"*?"+ above), but other than this, the suggestion
can be cut and pasted into the file to replace the old-style declaration.


\section{Leibniz Equality} \label{equal}

Terminating terms of type \verb+(Equal lhs rhs)+ are values witnessing the equality
of {\tt lhs} and {\tt rhs}. The type constructor {\tt Equal} is defined as:

\begin{verbatim}
data Equal :: a ~> a ~> *0 where
  Eq:: Equal x x
\end{verbatim}

Note that {\tt Equal} is a GADT, since in the type of the single constructor {\tt Eq}
the two type indexes are the same, and not just polymorphic variables (i.e. the type of
{\tt Eq} is {\em not} {\tt (Equal x y)} but rather {\tt (Equal x x)}).
Ordinarily, if the two arguments of {\tt Equal} are type-constructor terms,
the two arguments must be the same (or they couldn't be equal). But, if we
allow type functions as arguments (see Section \ref{typefun}), then, since many functions may compute the
same result (even with different arguments), the two terms can be
syntactically different (but semantically the same).
For example \verb+(Equal 2t {plus 1t 1t})+ is a well-formed equality type
since 2 is semantically equal to 1+1.
The {\tt Equal} type allows the programmer to reify
the type checker's notion of equality, and to pass this reified evidence
around as a value. The {\tt Equal} type plays a large role
in the {\tt theorem} declaration (see Section \ref{theorem}).
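For example, the usual laws of equality can be witnessed by ordinary functions
over {\tt Equal}; the sketches below are illustrative, not part of the distribution:
\begin{verbatim}
symm :: Equal a b -> Equal b a
symm Eq = Eq

trans :: Equal a b -> Equal b c -> Equal a c
trans Eq Eq = Eq

-- Reified evidence lets us safely coerce values.
cast :: Equal a b -> a -> b
cast Eq x = x
\end{verbatim}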

\section{Polymorphism and Data Declarations}

\om\ supports both existentially and universally quantified components in {\tt data}
declarations. An example of existential quantification is:
\begin{verbatim}
data Closure:: *0 ~> *0 ~> *0 where
   Close:: env -> (env -> a -> b) -> Closure a b
\end{verbatim}
In the type of a constructor function, {\em all} type variables not appearing
in the range of the constructor function (here, the variable {\tt env})
are implicitly existentially quantified. The type of {\tt Close} is polymorphic
since it can be applied to any kind of environment, but special typing rules
are employed when pattern matching against a pattern like \verb+(Close e f)+
which ensure that the {\em type} of the pattern variable {\tt e} does not escape its scope.
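A hypothetical use: although the environment packed inside a {\tt Closure} has an
unknown type, a closure can still be applied, because the function component is
known to accept that very environment:
\begin{verbatim}
apply :: Closure a b -> a -> b
apply (Close env f) x = f env x

-- Two closures with different environment types,
-- but the same type (Closure Int Int).
c1 = Close (3,4) (\ (m,n) x -> m + n + x)
c2 = Close True (\ b x -> if b then x else 0)
\end{verbatim}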

Polymorphic (or universally quantified) components can be declared by placing a universal quantifier in the
argument position of a constructor. Examples of this are the types of the {\tt
return}, {\tt bind}, and {\tt fail} components of the {\tt Monad} declaration
(defined in Section \ref{predefined}) and used in Section \ref{monads}. Another
example would be a type that can only store the identity function:

\begin{verbatim}
data Id = Id (forall a . a -> a)
\end{verbatim}
It is also possible to give prototypes with rank-N types, and the type checker
will check that the defined function is called properly. The \om\ implementation
is based upon Simon Peyton Jones' work on implementing type checking for
rank-N polymorphism~\cite{RankN}.
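For instance, a prototype with a rank-2 type is accepted, and the argument is
checked at each polymorphic use (an illustrative sketch):
\begin{verbatim}
both :: (forall a . a -> a) -> (Int,Bool)
both f = (f 3, f True)

ans = both (\ x -> x)   -- (3,True)
\end{verbatim}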

\section{Anonymous Existential Types}\label{exists}
Existential types can be encoded in \om\ inside of {\tt data}
declarations by using the implicit existential quantification
of type variables not occurring in the range-type of constructor
functions, as shown above in the declaration for {\tt Close}.

Sometimes it is convenient to have existential quantification
without defining a {\tt data} type to encode it. This is possible
in \om\ by using the anonymous existential constructor {\tt Ex}.
Because {\tt Ex} encodes anonymous existential types, the user is
required to use function prototypes that propagate the
intended existential type into the use of {\tt Ex}. For example:
\begin{verbatim}
existsA :: exists t . (t,t->String)
existsA = Ex (5,\ x -> show(x+1))

testf :: (exists t . (t,t-> String)) -> String
testf (Ex (a,f)) = f a
\end{verbatim}

\noindent The keyword {\tt Ex} is the ``pack" operator of Cardelli and
Wegner~\cite{Cardelli-Wegner85}. Its use turns a normal type 
{\small \verb+(t,t->String)+} into an existential type 
{\small \verb+exists t.(t,t->String)+}. The \om\ compiler
uses a bidirectional type checking algorithm to propagate
the existential type in the signature inwards to the
{\tt Ex} tagged expressions. This allows it to
abstract over the correct existentially quantified variables.




\section{New Types with Different Kinds} \label{kinds}

Kinds are similar to types in that, while types classify values, kinds classify
types. We indicate this by the overloaded {\em classifies} relation ({\tt ::}). For example:
{\tt 5::Int}, and {\tt Int::*0 }. We say {\tt 5} is classified by {\tt Int}, and {\tt Int}
is classified by {\tt *0} (star-zero). The kind {\tt *0} classifies all
types that classify values (things we actually can compute). A {\tt kind}
declaration introduces new types and their associated kinds (just as a {\tt data}
declaration introduces new values (the constructors) and their associated types).
Types introduced by a {\tt kind} declaration have kinds other than {\tt *0}.
For example, the {\tt Nat} declaration introduces two new
type constructors {\tt Z} and {\tt S} which encode the natural
numbers at the type level:  
\begin{verbatim}
kind Nat = Z | S Nat
\end{verbatim}
The type {\tt Z} has kind {\tt Nat}, and {\tt S} has kind {\tt Nat \verb+~+>
Nat}. The type {\tt S} is a type constructor, so it has a higher-order kind. We
indicate this using the classifies relation as follows:
\begin{verbatim}
Z :: Nat
S :: Nat ~> Nat
Nat :: *1
\end{verbatim}

The classification {\tt Nat::*1} indicates that {\tt Nat} is at the same
``level" as {\tt *0} --- they are both classified by {\tt *1}.
Just as every ADT can be expressed as a GADT, every {\tt kind}
declaration can also be expressed as a GADT. For example we could write
the declaration of {\tt Nat} in GADT form as follows:
\begin{verbatim}
data Nat:: *1 where
  Z:: Nat
  S:: Nat ~> Nat
\end{verbatim}

The system infers that this is a {\em kind} declaration by observing
that {\tt Nat} is kinded by \verb+*1+ (GADTs kinded by {\tt *0} would
be types, and those kinded by {\tt *2} would be sorts). 
There is
an infinite hierarchy of classifications, and users
can populate each level of the classification using GADT style
declarations. Each level is associated with one of the kinds
{\tt *}$n$. For example, at the kind level {\tt *0} and {\tt Nat} are classified by {\tt
*1}, {\tt *1} is classified by {\tt *2}, etc.  We call this hierarchy
the {\em strata}. In fact this infinite hierarchy is why we chose the
name \om. The first few strata are: values and expressions that are
classified by types, types that are classified by kinds, and kinds that
are classified by sorts, etc. We illustrate
the relationship between the values, types, kinds, and sorts in
Figure~\ref{Kindtree}.

\begin{figure}
\hrule
\begin{center}
 \scalebox{1.2}{\psinput{kinds2.eps}}
\end{center}
\vspace*{-.25in}
\caption{The classification hierarchy. 
An arrow from {\it a} to {\it b} means {\it b}::{\it a}.
Note how only values are classified by types that are classified by
{\tt *0}, and how type constructors (like {\tt []} and {\tt S})
have higher-order kinds.}
\hrule
\label{Kindtree}
\end{figure}

In general, a GADT declaration mediates between two levels of the hierarchy,
introducing the type constructor at level $n+1$ and constructor functions at
level $n$. The ADT {\tt data} declaration is a special case of the GADT,
introducing a type constructor at the type level and constructor functions at the
value level (as it does in Haskell). The {\tt kind} declaration is another
special case, introducing a type constructor at the kind level and constructor
functions at the type level.



Kinds (sorts, etc.) are useful because we can use them to index new types. For example,
the type constructor {\tt List} below is indexed by {\tt Nat}, and the {\tt Nat}-valued
index is a static indication of the length of the list.
\begin{verbatim}
data List:: Nat ~> *0 ~> *0 where 
  Nil:: List Z a
  Cons:: a -> List m a -> List (S m) a
\end{verbatim}
We can also use kinds to define other interesting types. For example
we can define our own kind of ``tuples" using the {\tt Prod} kind.
\begin{verbatim}
data Prod:: *1 ~> *1 where
  PNil:: Prod a
  PCons:: k ~> Prod k ~> Prod k
 
data Tuple:: Prod *0 ~> *0 where 
   Tnil:: Tuple PNil
   Tcons:: t -> (Tuple r) -> Tuple (PCons t r) 
\end{verbatim}
The type \verb+(PCons Int PNil)+ is classified
by \verb+(Prod *0)+, but the type \verb+(PCons Z (PCons (S Z) PNil))+ is classified
by \verb+(Prod Nat)+. The value level term \verb+(Tcons 5 (Tcons True Tnil))+ is
classified by the type \verb+(Tuple (PCons Int (PCons Bool PNil)))+.

\subsection{Polymorphic Kinds}
In \om\ it is possible to give a type constructor
introduced using the GADT form a polymorphic classification.
This allows users to specify kinds
for type constructors that cannot be inferred. In the example
below we specify a polymorphic kind for the type {\tt TRep}:
\begin{verbatim} 
data TRep:: forall (k :: *1) . (k ~> *0) where
  Int:: TRep Int   
  Char:: TRep Char
  Unit:: TRep ()
  Prod:: TRep (,)
  Sum:: TRep (+)
  Arr:: TRep (->) 
  Ap:: TRep f -> TRep x -> TRep (f x)
\end{verbatim}    
This allows a single GADT to represent both type constructors like
\verb+(->)+ and \verb+(,)+, as well as types like \verb+Int+ and \verb+Char+.
The term \verb+(Ap(Ap Prod Int) Unit)+ is classified
by the type \verb+(TRep (Int,()))+.

\section{Separate Value and Type Name Spaces}

In \om, as in Haskell, the name-space for the value level is separate from the
name-space for the type level.  In addition, in \om, the type name-space
includes all levels above the value level. This includes types, kinds, sorts
etc. This allows a sort of punning, where objects at the value level can have
the same name as different objects at the type level and above. This punning
becomes particularly useful when the objects at the two levels are different,
but related. The classic pun is the singleton type {\tt Nat'}:

\begin{verbatim}
data Nat':: Nat ~> *0 where
  Z:: Nat' Z
  S:: Nat' n -> Nat' (S n)
  
three' = (S(S(S Z))):: Nat'(S(S(S Z)))
\end{verbatim}

The value constructors {\tt (Z:: Nat' Z)} and {\tt (S:: Nat' n ->
Nat' (S n))} are ordinary values whose types mention the type
constructors they pun. In {\tt Nat'}, the singleton relationship
between a {\tt Nat'} value and its type is emphasized 
strongly, as witnessed by the example {\tt three'}. Here the
shape of the value and the type index appear isomorphic. The table
below illustrates the separate name spaces and the levels within the type
hierarchy where each constructor resides.


\vspace*{.15in}
{\tt
\begin{tabular}{|lclclcl|} \hline
  $\leftarrow$     {\it {\tiny value name space}}&|& {\it {\tiny type name space $\rightarrow$}} & &  && \\ \hline
{\em value}&|& {\em type} &|&  {\em kind} &|& {\em sort} \\ \hline
     & |& Z        &::& Nat &::& *1 \\
     & |& S        &::& Nat \verb+~>+ Nat &::& *1 \\   \hline  
Z &::& Nat' Z &::& *0 &::& *1 \\
S &::& Nat' m -> Nat' (S m) &::& *0 &::& *1 \\
\hline
\end{tabular}}
\vspace*{.15in}


In \om, we further exploit this pun, by providing syntactic sugar for writing
natural numbers at both the value and the type level. We write {\tt 0t} (for
{\tt Z}), {\tt 1t} (for {\tt S Z}), {\tt 2t} (for {\tt S(S Z)}), etc. for
{\tt Nat} terms at the type level. We write {\tt 0v} (for {\tt Z}), {\tt 1v} 
(for {\tt S Z}), {\tt 2v} (for {\tt S(S Z)}), etc. for {\tt Nat'} 
terms at the value level. This syntactic sugar is an example of 
syntactic extension. See Section \ref{synext} for further examples.
For backward compatibility reasons one may also write {\tt \#0},
{\tt \#1}, {\tt \#2}, etc.
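For example, a hypothetical function that computes the integer view of a
{\tt Nat'} value, applied to the sugared literals:
\begin{verbatim}
toInt :: Nat' n -> Int
toInt Z = 0
toInt (S n) = 1 + toInt n

n3 = toInt 3v    -- 3v abbreviates (S(S(S Z)))
\end{verbatim}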

\section {Tags and Labels} \label{tag}

Many object languages have a notion of name. To make representing names in the
type system easy, we introduce the notions of Tag and Label. As a {\em first
approximation}, consider the finite kind {\tt Tag} and its singleton type {\tt
Label}:

\begin{verbatim}
data Tag:: *1 where
  A:: Tag
  B:: Tag
  C:: Tag

data Label:: Tag ~> *0 where
  A:: Label A
  B:: Label B
  C:: Label C
\end{verbatim}

Here, we again deliberately use the value-name space, type-name space
overloading. The names {\tt A}, {\tt B}, and {\tt C} name different,
but related, objects at both the value and type level.
At the value level, every {\tt Label} has a type index
that reflects its value, i.e. {\tt A::Label A}, {\tt B::Label B}, and {\tt
C::Label C}. Now consider a countably infinite set of tags and labels. We can't
define this explicitly, but we can build such a type as a primitive inside of
\om. At the type level, every legal identifier whose name is preceded by a
back-tick ({\tt `}) is a type classified by the kind {\tt Tag}. For example the type {\tt
`abc} is classified by {\tt Tag}. At the value level, every such symbol {\tt `x} is classified
by the type {\tt (Label `x)}.

There are several functions that operate on labels. The first is
{\tt sameLabel}, which compares two labels for equality. Since labels
are singletons, a simple true or false answer would be useless.
Instead {\tt sameLabel} returns either a Leibniz proof 
(see Section \ref{equal}) that the {\tt Tag}
indexes of identical labels are themselves equal, or a proof of
inequality with an ordering hint.
\begin{verbatim}
sameLabel :: forall (a:Tag) (b:Tag).Label a -> Label b
                 -> Equal a b + DiffLabel a b

prompt> sameLabel `w `w
(L Eq) : ((Equal `w `w) + (DiffLabel `w `w))

prompt> sameLabel `w `s
(R (LabelNotEq GT)) : ((Equal `w `s) + (DiffLabel `w `s))
\end{verbatim}
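Pattern matching on the sum returned by {\tt sameLabel} recovers a boolean test,
as in this hypothetical helper:
\begin{verbatim}
eqLabel :: Label a -> Label b -> Bool
eqLabel x y = case sameLabel x y of
                L Eq -> True
                R _  -> False
\end{verbatim}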

Fresh labels can be generated by the function {\tt freshLabel}.
Since the {\tt Tag} index for such a label is unknown, the generator
must return a structure where the
{\tt Tag} indexing the label is existentially quantified. Since every call
to {\tt freshLabel} generates a different label, the {\tt freshLabel}
operation must be an action in the {\tt IO} monad. The function
{\tt newLabel} coerces a string into a label. It too must
existentially hide the {\tt Tag} indexing the returned label. But,
because it always returns the same label when given the same input,
it can be a pure function.

\begin{verbatim}
freshLabel :: IO HiddenLabel
newLabel:: String -> HiddenLabel

data HiddenLabel :: *0 where 
 HideLabel:: Label t -> HiddenLabel
\end{verbatim}
We illustrate this at the top-level loop. The \om\ top-level loop executes
{\tt IO} actions (see Section \ref{command}), and evaluates and prints
out the value of expressions with other types.
\begin{verbatim}
prompt> freshLabel
Executing IO action               -- An IO action
(HideLabel `#cbp) : IO HiddenLabel


prompt> temp <- freshLabel        -- An IO action
Executing IO action
(HideLabel `#sbq) : HiddenLabel
prompt> temp
(HideLabel `#sbq) : HiddenLabel

prompt> newLabel "a"              -- A pure value
(HideLabel `a) : HiddenLabel
\end{verbatim}

\section{Staging}

\om\ includes two staging annotations: brackets (\verb+[| _ |]+) and
escape (\verb+$( _ )+), and two functions: 
{\tt lift::(forall a . a -> Code a)} and {\tt run::(forall a . (Code a) -> a)}
for building and manipulating code. \om\ uses
the Template Haskell~\cite{Sheard02} conventions for creating code. For example:
\begin{verbatim}
inc x = x + 1
c1a = [| 4 + 3 |]
c2a = [| \ x -> x + $c1a |]
c3 = [| let f x = y - 1 where y = 3 * x in f 4 + 3 |]
c4 = [| inc 3 |]

c5 = [| [| 3 |] |]
c6 = [| \ x -> x |]
\end{verbatim}

The purpose of the staging mechanism is to have finer control over
evaluation order. \om\ supports many of the features of
MetaML\cite{Sheard:1999:UMS,TS00}. The staging in \om\ is experimental and
may change in future releases.

An example staged term is:
\begin{verbatim}
c18 = let h 0 z = z
          h n z = [| let y = n in $(h (n-1) [| $z + y |]) |]
      in h 3 [| 99 |]
\end{verbatim}
This term evaluates to the piece of code below:
\begin{verbatim}
[| let y297 = 3
       y299 = 2
       y301 = 1
   in 99 + y297 + y299 + y301 
|] : Code Int
\end{verbatim}

\section{Command Level Prompts}  \label{command}

\input{commands.tex}

Once inside the \om\ interpreter the user interacts
with \om\ using command-level prompts. Most
commands begin with the character {\tt ':'}.
In Figure \ref{commands}, the command level prompts are defined and their
usage is explained. Commands are modelled after the Hugs command
level prompts. There are far fewer commands in \om\ than in Hugs.
The {\tt :set} and {\tt :clear} commands are discussed
in much more detail in Section \ref{tracing}, where we discuss 
user-controlled tracing inside the type-checker.

\subsection{The Command Line Editor}
Starting with \om\ version 1.2, the read-eval-print loop comes
equipped with a command line editor. The editor is implemented
by the {\em GNU Readline Library}. This is the same
library used to implement the command line editor
in many Linux distributions, so it should be familiar to
many users. For a short introduction
one can consult any of the online introductions
to the library. One that I used
was \verb+http://www.ugcs.caltech.edu/info/readline/rlman_1.html+.

Amongst many other features, the command line editor allows users
to cycle through previous commands by using the up ($\uparrow$)
and down ($\downarrow$) arrow keys, to move the cursor forward and
backward using the right ($\rightarrow$) and left ($\leftarrow$)
arrow keys, and to edit text using backspace and delete. The
command editor also implements tab completion by looking up
prefixes in the \om\ symbol table. I'd like to thank 
Nils Anders Danielsson for the initial implementation.

The command line editor is also installed in the read-typecheck-print
loop described in the next section.


\vspace*{.2in}

\section{The Type Checking Read-Typecheck-Print Loop} \label{typechecker}
\om\ allows the user to investigate the types of expressions while type
checking them. By placing the language construct ``{\tt check \_ }" before a
term the type checker enters a read-typecheck-print loop in the scope where
the {\tt check} occurs. The check construct has the lowest parsing
precedence, so its scope (like that of a lambda expression) extends as far to
the right as possible. One can delimit its scope using parentheses.
For example consider:

\begin{verbatim}
data LE:: Nat ~> Nat ~> *0 where
   LeBase:: LE n n
   LeStep:: LE n m -> LE n (S m)

compare :: Nat' a -> Nat' b -> ((LE a b)+(LE b a))
compare Z Z = L LeBase
compare Z (S x) = 
  case compare Z x of L w -> L(LeStep w)
compare (S x) Z =
  case compare Z x of L w -> R(LeStep w)
compare (S x) (S y) = check mapP g g (compare x y)
  where mapP f g (L x) = L(f x)
        mapP f g (R x) = R(g x)
        g :: LE x y -> LE (S x) (S y)
        g LeBase = LeBase
        g (LeStep x) = LeStep(g x)
\end{verbatim}
In the fourth clause of {\tt compare}, the {\tt check} construct
extends to the end of the line, encompassing the whole {\tt mapP}
application. The purpose of {\tt check} is to allow the user to investigate the types
of subexpressions in a rather large, complex function definition. The {\tt
check} in the scope of the fourth clause of {\tt compare} directs the type
checker to pause and enter an interactive loop. The following is a script
of one such interaction. It places the user in the scope of the fourth
clause to {\tt compare} above, where the {\tt check} keyword appears. Note
that, in the transcript below, the user enters the code after the {\tt
check>} prompt.

\begin{verbatim}
*** Checking: mapP g g (compare x y)
*** expected type: ((LE (1+_c)t (1+_d)t)+(LE (1+_d)t (1+_c)t))
***    refinement: {_a=(1+_c)t, _b=(1+_d)t}
***   assumptions:
***      theorems:
check> x
x :: Nat' _c

check> y
y :: Nat' _d

check> mapP
mapP :: (e -> f) -> (g -> h) -> (e+g) -> (f+h)

check> g
g :: LE i j -> LE (1+i)t (1+j)t

check> :h
Hint = ((LE (1+_c)t (1+_d)t)+(LE (1+_d)t (1+_c)t))

check> compare x y
compare x y :: ((LE _c _d)+(LE _d _c))

check> mapP g g
mapP g g :: ((LE k l)+(LE m n)) -> ((LE (1+k)t (1+l)t)+(LE (1+m)t (1+n)t))

check> mapP g g (compare x y)
mapP g g (compare x y) :: ((LE (1+_c)t (1+_d)t)+(LE (1+_d)t (1+_c)t))
\end{verbatim}
Inside the read-typecheck-print loop the user may enter expressions, whose
types are then printed, or use one of the following commands.

\vspace*{.2in}

\begin{tabular}{|l|l|}\hline
{\tt :q} &  Quit checking and let the type checker continue. \\ \hline
{\tt :e} & Show the assumptions list in the current environment. \\ \hline
{\tt :k t} & Print the kind of type {\tt t}. \\ \hline
{\tt :h} & Show the {\em hint}: the type the type checker expects the checked expression to have. \\ \hline
{\tt :t f} & Print the type scheme for the variable {\tt f}.\\ \hline
{\em exp} & Type check and print the type of the expression {\em exp}.\\ \hline
{\tt :set m} & Set the mode m, the same modes are available here as in the interactive loop. \\ \hline
{\tt :try exp} & Compare the type of {\tt exp} with the expected type, and display generated constraints. \\ \hline
\end{tabular}

\vspace*{.2in}

\section{Explicit Laziness} \label{lazy}
\om\ is a strict, but pure, language. All side-effects are captured
in the {\tt IO} monad. We have also included an experimental feature in \om:
explicit laziness. It is best
to read the paper
{\em A Pure Language with Default Strict Evaluation Order and Explicit Laziness}\cite{SheardStrict}
for a complete description. We list the interface to this feature
here for completeness.

\begin{verbatim}
lazy :: e -> e       -- lazy is a language construct not a function
strict :: e -> e
mimic :: (f -> g) -> f -> g
\end{verbatim}
We can use this interface to build infinite streams
\begin{verbatim}
(twos,junk) = (2:(lazy twos),lazy(head twos))
(ms,ns) = (1:(lazy ns),2:(lazy ms)) 
\end{verbatim}
Expressions labelled with {\tt lazy} are not evaluated until
they are demanded. The read-eval-print loop does not
print {\tt lazy} thunks; it prints them as {\tt ...}.
\begin{verbatim}
prompt> twos
[2 ...] : [Int]
prompt> take 5 twos
[2,2,2,2,2] : [Int]
\end{verbatim}

A classic example is the infinite list of Fibonacci numbers.
\begin{verbatim}
-- fibonacci
zipWith f (x:xs) (y:ys) = 
  f x y : mimic (mimic (zipWith f) xs) ys

fibs = 0 : 1 : (lazy (zipWith (+) fibs (tail fibs)))
\end{verbatim}

We can observe a finite prefix of {\tt fibs} by defining a take function.

\begin{verbatim}
take 0 xs = []
take n [] = []
take n (x:xs) = x :(take (n-1) xs)

prompt> take 10 fibs
[0,1,1,2,3,5,8,13,21,34] : [Int]
\end{verbatim}

\section{Type Functions} \label{typefun}

\om\ allows programmers to write arbitrary type functions
as first-order equations, which are interpreted as left-to-right
rewrite rules. In a future release we expect
to limit such definitions to confluent
and terminating sets of rewrite rules. A type function ``computes'' over
types of a particular kind, and can be used in a prototype declaration
of a function that mentions a type of that kind. We can illustrate this
with the \om\ program
\begin{verbatim}
data List:: Nat ~> *0 ~> *0 where 
  Nil:: List Z a
  Cons:: a -> List m a -> List (S m) a
  
plus :: Nat ~> Nat ~> Nat
{plus Z y} = y
{plus (S x) y} = S{plus x y}
  
app:: List n a -> List m a -> List {plus n m} a   
app Nil ys = ys
app (Cons x xs) ys = Cons x (app xs ys)

\end{verbatim}

The code introduces the type of length-indexed lists, {\tt List}, with static length
(a {\tt Nat}), the new {\em type-function} {\tt plus} (over {\tt Nat}),
and the definition of {\tt app}, the append function
over such lists. Note how the length of the result is a function
of the lengths of the input lists. Since the length of a list is
a {\tt Nat}, which is a type, a function over types is required
to give a type to {\tt app}.
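
For example, applying {\tt app} to lists of statically known length lets
the type checker compute the result length with {\tt plus}. The following
is a sketch (the printed form of the type may vary):
\begin{verbatim}
prompt> app (Cons 1 (Cons 2 Nil)) (Cons 3 Nil)
(Cons 1 (Cons 2 (Cons 3 Nil))) : List 3t Int
\end{verbatim}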

Type-functions are functions at the type level. We define them by
writing a set of equations. We distinguish type-function application
from type-constructor application (e.g. {\tt Tree} or {\tt Term}) by enclosing
the former in curly braces. Type functions must be preceded by a prototype
declaration stating their kind. They should consist of a set of rewriting rules
that is exhaustive (over their kind) and confluent.
This is not currently
enforced, but failure to ensure it may cause the \om\ type checker to
diverge.

Type checking {\tt app} generates and propagates equalities, and requires
solving equations like {\tt (S\{plus t m\} = \{plus n m\})} and {\tt (n =
S t)}. As of version 1.2.2, the \om\ typechecker solves such equations using a
form of narrowing.

\subsection{Solving Type-Checking Equations Using Narrowing}

\om\ uses a number of techniques to solve equations which arise from
type checking. One of these is narrowing.
Narrowing is a well studied computational mechanism, and is the primary means of
computation in the functional logic language Curry
\cite{Hanus06Curry,journals/jflp/HanusS99}. Narrowing combines the power of reduction
and unification. Narrowing can be used to compare
two terms (each containing function calls and variables)
and decide if they are equal. Unlike unification, the terms being compared can contain functions,
and unlike reduction, the terms being simplified can contain variables. Narrowing
finds bindings for some free variables in a term, such that the term reduces to a
normal form. Narrowing is a special purpose constraint solving system, 
that can answer some of the equivalence questions asked when there are functions at the type
level. In order to implement narrowing with a sound and complete strategy, we require
type functions to be written in inductively sequential form (see Section \ref{ISEQ}).
In earlier versions of \om, narrowing played a larger role
in solving type-checking equations. With the advent of the
{\tt theorem} declaration (Section \ref{theorem}), this role has
diminished, but it is still used to some degree.


\subsubsection{How Narrowing Works}

Narrowing finds bindings for some
free variables in the term being narrowed; once
instantiated, these bindings allow the term to reduce to a
normal form.

If a term contains constructors, function symbols, and
variables, it often cannot be reduced, usually because
function calls within the term do not match any left-hand
side of their definitions. The failure to match is
caused by either variables or other function calls in
positions where the function definitions have only
constructor patterns. Consider narrowing \verb+({plus a Z} == Z)+
(checking whether \verb+{plus a Z}+ is equal to \verb+Z+).

This cannot be reduced because {\tt plus} inducts over
its first argument with the patterns {\tt Z} and {\tt (S n)}.
But in \verb+({plus a Z} == Z)+, the first argument position
is a variable ({\tt a}).  Narrowing proceeds by
guessing instantiations for the variable {\tt a} --
either \verb+{a -> Z}+ or \verb+{a -> (S m)}+ --
and following both paths.

\vspace*{.20in}
\begin{tabular}{l|l|l}
\begin{minipage}[t]{2.25in}
{ %\small
\begin{verbatim}
plus:: Nat ~> Nat ~> Nat
{plus Z m} = m
{plus (S n) m}= S {plus n m}
\end{verbatim}}
\end{minipage}
&
\begin{minipage}[t]{1.5in}
{ %\small
guess \verb+{a -> Z}+
\begin{verbatim}
({plus Z Z} == Z)
(Z == Z)
\end{verbatim}}
Success !
\end{minipage}
&
\begin{minipage}[t]{1.6in}
{ %\small
guess \verb+{a -> (S m)}+
\begin{verbatim}
({plus (S m) Z} == Z)
(S {plus m Z} == Z)
\end{verbatim}}
Failure. 
\end{minipage}
\end{tabular}

\vspace*{0.15in}

The returned solutions are the bindings obtained along every
successful path. In the example above we get a list of one
solution: \verb+[{a -> Z}]+. Some problems have no
solutions, some have several, and some have infinitely many
solutions. Consider \verb+{plus x 2t}+: we get
\verb+2t+ when \verb+{x -> 0t}+,
\verb+3t+ when \verb+{x -> 1t}+,
\verb+4t+ when \verb+{x -> 2t}+, etc.

Narrowing works best when we have a problem with many constraints. The
constraints prune the search paths, resulting in few solutions.
If we are type checking with narrowing we hope there is exactly one solution.
Consider narrowing \verb+({plus x 3t} == 5t)+.
Guessing \verb+{x -> Z}+ and \verb+{x -> S z1}+ we get the two
paths:
{ %\small
\begin{verbatim}
   1)   { 3t == 5t }
   2)   { (1+{plus z1 3t})t == 5t }
\end{verbatim}}
The first path fails, on the second path we take a single reduction step
leaving:
{ %\small
\begin{verbatim}
   { {plus z1 3t} == 4t }
\end{verbatim}}
Guessing \verb+{z1 -> Z}+ and \verb+{z1 -> S z2}+ we get the two
paths:
{ %\small
\begin{verbatim}
   1)   { 3t == 4t }
   2)   { (1+{plus z2 3t})t == 4t }
\end{verbatim}}
The first path fails, on the second path we again
take a single reduction step leaving:
{ %\small
\begin{verbatim}
   { {plus z2 3t} == 3t }
\end{verbatim}}
Guessing \verb+{z2 -> Z}+ and \verb+{z2 -> S z3}+ we get the two
paths:
{ %\small
\begin{verbatim}
   1)   { 3t == 3t }
   2)   { (1+{plus z3 3t})t == 3t }
\end{verbatim}}   
The first succeeds, and the second eventually fails, leaving us
with only one solution: \verb+{ x -> 2t }+. We have found
narrowing to be an efficient and understandable mechanism
for directing computation at the type level.


\subsection{Narrowing and Type-checking}

Narrowing plays a role in type checking value-level functions that mention
type functions in their type. For example consider:

{\small
\begin{verbatim}
app:: Seq a n -> Seq a m -> Seq a {plus n m}
app Snil ys = ys
app (Scons x xs) ys = Scons x (app xs ys)
\end{verbatim}}

To see that {\tt app} is well typed, the type checker does the following.
The expected type is the type given in the function prototype. We
compute the type of both the left- and right-hand-side of the equation
defining a clause. We compare the expected type with the computed type
for both the left- and right-hand-sides. This comparison generates
some necessary equalities (for each side) to make the expected and computed
types equal. We assume the left-hand-side
equalities to prove the right-hand-side equalities. To see this in
action, consider the second clause of the definition of \verb+app+.

\vspace*{.1in}
\begin{tabular}{|l|rlclcl|} \hline
{\small expected type} & &{\small{{Seq a n}}} & $\rightarrow$ & {\small{{Seq a m}}} & $\rightarrow$ &  {\small{{Seq a \plus{n}{m}}}}\\ \hline
{\small equation} & {\small{app}}&  {\small{{(Scons x xs)}}} & &  {\small{{ys}}} & = & {\small{{Scons x (app xs ys)}}} \\ \hline
{\small computed type} & & {\small{{Seq a (S b)}}} & $\rightarrow$ & {\small{{Seq a m}}} & $\rightarrow$ & {\small{{Seq a (S \plus{b}{m})}}}  \\ \hline
{\small equalities}    & & \multicolumn{3}{r}{\small{{n = (S b)}}} & $\Rightarrow$ & {\small{{\plus{n}{m}= S(\plus{b}{m})}}} \\ \hline 
\end{tabular}
\vspace*{.1in}

The left-hand-side equalities let us
assume \verb+n+ = \verb+(S b)+. The right-hand-side equalities require us
to establish that \verb+{plus n m}+ = \verb+(S{plus b m})+. Using the assumption that
\verb+n+ = \verb+(S b)+, we are left with the requirement that \verb+{plus (S b) m}+ = \verb+(S{plus b m})+,
which is easy to prove using the definition of \verb+plus+. Narrowing is one
mechanism that could solve this kind of equation. We are currently using
narrowing, but are evaluating its effectiveness for future releases of \om.

\subsubsection{Narrowing Strategies} \label{ISEQ}

While narrowing is non-deterministic, it is both sound and complete with an
appropriate strategy \cite{Antoy:2005:ESF}. All answers found are real answers, and if
there exists an answer, good strategies will find it. When a question has an infinite
number of answers, a good implementation will produce these answers lazily.  In our
type-checking context, finding two or more answers is a sign that the program being type
checked has an ambiguous type and needs to be adjusted. In the rare occurrence that
narrowing appears to diverge on a particular question, we can safely put resource
bounds on the narrowing process, declaring failure if the resource bounds are
exceeded. The consequence of such a declaration is the possibility of declaring a
well-typed function ill-typed. In our experience this rarely happens.

We restrict the form of function definitions at the type level to be inductively
sequential\cite{conf/alp/Antoy92}.
This ensures a sound and complete narrowing strategy for answering type-checking-time
questions. The class of inductively sequential functions is a large
one; in fact every Haskell function has an inductively sequential definition. The
inductively sequential restriction affects the form of the equations, not the
functions that can be expressed. Informally, a function definition is
inductively sequential if all its clauses are non-overlapping. For example
the definition of {\tt zip1} below is not inductively sequential, but the equivalent
program {\tt zip2} is.

{ %\small
\begin{verbatim}
zip1 (x:xs) (y:ys) = (x,y): (zip1 xs ys)
zip1 xs ys = []

zip2 (x:xs) (y:ys) = (x,y): (zip2 xs ys)
zip2 (x:xs) []     = []
zip2 []     ys     = []
\end{verbatim}}

The definition of {\tt zip1} is not inductively sequential, since its two clauses overlap. In general
any non-inductively-sequential definition can be turned into an inductively
sequential one by duplicating some of its clauses, instantiating variable patterns
with constructor-based patterns. This makes the new clauses non-overlapping.
We do not think this is too great a burden, since
the restriction applies only to functions at the type level, and it supports
sound and complete narrowing strategies.
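
For instance, at the type level one might be tempted to define a boolean
disjunction with overlapping clauses; the inductively sequential form
instead inducts on the first argument alone. This is a hypothetical
example, assuming a user-defined kind {\tt Boolean} with types {\tt T}
and {\tt F}:
\begin{verbatim}
or :: Boolean ~> Boolean ~> Boolean
{or T x} = T            -- overlaps with the second clause
{or x T} = T
{or F F} = F

or' :: Boolean ~> Boolean ~> Boolean
{or' T x} = T           -- non-overlapping: inducts on the first argument
{or' F x} = x
\end{verbatim}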

We pay for the generality of narrowing over unification and reduction
by a modest increase in overhead. Narrowing uses a general purpose
search algorithm rather than a special purpose unification or
reduction engine. Narrowing is Turing complete, so we
can solve any problem that can be solved by reduction, and many more.


\section{The {\tt theorem} Declaration} \label{theorem}
The theorem declaration is a primary way of directing the type
checker. It uses ordinary terms, with types that can be read logically,
as specifications for new type checking strategies.  There are currently
three kinds of theorems. Each is used to direct the type
checker to do things it might not ordinarily do. This is
best seen by example.

\subsection{{\tt Equal} Types as Rewrite Rules}\label{rewrite}

Narrowing alone is quite weak. Suppose we
had declared the type of {\tt app} as:
\begin{verbatim}
app:: List n a -> List m a -> List {plus m n} a 
\end{verbatim}

I.e.\ we switched the order of {\tt m} and {\tt n} in the range of {\tt app},
writing \verb+(List {plus m n} a)+ rather than \verb+(List {plus n m} a)+.
Semantically, this is a valid type for {\tt app}; it is just too hard for the
current mechanism to check its validity. The type checker
has to solve {\tt (S\{plus m t\} = \{plus m (S t)\})}, which matches none of the
rewrite rules defining {\tt plus}, and which leads to an infinite set of solutions
if narrowing is used.

The solution is to augment the narrowing system with a set of semantically valid
rewrite rules, and to apply these rules at the proper time. Rewrite rules are
validated by exhibiting a terminating term with type: \verb+(Equal lhs rhs)+ and using
this type as a left to right rewrite rule.

We use {\tt Equal} terms (see Section \ref{equal}) in the {\tt theorem} declaration. The declaration:

\begin{verbatim}
theorem name = term
\end{verbatim}
checks that {\tt term} is terminating and that its type is of the form
{\tt conditions -> (Equal lhs rhs)}.
If this is so, and the conditions are met, the rewrite rule \verb+ lhs --> rhs + is enabled
in the scope of the declaration. The declaration has no run-time meaning:
the term is never evaluated, and only its type is used,
and only at type-checking time. An example use of the {\tt theorem} declaration
is in the definition of a term with type: \verb+(Nat' x -> Equal {plus x (S y)} (S{plus x y}))+
\begin{verbatim}
plusS :: Nat' n -> Equal {plus n (S m)} (S{plus n m})
plusS Z  = Eq
plusS (S x)  = check Eq
  where theorem indHyp = plusS x
\end{verbatim}
Because this term is recursive, but terminating, we can consider it
a proof by induction of the theorem embodied in its type.
To illustrate how this knowledge is used, consider the information
in the type-checking debugger break-point, caused by the {\tt check} clause in
the definition of {\tt plusS} above:

\begin{verbatim}
*** Checking: Eq
*** expected type: Equal {plus (1+_a)t (1+_m)t} (1+{plus (1+_a)t _m})t
***    refinement: {_n=(1+_a)t}
***   assumptions: Nat' _a
***      theorems: Rewrite indHyp: [] => {plus _a (1+'b)t} --> (1+{plus _a 'b})t
\end{verbatim}

Note we need to show that \verb|{plus (1+_a)t (1+_m)t}|
is equal to \verb|(1+{plus (1+_a)t _m})t|. Reduction leads to the equality
\verb|(1+{plus _a (1+_m)t})t == (2+{plus _a _m})t|. Applying the rewrite
we get the expected result: \verb|(2+{plus _a _m})t == (2+{plus _a _m})t|.
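
A companion theorem, covering the other argument of {\tt plus}, can be
proved in the same style. The following is a sketch, following the
pattern of {\tt plusS}:
\begin{verbatim}
plusZ :: Nat' n -> Equal {plus n Z} n
plusZ Z = Eq
plusZ (S x) = Eq
  where theorem indHyp = plusZ x
\end{verbatim}
When enabled, this justifies the rewrite \verb+{plus n Z} --> n+, which,
like the rule derived from {\tt plusS}, matches none of the defining
equations of {\tt plus}.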

\section{Types as Propositions}

A {\tt data} type can be declared to be a proposition by the use
of the {\tt prop} declaration. A {\tt prop} declaration is
identical in form to a {\tt data} declaration. It introduces
a new type constructor and its associated (value) constructor functions,
but it also informs the compiler that this type can be used as
a static-level {\tt prop}osition, i.e.\ as a constraint.
For example consider the {\tt prop}osition {\tt Le} (which is a slight
variant of the {\tt LE} witness from Section \ref{typechecker}).
\begin{verbatim}
prop Le :: Nat ~> Nat ~> *0 where
  Base:: Le Z a
  Step:: Le a b -> Le (S a) (S b)
\end{verbatim}
The type {\tt Le} is introduced as a witness type with constructor
functions {\tt Base} and {\tt Step}. These construct ordinary values.
But the type {\tt Le} can now also be used as a static constraint.
For example, we might define statically ordered lists as follows:

\begin{verbatim}
data SSeq:: Nat ~> *0 where
  Snil:: SSeq Z
  Scons:: Le b a => Nat' a -> SSeq b -> SSeq a
\end{verbatim}
Note that the type of {\tt Scons} contains an {\tt Le}
proposition as a constraint:

\begin{verbatim}
Scons :: forall (a:Nat) (b:Nat) . Le b a => Nat' a -> SSeq b -> SSeq a
\end{verbatim}

These constraints are propagated like equality constraints
(or like class constraints in Haskell). For example, the
term: \verb+\ x y z -> Scons x (Scons y z)+ is assigned the 
constrained type:
\begin{verbatim}
(Le a b,Le b c) => Nat' c -> Nat' b -> SSeq a -> SSeq c
\end{verbatim}

The compiler uses the type of the constructor functions
for {\tt Le} to build the following constraint solving rules.

\begin{verbatim}
Base: Le Z a -->

Step: Le (S a) (S b) --> Le a b
\end{verbatim}
These rules can be used to satisfy obligations introduced
by propositional types. 
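
For example, when checking a concrete statically ordered list, each
{\tt Scons} introduces an {\tt Le} obligation that these rules discharge.
A sketch (the printed type may vary):
\begin{verbatim}
ordered :: SSeq 3t
ordered = Scons 3v (Scons 1v Snil)
-- inner Scons: Le 0t 1t                      solved by Base
-- outer Scons: Le 1t 3t --Step--> Le 0t 2t   solved by Base
\end{verbatim}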


\subsection{Types as Back-chaining Rules}\label{back}

The user can introduce
new propositional facts by writing functions over types
introduced by the {\tt prop} declaration. For example
we can show that {\tt Le} is transitive by exhibiting
the total function {\tt trans} with the following type.

\begin{verbatim}
trans :: Le a b -> Le b c -> Le a c
trans Base Base = Base
trans (Step z) Base = unreachable
trans Base (Step z) = Base
trans (Step x) (Step y) = (Step(trans x y))  
\end{verbatim}

Since this function is total, its type becomes a new rule
that can be used to satisfy propositional obligations. We
can activate this rule in some scope by using the {\tt theorem} declaration.
For example:
\begin{verbatim}
f x = body
  where theorem trans
\end{verbatim}
This declaration observes that the type of {\tt trans} can be used
as a back-chaining predicate solver, and adds the following rule
in the scope of the {\tt where} clause. The rule:

\begin{verbatim}
trans: (exists d) [Le b d, Le d c] => Le b c --> []
\end{verbatim}   
says that we can satisfy {\tt (Le b c)} if we can find
a concrete {\tt d} such that (both) {\tt (Le b d)} and {\tt (Le d c)}
are satisfied.


\section{Unreachable Clauses}

Type indexes to GADTs allow the user to make finer distinctions
than when using ordinary algebraic datatypes. Sometimes such
distinctions cause a clause in a function definition to become
unreachable. For example consider the second clause in the definition of
{\tt trans} below:

\begin{verbatim}
trans :: Le a b -> Le b c -> Le a c
trans Base Base = Base
trans (Step z) Base = unreachable
trans Base (Step z) = Base
trans (Step x) (Step y) = (Step(trans x y))  
\end{verbatim}
The pattern \verb+(Step z)+ has type \verb+(Le (S i) (S j))+
when we know \verb+a = (S i)+ and \verb+b = (S j)+. 
The pattern \verb+Base+ has type \verb+(Le Z k)+ when we
know \verb+b = Z+ and \verb+c = k+. These sets of assumptions
are inconsistent, since \verb+b+ can't simultaneously
be equal to \verb+Z+ and \verb+(S i)+. So the clause in the
scope of these patterns is unreachable. There are no well-typed
arguments to which we could apply {\tt trans} that would
exercise the second clause. The keyword {\tt unreachable}
indicates to the compiler that we recognize this fact.
Every clause marked {\tt unreachable} is tested for reachability;
if such a clause is in fact reachable, an error is raised. Conversely, an
unreachable clause without the {\tt unreachable} keyword
also raises an error.

The point of the unreachable clause is to document that the
author of the code knows that this clause is unreachable, and to
help document that the clauses exhaustively cover all
possible cases.


\subsection{Types as Refinement Lemmas} \label{refine}

Consider the function {\tt half} defined below. Given a natural
number whose static index is expressed as the sum of a number with itself,
it returns a number whose index is exactly half of the original. The
type tells us the function is undefined on numbers whose
index is not expressible as the sum of a number with itself.

\begin{verbatim}
plus :: Nat ~> Nat ~> Nat
{plus Z y} = y
{plus (S x) y} = S{plus x y}

half:: Nat' {plus n n} -> Nat' n
half Z = check Z
half (S Z) = unreachable
half (S (S x)) = S(half x)
\end{verbatim}

\noindent
The first and third equations generate the following type checking equations.

\vspace*{.1in}
\begin{tabular}{|l|rcl|} \hline
{\small expected type} & {\small{\tt Nat' \plus{n}{n}}} & $\rightarrow$ & {\small Nat' n} \\ \hline
{\small equation} & {\small half Z} & =             & {\small Z} \\ \hline
{\small computed type} & {\small Nat' Z} & $\rightarrow$ & {\small Nat' Z} \\ \hline
{\small equalities}    & {\small (Equal \plus{n}{n} Z)} & $\Rightarrow$ & {\small (Equal n Z)} \\ \hline 
\end{tabular}
\vspace*{.1in}


\vspace*{.1in}
\begin{tabular}{|l|rcl|} \hline
{\small expected type} & {\small{\tt Nat' \plus{n}{n}}} & $\rightarrow$ & {\small Nat' n} \\ \hline
{\small equation} & {\small half (S (S x))} & =             & {\small (S(half x))} \\ \hline
{\small computed type} & {\small Nat' (S(S a))} & $\rightarrow$ & {\small Nat' (S c)} \\ \hline
{\small equalities}    & {\small (Equal \plus{n}{n} (S(S a)))} & $\Rightarrow$ & {\small (Equal n (S c),Equal a \plus{c}{c})} \\ \hline 
\end{tabular}
\vspace*{.1in}

The current system cannot solve the given constraints.  The hypotheses are in terms
of the type-function call \verb+{plus n n}+. What we need is to direct
the type checker to take these facts and discover facts about \verb+n+, instead
of facts about \verb+{plus n n}+. Such facts are exhibited in the types of the following
two functions.

\begin{verbatim}
nPlusN2:: Nat' n -> Equal (S (S m)) {plus n n} -> 
                    exists k . (Equal n (S k),Equal m {plus k k})
nPlusN2 Z Eq = unreachable
nPlusN2 (S y) Eq = Ex(Eq,Eq)  
  where theorem plusS
  
nPlusN0:: Nat' n -> Equal Z {plus n n} -> Equal n Z
nPlusN0 Z Eq = Eq
nPlusN0 (S y) Eq = unreachable
\end{verbatim}

These functions are typechecked by a case analysis. In each
function one of the clauses is unreachable, because the expected type is inconsistent
with the type of the pattern in the definition. By introducing these
terms as theorems in the definition of {\tt half}, we gain extra information.
Consider:

\begin{verbatim}
half:: Nat' {plus n n} -> Nat' n
half Z = Z                     where theorem nPlusN0  -- introduces (Equal n Z)
half (S Z) = unreachable
half (S (S x)) = S(half x)     where theorem nPlusN2  
                                     -- introduces (Equal n (S c),Equal a {plus c c})
\end{verbatim}

\subsection{General Use of the {\tt theorem} Declaration}

Theorems are added by the {\tt theorem} declaration. 
The general form of a {\tt theorem} declaration is the keyword
{\tt theorem} followed by 1 or more (comma separated) theorems.
Each theorem is either an {\it identifier} or an {\it identifier} \verb+=+ {\it term}.
For example one may write:
\begin{verbatim}
f:: Nat' n -> Int
f m = 5
  where theorem trans, indHyp = plusS m, nPlusN0
\end{verbatim}
Here, three theorems are introduced. They are named {\tt trans}, {\tt indHyp}, and
{\tt nPlusN0}. The bodies of the theorems are taken from the types
of the terms {\tt trans}, {\tt (plusS m)}, and {\tt nPlusN0}. If a theorem
does not have an associated term (as is the case for {\tt trans}), the
variable in scope with the same name as the theorem is used.
In the scope of the body of {\tt f} the theorems are:
\begin{verbatim}
BackChain trans: (exists 'd) [Le 'b 'd, Le 'd 'c] => Le 'b 'c --> []
Rewrite indHyp: [] => {plus _n (1+'e)t} --> (1+{plus _n 'e})t
Refinement nPlusN0: [Nat' 'a] => Equal 0t {plus 'a 'a} --> [Equal 'a 0t]
\end{verbatim}

Note how the body of the theorem derives from the type of the term in the {\tt
theorem} declaration. As discussed in Sections \ref{rewrite}, \ref{back}, 
and \ref{refine}, there are currently three uses for theorems:

\begin{enumerate} \item As a left-to-right rewrite rule. The type has the form ({\tt
Equal} {\it lhs} {\it rhs}); the {\it lhs} must be a type-constructor call or a type-function
call. The {\it rhs} is used to replace a term matching the {\it lhs} when narrowing.

\item As a back-chaining rule: \verb+P x -> Q y -> S x y+   where {\tt P}, {\tt Q},
and {\tt S} are propositions. The intended semantics of a back-chaining theorem
is that, if one is trying to establish \verb+(S x y)+ as a predicate,
one may establish the set of predicates \verb+{ (P x), (Q y) }+ instead.

\item As a refinement lemma:  \verb+cond -> Equal f g -> (Equal x t,Equal y s)+ where
{\tt f}, {\tt g}, {\tt s}, and {\tt t} are arbitrary terms, but {\tt x} and {\tt y}
are variables.  The intended semantics of a refinement lemma is to add
additional facts to the set of assumed truths. These new facts are
simpler in form (they equate variables to terms) than the old facts they are
derived from.

\end{enumerate}


\section{Syntax Extensions} \label{synext}

Many languages supply syntactic sugar for constructing homogeneous sequences and
heterogeneous tuples. For example, in Haskell lists are often
written with bracketed syntax, \verb+[1,2,3]+, rather than a constructor function syntax, \verb+(Cons 1 (Cons 2 (Cons 3 Nil)))+, and
tuples are often written as \verb+(5,"abc")+ and \verb+(2,True,[])+
rather than \verb+(Pair 5 "abc")+ and \verb+(Triple 2 True [])+. In \om\
we supply special syntax for five different kinds of data, and allow users to
use this syntax for data they define themselves. \om\ has
special syntax for list-like, natural-number-like, pair-like, record-like, and unary-increment types.
Some examples in the supported syntax are: \verb+[4,5]i+, \verb|(2+n)j|,
\verb+(4,True)k+, \verb+{"a"=5, "b"=6}h+, and \verb+(x`3)w+. In general, the syntax starts
with list-like, natural-number-like, record-like, pair-like, or unary-increment syntax,
and is terminated by a tag. A user may specify that 
a user defined type should be displayed using the special syntax with a given tag. Each
tag is associated with a set of functions (a different set for
list-like, natural-number-like, record-like, pair-like, and unary-increment types). 
Each kind of syntax has an associated tag-table, so the same tag can be used once for
each kind of syntax. Each
term expands into a call of the functions specified by the tag
in the special syntax.

The list-like syntax associates two functions with each
tag. These functions play the role of {\tt Nil} and {\tt Cons}.
For example if the tag "{\tt i}" is associated with
the functions {\tt (C,N)}, then the expansion is as follows.
\begin{verbatim}
[]i         ---> N
[x,y,z]i    ---> C x (C y (C z N))
[x;xs]i     ---> (C x xs)
[x,y ; zs]i ---> C x (C y zs)
\end{verbatim}
\noindent
The semicolon may only appear before the last element in the square brackets.
In this case, the last element stands for the tail of the resulting list.
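
For example, a user-defined sequence type might be attached to the
list-like tag {\tt i} with a {\tt deriving} clause. This is a sketch;
the precise form of the deriving clause is described at the end of
this section:
\begin{verbatim}
data Seq :: *0 ~> *0 where
  Snil :: Seq a
  Scons :: a -> Seq a -> Seq a
    deriving List(i)

ys = [1,2,3]i    -- expands to Scons 1 (Scons 2 (Scons 3 Snil))
\end{verbatim}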

The natural-number-like syntax associates two functions with each
tag. These functions play the role of {\tt Zero} and {\tt Succ}.
For example if the tag "{\tt i}" is associated with
the functions {\tt (Z,S)}, then the expansion is as follows:
\begin{verbatim}
4i     ---> S(S(S(S Z)))
0i     ---> Z
(2+x)i ---> S(S x)
\end{verbatim}
For backward compatibility, the tags {\tt t} and {\tt v} are reserved for
the built-in types {\tt Nat} and {\tt Nat'}: \verb+4t+ expands to
{\tt S(S(S(S Z)))} in the type name space, and \verb+4v+ expands to
{\tt S(S(S(S Z)))} in the value name space.


The pair-like syntax associates one function with each
tag. This function plays the role of a binary constructor.
For example if the tag "{\tt i}" is associated with
the function {\tt P}, then the expansion is as follows:
\begin{verbatim}
(a,b)i      ---> P a b
(a,b,c)i    ---> P a (P b c)
(a,b,c,d)i  ---> P a (P b (P c d))
\end{verbatim}

The record-like syntax associates two functions with each
tag. These functions play the role of the constant {\tt RowNil} and 
the ternary function {\tt RowCons}.
For example if the tag "{\tt i}" is associated with
the functions {\tt (RN,RC)}, then the expansion is as follows:
\begin{verbatim}
{}i             ---> RN
{a=x,b=y}i      ---> RC a x (RC b y RN)
{a=x;xs}i       ---> (RC a x xs)
{a=x,b=y ; zs}i ---> RC a x (RC b y zs)
\end{verbatim}

The unary-increment syntax associates one function with each tag. This function
plays the role of an increment function {\tt Tick}. 
For example if the tag "{\tt w}" is associated with
the function {\tt Tick}, then the expansion is as follows.
\begin{verbatim}
(x`0)w     ---> x
(x`1)w     ---> Tick x
(x`2)w     ---> Tick(Tick x)
(x`3)w     ---> Tick(Tick(Tick x))
\end{verbatim}



Syntactic extension can be applied to any GADT, at either the value or type level. The
new syntax can be used by the programmer for terms, types, or patterns. \om\ uses the
new syntax to display such terms. The constructor-based mechanism can also still
be used. The tags are specified using a deriving clause in a GADT. As of the March 2010
release there are two styles of syntax derivations: old-style, and new-style. Both are
supported. The new-style is more expressive, and all old-style derivations
can be replaced by equivalent new-style ones. Eventually the old-style will
be deprecated and removed from the system.

\subsection{Old-style Derivations}

Old-style derivations specify a single tag to be associated with a single syntax
extension for a particular datatype. Using the old-style, each datatype can be associated with a
single syntax extension, and the number of constructor functions of the datatype
must agree with the number of functions associated with that extension. For example,
to use the List extension the datatype must have two constructors, and to use the Pair
extension the datatype must have one constructor.


\subsubsection{Old-style List example: Tuples are Lists that Reflect the Type of Their Components}

An example where the
list-like extension is used both at the value and type level follows.
It generalizes the {\tt Prod}-{\tt Tuple} example from Section \ref{kinds}.



\begin{verbatim}
data Prod ::  *1 ~> *1 where
   Nil  :: Prod a
   Cons :: a ~> Prod a ~> Prod a
 deriving List(a)   -- the List tag "a" is associated with (Nil,Cons) of Prod

data Tuple :: forall (a:: *1). Nat ~> Prod a ~> *0 where
   NIL  :: Tuple Z Nil
   CONS :: a -> Tuple n l -> Tuple (S n) (Cons a l)
 deriving List(b)   -- the List tag "b" is associated with (NIL,CONS) of Tuple
\end{verbatim} 
We can construct value-level {\tt Tuple} and type-level {\tt Prod} using
the special list-like syntax and the tags ``{\tt a}" and ``{\tt b}". A {\tt Tuple}
is a heterogeneous list with indexed length. The types of the elements in
the list are reflected in the type of the list as a {\tt Prod} at the type
level. For example, we evaluate a value-level list, and ask the system
to compute the kind of a type-level list.

\begin{verbatim}
prompt> [3,True,3.4]b
[3,True,3.4]b : Tuple 3t [Int,Bool,Float]a

prompt> :k [Int,Bool]a
[Int,Bool]a :: Prod *0 
\end{verbatim} 

Note how the lists are both entered and displayed with
the list-syntax with the appropriate tags. We can even use the list-syntax
as a pattern in a function definition.

\begin{verbatim}
testfun :: Tuple (n+2)t [Int,Bool; w]a -> (Int,Bool,Tuple n w)
testfun [x,y;zs]b = (x+1,not y,zs) 
\end{verbatim} 
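For comparison, the same idea can be sketched in GHC Haskell, where the
{\tt DataKinds} extension plays the role of \om's type levels (this is an
analogue only: it has no tag syntax, and for brevity it drops the length
index {\tt Nat}):

\begin{verbatim}
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}
import Data.Kind (Type)

-- promoted lists play the role of Prod at the type level
data Tuple :: [Type] -> Type where
  NIL  :: Tuple '[]
  CONS :: a -> Tuple ts -> Tuple (a ': ts)

-- the term [3,True,3.4]b, written with explicit constructors:
example :: Tuple '[Int, Bool, Double]
example = CONS (3 :: Int) (CONS True (CONS 3.4 NIL))
\end{verbatim}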

\subsubsection{Old-style Nat example: An $n$-ary Summing Function}

An example of a type that makes good
use of the natural-number like syntax is the type {\tt SumSpec}.

\begin{verbatim}
data SumSpec :: *0 ~> *0  where
   Zero :: SumSpec Int 
   Succ :: SumSpec a -> SumSpec (Int -> a)
 deriving Nat(s)   -- associates the Nat tag "s" with (Zero,Succ)
\end{verbatim} 
The idea behind {\tt SumSpec} is that the types of the
terms in the series {\tt Zero},
{\tt (Succ Zero)}, {\tt (Succ(Succ Zero))}, etc.\ are each
of the form {\tt SumSpec i}, i.e. {\tt SumSpec Int}, {\tt SumSpec(Int -> Int)},
{\tt SumSpec(Int -> Int -> Int)}, etc. Each {\tt i} in the series
is the type of the corresponding function in the series of functions below.
\begin{verbatim}
0
\ x -> x
\ x -> \ y -> x+y
\ x -> \ y -> \ z -> x+y+z
\end{verbatim}
These terms are the functions that sum 0, 1, 2, 3, etc.\ integers.
We can observe the relationship between the types of the terms and
the type indexes of {\tt SumSpec} by typing a series of {\tt SumSpec} examples into the \om\ interactive loop.
\begin{verbatim}
prompt> 0s
0s : SumSpec Int
prompt> 1s
1s : SumSpec (Int -> Int)
prompt> 2s
2s : SumSpec (Int -> Int -> Int)
prompt> 3s
3s : SumSpec (Int -> Int -> Int -> Int)
\end{verbatim}
We can now write a generic function that sums any number of integers.

\begin{verbatim}
sum:: SumSpec a -> a
sum x = sumhelp x 0
  where sumhelp:: SumSpec a -> Int -> a
        sumhelp Zero n = n
        sumhelp (Succ x) n = \ m -> sumhelp x (n+m)
\end{verbatim}        
By using the natural-number-like syntax we get a pleasant interface to
the use of {\tt sum}. Observe the example use in the interactive session
below:
\begin{verbatim}
prompt> sum 0s
0 : Int
prompt> sum 1s 3
3 : Int
prompt> sum 2s 4 7
11 : Int
prompt> sum 3s 3 5 1
9 : Int
prompt> sum 3s 3 5
<fn> : Int -> Int
\end{verbatim}
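Since GHC Haskell also supports GADTs, the behavior of {\tt sum} can be
reproduced there as well (a sketch, not \om\ code; without the
natural-number-like syntax the specifications must be written with
explicit constructors):

\begin{verbatim}
{-# LANGUAGE GADTs #-}

data SumSpec a where
  Zero :: SumSpec Int
  Succ :: SumSpec a -> SumSpec (Int -> a)

sumN :: SumSpec a -> a
sumN spec = go spec 0
  where go :: SumSpec a -> Int -> a
        go Zero     n = n
        go (Succ s) n = \m -> go s (n + m)

-- sumN (Succ (Succ Zero)) 4 7  evaluates to 11
\end{verbatim}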

\subsubsection{Old-style Record example: Records and Rows}
Finally, we demonstrate how using Tags and Labels (see Section \ref{tag})
we can roll our own record structures. 

\begin{verbatim}
data Row :: a ~> b ~> *1 where
   RNil :: Row x y
   RCons :: x ~> y ~> Row x y ~> Row x y
 deriving Record(r)

data Record :: Row Tag *0 ~> *0 where
    RecNil :: Record RNil
    RecCons :: Label a -> b -> Record r -> Record (RCons a b r)
  deriving Record()
\end{verbatim} 
The {\tt Row} and {\tt Record} types are like the {\tt Prod} and
{\tt Tuple} types, but they {\it cons} a {\it pair} of things onto a
linear sequence, rather than a single thing. By specializing
the first part of the pair to a {\tt Label} we can build labeled
tuples or records.

\begin{verbatim}
prompt> { `name="tim", `age=21}
{`name="tim",`age=21} : Record {`name=[Char],`age=Int}r
\end{verbatim} 

Here we demonstrate that the tag for {\tt Record} is the empty tag.
The empty tag for {\tt Nat} is reserved for ordinary {\tt Int},
the empty tag for {\tt List} is reserved for ordinary lists,
and the empty tag for {\tt Pair} is reserved for \om's built-in binary
product.

\subsection{New-style Derivations}

The old-style syntax extension allows the user to choose only one style of syntactic extension
per datatype declaration, and limits the number of constructors of the datatype to exactly the number
of syntax functions associated with the extension. The new-style of derivation lifts both these restrictions.

\subsubsection{New-style Unary increment example: Cdr}

This example illustrates why it is often desirable to allow more constructors than those supported by a
single extension. It is inspired by the family of Lisp functions {\it cdr}, {\it cddr}, {\it cdddr}, {\it
cddddr}, etc., which select the {\it n}th iterated second component of a set of nested pairs, where {\it n}
corresponds to the number of {\it d}s. The idea is to write \verb+(x`n)c+ as a specification
of the function c$d^n$r, where {\tt c}
is the syntax extension tag for the datatype. We support a {\tt Projection}
specification type, such that {\tt Projection a b} specifies a projection from the type {\tt a}
to the type {\tt b}, and a function {\tt cdr:: (Projection a b) -> a -> b}. For
example, the partial applications of {\tt cdr} have the following types:

\begin{verbatim}
cdr (Id`0)p :: a -> a
cdr (Id`1)p :: (a,b) -> b
cdr (Id`2)p :: (a,(b,c)) -> c
cdr (Id`3)p :: (a,(b,(c,d))) -> d
\end{verbatim}

We define this by using the following datatype declaration with a new-style {\tt Tick}
extension.

\begin{verbatim}
data Projection :: *0 ~> *0 ~> *0 where
  Car:: Projection e t -> Projection (e,s) t
  Cdr:: Projection e t -> Projection (s,e) t
  Id:: Projection t t
 deriving syntax(p) Tick(Cdr) 
\end{verbatim} 

In a new-style extension, the keyword {\tt syntax} is followed by the tag, and each
kind of extension is followed by the names of the constructors that will be associated
with its extension functions. Thus {\tt (x`2)p} stands for {\tt (Cdr(Cdr x))}.
Note the types of a few terms constructed this way:

\begin{verbatim}
zero = (Id`0)p :: Projection a a
one  = (Id`1)p :: Projection (a,b) b
two  = (Id`2)p :: Projection (a,(b,c)) c
\end{verbatim} 

Of course the purpose of this type is to serve as the specification for a family
of {\tt cdr} functions. We write this below, illustrating that the syntax extension
can also be used in the pattern matching language.

\begin{verbatim} 
cdr :: Projection a b -> a -> b
cdr Id x = x
cdr (Car x) (a,b) = cdr x a
cdr (x`1)p  (a,b) = cdr x b
\end{verbatim} 
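The {\tt Projection} example translates almost directly into GHC Haskell
(again a sketch: GHC has no {\tt Tick} syntax, so all projections are
written with explicit constructors):

\begin{verbatim}
{-# LANGUAGE GADTs #-}

data Projection e t where
  Car :: Projection e t -> Projection (e, s) t
  Cdr :: Projection e t -> Projection (s, e) t
  Id  :: Projection t t

cdr :: Projection a b -> a -> b
cdr Id      x      = x
cdr (Car p) (a, _) = cdr p a
cdr (Cdr p) (_, b) = cdr p b

-- cdr (Cdr (Cdr Id)) (1, (2, (3, 4)))  evaluates to (3, 4)
\end{verbatim}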

\subsubsection{New-style multiple syntax example: Test}

Finally we illustrate that one datatype can support multiple syntax extensions.


\begin{verbatim}
data Test:: *0 where
  Nil :: Test
  Cons :: Test -> Test -> Test
  RNil :: Test
  RCons :: String -> Test -> Test -> Test
  Zero :: Test
  Succ :: Test -> Test
  Pair :: Test -> Test -> Test
  Next :: Test -> Test
  A:: Test
 deriving syntax(w) List(Nil,Cons) Nat(Zero,Succ) 
                    Pair(Pair) Tick(Next) Record(RNil,RCons)
\end{verbatim}
This allows one to write things like the following:

\begin{verbatim}
test = { "name" = A
       , "age"= 2w
       , "xs" = [ (A`1)w ]w
       , "ps" = (0w,1w)w }w
\end{verbatim}
rather than the much more verbose
\begin{verbatim}
verbose = RCons "name" A (
          RCons "age" (Succ (Succ Zero)) (
          RCons "xs" (Cons (Next A) Nil) (
          RCons "ps" (Pair Zero (Succ Zero)) RNil)))
\end{verbatim}




\section{Level Polymorphism}
Sometimes we wish to use the same structure at both the value and type level.
One way to do this is to build isomorphic, but distinct, data structures
at different levels. In \om, we can instead define a structure that lives
at many levels. We call this {\it level polymorphism}. For example,
a {\tt Tree} type that lives at all levels can be defined by:

\begin{verbatim}
data Tree :: level n . *n ~> *n where
  Tip :: a ~> Tree a
  Fork :: Tree a ~> Tree a ~> Tree a
\end{verbatim}
\noindent
Levels are {\it not} types. A level variable can only be used
as an argument to the {\tt *} operator. Level abstraction can only
be introduced in the kind part of a {\tt data} declaration, but level polymorphic
functions can be inferred from their use of constructor functions
introduced in level polymorphic {\tt data} declarations.

In the example above,
\om\ adds the type constructor {\tt Tree} at all type levels,
and the constructors {\tt Tip} and {\tt Fork} at the value level
as well as at all type levels. We can illustrate this by evaluating
a tree at the value level, and by asking \om\ for the kind of
a similar term at the type level.

\begin{verbatim}
prompt> Fork (Tip 3) (Tip 1)
(Fork (Tip 3) (Tip 1)) : Tree Int

prompt> :k Tip Int
Tip Int :: Tree *0 
\end{verbatim}
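GHC Haskell approximates this pattern with the {\tt DataKinds} extension,
which promotes a datatype and its constructors to the type level (a
sketch, not \om\ code; GHC promotes one level only, rather than to all
levels):

\begin{verbatim}
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import Data.Kind (Type)

data Tree a = Tip a | Fork (Tree a) (Tree a)

-- 'Tip and 'Fork may now appear in types, indexing a GADT:
data Shape :: Tree () -> Type where
  STip  :: Shape ('Tip '())
  SFork :: Shape l -> Shape r -> Shape ('Fork l r)

balanced :: Shape ('Fork ('Tip '()) ('Tip '()))
balanced = SFork STip STip
\end{verbatim}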

Another useful pattern is to define normal ({\tt *0}) types indexable
by types at all levels. For example consider the kind of the type constructor
{\tt Equal} and the type of its constructor {\tt Eq}.

\begin{verbatim}
Equal :: level b . forall (a:*(1+b)).a ~> a ~> *0

Eq :: level b . forall (a:*(1+b)) (c:a:*(1+b)).Equal c c
\end{verbatim}
Without level polymorphism, the {\tt Equal} type constructor could only
witness equality between types at a single level, i.e. types classified by
{\tt *0} but not {\tt *1}. So {\tt (Equal Int Bool)} is well formed
but {\tt (Equal Nat (Prod *0))} would not be, since both {\tt Nat}
and {\tt (Prod *0)} are classified by {\tt *1}. For a useful
example, the type of {\tt sameLabel} could not be expressed
using a level-monomorphic {\tt Equal} datatype.

\begin{verbatim}
sameLabel :: forall (a:Tag) (b:Tag).Label a -> Label b
              -> Equal a b + DiffLabel a b
\end{verbatim}
\noindent
This is because the {\tt a} and {\tt b} are classified by {\tt Tag}, and are not classified by {\tt *0}. A similar restriction would make
{\tt Row} kinds less useful without level polymorphism.


\begin{verbatim}
Row :: level d b . forall (a:*(2+b)) (c:*(2+d)).a ~> c ~> *1
\end{verbatim} 
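The closest GHC Haskell analogue of a level-polymorphic {\tt Equal} is a
kind-polymorphic equality GADT (a sketch using the {\tt PolyKinds}
extension):

\begin{verbatim}
{-# LANGUAGE GADTs, PolyKinds #-}

data Equal (a :: k) (b :: k) where
  Eq :: Equal a a

reflType :: Equal Int Int      -- Int has kind Type
reflType = Eq

reflCon :: Equal Maybe Maybe   -- Maybe has kind Type -> Type
reflCon = Eq
\end{verbatim}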


\section{Tracing}\label{tracing}

The \om\ system includes a number of type-checking-time tracing mechanisms that work
well with the type-checking interactive loop (Section \ref{typechecker}). The tracing 
mechanisms are controlled by the {\tt :set} and {\tt :clear} commands
on the system modes. These commands are available at any of the
command-level prompts (Section \ref{command}).

\subsection{System Modes for Tracing}
The command-level commands {\tt :set} and {\tt :clear} control the
\om\ system mode variables.
The mode variables control the tracing behavior of the system. The modes
allow the user to display internal and intermediate type
information for debugging or understanding purposes.
The available modes are:

\input{modes}

To illustrate the use of the tracing modes, we will trace type checking the function
{\tt app}. Note the insertion of the {\tt check} keyword
in the second clause.

\begin{verbatim}
app::List n a -> List m a -> List {plus n m} a   
app Nil ys = ys
app (Cons x xs) ys = check Cons x (app xs ys)
\end{verbatim}

This allows the user to selectively control when the tracing is in effect.
Note, in the transcript below, the use of the {\tt :set narrow} command to turn
narrowing tracing on, and then the use of the {\tt :q} command to quit the
interactive debugger. As type-checking continues from this point, narrowing
trace information is printed. While tracing, the trace output is paused, and the
user is asked to {\tt step} or {\tt continue}.

\begin{verbatim}
*** Checking: Cons x (app xs ys)
*** expected type: List {plus (1+_c)t _m} _a
***    refinement: {_b=_a, _n=(1+_c)t}
***   assumptions:
***      theorems:
check> :set narrow
check> :q
Norm {plus (1+_c)t _m} ---> (1+{plus _c _m})t

####################c
Solve By Narrowing: Equal {plus (1+_c)t _m} (1+{plus _c _m})t
Collected by type checking in scope case 9.
line: 219 column: 1
app (Cons x xs) ys = (Check Cons x (app x ...
Normal form: Equal (1+{plus _c _m})t (1+{plus _c _m})t
Assumptions:
   Theorems:

-------------------------------------
25 Narrowing the list (looking for 3 solutions) found 0
   Equal (1+{plus _c _m})t (1+{plus _c _m})t

with truths:
   and()

press return to step, 'c' to continue:

-------------------------------------
24 Narrowing the list (looking for 3 solutions) found 0
   Equal {plus _c _m} {plus _c _m}

with truths:
   and()

press return to step, 'c' to continue:

*********************
Found a solution for:
  Equal (1+{plus _c _m})t (1+{plus _c _m})t

Answers = {}
\end{verbatim}


\section{Freshness} \label{fresh}

\om\ includes an experimental implementation of Pitts and Gabbay's
fresh types\cite{gabbay-pitts-02}. The interface to this mechanism
is the type {\tt Symbol}, and the functions:
\begin{verbatim}
fresh:: Char -> Symbol
swap:: Symbol -> Symbol -> a -> a
symbolEq:: Symbol -> Symbol -> Bool
freshen:: a -> (a,[(Symbol,Symbol)])
\end{verbatim}
See the paper for how these might be used.


\section{Resource Bounds}\label{bounds}

In \om, there are several resource bounds that can be controlled by
the user. The current resource bounds control the number of
steps taken while narrowing, and the number of times a backchaining
theorem can be applied. The narrowing bound is important
because narrowing may find an infinite number of solutions
to a certain problem, and the resource bound cuts off such a search.
The backchaining bound is important because some theorems
(especially commutative and associative theorems) can
lead to infinite rewriting sequences. These bounds are controlled
using the {\tt :bounds} command in the toplevel loop. The
command {\tt :bounds} with no argument lists the current resource bounds
and their values. The command ({\tt :bounds} {\it bound} {\it n})
sets the bound {\it bound} to {\it n}. This is illustrated
in the transcript below:

\begin{verbatim}
prompt> :bounds
narrow = 25   Number of steps to take when narrowing.
backchain = 4 Number of times a backChain lemma can be applied.

prompt> :bounds narrow 30

prompt> :bounds
narrow = 30   Number of steps to take when narrowing.
backchain = 4 Number of times a backChain lemma can be applied.
\end{verbatim}

\section{More Examples}\label{examples}

Appendices \ref{induction} and \ref{quick} contain two
short \om\ programs that illustrate the use of the {\tt theorem}
declaration and the use of static constraints. In addition,
there are many more examples of the use of \om\ in the papers listed below:
\begin{itemize}

\item Meta-Programming with Typed Object-Language Representations\cite{PasalicLingerGpce}
\item Meta-programming with Built-in Type Equality\cite{SheardLogFrWks04}
\item Languages of the Future\cite{Sheard:2004:LF}
\item Programming with Static Invariants in Omega\cite{SheardLinger}
\item GADTs, Refinement Types, and Dependent Programming\cite{SheardHookLinger}
\item Putting Curry-Howard to Work\cite{CurryHoward}
\item Playing with Types\cite{Playing}
\item Type-Level Computation Using Narrowing in Omega.
\end{itemize}

All these papers are available at \verb+http://web.cecs.pdx.edu/~sheard+.
 
\nocite{*}
\bibliographystyle{alpha}
\bibliography{final}

\appendix

\section{Proofs by Induction over the Natural Numbers} \label{induction}

\input{EqualProofsByInduction.prg}

\section{Quicksort Algorithm in \om}\label{quick}

\input{qsort.prg}

\printindex

\end{document}
