\documentclass[11pt]{article}

\usepackage{multicol}
\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{amsmath, amsthm, amssymb}
\usepackage{cite}
\usepackage{url}

\setlength{\topmargin}{0in}
\setlength{\headheight}{0cm}
\setlength{\headsep}{0cm}
\setlength{\textheight}{9in}
\setlength{\oddsidemargin}{0cm}
\setlength{\evensidemargin}{0cm}
\setlength{\textwidth}{6.5in}

\renewcommand{\algorithmiccomment}[1]{// #1}
\renewcommand{\algorithmicensure}{\textbf{define}}

\begin{document}

\title{Design and Implementation of a\\
Hybrid-Typed Programming Language:\\
Review of Existing Work}

\author{Michael M. Vitousek\\
Willamette University\\
\texttt{mmvitousek@gmail.com}}
\date{May 5, 2010}
\maketitle


\begin{abstract}
Numerous approaches to determining the correctness of computer programs have been proposed and implemented, such as dynamic and static type checking. These methods carry various advantages and drawbacks for the programming languages in which they are applied, and frequently inform the rest of the language's structure, syntax, and interpretation. This paper considers a number of important issues surrounding programming language design in general and the implementation of type systems in particular, with a view towards the development of a programming language and type checking system that integrate some of the advantages of both static and dynamic methods.
\end{abstract}

	\section{Introduction}
	Programming languages are classified in a number of ways, and one important classification is how they treat the types of a program. Types classify the information within a program and arise from the need for a system to reject nonsensical operations. They are handled by a number of different techniques in different systems, but most current type systems may be broadly classified into two categories. Static type systems analyze type data in a program at compile time and reject the program if any type errors (situations where the provided type does not match the expected type of an operation or identifier) are detected. If no type errors are found, static type systems allow the program to assume at runtime that all type information in the program is accurate and that no further checking is required\cite{pierce}. Dynamic type systems operate on a very different principle. They reject bad programs at runtime rather than compile time, by tagging program data with type information and using that information to detect type errors. Static systems give a measure of security to the programmer and allow abstract analysis of a program's behavior, but because not all possible programs are expressible in a static language, static type systems reject some programs that are in fact computable\cite{pierce}. Dynamic languages allow for more flexibility because of their lack of conservative compile time analysis, but this same flexibility allows for type errors that are not caught until runtime.

	This project, therefore, aims to develop a type system, to be referred to as ``hybrid typing,'' that allows for the expressiveness of dynamic type systems but preserves some of the safeties of static typing. The project also entails the development of a programming language that implements this type system. The form of the language (e.g.\ interpreted vs.\ compiled, syntax, etc.)\ will be decided based upon time constraints, but at minimum a prototype system will be an interpreter for a small-to-medium sized functional language. The type system will be of a form resembling a static type system, but will leave a way for the programmer to explicitly state that some variable, operation, or function is to be considered dynamic and thereby ``escape'' from static program analysis. By this method the language will retain the ability to determine correctness at compile time for all sections of code that are not explicitly or implicitly dynamic, but will give the programmer the option to perform operations and write programs that a purely static system would reject.

	\section{Review of Existing Work}
	This paper provides a broad survey of work in programming language design and theory, with a focus on type systems and type theory, in order to lay a foundation for future work in the development of a hybrid typing system and an associated language. It examines numerous aspects of the design and implementation of programming languages, some of which may not be directly associated with hybrid typing or type systems in general, but which remain crucial to the development of the programming language itself.
	\subsection{Type Systems}
	The fundamental purpose of programming languages is the manipulation of data, and therefore the ability to reason about the behavior of data is an important concern in programming languages. The categorization of data is one way of enabling reasoning about a system, and it guarantees a certain amount of correctness by forbidding operations that are not judged to make sense within the system's abstractions. \textbf{Type systems} are the most commonly used approach to categorization and analysis in programming languages, and rely on assigning every computed value in a program to a particular class of information, such as natural numbers or Boolean values.\footnote{This process is also referred to as type checking. As type theory is a large field with a long history, some differences in terminology have arisen. This paper uses the terms ``type checker'' and ``type system'' interchangeably, without restricting ``type system'' to imply static type checking.} Type systems aid in ensuring program correctness by preventing the performance of actions which the programming language disallows. For example, in a programming language that does not allow the addition of numbers and Boolean values, it is the type system that actively prevents such an event from occurring. How and when it does so varies widely depending on the particular implementation, however, and several implementations, with varying strengths and weaknesses, are considered here.

	Benjamin Pierce\cite{pierce} defines a type system as ``a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute,'' but this definition encapsulates a number of different approaches while excluding others. The most straightforward approach, and the one most familiar to programmers, is simple typing, a widely used subset of the discipline of \textbf{static typing}. Simple typing is implemented in languages such as Java and C and makes use of type annotations written by the programmer to classify data. Syntactically, in a simply typed language certain segments of code (such as the declaration of a variable or method) will explicitly include a type annotation, which informs the type system as to the intended type of the code segment. Statically typed languages in general do not necessarily rely on type annotation, but must have some method of determining type information before a program is actually executed. Such a type system uses this information to analyze programs at \emph{compile time}. The type checker performs a conservative analysis of the program to ensure that no situation could occur in which a \textbf{type error} may exist --- that is, when the type of the computed value of a code segment does not correspond with its expected type, or when an operation is applied to data whose types do not correspond with the operation's expected input type. The abstraction provided by a statically typed language allows its type checker to perform this analysis without actually executing the code, a process referred to as \textbf{static interpretation}. The additional information provided by these type annotations further allows other types of abstract analysis or abstract interpretation of the code, aiding in debugging and reasoning about the program\cite{cousot77}.

	A common alternative to static type checking is \textbf{dynamic typing}. Dynamic type systems, seen in languages such as Scheme and Python, reject the explicit programmer use of type annotations, and instead implicitly apply type information to data at \emph{runtime}. Thus any particular segment of code is not \emph{a priori} restricted or guaranteed to compute to a value of any particular type, and checking for type errors occurs during execution of the program.
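Python, named above as a dynamically typed language, illustrates this behavior concretely. The following is a minimal sketch (the function and values are hypothetical examples): the same code runs for any types that support the operation, and an ill-typed use is detected only when it is actually executed.

```python
def add(a, b):
    # No declared types: a and b may be anything supporting +.
    return a + b

print(add(1, 2))        # integers work
print(add("a", "b"))    # so do strings -- the same code, different types

try:
    add(1, "b")         # ill-typed: detected only at runtime
except TypeError as err:
    print("runtime type error:", err)
```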

	Dynamic typing and static typing, though they are intended to resolve the same issues, approach the problem of detecting type errors in fundamentally different ways, and their approaches each introduce both advantages and problems for the language. By analyzing a program ahead of time, static languages allow greater simplicity and decreased overhead at runtime --- since the type checker has already proven that there are no type errors, the program may be executed without monitoring type information.\footnote{In practice, however, many large static languages such as Java retain type information for use in checking casting operations.} Furthermore, a program that passes static interpretation without detection of type errors can be assumed to, in fact, not have any type errors, because of the conservative nature of the analysis. This greatly enhances confidence in the correctness of a compiled program, which may be critically important in large, commercial programs. However, the conservative nature of this analysis means that the type checker will reject some programs which could never actually resolve to a type error. A trivial example could be the program:
\begin{algorithmic}[1]
\label{alg:trivial}
\IF {$1 \ge 2$ }
\STATE \textsf{integer} $x := 17.434$ \COMMENT{Type error --- $x$ is an \textsf{integer}, but its value is a real number}
\ELSE
\STATE \textbf{print} \texttt{"No problems here!"}
\ENDIF
\end{algorithmic} 
Static interpretation will reject this program because of the type error at line 2, but since the truth branch of the if-statement can never be reached, a dynamic interpretation (i.e.\ execution) of the program will never encounter errors. Although this particular example is improbable in practice and bad coding regardless, it serves to demonstrate that statically typed languages inherently cannot express a subset of computable programs, but in exchange refuse to allow many impossible, erroneous ``programs'' which dynamically typed languages will accept. Less contrived and more useful examples include the ``simulation'' of classes (in the object-oriented sense) in dynamic and functional languages using nested function definitions (see for example Abelson and Sussman, section 3.1\cite{abelson}), and the \emph{eval} function, which takes a string as its argument and then executes the string as if it were source code. Since the result and type of the code contained within the string cannot be known before its execution via \emph{eval}, the function can only be implemented fully in dynamic systems\cite{abelson}.
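The \emph{eval} function behaves exactly this way in Python; the type of the result cannot be known before the string is executed, so the call site cannot be statically typed (a minimal sketch):

```python
# eval executes a string as source code; the result's type depends
# entirely on the string's contents, so it cannot be checked statically.
print(eval("3 + 4"))        # an integer this time
print(eval("'ab' + 'cd'"))  # a string this time -- at the same call site
```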

	Generally, programming languages accept the limitations placed on them by their typing discipline, though they often include features allowing them to ``simulate'' other type systems (for example, type casting in some object-oriented languages). However, a number of methods of integrating these systems have appeared in the literature. The most frequently seen example thereof is \textbf{inferred typing}, the family of type systems used in languages like Haskell and ML\cite{krishnamurthi}. These systems, often called ``type inference engines,'' allow programmers to forgo the use of explicit type annotations, and instead the inference engine deduces type information at compile time based on what types the operations in an expression support. If the type checker deduces that the set of types that can ``work'' within a given expression is empty, then a type error has occurred. Because this all occurs statically, without execution of the program, type inference is a subset of static typing, but it does capture one of the benefits of dynamic typing --- the ability to write code without explicit type annotations --- and may allow more flexibility than a simply typed system.
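The deduction step can be sketched for a toy expression language (a hypothetical illustration, far simpler than the Hindley-Milner systems used by Haskell and ML): each operation constrains the types of its operands, and an empty set of workable types is a type error.

```python
def infer(expr):
    """Infer the type of expr: a literal, or a tuple ('+', left, right)."""
    if isinstance(expr, bool):       # check bool before int: True is an int in Python
        return "bool"
    if isinstance(expr, int):
        return "int"
    if isinstance(expr, tuple) and expr[0] == "+":
        left, right = infer(expr[1]), infer(expr[2])
        if left == right == "int":   # '+' only "works" at int -> int -> int here
            return "int"
        raise TypeError(f"no type works for + at {left}, {right}")
    raise TypeError("unknown expression form")

print(infer(("+", 1, ("+", 2, 3))))  # int
```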

	An approach that more thoroughly melds static and dynamic type systems is offered by Satish Thatte\cite{thatte90}. This system, referred to as \textbf{quasi-static typing}, is able to statically classify a program, or a portion of a program, as well-typed, ill-typed, or ambivalent, rather than the usual static approach of distinguishing only between well-typed and ill-typed. It then allows for runtime type checking in ambivalent programs by inserting dynamic checks as needed. The \textbf{soft typing} system developed by Robert Cartwright and Mike Fagan\cite{cartwright91} is similar in that it bridges the gap between static and dynamic systems by operating as a static system and then inserting runtime checks when required rather than dismissing possibly ill-typed programs out of hand. Unlike quasi-static typing, however, it does not rely on explicitly labeling certain segments of code as dynamic, and it uses a variant of type inference to alert the user as to the location of possible type errors. A more recent method for integrating static and dynamic approaches is given in Jeremy Siek and Walid Taha's article on \textbf{gradual typing}\cite{siek06}. The authors argue that their system is superior to Thatte's approach in particular due to its ability to catch many static errors that quasi-static typing would fail to detect, and claim that it is equivalent to simple typing when all types are explicitly declared and equivalent to dynamic typing when none are.
Lastly, Cormac Flanagan has developed a method referred to as \textbf{hybrid type checking}\footnote{Flanagan's hybrid typing is the inspiration for the name of the system developed in this project, although the implementations and features will not be identical.}\cite{flanagan06}, which seeks to strike a balance between the full expressiveness of dynamic checking and the reliability and runtime efficiency of static checking, by performing typecasting on possibly ill-typed code during static interpretation to see if there exists a situation where the code could be well typed, and if so inserting runtime checks. This system is similar in principle to quasi-static typing but with a focus on retaining some of the efficiency of static typing at the expense of the ability to express some dynamic programs.
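The core mechanism shared by these systems can be sketched as follows (an illustrative simplification, not the formal presentation of any of the cited papers): a distinguished dynamic type is ``consistent'' with every other type, so a check against it is deferred to runtime rather than rejected statically.

```python
DYN = "dyn"   # the explicit dynamic type

def consistent(t1, t2):
    # Dyn is consistent with everything; otherwise types must match exactly.
    return t1 == DYN or t2 == DYN or t1 == t2

def check_app(fun_type, arg_type):
    """fun_type is a (parameter_type, return_type) pair."""
    param, ret = fun_type
    if not consistent(param, arg_type):
        raise TypeError(f"static type error: {param} vs {arg_type}")
    # When either side is Dyn, a runtime cast would be inserted here.
    needs_runtime_check = DYN in (param, arg_type)
    return ret, needs_runtime_check

print(check_app(("int", "bool"), "int"))  # fully static: no cast needed
print(check_app(("int", "bool"), DYN))    # deferred: runtime check inserted
```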
	\subsection{Typing Rule Schemas}
	Proving properties of type systems in programming languages is accomplished by use of \textbf{typing rule schemas}\cite{pierce}. Type rules are frameworks for proofs that individual statements in a program are well-typed. An example is the rule for when function application is well typed. In a simply typed system, the ``type'' of a function can be considered to be a transformation from the type of its parameters to the type of its return values; a function that takes an \textsf{integer} as its argument and returns a \textsf{boolean} has the type $\textsf{integer}\rightarrow \textsf{boolean}$. The character $\Gamma$ is used as an identifier for the environment of the program at any given point. The environment, in this sense, consists of a set of mappings from identifiers to types, such as $x \mapsto \textsf{integer}$, which states that the identifier or variable $x$ is of type \textsf{integer}. The symbol $\vdash$ is read ``proves that,'' and is usually used after the environment symbol and before a judgment about the type of a segment of a program, to indicate that ``the environment of the program at this point proves that\ldots'' An example rule using this terminology is as follows:
$$\frac{\Gamma \vdash f : \tau_1 \rightarrow \tau_2\;\;\;\Gamma \vdash x:\tau_1}{\Gamma \vdash f(x):\tau_2}$$
The judgments above the bar are the antecedents of the judgment being proven below, and $\tau_n$ is a meta-variable for types; that is, it can represent any type. In this case, the rule may be read as ``if the content of the program's environment proves that the symbol $f$ represents a function with input of type $\tau_1$ and output of type $\tau_2$, and that the identifier $x$ is of type $\tau_1$, then the environment proves that function application $f(x)$ is well typed and of type $\tau_2$.''  

	This method for proofs can be extended to include other relationships between types, environments, and identifiers, and is widely used for specifying and validating the properties of type systems. Furthermore, type judgments and typing rule schemas are merely a particular case of the more general category of formal deductions, which can also be used to specify other concerns such as the rules that govern the process of evaluation of program expressions\cite{harper}.
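The application rule above can be transcribed directly into a checker over an environment $\Gamma$ mapping identifiers to types (a minimal sketch; function types are represented here as (input, output) pairs):

```python
def type_of_application(env, f, x):
    """Check Gamma |- f(x) : tau2, given Gamma |- f : tau1 -> tau2 and x : tau1."""
    f_type = env[f]
    if not (isinstance(f_type, tuple) and len(f_type) == 2):
        raise TypeError(f"{f} is not a function")
    tau1, tau2 = f_type
    if env[x] != tau1:
        raise TypeError(f"argument {x} has type {env[x]}, expected {tau1}")
    return tau2

gamma = {"f": ("integer", "boolean"), "x": "integer"}
print(type_of_application(gamma, "f", "x"))  # boolean
```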

	\subsection{Syntax}
	The \textbf{concrete syntax} of a programming language is its surface, textual form, and it is the interface through which programmers utilize the language. Concrete syntax contrasts with \textbf{abstract syntax}, a simpler, internal representation of the data of a program in the form of a tree, which is then utilized by the compiler or interpreter\cite{pierce}. In simple languages, the concrete syntax may be identical to the abstract syntax, obviating the need for translation between them. Both forms of syntax often vary tremendously between different languages, but nonetheless they are in some sense irrelevant to the meaning of a program, even if they are important in practice. Even though a value may be expressed in a number of different ways syntactically, it will still have the same semantic properties or meaning. Shriram Krishnamurthi\cite{krishnamurthi} supplies the example of simple arithmetic expressions in different languages:
\begin{center}
$(3-4)+7$ \\
3 4 $-$ 7 +\\
(+ ($-$ 3 4) 7)\\
III $-$ IV + VII\\
(\emph{add} (\emph{sub} (\emph{num} 3) (\emph{num} 4)) (\emph{num} 7))\\
``the difference of three and four, plus seven''
\end{center}
All of these statements are equal to the number six (though that answer may not always be presented as the Arabic numeral ``6'') and can be analyzed as such. The meaning of a program, including its types and type system, is therefore mostly independent from its textual representation, and all of the type systems described above may be used with different syntaxes equally well, as long as they support some basic common features. Though there is still some degree of dependence on syntax by the type system of a language --- a simply typed language must have a syntax standard that includes type annotations --- here too a number of different syntactic approaches to the same thing are in evidence. The C style syntax of ``[\textsf{type}] [\emph{identifier}];'' is equivalent to the style used in type judgments, ``$\text{[\textit{identifier}]}:\text{[\textsf{type}]}$''.

	However, syntax is naturally highly important in the use of a language, as in order to actually execute a program, a parser must read its textual form and either convert it to the abstract syntax or directly interpret it. For this purpose, language syntax is formalized by \textbf{grammars}. Grammars, usually expressed in schemas such as the Backus-Naur Form, specify the syntactic form of a term in a programming language, and such terms can be checked for their syntactic correctness by comparing them to the grammar, a process that occurs in parsing. A grammar for a language of simple arithmetic expressions such as $(3-4)+7$, shown below in a notation similar to Backus-Naur Form,\footnote{This schema is essentially Backus-Naur Form with the addition of set inclusion in the last rule, simply for ease of demonstration.} might be
\begin{center}
\begin{tabular}{lll}
\texttt{<expression>} & ::= &  \texttt{<number>} $|$ \texttt{<term>} $|$ \texttt{<paren-term>} \\
\texttt{<paren-term>} & ::= & ( \texttt{<term>} ) \\
\texttt{<term>} & ::= & \texttt{<expression>} \texttt{<operator>} \texttt{<expression>}\\
\texttt{<operator>} & ::= & $+ | - | \times | \div$\\ 
\texttt{<number>} & $\in$ & $\mathbb{Z}$\\
\end{tabular}
\end{center}
This grammar is capable of representing any arithmetic expression involving only integers, addition, subtraction, multiplication, division, and parentheses. In BNF, the vertical bar divides different forms that a term may take. For example, the above grammar specifies that an \texttt{<operator>} must be exactly one of $+,\;-,\;\times,$ or $\div$. The grammar is recursively defined, with the term \texttt{<expression>}, called a \emph{nonterminal}, consisting of other terms which themselves are partially defined in terms of \texttt{<expression>}. More complex languages of course have much more complex grammars, with numerous nonterminals with many clauses each. Nonetheless, the basic approach of Backus-Naur Form and its variants is sufficient to describe the formal syntaxes of most programming languages.
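A parser for this grammar can be sketched as a recursive-descent reader (a hypothetical illustration; since the grammar as written says nothing about precedence or associativity, this sketch arbitrarily makes every operator right-associative with equal precedence):

```python
import re

def tokenize(src):
    # Integers and the punctuation symbols of the grammar.
    return re.findall(r"\d+|[()+\-*/]", src)

def parse_expression(toks):
    """Consume one <expression> from the front of toks, returning a tree."""
    if toks[0] == "(":                  # <paren-term> ::= ( <term> )
        toks.pop(0)
        left = parse_expression(toks)
        toks.pop(0)                     # consume ')', assuming well-formed input
    else:                               # <number>
        left = int(toks.pop(0))
    if toks and toks[0] in "+-*/":      # <term> ::= <expr> <operator> <expr>
        op = toks.pop(0)
        return (op, left, parse_expression(toks))
    return left

print(parse_expression(tokenize("(3-4)+7")))  # ('+', ('-', 3, 4), 7)
```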
	\subsection{Semantics} 
	While a program's syntax is how its terms are formulated, the \textbf{semantics} of that program are how these terms are evaluated --- the ``meaning'' of the program. In order for a compiler or interpreter to execute a program, it must be able to systematically compute a value from each term of the program. Therefore it can be very beneficial to compiler writers and others seeking a robust understanding of a language to formalize its semantics, and there are a number of ways of doing so\cite{aaby}. \textbf{Axiomatic semantics} defines laws about the behavior of a program and uses these to deduce its meaning --- ``the meaning of a term is just what can be proved about it''\cite{pierce} using rules that describe the relationships between different states in the term's execution. The form of these rules may vary depending on the form of the programming language, but a typical axiomatic formalization of an imperative language might assign to each command or control structure pre-conditions and post-conditions\cite{aaby}. These are sets of statements about the state of a program before and after the execution of the term. For each of these rules, if the term executes successfully and all of the associated pre-conditions are met, then it must be the case that after the term is executed, all the associated post-conditions will be met. A single term may have multiple pairs of pre- and post-conditions associated with it, to allow for different results (post-conditions) for different initial states (pre-conditions). Axiomatic definitions, by reducing the operations of a program to mathematical statements, are useful for reasoning directly about the behavior of a program.

	\textbf{Denotational semantics}, on the other hand, operates by transforming the statements of the language into a different algebra or language. This secondary language is referred to as the semantic algebra, and terms in the syntactic primary language are mapped to terms in the semantic algebra by a valuation function. Since the behavior of the semantic algebra is assumed to be known and well defined, programs in the primary language are understood by analyzing the equivalent program in the algebra. For example, a language of arithmetic expressions may be converted to the denotational form of Peano arithmetic (a simple recursive method of evaluating expressions). Language definitions in denotational terms frequently occur in compiler development, as the process of compilation is roughly analogous to a denotational valuation function. Denotational semantics is also used in comparing different programming languages, since, by using appropriate valuation functions,  different languages may be reduced to the same semantic algebra \cite{harper}.
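This style of definition can be made concrete with a small sketch (an illustrative assumption; Python's integer arithmetic stands in for the semantic algebra, and integer division stands in for $\div$): a valuation function maps syntax trees of the arithmetic language into values of the algebra, whose behavior is taken as already understood.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.floordiv}

def valuation(term):
    """Map a term (an int, or an (op, left, right) tree) to its denotation."""
    if isinstance(term, int):
        return term                       # numerals denote integers directly
    op, left, right = term
    return OPS[op](valuation(left), valuation(right))

print(valuation(("+", ("-", 3, 4), 7)))   # 6
```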
	
	Finally, \textbf{operational semantics} treats the terms of a programming language as instructions within an abstract machine, and the meaning of a program as the set of changes it makes to the state of that machine. A ``first cousin'' of operational semantics is \textbf{interpreter semantics}, which understands a programming language and deduces the meaning of programs within it by directly constructing an interpreter for it in a programming language (usually, but not necessarily, a different one). The definition of the language, then, is the program which interprets it, rather than the interpreter using a separate definition to execute terms in the language\cite{krishnamurthi}. 
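The contrast with the denotational style can be seen in a small-step sketch for the same arithmetic trees (a hypothetical illustration limited to $+$ and $-$): rather than mapping a whole term to its meaning at once, the abstract machine repeatedly applies single steps $e \mapsto e'$ until a value remains.

```python
def step(term):
    """Perform one evaluation step; values (plain ints) take no step."""
    if isinstance(term, int):
        return None
    op, left, right = term
    if not isinstance(left, int):
        return (op, step(left), right)    # reduce the left operand first
    if not isinstance(right, int):
        return (op, left, step(right))
    return {"+": left + right, "-": left - right}[op]

def evaluate(term):
    while not isinstance(term, int):
        term = step(term)
    return term

print(evaluate(("+", ("-", 3, 4), 7)))    # 6
```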
	\subsection{Type Safety}
	A property that some, but not all, programming languages have is that of \textbf{type safety}. Like the definition of a type system, the definition of type safety is not rigidly fixed. Benjamin Pierce identifies a type-safe language as ``one that protects its own abstractions''\cite{pierce} --- that is, it does not allow any operations to be performed in an unexpected or unspecified way, such as a function receiving data that it is not intended to utilize. Robert Harper provides a more rigorous definition\cite{harper}, reducing type safety to two statements (using operational semantics). Here, $e$ is a term in a language and the symbol $\mapsto$ represents one step of computation or evaluation, so $e \mapsto e^\prime$ means ``the expression $e$ may be computed to the expression $e^\prime$.''\footnote{For example: $(3+4)\mapsto7$, $(4\times 5+9)\mapsto(20+9)$}
\begin{enumerate}
\item If  $e : \tau$ and $e \mapsto e^\prime$, then $e^\prime : \tau$.
\item If $e : \tau$, then either $e$ is a value or there exists some $e^\prime$ such that $e \mapsto e^\prime$.
\end{enumerate}
These two statements are referred to as the principle of \emph{preservation} and the principle of \emph{progress} respectively. Preservation states that if an expression has a type (that is, $e:\tau$), then the resulting expression after evaluation, $e^\prime$, has the same type --- in other words, type information is not created or destroyed by evaluation.
	
	In the second rule, for the principle of progress, the word ``value'' is used to mean a term that cannot be evaluated any further, such as a number, as opposed to a term that more ``work'' can be done on, such as an arithmetic expression. Therefore, the principle of progress states that any well typed expression is either a value, or it can be reduced to another expression. Put another way, the evaluation of a well typed term can never halt anywhere except at a value: at every point, either the term is a value or another step can be taken.
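Preservation can be observed concretely in a toy language of \textsf{int} and \textsf{bool} expressions (a hypothetical sketch, not drawn from Harper's presentation): the type of a term before a step always matches the type after it.

```python
def type_of(t):
    if isinstance(t, bool):            # check bool before int: True is an int in Python
        return "bool"
    if isinstance(t, int):
        return "int"
    return {"+": "int", "not": "bool"}[t[0]]

def step(t):
    """One evaluation step, assuming the operands are already values."""
    if t[0] == "+":
        return t[1] + t[2]
    return not t[1]

term = ("+", 1, 2)
before = type_of(term)
term = step(term)                      # (1 + 2) |-> 3
assert type_of(term) == before         # preservation: the type is unchanged
print(term, type_of(term))             # 3 int
```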

	Type safety is a desirable feature for a programming language to have because it ensures that the program will never ``go wrong,'' and undesired results will only occur if the programmer has erred in such a way as to create a well-typed but semantically unsound term. If a program in a type-safe language is correct syntactically and semantically, then it will execute correctly and not get ``stuck'' --- a state in which some term is neither a value nor reducible to another term. However, in practice numerous languages are not actually type-safe, as type safety is mutually exclusive with certain other language features that may be desirable, such as C's pointer arithmetic. Furthermore, in large languages it is not trivial to prove that a language is type-safe --- if the progress/preservation definition of type safety is used, for example, then progress and preservation must be proved for \emph{every} evaluation rule in the language. Such proofs usually rely on induction on the evaluation ($\mapsto$) relation, and aside from the sheer number of proofs required to show that a large language is type-safe, this relation may be very complex in such languages as well. Therefore some languages, such as Standard ML, are believed to be type-safe, but have not been proven so. Type safety is not a property exclusive to statically typed languages, although some possible definitions of it only apply to static languages. Dynamic languages may be safe if they prevent semantically nonsensical operations at runtime (for example, by raising exceptions)\cite{pierce}.
	\subsection{The Lambda Calculus}
	When developing or experimenting with many features of programming languages, it is often unnecessary and overly complex to use a fully or even moderately featured programming language as a testbed or proof of concept. Most of the papers dealing with mixed/hybrid type systems, for example, do not construct them in the context of a ``real'' programming language, but rather in variants of the \textbf{lambda calculus}, a very small formal system that has only three forms of syntactic term but nonetheless is computationally complete --- it is capable of computing any problem computable by any other language. The basic, untyped lambda calculus has a syntax defined by the following grammar:
\begin{center}
\begin{tabular}{lll}
\texttt{<expression>} & ::= & \texttt{<ident>} $|$ $\lambda$\texttt{<ident>}.\texttt{<expression>} $|$\\
&& \texttt{<expression> <expression>} $|$ ( \texttt{<expression>} )\\
\texttt{<ident>} & $\in$ & identifiers\\
\end{tabular}
\end{center}
The lambda calculus therefore consists only of (respectively to the order in the grammar above) identifiers for variables, lambda abstractions (which may be thought of as functions, with the variable immediately following the $\lambda$ being the input and the remainder after the period being the body of the function), and applications (the second expression being supplied as an argument to the first), plus any of these expressions within parentheses to determine associativity (the language is otherwise left associative). 

	The lambda calculus is evaluated, or \emph{reduced}, by applying a set of reduction rules\cite{barendregt-lc}. The most prominent is $\beta$-reduction, which reduces lambda application expressions (such as $($[$\lambda$\texttt{<ident>}.\texttt{<expression>}] [\texttt{<expression>}]$)$, using brackets to separate the two top-level terms for ease of reading), for example $((\lambda x.x\;y)\;z)$. This form of evaluation substitutes the second expression (in functional terms, the argument or input) for the abstracted variable in the first expression (the function itself). Thus for example $((\lambda x.x\;y)\;z) \mapsto_\beta (z\;y)$, using the symbol $\mapsto_\beta$ to represent one step of evaluation using $\beta$-reduction. Here, the argument $z$ has been substituted in for the abstracted variable $x$ in the initial expression, and the abstraction has been removed. Therefore this process is directly analogous to function application. The other basic reduction rules for the untyped lambda calculus are $\alpha$-reduction, which allows the renaming of variables when such renaming does not alter the meaning of a term, and $\eta$-reduction, which allows for the removal of an abstraction if it will always evaluate to the same term regardless of its input. For example, $\lambda x.x \mapsto_\alpha \lambda y.y$ and, if $x$ can never appear as a free variable in $z$, $\lambda x. z\;x \mapsto_\eta z$.
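The $\beta$-reduction example above can be sketched as substitution over explicit term trees (a minimal illustration; the substitution here handles shadowed variables but not full capture-avoidance, which suffices for the example shown):

```python
def subst(term, var, val):
    """Replace free occurrences of var in term by val."""
    if term == var:
        return val
    if isinstance(term, str):                 # a different variable
        return term
    if term[0] == "lam":
        if term[1] == var:                    # var is shadowed: stop here
            return term
        return ("lam", term[1], subst(term[2], var, val))
    return ("app", subst(term[1], var, val), subst(term[2], var, val))

def beta(app):
    """One beta step on ((lam var. body) arg)."""
    _, fun, arg = app
    _, var, body = fun
    return subst(body, var, arg)

# ((lambda x. x y) z)  |->_beta  (z y)
print(beta(("app", ("lam", "x", ("app", "x", "y")), "z")))
```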

	Note that a number of things traditionally expected from programming languages are missing, most obviously any natural notion of values --- there is no provision for elements like numbers or strings in the above grammar, and the only things that variables can represent are other lambda terms. However, it is possible to use lambda terms themselves to represent such values. For example, while most languages represent the number one by the Arabic glyph ``1'', the number one may be represented in the lambda calculus by a function being applied to a variable exactly once. Two is represented by a function applied twice, zero by a function not applied at all, and so on. This representation of numbers is called the Church numerals, or $c_n$, after the developer of the lambda calculus, Alonzo Church\cite{pierce}.
\begin{center}\begin{tabular}{l}
$c_0 = \lambda s. \lambda z. z$\\$ c_1=\lambda s. \lambda z. s\: z$\\$c_2=\lambda s. \lambda z. s\: (s\:z)$\\$c_3=\lambda s. \lambda z. s\: (s\: (s\:z))$\\etc.\\
\end{tabular}\end{center}
Using these as numerals, it is possible to define a successor function, $\textit{succ} = \lambda n.\lambda s.\lambda z. s\:(n\:s\:z)$. When supplied with a Church numeral as argument, the term reduces to the next Church numeral in the sequence of natural numbers, as in the expression $(\textit{succ}\;c_2)$, which reduces to $c_3$.\footnote{A more familiar definition of the successor function in other languages could be $\textit{succ}(n)=n+1$.} While the meaning and derivation of this function are not easily apparent from its syntax (something common to most ``programs'' in the calculus), the ability to generate the successor of any natural number does enable creation of addition and multiplication functions, and from there more complex operations --- including ones involving nontrivial data structures such as trees and lists, which themselves have lambda calculus representations.
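Because Python closures behave like lambda abstractions, the Church numerals and \textit{succ} above can be transcribed directly (the `to_int` helper is an invention for inspecting results, not part of the encoding):

```python
# Church numerals: c_n applies its first argument n times to its second.
c0 = lambda s: lambda z: z
c1 = lambda s: lambda z: s(z)
c2 = lambda s: lambda z: s(s(z))

# succ = lambda n. lambda s. lambda z. s (n s z)
succ = lambda n: lambda s: lambda z: s(n(s)(z))

def to_int(c):
    """Read back a Church numeral by applying it to increment and 0."""
    return c(lambda x: x + 1)(0)

print(to_int(c2))        # 2
print(to_int(succ(c2)))  # 3
```

Running `to_int(succ(c2))` yields 3, mirroring the reduction of $(\textit{succ}\;c_2)$ to $c_3$ in the calculus.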

	The exceptionally simple properties of the lambda calculus make it attractive as a kind of testing ground for type systems. Simple typing may be implemented in the lambda calculus by adding type annotations to variables, such as $\lambda (x:\textsf{int}).x\:y$\cite{barendregt-lcwt}, and altering the reduction rules to support them. Other type systems such as type inference may be developed by creating typing rules for the language to act as a type inference engine, and systems such as quasi-static typing and Flanagan's hybrid typing use their own variants of the lambda calculus as well\cite{flanagan06,thatte90}. As the development of a number system above shows, the lambda calculus can represent much more complex data than it would initially appear, and once enough such data is developed it begins to make sense to speak of applying type information to that data. In the other direction, a type system developed and verified in the lambda calculus has, in effect, been proven to work in a minimal ``real'' programming language, and may therefore be applied with confidence to a larger system.
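To make the idea of typing the lambda calculus concrete, the following sketch checks types for the simply typed lambda calculus, where every abstraction carries an annotation as in $\lambda (x:\textsf{int}).x$. The term encoding, the `type_of` name, and the use of strings for base types are illustrative choices, not drawn from any of the cited systems:

```python
# Terms: ('var', name) | ('abs', name, param_type, body) | ('app', fun, arg)
# Types: a base type such as 'int', or ('fun', domain, codomain).

def type_of(term, env=None):
    """Return the type of a simply typed lambda term, or raise TypeError."""
    env = env or {}
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'abs':
        _, name, ptype, body = term
        # Check the body with the parameter's annotated type in scope.
        btype = type_of(body, {**env, name: ptype})
        return ('fun', ptype, btype)
    if tag == 'app':
        ftype = type_of(term[1], env)
        atype = type_of(term[2], env)
        if ftype[0] != 'fun' or ftype[1] != atype:
            raise TypeError('argument type does not match function domain')
        return ftype[2]

# lambda (x:int). x  has type  int -> int
identity = ('abs', 'x', 'int', ('var', 'x'))
print(type_of(identity))  # ('fun', 'int', 'int')
```

A checker of this shape rejects ill-typed applications at ``compile time'' by raising an error before any reduction takes place, which is precisely the static-typing discipline described earlier.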
	\subsection{Paradigms}
	As well as their type systems, a fundamental way that programming languages differ is in their \textbf{programming paradigms}. Contained within the concept of programming paradigms are issues like how the syntax represents data, what the basic objects of manipulation are, and how programming problems are approached. The choice of a programming paradigm (or several paradigms; multiparadigm languages such as Python are prominent) is a crucial issue in the design of a language because each paradigm naturally lends itself to a particular set of problems. For example, an object oriented language may excel in situations where simulation or representation of a number of discrete entities is desired, while a logic programming language such as Prolog may be strongest at representing the relationships between entities. Others, such as imperative programming, which mainly operates by using expressions to directly alter the program environment's state, have other uses.

	\textbf{Functional programming} is another such paradigm. It is distinguished by its use of mathematical functions and their evaluation as the primary method of computation, and is fundamentally based on the lambda calculus, so languages that use it may be seen as very large extensions of the lambda calculus\cite{hudak89}. Functional languages avoid side effects (i.e.\ program effects like printing; anything beyond simply outputting results of function evaluations) and use of state in favor of composition of functions and recursion. Functional languages also almost always use \textbf{first-class functions}, that is, they treat functions as data that may be passed as an input or returned as output from another function. 

	First-class functions are a very useful feature to have in particular in systems that allow a large degree of type polymorphism --- that is, systems where an identifier may represent a range of types, such as in dynamic typing, inferred typing, or object oriented languages.\footnote{In object oriented languages like Java, type polymorphism may be accomplished by using \textsf{Object} (or in more restrictive cases, the lowest common supertype of all types that need to be representable) as the type of identifiers that are intended to handle multiple types. However, to utilize the return value of such an identifier in non-generic ways, casting is required.} Certain operations, such as the \emph{accumulate} function, may only be straightforwardly written in such systems. This function takes as an argument a function (\emph{fun}) that takes two arguments and returns a value, such as numerical addition\footnote{For the purposes of this program, addition must have a syntax that appears like ``\emph{add}(\emph{firstNum}, \emph{secondNum})'' rather than ``\emph{firstNum} + \emph{secondNum}''.} or string concatenation. It also takes a list (\emph{lst}), which may contain any number of elements of any type, as long as the type is valid input for \emph{fun}, and a default value (\emph{def}), which the program returns when the list is empty. Accumulate then applies the given function to all of the elements of the list, returning a value that represents the combination of those elements and the default element --- for example, in the case of addition, a sum. This implementation, designed for an imaginary language using either dynamic or inferred typing, assumes the presence of list operators \emph{length}, which gives the length of the list, \emph{car}, which returns the first element of the list, and \emph{cdr}, which returns the list sans its first element.
\begin{algorithmic}[1]
\ENSURE{\emph{accumulate}(\emph{fun}, \emph{lst}, \emph{def}):}
\IF {\emph{length}$(\textit{lst}) = 0$}
\RETURN \emph{def}
\ELSE
\RETURN \emph{fun}(\emph{car}(\emph{lst}), \emph{accumulate}(\emph{fun}, \emph{cdr}(\emph{lst}), \emph{def}))
\ENDIF
\end{algorithmic}
In contrast to the rather trivial ``program'' shown in section \ref{alg:trivial} to demonstrate the use of dynamically typed languages, the above program is very useful in many contexts, and it demonstrates the power of flexible type systems when paired with first-class functions. This function may be even more powerful in a dynamically typed language than in one utilizing other forms of polymorphic typing, as a dynamically typed language may easily contain multiple types in the same list. If \emph{fun} is capable of operating on all of the types present, then a logical result may be returned. The program also uses the ability to treat another function as data that may be taken as input and applied to the list, and it exemplifies the functional style of programming --- it uses recursion instead of loops, and at no point mutates or assigns any variables. Functional programming languages often rely heavily on the ability to treat types in a flexible manner, and so while it is by no means strictly necessary that such languages use dynamic or inferred typing --- as noted, \emph{accumulate} could be expressed in a hypothetical version of Java with first-class functions, for example --- in practice, the two very often go together.
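The pseudocode above transcribes almost line for line into a dynamically typed language with first-class functions. In this Python sketch, indexing and slicing stand in for the assumed \emph{car} and \emph{cdr} operators, and the parameter \emph{def} is renamed `default` because `def` is a Python keyword:

```python
import operator  # provides add as a named, first-class function

def accumulate(fun, lst, default):
    """Combine the elements of lst with fun, returning default when empty."""
    if len(lst) == 0:
        return default
    # car(lst) is lst[0]; cdr(lst) is lst[1:]
    return fun(lst[0], accumulate(fun, lst[1:], default))

print(accumulate(operator.add, [1, 2, 3], 0))    # 6
print(accumulate(operator.add, ['a', 'b'], ''))  # 'ab'
```

The same definition serves both numeric summation and string concatenation with no change, because the dynamic type system defers to whatever operation `fun` supports --- exactly the flexibility the surrounding discussion describes.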

	\section{Conclusions}
	The form and design of type systems have extensive repercussions for the syntax and semantics of programming languages, and contribute to the specialization of languages for certain purposes. Static type systems provide a measure of confidence that a program is correct and type-safe at compile time, before it is distributed or activated, and allow for optimization techniques that may substantially improve a program's runtime efficiency. Dynamic languages are more flexible and expressive, and may free the programmer from concerns about explicit type annotations where they may not be useful or worthwhile. Thus static languages such as C and Java are commonly used for large, commercial, production software, while dynamic languages such as Python, Perl, and Ruby are likely to be used in academic contexts or in situations where flexibility is highly important, such as Web applications. 

	The differences between the two styles are not unbridgeable, however, and a variety of methods have been introduced for integrating the features of both styles. Type inference, while contained wholly within the wider sphere of static typing, is one way of allowing programs to be written in a more flexible style, and other methods such as quasi-static typing and gradual typing allow for a richer expression of both dynamic and static type systems within the same framework. Such integrated systems may allow for programs to be written with the strengths and assurances of static typing, except when the flexibility of dynamic typing becomes advantageous, or allow for prototyping in dynamic languages with easy conversion to a static form once assumptions about what data types will be used become more certain.

	Type systems do not operate in a void, and they impact many other portions of the design of programming languages. The syntax of a language is directly affected by any type system which requires or allows explicit type annotation, and such a system may introduce additional checks that need to be performed to ensure that the syntax conforms to the standard expected by the compiler. At the same time, such a requirement may simplify the semantic rules governing a language, as static, explicit types may reduce the necessity of tracking type data at runtime and of designing tools such as exceptions to report when nonconforming operations occur. The utility of the features of different type systems also affects --- and is affected by --- the other high level features of the language such as its paradigms. Dynamic type systems may greatly benefit from the existence of first-class functions, as is common in functional languages, and imperative or object-oriented languages may be enhanced when types are directly and explicitly used, since type annotations can clarify both to the system and to the programmer the meanings of class usage and variable assignment.

	Programming language design is an issue of importance to many people, many of whom may never explicitly consider the problems involved. Almost any programmer who has used more than one language has features that he or she prefers in each, and often greatly prefers one language over the others. Some of these features have been considered in this paper, though the entire set of features a programming language may support is far too vast for any paper, or even book, to cover. In particular, however, the use of any variety of type checking system is a choice that will substantially affect the usage of the language. Therefore, a language which integrates some of the useful benefits of the main two species of type system may, if designed so as to be intuitive and unobtrusive to the programmer, be useful in a broader variety of contexts and to a broader set of people than would a system simply using one or the other.
\bibliographystyle{plain}
\bibliography{latex/SelectedBibliography}
\end{document}
