Systems based on propositions-as-types provide an elegant approach to interactive 
proof assistants: the problem of proof checking is reduced to type checking, and 
these systems combine deduction and computation in a natural way. A well-understood 
formulation relies on a lambda calculus with dependent types \cite{nordstrom:book,barendregt92lambda,deBruijn:WLF}. The main problem is then checking the judgement $M:A$, expressing that a given term (proof) $M$ is of type (is a correct proof of the proposition) $A$.





% A central typing rule in any dependent type theory, is the
% equality rule for types:
% 
% \[  \infer{\HasTypeC \Gamma M A}
%     { \HasTypeC \Gamma M B
%     & \EqualTypeC \Gamma A B
%     }
% \]

A type checking algorithm can naturally be divided into two 
stages \cite{deBruijn:WLF}. In the first stage we go through the term and,
whenever we check a term $M$ of type $A$ against a type $B$, we collect the
equality constraint $A = B$. In the second stage we check these constraints by 
verifying that the two sides are convertible with each other. With dependent types it is important to check the 
constraints in the right order, and to fail as soon as an equality is invalid, since well-typedness of a 
constraint may depend on previous constraints being satisfied. 
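
As a schematic illustration (the notation here is simplified, and this is not the exact
rule of our system), checking an application gives rise to such a constraint between
the expected and the inferred argument type:
\[
  \frac{\Gamma \vdash f : \PI x A B \qquad \Gamma \vdash a : A'}
       {\Gamma \vdash f~a : B[x := a]}
  \qquad \mbox{generating the constraint } A' = A
\]
Well-formedness of the result type $B[x := a]$, and of any constraints generated from it
later, presupposes that $A' = A$ actually holds, which is why the order in which
constraints are checked matters.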


%The correctness for this strategy for 
%checking equality hence depends on that we as invariant has that the two terms 
%in the constrains have the same type. 

For representing proof search in these frameworks it is convenient to extend the notion 
of terms with {\it meta-variables} that stand for yet undetermined terms (proofs). Meta-variables
are also useful for structure editing, as placeholders for information to be filled in by
the user. In this paper, however, we focus on type reconstruction, where
meta-variables are used to represent omitted 
information that can be recovered from typing constraints through unification. 

\input{exintro}

This problem has some negative consequences for the type checking algorithm. With dependent types,
verifying convertibility between two 
terms relies on normalising these terms, which is only safe if the terms are well-typed. But, as we
have seen, in the presence of meta-variables we cannot be sure that the terms are well-typed, and the
type checker may loop. Furthermore, producing ill-typed terms is not very elegant.
It is still the case, however, that if all constraints can be solved then we have a correct solution, so we
have some form of ``partial correctness''; this is indeed the approach taken in 
\cite{magnussonnordstrom:alf,coquand:stt-lfm99}. In \cite{elliot:unification}, a similar problem of generating
ill-typed terms occurs. This is, however, less problematic in his context, since these terms can still be shown
to be normalisable in the logical framework he uses, which is more restricted than the one we consider.
Another problem is that when we get a constraint of the form $? = M$, we cannot be sure that $M$ is a solution
for $?$, since we are not sure that $M$ is well-typed. In \cite{magnussonnordstrom:alf,coquand:stt-lfm99,munoz:synthesis}
this difficulty is avoided by type checking $M$ again at this point, which is costly.
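
To illustrate why normalising ill-typed terms is dangerous, recall the classical
untypable self-application term (a generic illustration, not a term produced by our algorithm):
\[
  \omega = (\lambda x.\, x~x)~(\lambda x.\, x~x)
\]
which reduces to itself indefinitely. If a term of this kind could appear on either side of a
constraint, a conversion check that normalises its arguments would not terminate.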


The main contribution of this paper is a type checking algorithm for a logical
framework extended with meta-variables, which produces only well-typed
constraints. The main idea is,
for a type checking problem $N:C$,
to produce an optimal well-typed approximation $N'$ of $N$. 
Whenever we need to verify $M:B$, for a subterm $M:A$ of $N$, and we cannot
yet solve $A=B$,
we replace the subterm $M$
by a {\em guarded constant} $p$ of type $B$. This constant $p$ will compute to $M$ only when the
constraint $A=B$ has been solved. The approximated term $N'$ is in a trivial
way well-typed in the logical framework without meta-variables. In the
example above, the type $\PI x {F ~?} {F~ (not~x)}$ will be replaced by $\PI x
{F ~?} {F~ (p ~ x)}$, where $p~x : Bool$ will compute to $not ~ x$ when the
meta-variable is replaced with the term $true$.
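
Schematically (this pictures the idea rather than the formal rules), the guarded constant here has the shape
\[
  p : \PI x {F ~?} {Bool}, \qquad \mbox{guarded by } F ~? = Bool ,
\]
the constraint arising from using $x : F~?$ where a $Bool$ is expected. As long as this
constraint is unsolved, $p~x$ is blocked; once $?$ is instantiated with $true$ the constraint
is discharged and $p~x$ unfolds to $not~x$.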


One interesting application of our work is implicit syntax, which allows for a
more compact and readable representation of terms. In
\cite{necula:representation} it is shown that terms from which type information has been
omitted are more efficient to validate than complete proof
terms. This is only possible if the constraints are known to be well-typed. Their
work differs from ours in that they consider a weaker framework, in which
constraint solving is guaranteed to succeed. The algorithm that we present has
been implemented, and we have experimented with examples containing several
hundred meta-variables, which shows that our algorithm scales up to at
least medium-sized problems.

