In this paper we have demonstrated that optimal decoding of IBM Model
4 is more practical than previously suggested.  Our results and
analysis show that exact decoding serves a practical purpose: it
allowed us to investigate and validate the performance of the ReWrite
decoder by comparing the outputs and model scores of the two decoders.
Exact inference also improves translation quality as measured by BLEU
score.

During the course of this research we have encountered numerous
challenges that were not apparent at the start.  These challenges
raise some interesting research questions and practical issues one
must consider when embarking on exact inference using ILP.  The first
issue is that generating the ILPs can take a long time.  This raises
the question of whether program generation and solving could be
integrated more tightly.  Such an integration
would avoid the need to query the models in advance for \emph{all}
possible model components the solver may require.

Related to this issue is how to tackle the incorporation of higher
order language models. Currently we use our bigram language model in a
brute-force manner: in order to generate the ILP we evaluate the
probability of all possible bigrams of English candidate tokens in
advance. It seems clear that with higher order models this process
will become prohibitively expensive. Moreover, even if the ILPs could
be generated efficiently, they would be larger and harder to
solve than our current ILPs. One possible solution may be the use of
so-called delayed column generation strategies which incrementally add
parts of the objective function (and hence the language model), but
only when required by the ILP solver.\footnote{Note that delayed
  column generation is dual to performing cutting planes.}
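The brute-force step described above can be sketched as follows. This
is a minimal illustration, not our implementation: the function and
variable names (`bigram_cost_table`, `toy_logprob`) and the toy
log-probabilities are hypothetical, standing in for a trained bigram
model queried during ILP generation.

```python
from itertools import product

def bigram_cost_table(candidates, bigram_logprob):
    """Precompute the language-model contribution of every ordered
    pair of English candidate tokens, so each can become a weighted
    term in the ILP objective."""
    return {(u, v): bigram_logprob(u, v)
            for u, v in product(candidates, repeat=2)}

# Toy log-probabilities standing in for a trained bigram model;
# unseen pairs get a fixed penalty for simplicity.
toy_lm = {("the", "house"): -0.5, ("house", "the"): -4.0}
def toy_logprob(u, v):
    return toy_lm.get((u, v), -10.0)

costs = bigram_cost_table(["the", "house"], toy_logprob)
# The table has |candidates|^2 entries; with an order-k model it
# would have |candidates|^k, which is why upfront generation becomes
# prohibitive for higher-order language models.
```

Delayed column generation would instead add such objective terms only
when the solver's pricing step requests them, avoiding the exhaustive
enumeration in the loop above.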

The use of ILP in other NLP tasks has provided a principled,
declarative means of incorporating global linguistic constraints on the
system output.  This work lays the foundations for incorporating
similar global constraints for translation.  We are currently
investigating linguistic constraints for IBM Model~4 and word-based
models more generally.  A further extension is to reformulate
higher-level MT models (phrase- and syntax-based) within the ILP
framework.  These representations could be more desirable from a
linguistic constraint perspective as the formulation of constraints
may be more intuitive.


%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "ilp-mt"
%%% End: 
