%
% File eacl2014.tex
%
% Contact g.bouma@rug.nl yannick.parmentier@univ-orleans.fr
%
% Based on the instruction file for ACL 2013 
% which in turns was based on the instruction files for previous 
% ACL and EACL conferences

%% Based on the instruction file for EACL 2006 by Eneko Agirre and Sergi Balari
%% and that of ACL 2008 by Joakim Nivre and Noah Smith

\documentclass[11pt]{article}
\usepackage{arabtex}
\usepackage{eacl2014}
\usepackage{times}
\usepackage{url}
\usepackage{latexsym}
%\usepackage{algorithm2e}
\usepackage[linesnumbered,ruled,vlined]{algorithm2e}
\usepackage{amsmath}
\usepackage{graphicx}
%\usepackage[font=small]{caption}
\special{papersize=210mm,297mm} % to avoid having to use "-t a4" with dvips 
%\setlength\titlebox{6.5cm}  % You can expand the title box if you really have to


\title{Lattice Desegmentation for English-Arabic SMT}

\author{First Author \\
  Affiliation / Address line 1 \\
  Affiliation / Address line 2 \\
  Affiliation / Address line 3 \\
  {\tt email@domain} \\\And
  Second Author \\
  Affiliation / Address line 1 \\
  Affiliation / Address line 2 \\
  Affiliation / Address line 3 \\
  {\tt email@domain} \\}

\date{}

\begin{document}
\maketitle
\begin{abstract}
\label{sect:abstract}
%.Long established that when translating from morph complex, tokenization is helpful in reducing sparsity
%.Also shown that this holds when translating into morph complex, though this incurs extra detok step
%.Previous studies have focused on the tok and/or detok strategy, but have always worked on 1-best output
%.This paper detokenizes n-best lists or lattices, allowing the system to consider alternate translations and detokenizations simultaneously.
%.Through a novel lattice detokenization algorithm, we show how a large space of translation options can be explored while considering detokenization features, as well as two language models: one for the tokenized target, and another for the detokenized target.
%.Significant improvements are shown on a NIST English-to-Arabic translation task.

\end{abstract}

\setarab
\novocalize


\section{Introduction}
% das: removed reference to PostScript


%.Discuss previous methods that translate into morph complex (Toutanova, ElKholy, etc)
Translating into morphologically complex languages is a challenging task. It requires careful treatment of the target language due to the numerous inflections of words and morpho-syntactic agreement constraints. Several papers have discussed translation into morphologically complex languages. Most of them rely on preprocessing techniques that change the format of the target language. These techniques fall under two categories: either applying morphological segmentation to surface words, or converting them to the form lemma+morphological features. \newcite{goldwater.mcclosky:improving} improved Czech-to-English translation by converting surface forms into lemma+pseudowords. \newcite{Badr08segmentationfor} segmented Arabic surface words into morphemes to improve English-to-Arabic SMT. Such techniques improve translation by decreasing the lexical sparsity of words and improving alignment.
%MS: I can also mention factored models if this is necessary

%.Establish importance of tokenization / detokenization
Morphological segmentation is considered an indispensable task for Arabic preprocessing in SMT. It involves both splitting surface forms into meaningful morphemes and applying orthographic character transformations. For example, the noun ``\textit{lldwl}'' <laldwl> ``to the countries'' is tokenized as ``\textit{l+ Aldwl}'' (\textit{l+} ``to'' \textit{Aldwl} ``the countries''). Detokenization is the process of converting tokenized words into their original orthographically
and morphologically correct surface form. The process includes token concatenation and character adjustments to retrieve the original surface form.

Different preprocessing schemes for Arabic-to-English SMT are presented by \newcite{Sadat:2006:CAP:1220175.1220176}, who show the positive influence of segmentation, decliticization and normalization techniques on translation. \newcite{Badr08segmentationfor} show that tokenizing Arabic improves translation, and present two desegmentation schemes: table-based and rule-based.
\newcite{ElKholy:2012:OMP:2159073.2159149} provide an extensive study of segmentation and desegmentation techniques and their influence on English-to-Arabic statistical machine translation. They compare segmentation approaches that differ on ``where to segment'', and provide an additional desegmentation technique that uses a lookup mapping table and an untokenized language model. \newcite{minkov2007generating} incorporate morpho-syntactic features from the source and target sentences to predict word forms in English-Arabic and English-Russian SMT. \newcite{el2012translate} represent Arabic surface words in the form of lemma+morphological features (gender, number and determiner) for English-Arabic SMT. They reconstruct Arabic surface forms from this representation using morphological prediction and morphological generation techniques.

%..Point out how all have worked on n-best at most
Recent literature on English-Arabic SMT has worked on n-best lists at most while tuning the features. When training an English-to-segmented-Arabic model, \newcite{Badr08segmentationfor} show that tuning on unsegmented Arabic, by desegmenting the n-best list before each tuning iteration, performs better than using segmented Arabic as the reference. We are not aware of any English-Arabic study that explores the output search graph to improve SMT, or that tunes on the lattice.
%..No Arabic study even on n-best??
%.Motivate our approach
%..Discuss situation where 1-best tokenized output is bad
Several situations arise where the 1-best tokenized output might not be the best translation.
%...prefix or suffix does not have a stem that matches it
First, prefixes and suffixes can appear with conflicting stems. For example, the future particle \textit{``s+''}/<s>/``will'' in ``s+ hzymp'' appears in front of \textit{``hzymp''}/<hzymT>/``defeat'', which is a noun, while it is supposed to precede verbs.
%...Output starts with a suffix, ends with a prefix
Another situation is when prefixes and suffixes appear next to unknown words at the beginning or end of the sentence. When these unknown words are dropped from the final translation, the output appears to start with a suffix or end with a prefix. For example, the conjunction \textit{``w+''}/<w>/``and'' in \textit{``w+ maldives''} appears in front of the unknown proper noun \textit{``maldives''}; when the latter is dropped, the sentence appears to end with a prefix.
%...tokenized language model can help with this, but in the situations we really care about, where we compose a token that was infrequent or missing at training time, signal will be weak
Also, when the composed token is infrequent or missing at training time, the tokenized language model cannot help much. An untokenized language model becomes essential, as it scores the final surface format that we aim for.
%...Other studies show that detok is improved by context - would like to have signal travel the other direction, by seeing the final detok, system could do better in selecting surrounding words, ragardless of whether they also require detokenization
Moreover, when multiple detokenization options represent different inflections of the same word, the choice among them can affect, and be affected by, the choice of surrounding words. For example, \textit{``AbA' +km''}/<aba-' km>/``fathers their'' can be detokenized to either \textit{``AbA'km''}/<aba'| km> when it appears in the indicative mood, or \textit{``AbAWkm''}/<aba'wkm> when it appears in the subjunctive mood.
%.High level over-view of our lattice detokenization
%..Detect maximally detokenizable sequences in lattice, substitute with either 1-best detokenation or all options
%..Substitutions can change lattice structure
%..Allude to finite-state analogy
%.Overview of paper
%(If there is a lot of related work, then we could consider splitting this into an Introduction and a Related Word section)




\begin{algorithm}
\AlgoDontDisplayBlockMarkers\SetAlgoNoEnd\SetAlgoNoLine
\KwIn{lattice l}
\KwOut{ListOfChains \textit{ValidChains}}
\DontPrintSemicolon

ListOfChains ValidChains
\tcp{ List of detokenizeable edge chains}
StackOfNodes starters  
\tcp{Nodes that can start a chain}
ListOfNodes closed
\tcp{Nodes that have already been searched, should be a bitvector}
Node s = l.startNode\\
starters.push(s)\\
closed.add(s)\\
\While{starters is not empty}{
  Node n = starters.pop()\\
  Chain c = empty chain\\
  EnumerateChains(n, c, starters, closed)
 }
\Return {ValidChains}
\caption{Enumerate Chains in lattice}
\label{algo:algo1}
\end{algorithm}

\begin{algorithm}
%\SetAlgoLined
\AlgoDontDisplayBlockMarkers\SetAlgoNoEnd\SetAlgoNoLine
\KwIn{Node \textit{n}, Chain \textit{c}, StackOfNodes \textit{starters}, ListOfNodes \textit{closed}}
\KwOut{ListOfChains \textit{ValidChains}}
\DontPrintSemicolon
\For{edge \textit{e} in \textit{n}.outgoingedges}{
  \eIf{e.canExtend(c)}{
   \tcp{Extend chain c with edge e, and recurse on e's destination node}
   EnumerateChains(e.to(), c+e, starters, closed)
   }{
   \If{c is not empty and c not in ValidChains}{
    ValidChains.add(c)
   }
   \If{n not in closed}{
    \tcp{n may start a new chain; save it for later exploration}
    starters.push(n)\\
    closed.add(n)
   }
 }
}
\Return {ValidChains}
\caption{Get Chains from node n}
\label{algo:algo2}
\end{algorithm}


\section{Method}
\label{method}
%.Detokenization
Detokenization approaches mainly fall under three categories: rule-based, table-based, and string transduction.
The rule-based approach applies manually crafted string substitutions to character sequences in the segmented words to generate their surface form.
%..Explain table approach, cite appropriately

The table-based approach uses a lookup table that stores mappings of segmented/desegmented instances, with a count indicating the number of times each mapping occurred in the text. The table is generated during the segmentation phase, where each surface word is saved along with its segmented form. A segmented form can have more than one desegmentation option, in which case the one with the highest count is applied. The table-based approach can be augmented with an untokenized language model, as proposed by \newcite{ElKholy:2012:OMP:2159073.2159149}. The language model disambiguates between the different desegmentation options retrieved from the mapping table, based on the context the word appears in.
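The table lookup described above can be sketched as follows; this is a minimal illustration, not the actual implementation, and the example counts are invented:

```python
# Minimal sketch of table-based desegmentation: pick the surface form
# observed most often for a segmented input, or signal a rule fallback.
from collections import defaultdict

def build_mapping_table(pairs):
    """pairs: iterable of (segmented, surface) seen during segmentation."""
    table = defaultdict(lambda: defaultdict(int))
    for segmented, surface in pairs:
        table[segmented][surface] += 1
    return table

def desegment(segmented, table):
    """Return the most frequent surface form, or None to signal
    fallback to the rule-based approach."""
    options = table.get(segmented)
    if not options:
        return None
    return max(options, key=options.get)

table = build_mapping_table([
    ("l+ Aldwl", "lldwl"),      # "to the countries"
    ("l+ Aldwl", "lldwl"),
    ("b+ lEbp +hA", "blEbthA"),
])
print(desegment("l+ Aldwl", table))  # -> lldwl
```

In the augmented variant, the competing surface options (not just the most frequent one) would instead be rescored by the untokenized language model in context.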

The string transduction approach of \newcite{salameh-cherry-kondrak:2013:SRW} is language-independent: it trains a discriminative transducer on pairs of segmented/unsegmented words that are aligned at the character level. HMM and character n-gram features are tuned using MIRA. Given a segmented input, the decoder transduces it into its surface form.

%..Explain 1-best detokenization baseline, name it
We set up a baseline system against which we compare the n-best and lattice feature-enriched models.
The baseline system is trained and tuned on segmented Arabic; thus its output is in a segmented format that has to go through desegmentation. Desegmentation in this system is a postprocessing step applied to the best translation. We adhere to the table-based and rule-based approaches proposed by \newcite{ElKholy:2012:OMP:2159073.2159149}: if the segmented instance is not in the mapping table, we fall back to the rule-based approach.

%.Goal: present detokenization and translation options simultaneously.
Our goal is to present the translation and desegmentation options simultaneously. We apply desegmentation to all available translation options in the search space or n-best list. Hence, the decoder builds up translations that are already desegmented. An advantage of this approach is that the translation options in the search space are in the final unsegmented format that we aim for. In addition, we can introduce features related to the unsegmented language representation, rather than limiting the feature set to properties of the segmented representation of the Arabic target language.
Desegmentation thus allows us to introduce additional features that disambiguate between the various translation options.

%.Features enabled by detokenization: ..D,..T,..S,..L,

We introduce the following features, which are enabled by desegmentation.
\begin{enumerate}
\item
Detokenization score: for a mapping $X \rightarrow Y$, where $X$ is segmented Arabic and $Y$ is its surface form, the detokenization score is defined as the log probability of
\begin{center}
$\frac{\text{number of instances of $Y$ generated from $X$}} {\text{total number of instances of $X$}}$
\end{center}

\item
isTable: a binary feature that is true if the table approach is used for detokenization, and false if the rule approach is used or no detokenization is required.
\item
Token span: four binary features that indicate the number of tokens the desegmented word was generated from. For example, ``\textit{ltElymhA}'' <lt`lym hA> ``to teach her'', which was generated from the 3 tokens ``\textit{l+ tElym +hA}'', receives the value set ``$0\ 0\ 1\ 0$''.

\item
Untokenized language model score: a 5-gram language model score, trained on untokenized Arabic, calculated for each hypothesis or sentence. This model captures a larger context of the generated words, which enhances agreement and gives lower scores to incorrectly inflected desegmentations.
\end{enumerate}
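The first three features above can be computed directly from the mapping-table counts. The sketch below is illustrative only; the function name and count table are our assumptions, not the system's actual code:

```python
import math

# Sketch of per-word desegmentation features: detokenization score,
# isTable, and the four token-span indicators (assumed data layout).
def detok_features(segmented, surface, counts):
    """counts[x][y] = times segmented form x was desegmented to surface y."""
    total = sum(counts[segmented].values())
    # 1. detokenization score: log p(surface | segmented)
    score = math.log(counts[segmented][surface] / total)
    # 2. isTable: 1 if the mapping came from the table
    is_table = 1 if surface in counts[segmented] else 0
    # 3. token span: four binary indicators for 1..4 source tokens
    n = len(segmented.split())
    span = [1 if n == i else 0 for i in range(1, 5)]
    return score, is_table, span

counts = {"l+ tElym +hA": {"ltElymhA": 3}}
score, is_table, span = detok_features("l+ tElym +hA", "ltElymhA", counts)
print(span)  # -> [0, 0, 1, 0], a word built from 3 tokens
```

The untokenized LM score, in contrast, is computed over the whole hypothesis rather than per word.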

%Solution
We provide two solutions to detokenization during translation: n-best list detokenization and lattice detokenization.
%.Solution for N-best..Detokenize n-best list..Annotate with new features..Retrain MERT or MIRA
For the \textit{n-best} solution, we generate the n-best lists for the tuning and test sets from the segmented model. Then we desegment the n-best list for each sentence using the Table+Rule approach. We annotate each desegmented translation with the new set of features enabled by desegmentation; the total value of each new feature is the sum of the feature values of the desegmented tokens in the sentence. We append the new features to the basic feature vector generated by the segmented model, and retune with MERT or MIRA, using the weights from the first tuning pass as initial weights.

%.Solution for Lattice
For the lattice solution, we generate the lattice for each translated sentence in the tuning and test sets. The lattice first goes through an initial transformation that maintains the lattice properties while changing its shape: we transform phrasal edges in the lattice into word-based edges. An edge that outputs a phrase of \textit{n} tokens is converted into a sequence of \textit{n} consecutive edges with one word on each edge. The properties of the phrasal edge are passed to the first edge in the transformed sequence. Based on this lattice structure, we tried two methods for lattice desegmentation.
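The phrasal-to-word edge transformation can be sketched as follows; the \textit{Edge} structure here is an assumption for illustration, not the decoder's actual data type:

```python
# Sketch: split an n-token phrasal edge into n single-token edges,
# keeping the phrasal edge's features on the first edge only.
from dataclasses import dataclass, field

@dataclass
class Edge:
    src: int
    dst: int
    phrase: str          # space-separated tokens
    features: dict = field(default_factory=dict)

def split_phrasal_edge(edge, next_node_id):
    """Return word-level edges threaded through fresh intermediate nodes."""
    tokens = edge.phrase.split()
    edges, src = [], edge.src
    for i, tok in enumerate(tokens):
        dst = edge.dst if i == len(tokens) - 1 else next_node_id + i
        feats = edge.features if i == 0 else {}
        edges.append(Edge(src, dst, tok, feats))
        src = dst
    return edges

new_edges = split_phrasal_edge(Edge(0, 1, "b+ lEbp +hA", {"tm": -1.2}), 100)
print([e.phrase for e in new_edges])  # -> ['b+', 'lEbp', '+hA']
```

Because the features ride on the first edge, summing feature values along any path through the transformed lattice still yields the original path score.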

%..Provide finite-state solution first
%...Use lots of diagrams!
%...Explain why existing finite-state tools are a bad fit (difficult to maintain features through transformations)
In the first method, the lattice is transformed into a finite-state acceptor using the OpenFST library \cite{openfst}. We then generate a list of all prefixes, suffixes and stems, and define a word as the regular expression [prefix* stem suffix*]. Using this definition, we print all paths in the acyclic FSA that abide by the word regular expression. The problem with existing finite-state tools is that it is difficult to maintain features through the search graph transformations. Moreover, adding new features such as an unsegmented LM score is hampered by the huge size of the LM when represented as a finite-state machine.
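The word definition above amounts to a simple membership test over token classes; a toy version (classifying tokens by their ``+'' marking, which is a simplification of the actual prefix/stem/suffix lists) might look like:

```python
import re

# Toy check of the word definition [prefix* stem suffix*] used to
# filter FSA paths; token classification here is a simplification.
def classify(tok):
    return "P" if tok.endswith("+") else "S" if tok.startswith("+") else "T"

WORD = re.compile(r"P*TS*")  # prefix* stem suffix*

def is_word(tokens):
    return WORD.fullmatch("".join(classify(t) for t in tokens)) is not None

print(is_word(["b+", "lEbp", "+hA"]))  # -> True
print(is_word(["+hA", "lEbp"]))        # -> False
```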

%..Provide programmatic solution
%...Intuition: at each node that could start a chain, launch a depth-first-search
%...While searching, track any potential chain start points, save on stack to explore later
%...Psuedo-code block
%...Briefly highlight and discuss corner cases (suffix initial, prefix final)
The second method is a programmatic solution in which we traverse the graph and collect valid desegmentable chains. We define a \textit{chain} as a sequence of edges, each holding a single token. A valid desegmentable chain has the following form:
\begin{center}
[prefix+] [stem] [+suffix]
\end{center}

A valid chain starts with a prefix (one or more) or a stem, and ends with a suffix (one or none). We check whether each node can be a chain starter, i.e., has an outgoing prefix or stem edge. If a node is a chain starter, a depth-first search is launched from it, and any potential chain starter nodes encountered are saved on a stack for later exploration. From each chain starter, we track all attachable edges and add them to the chain. When we reach an edge whose token cannot be attached to the last visited token in the chain, we consider the current chain valid and add it to the chain list. We then desegment all collected chains using the Table+Rule desegmentation scheme, and transform the original lattice into a desegmented one by producing a single edge from each chain, labelled with the desegmented word. For the corner cases where the lattice starts with a suffix edge or ends with a prefix edge, we leave these tokens unsegmented.
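The search of Algorithms \ref{algo:algo1} and \ref{algo:algo2} can be sketched over a toy lattice as follows; the dictionary encoding of the lattice and the ``+''-based token classification are simplifying assumptions:

```python
# Sketch of chain enumeration: depth-first search for maximal
# desegmentable chains of the form prefix* stem suffix*.
def token_type(tok):
    if tok.endswith("+"):
        return "prefix"
    if tok.startswith("+"):
        return "suffix"
    return "stem"

def can_extend(chain, tok):
    """chain grows toward the shape [prefix]* [stem] [+suffix]*."""
    t = token_type(tok)
    if not chain:
        return t in ("prefix", "stem")
    if token_type(chain[-1]) == "prefix":
        return t in ("prefix", "stem")
    return t == "suffix"  # after a stem or suffix, only suffixes attach

def enumerate_chains(lattice, start=0):
    """lattice maps node -> list of (token, next_node) edges."""
    valid, starters, closed = [], [start], {start}

    def dfs(node, chain):
        edges = lattice.get(node, [])
        if not edges and chain and chain not in valid:
            valid.append(chain)          # chain reaches the final node
        for tok, nxt in edges:
            if can_extend(chain, tok):
                dfs(nxt, chain + [tok])
            else:
                if chain and chain not in valid:
                    valid.append(chain)
                if node not in closed:   # node may start a fresh chain
                    starters.append(node)
                    closed.add(node)

    while starters:
        dfs(starters.pop(), [])
    return valid

lattice = {0: [("b+", 1)], 1: [("lEbp", 2)], 2: [("+hA", 3), ("+hm", 3)]}
print(enumerate_chains(lattice))
# -> [['b+', 'lEbp', '+hA'], ['b+', 'lEbp', '+hm']]
```

Each returned chain would then be desegmented and replaced by a single labelled edge, as in Figures \ref{img:segLattice} and \ref{img:desegLattice}.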


\begin{figure}

\centering
\includegraphics[scale=0.6]{example2A.jpg}
\caption{\label{img:segLattice}Before Desegmentation: 3 Valid Desegmentable chains can be extracted from this lattice\newline chain A: b+ lEbp +hm $\rightarrow$ blEbthm/<bl`bthm>/with their game \newline chain B: b+ lEbp +hA $\rightarrow$ blEbthA/<bl`btha>/with her game \newline chain C: b+ lEbp  $\rightarrow$ blEbp/<bl`bT>/with a game}
\end{figure}

\begin{figure}
\centering
\includegraphics[scale=0.6]{example3A.jpg}
\caption{\label{img:desegLattice}After Desegmentation: 3 new edges with desegmented labels are generated}
\end{figure}




\section{Experimental Setup}
\label{setup}
%.Data
To examine the effect of retuning with additional features on the translation quality, we use 4 Arabic-English parallel datasets from the Linguistic Data Consortium as training data to train an English-Arabic SMT system. The data sets used are: Arabic News (LDC2004T17), eTIRR (LDC2004E72), English translation of Arabic Treebank (LDC2005E46), and Ummah (LDC2004T18). The training data has 109K sentences. The Arabic part of the training data
constitutes around 2.8 million words before segmentation and 3.3 million tokens after segmentation.
We train a 5-gram language model on 200 million words from the LDC Arabic Gigaword corpus (LDC2011T11) using the SRILM toolkit \cite{Stolcke02srilm-}.

We use two different development sets for tuning our system. For the first experiment, we tune on the NIST MT 2004 evaluation set (1075 sentences) and test on the NIST MT 2005 evaluation set (1056 sentences). For the second experiment, we tune on NIST MT06 and test on NIST MT08 and NIST MT09 separately. We separate these data sets because MT06, MT08 and MT09 contain web data in addition to newswire data, unlike MT04 and MT05, which contain only newswire data. As all of the NIST data sets have multiple English references, we use the first English translation as the source for tuning and testing.

The preprocessing of the English text in the parallel corpus is limited to lower-casing and tokenizing by stripping punctuation marks. For all the Arabic text used in the training, development, and test sets and in the language model, we use the MADA 3.2 tool \cite{Habash:2009} for segmentation, with the Penn Arabic Treebank segmentation scheme. We generate the mapping table used for the table-based desegmentation scheme from the MADA segmentation of the Gigaword corpus.

%.Baseline translation system
For the baseline SMT system, we train on English and segmented Arabic, and align the parallel corpus using GIZA++. We use the Moses phrase-based SMT system \cite{Hoang07moses:open} with a maximum phrase length of 5. We tune on the development set and use the tuned weights to generate n-best lists of size 100 and 1000, as well as the output search graphs, for the development and test sets.
Finally, we evaluate our system using the BLEU score \cite{Papineni02bleu:a}.

%.Tuning methods (MERT, MIRA, Lattice MIRA)
%MS: I am not sure what more to add here or which papers I should cite
We use three tuning methods on the development sets: on the n-best lists we use MERT and MIRA, while on the output search graph we use lattice MIRA. Generated lattices are all pruned with a pruning density of 50.

%.5 replications for stability (this must be done on both n-best (once we have a stable n-best system) and lattice)

\section{Experiments}
\label{sect:experiment}
%.Main experiments: BLEU improvements for n-best or lattice on MT05
%.Ablation: importance of features
%.Other datasets

\section{Conclusion}
\label{sect:conclusion}


\section*{Acknowledgments}


% If you use BibTeX with a bib file named eacl2014.bib, 
% you should add the following two lines:
\bibliographystyle{acl}
\bibliography{eacl2014}

% Otherwise you can include your references as follows:
%% \begin{thebibliography}{}

%% \bibitem[\protect\citename{Aho and Ullman}1972]{Aho:72}
%% Alfred~V. Aho and Jeffrey~D. Ullman.
%% \newblock 1972.
%% \newblock {\em The Theory of Parsing, Translation and Compiling}, volume~1.
%% \newblock Prentice-{Hall}, Englewood Cliffs, NJ.

%% \bibitem[\protect\citename{{American Psychological Association}}1983]{APA:83}
%% {American Psychological Association}.
%% \newblock 1983.
%% \newblock {\em Publications Manual}.
%% \newblock American Psychological Association, Washington, DC.

%% \bibitem[\protect\citename{{Association for Computing Machinery}}1983]{ACM:83}
%% {Association for Computing Machinery}.
%% \newblock 1983.
%% \newblock {\em Computing Reviews}, 24(11):503--512.

%% \bibitem[\protect\citename{Chandra \bgroup et al.\egroup }1981]{Chandra:81}
%% Ashok~K. Chandra, Dexter~C. Kozen, and Larry~J. Stockmeyer.
%% \newblock 1981.
%% \newblock Alternation.
%% \newblock {\em Journal of the Association for Computing Machinery},
%%   28(1):114--133.

%% \bibitem[\protect\citename{Gusfield}1997]{Gusfield:97}
%% Dan Gusfield.
%% \newblock 1997.
%% \newblock {\em Algorithms on Strings, Trees and Sequences}.
%% \newblock Cambridge University Press, Cambridge, UK.

%% \end{thebibliography}

\end{document}
