% 
% File naaclhlt2012.tex
%

\documentclass[11pt,letterpaper]{article}
\usepackage{arabtex}
\usepackage{naaclhlt2012}
\usepackage{times}
\usepackage{latexsym}
\usepackage{graphicx}
\usepackage[T2A,T1]{fontenc}

\setlength\titlebox{6.5cm}    % Expanding the titlebox


\newcommand{\she}{\mbox{\usefont{T2A}{\rmdefault}{m}{n}\cyrtshe}}
\newcommand{\DTL}{\textsc{DirecTL+}}
\newcommand{\MM}{\textsc{M2M-Aligner}}

\title{Reversing Morphological Tokenization in English-to-Arabic SMT}

\author{Mohammad Salameh$^{\dag}$ \and  Colin
Cherry$^{\ddag}$ \and Grzegorz Kondrak$^{\dag}$ \\
\begin{tabular}{cc}
 & \\
$^{\dag}$Department of Computing Science  & $^{\ddag}$National Research Council Canada\\
University of Alberta                   &1200 Montreal Road\\
Edmonton, AB, T6G 2E8, Canada           &Ottawa, ON, K1A 0R6, Canada\\
{\tt \{msalameh,kondrak\}@cs.ualberta.ca}     &{\tt Colin.Cherry@nrc-cnrc.gc.ca}
\end{tabular}
}

\date{}

\begin{document}
\maketitle
\begin{abstract}
Morphological tokenization has been used in machine translation for morphologically complex languages to reduce lexical sparsity. 
Unfortunately, when translating into a morphologically complex language, recombining segmented tokens to generate original word forms is not a trivial task,
due to morphological, phonological and orthographic adjustments that occur during tokenization. 
We review a number of detokenization schemes for Arabic, such as rule-based and table-based approaches, and show their limitations.
We then propose a novel detokenization scheme that uses a character-level discriminative string transducer to predict the original form of a segmented word. 
%GK: I removed the following sentence about DTL from the abstract
%Our method uses {\DTL}, a language independent tool that is able to
%learn complex character transformations in context.
% and in which contexts they should be applied.
%We apply our scheme on Arabic tokenized text.
%CC Camera: Surely this isn't true any more
%We demonstrate a 70\% relative reduction
%in both word and sentence error rate
%
%CC: Attempting a softer sell
Compared to a state-of-the-art approach,
%that uses $n$-gram context to disambiguate a look-up table, 
we demonstrate slightly better detokenization error rates
without needing any handcrafted rules.
We also demonstrate the effectiveness of our approach in an
English-to-Arabic translation task.
%decrease in sentence error rate of 5.68\%
%(equivalent to 72.3 \% relative error reduction)
%in comparison to a state-of-the-art % table-based
%approach that incorporates a word $n$-gram language model.
%10.7\% compared to table based scheme and 12.2\% compared to
%rule based approaches.
\end{abstract}

\setarab
\novocalize

\section{Introduction}
%- what is the problem
%Different Natural Language Processing and Information retrieval tasks 
Statistical machine translation (SMT) relies on tokenization to split sentences into meaningful units for easy processing.
For morphologically complex languages, such as Arabic or Turkish,
this may involve splitting words into morphemes. 
%This morphological tokenization can include both
%orthographic and phonological transformations.
%where some characters forms are changed.
Throughout this paper, we adopt the definition of tokenization
proposed by \newcite{DBLP:series/synthesis/2010Habash},
which incorporates both morphological segmentation
as well as orthographic character transformations.
%Tokenization is a process that involves
%choosing which morphemes to segment and to decide whether after 
%separating some morphemes, we regularize the orthography of the resulting 
%segments. 
To use an English example, the word \textit{tries} would be
morphologically tokenized as \textit{``try + s''},
which involves orthographic changes at morpheme boundaries
to match the lexical form of each token.
%For example, tokenizing the Arabic word 
%\textit{lgtnA}\footnote{We use Buckwalter transliteration 
%scheme for all Arabic text examples provided in his paper} 
%<l.gtnA> 'our language' into 
%\begin{center}
% <+nA> <l.gT> (English as \textit{lgp}/language \textit{+nA}/our)
%\end{center}
%requires changing the \textit{'t'} <t> character to  \textit{'p'} <T>,
%since the letter \textit{'t'} 
%has to take the form of \textit{'p'} character when it appears at 
%the end of a noun. 
When translating into a tokenized language, the tokenization must be reversed to make the generated text readable and evaluable.
Detokenization is the process of converting tokenized words into their original orthographically and morphologically correct surface form. 
%Thus, resolving the ambiguity of original character form is not trivial as several adjustments has to take place.
This includes concatenating tokens into complete words and reversing any character transformations that may have taken place.

%- why is the problem important
For languages like Arabic,
tokenization can facilitate SMT by reducing lexical sparsity. 
Figure~\ref{img:alignment} shows how the morphological tokenization
of the Arabic word
%\textit{``wsymn$\varsigma$hm''}
<wsymn`hm> ``and he will prevent them''
simplifies the correspondence between Arabic and English tokens,
which in turn can improve the quality of word alignment, rule extraction and decoding. 
%can learn more about reordering and alignment after tokenizing the Arabic text.
%Alignment is improved when each token of 
%is aligned with 
%each word of its translation \textit{``and he will prevent them''} 
%rather than aligning it to the whole phrase.
%This improves the SMT performance since tokenizing the single 
%word into its morphemes enhances finding accurate correspondences 
%in its translation by the aligner. 
When translating from Arabic into English, the tokenization is a form of preprocessing, 
and the output translation is readable, space-separated English.
However, when translating from English to Arabic, the output will be in a tokenized form,
which cannot be compared to the original reference without detokenization. 
Simply concatenating the tokenized morphemes cannot fully reverse this process, 
because of character transformations that occurred during tokenization.
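As a concrete illustration, the simple-concatenation baseline can be sketched as follows (a hypothetical implementation, assuming Buckwalter-style ASCII transliteration and `+' as the affix marker, as in the examples in this paper):

```python
def concat_detokenize(tokens):
    """Baseline detokenization: glue '+'-marked clitic tokens onto their
    neighbours, with no character adjustments."""
    words = []
    attach_next = False           # True after a prefix clitic like "w+"
    for tok in tokens:
        piece = tok.strip("+")
        if attach_next or (tok.startswith("+") and words):
            words[-1] += piece    # suffix clitic, or token after a prefix
        else:
            words.append(piece)
        attach_next = tok.endswith("+")
    return " ".join(words)

# The baseline cannot undo character transformations: for instance,
# "l+" followed by "Alrys" yields "lAlrys" rather than the correct "llrys".
```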

\begin{figure}
\centering
\includegraphics[scale=0.65]{example.jpg}
\caption{Alignment between tokenized form of \textit{``wsymn$\varsigma$hm''} <wsymn`hm> and its English translation}
\label{img:alignment}
\end{figure}

% how other people deal with the problem
The techniques that have been proposed for the detokenization task fall into three categories \cite{Badr08segmentationfor}. 
The simplest detokenization approach concatenates morphemes based on token markers without any adjustment. 
Table-based detokenization maps tokenized words into their surface form with a look-up table
built by observing the tokenizer's input and output on large amounts of text.
Rule-based detokenization relies on hand-built rules or regular expressions to convert the segmented form into the original surface form. 
Other techniques use combinations of these approaches. 
Each approach has its limitations: 
rule-based approaches are language specific and brittle, 
while table-based approaches fail to deal with sequences outside of their tables.

%- what is our great idea of dealing with the problem
We present a new detokenization approach that
applies a discriminative sequence model
to predict the original form of the tokenized word. 
%GK: I moved the sentences about DTL to a later section
Like table-based approaches, our sequence model requires large amounts of tokenizer input-output pairs; but instead of building a table, we use these pairs as training data.
By using features that consider large windows of within-word input context, we are able to intelligently transition between rule-like and table-like behavior.

%- what are our results briefly 
% CC: Updating to reflect the new numbers and to soften claims
We test our approach on Arabic text, 
obtaining an improvement of 11.9 points in sentence error rate 
(SER\footnote{SER is the percentage of sentences containing at least one error after detokenization.}) 
over a rule-based approach, and of 1.1 points over a table-based approach that backs off to rules.
More importantly, we achieve a  slight improvement over the state-of-the-art approach of \newcite{ElKholy:2012:OMP:2159073.2159149}, 
which combines rules and tables, using a 5-gram language model to disambiguate conflicting table entries. 
In addition,
our detokenization method results in a small BLEU improvement over the simpler detokenizers when applied to English-to-Arabic SMT.

%GK: I commented out this synopsis paragraph to save space
%GK: the synopsis is not that informative in any case
%In section~\ref{ar-morphology}, we provide background on Arabic SMT. 
%Section~\ref{sect:relWork} presents previous research on detokenization.
%Section~\ref{detok} discusses the existing detokenization schemes and their limitations.
%Section~\ref{dtl} describes {\DTL} and its application to detokenization. 
%Section~\ref{experiments} is devoted to the evaluation.

\section{Arabic Morphology}
\label{ar-morphology}
%The 2nd section should provide some background on Arabic SMT because the audience is general.
Compared to English, Arabic has rich and complex morphology. 
Arabic base words inflect for eight features. 
Verbs inflect for aspect, mood, person and voice. 
Nouns and adjectives inflect for case and state. 
Verbs, nouns and adjectives inflect for both gender and number. 
Furthermore, inflected base words can attract various optional clitics. 
Clitical prefixes include determiners, particle proclitics, conjunctions and question particles in strict order. 
Clitical suffixes include pronominal modifiers. 
As a result of clitic attachment, morpho-syntactic interactions 
sometimes cause changes in spelling or pronunciation. 

%CC: New topic (tokenization), so new paragraph
Several tokenization schemes are possible for Arabic, depending on the level of clitics at which the segmentation is performed. 
In this paper, we use the Penn Arabic Treebank (PATB) tokenization scheme, 
as it was shown by \newcite{ElKholy:2012:OMP:2159073.2159149} to produce the best results for SMT.
%CC: Mohommad - we need a one or two sentence description here of what the PATB scheme does
The PATB scheme detaches all clitics except for the definite article \textit{Al} <al>.
Multiple prefix clitics are treated as one token. %CC: Probably wrong, but we need something like that

%CC: Another new topic: normalization
Some Arabic letters present further ambiguity in text. 
For example, the different forms of Hamzated Alif ``<'i 'a>''  are usually written without the Hamza ``<-'>''. 
Likewise, when the letter Ya \textit{'Y'} <y> appears at the end of a word, 
it is sometimes written as the ``Alif Maqsura'' letter \textit{'\'{y}'} <Y>. 
Also, short vowels in Arabic are represented using diacritics, which are usually absent in written text.
In order to deal with these ambiguities in SMT, Arabic text is often normalized as a preprocessing step. 
Normalization usually involves mapping the different forms of Alif and Ya to a single form each. 
This decreases Arabic's lexical sparsity and improves SMT performance.
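A minimal normalization sketch follows (one common variant; the exact conventions, including the direction of the Alif Maqsura/Ya mapping, differ between toolkits and are our own assumption here):

```python
import re

# Alif variants (Alif Madda, Alif with Hamza above/below) -> bare Alif,
# Alif Maqsura -> Ya, and short-vowel diacritics stripped.
ALIF_VARIANTS = "\u0622\u0623\u0625"          # Alif Madda, Hamza above/below
DIACRITICS = re.compile("[\u064B-\u0652]")    # fathatan .. sukun

def normalize(text):
    for ch in ALIF_VARIANTS:
        text = text.replace(ch, "\u0627")     # bare Alif
    text = text.replace("\u0649", "\u064A")   # Alif Maqsura -> Ya
    return DIACRITICS.sub("", text)
```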

\section{Related Work}
\label{sect:relWork}

%The 3rd section should discuss related papers (previous work).
\newcite{Sadat:2006:CAP:1220175.1220176} address the issue of lexical sparsity by presenting different preprocessing schemes for Arabic-to-English SMT. 
The schemes include simple tokenization, orthographic normalization, and decliticization. 
The combination of these schemes results in improved translation output.
%CC: Covering our backs
This is one of many studies on normalization and tokenization for translation from Arabic, which we will not attempt to review completely here.
%MS: Dr Greg Should I leave this or ignore mentioning this paper ?

\newcite{Badr08segmentationfor} show that tokenizing 
% the term segmenting is used in their paper rather than tokenizing
% CC: that's okay, we use ``tokenizing''
Arabic also has a positive influence on English-to-Arabic SMT. 
They apply two tokenization schemes on Arabic text and introduce detokenization schemes through a rule-based approach,
a table-based approach,
and a combination of both. 
The combination approach detokenizes words first using the table, falling back on rules for sequences not found in the table.

\newcite{ElKholy:2012:OMP:2159073.2159149} extend Badr's work by presenting a larger number of tokenization and detokenization schemes 
and comparing their effects on SMT. 
They introduce an additional detokenization scheme based on the SRILM \textit{disambig} utility~\cite{Stolcke02srilm-},
which utilizes a 5-gram untokenized language model to decide among different alternatives found in the table.
%They show the effect of different schemes on reduced and enriched Arabic. 
They test their schemes on naturally occurring Arabic text and SMT output. 
Their newly introduced detokenization scheme outperforms
the rule-based and table-based approaches introduced by \newcite{Badr08segmentationfor},
establishing the current state-of-the-art.

\subsection{Detokenization Schemes in Detail}
\label{detok} 

%the 4th section should describe the cometing approaches: simple, table, and rule.
Rule-based detokenization involves manually defining a set of transformation rules to convert a sequence of segmented tokens into their surface form. 
For example, the noun ``\textit{llr\^{y}ys}'' <lalr'iys> ``to the president'' 
is tokenized as \textit{``l+ Alr\^{y}ys''} (\textit{l+} ``to'' 
 \textit{Alr\^{y}ys} ``the president'')
in the PATB tokenization scheme.\footnote{We use the Habash-Soudi-Buckwalter
transliteration scheme for all Arabic text examples in this paper~\cite{nizarBookChapter}.} 
Note that the definite article ``\textit{Al}'' <al> is kept attached to the noun.
In this case,
detokenization requires a character-level transformation after concatenation,
which we can generalize using the rule:
\begin{center}
\textit{l+Al $\rightarrow$ ll}.
\end{center}
\begin{table}[tb]
\begin{center}
\begin{tabular}{lll}
\hline
%\multicolumn{2}{r}{Example} \\
%\cline{2-3}
Rule    & Segmented & Desegmented \\
\hline
%CC: I don't understand the top line in the table, why isn't it just l+Al on the LHS?
%MS: this is written in a regular expression notation where an extra 'l' can be present  l+Al(l)
%MS: so the rules can be l+Al-> ll   or l+All -> ll
%CC: I don't see anything corresponding to +l? anywhere in the example
%GK: I put that '?' back for now
l+Al+l? $\rightarrow$ ll     & l+ Alr\^{y}ys    & llr\^{y}ys   \\
\she+(pron) $\rightarrow$ t(pron)      & Abn\she+hA      & AbnthA   \\
y+(pron) $\rightarrow$ A(pron)      & Alqy+h      & AlqAh   \\
'+(pron) $\rightarrow$ \^{y}      & AntmA'+hm      & AntmA\^{y}hm   \\
y+y $\rightarrow$ y      &  $\varsigma$yny+y      & $\varsigma$yny   \\
n+n $\rightarrow$ n      &  mn+nA      & mnA   \\
mn+m $\rightarrow$ mm      &  mn+mA      & mmA   \\
$\varsigma$n+m $\rightarrow$ $\varsigma$m      &  $\varsigma$n+mA      & $\varsigma$mA   \\
An+lA $\rightarrow$ AlA      &  An+lA      & AlA   \\ %CC: Example was AnA, why?
\hline
\end{tabular}
\end{center}
\caption{\label{tab:rules-table} Detokenization rules used by \newcite{ElKholy:2012:OMP:2159073.2159149}, with examples of use. 
(pron) is an abbreviation for pronominal clitic}
\end{table}
Table~\ref{tab:rules-table} shows the rules provided by \newcite{ElKholy:2012:OMP:2159073.2159149}, which we use throughout this paper.
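To make the rule-based scheme concrete, a small subset of these rules can be sketched as ordered regular-expression replacements over the space-separated, `+'-marked token sequence (a hypothetical implementation in Buckwalter-style ASCII, with 'p' for Ta Marbuta; the real rule set also conditions on pronominal clitics):

```python
import re

# Ordered rewrite rules, a subset of Table 1 (Buckwalter-style ASCII);
# applied before default marker removal.
RULES = [
    (r"l\+ All?", "ll"),   # l+Al(l) -> ll  (definite-article contraction)
    (r"p \+", "t"),        # Ta Marbuta before a suffix clitic -> t
    (r"y \+", "A"),        # final Ya before a suffix clitic -> A
    (r"n \+n", "n"),       # degemination: n+n -> n
]

def rule_detokenize(segmented):
    for pat, repl in RULES:
        segmented = re.sub(pat, repl, segmented)
    # default: simple concatenation for anything the rules did not cover
    return segmented.replace("+ ", "").replace(" +", "")
```

Note that the first rule fires even when \textit{Al} is part of the stem, reproducing the kind of over-application error discussed in this section.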

There are two principal problems with 
the rule-based approach.
First, rules fail to account for unusual cases.
For example, the above rule mishandles cases where ``\textit{Al}'' <al> is a basic part of the stem and not the definite article \textit{``the''}. 
Thus, \textit{``l+ Al$\varsigma$Ab''} (\textit{l+} ``to'' \textit{Al$\varsigma$Ab} ``games'') is erroneously detokenized to
\textit{ll$\varsigma$Ab} <lal`Ab> %when the \textit{''l+Al''} rule is used. 
instead of the correct form ``\textit{lAl$\varsigma$Ab}'' <lAl`Ab>.
Second, rules may fail to handle sequences produced by tokenization errors.
For example, the word ``\textit{bslT\she}'' <bsl.tT> ``with power'' can be erroneously tokenized as \textit{``b+slT+h''}, 
while the correct tokenization is \textit{``b+slT\she''}. 
%MS: there is no rule to deal with this case, so a simple concatenation is applied which leads to an erroneous detokanization
%MS: Using the rules in Table~\ref{tab:rules-table}, 
Since no rule covers this case, the erroneous tokenization will be incorrectly detokenized as \textit{``bslTh''}.

The table-based approach memorizes mappings between words and their tokenized form. 
Such a table is easily constructed by running the tokenizer on a large amount of Arabic text, and observing the input and output.
The detokenization process consults this table to retrieve surface forms of tokenized words. 
In the case where a tokenized word has several observed surface forms, the most frequent form is selected. 
The main weakness of this approach is that it fails when the sequence of tokenized words is not in the table.  
In morphologically complex languages like Arabic, an inflected base word can attract many optional clitics.
Such tables have difficulty in incorporating all different forms and inflections of a word.
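A table-based detokenizer along these lines can be sketched as follows (hypothetical code; the fallback shown is simple concatenation, corresponding to the plain Table scheme):

```python
from collections import Counter, defaultdict

def build_table(pairs):
    """pairs: (tokenized, surface) observations gathered by running the
    tokenizer over a large Arabic corpus."""
    counts = defaultdict(Counter)
    for tokenized, surface in pairs:
        counts[tokenized][surface] += 1
    # keep only the most frequent surface form per tokenized form
    return {t: c.most_common(1)[0][0] for t, c in counts.items()}

def table_detokenize(tokenized, table):
    if tokenized in table:
        return table[tokenized]
    # out-of-table entry: fall back on simple concatenation
    return tokenized.replace("+ ", "").replace(" +", "")
```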

%CC: Deleted T+R paragraph, handled in last sentence of section
%MS: T+R Added paragraph
%Table based and Rule based approaches can be combined together (denoted as ``T+R''). The table is used first to map all tokenized words into their surface form. In case a tokenized word form is% not found in table, the system backs off to a set of manual rules instead of applying a simple concatenation of tokens.
%MS

The SRILM-disambig scheme introduced by \newcite{ElKholy:2012:OMP:2159073.2159149}
extends the table-based approach to use an untokenized Arabic language model to disambiguate among the different alternatives. 
Hence, this scheme can make context-dependent detokenization decisions, rather than always producing the most frequent surface form.
%CC: T+R needs to be mentioned somewhere
Both the SRILM-disambig scheme and the table-based scheme have the option to fall back on either rules or simple concatenation for sequences missing from the table.

\section{Detokenization as String Transduction}
\label{dtl}

We propose to approach detokenization as a string transduction task.
We train a discriminative transducer 
on a set of tokenized-detokenized word pairs.
The set of pairs is initially aligned at the character level,
and the alignment pairs become the operations that are applied
during transduction.
For detokenization, most operations simply copy over characters,
but more complex rules such as $\textit{l+ Al}\rightarrow \textit{ll}$
are learned from the training data as well.

The tool that we use to
perform the transduction is {\DTL},
a discriminative, character-level string transducer,
which was originally designed for 
letter-to-phoneme conversion~\cite{jiampojamarn2008}.
%has been previously applied to other character transduction problems,
%such as letter-to-phoneme prediction~\cite{jiampojamarn2008} and
%transliteration~\cite{Jiampojamarn2009:NEWS}.
%each time producing state-of-the-art results.
To align the characters in each training example,
{\DTL} uses an EM-based {\MM} % many-to-many alignment algorithm
\cite{jiampojamarn2007}.
%GK: a citation for MIRA?
After alignment is complete, MIRA training
repeatedly decodes the training set
%The word pairs feature set is used to train a linear model where feature weights are tuned using Margin Infused Relaxed Algorithm. 
to tune the features that determine when each operation should be applied.
The features include both $n$-gram source context
and HMM-style target transitions. 
{\DTL} employs a fully discriminative decoder
to learn character transformations and when they should be applied.
The decoder resembles a monotone phrase-based SMT decoder,
but is built to allow for hundreds of thousands of features.
%Finally, the phrasal decoder uses the generated model to search for
%n-best predicted output using Viterbi algorithm \cite{jiampojamarn2008}.

The following example illustrates
how string transduction applies to detokenization.
%GK: the following sentence is unclear
%Like the other detokenizers, {\DTL} is only applied to valid tokenized
%sequences, as indicated by the prefix/suffix marker `+'.
The segmented and surface forms of
\textit{bbrA$\varsigma$thm} <bibrA`thm> ``with their skill''
constitute a training instance:
\begin{center}
\textit{b+\_brA$\varsigma$\she\_+hm} $\rightarrow$ \textit{bbrA$\varsigma$thm}
\end{center}
The instance is aligned during the training phase as:
\begin{center}
\begin{tabular}{ccccccccc}
b+ & \_b & r & A & $\varsigma$ & \she\_ &  + & h& m\\
$|$ &  $|$    & $|$ & $|$ & $|$ &  $|$  & $|$ &  $|$  &  $|$  \\
b   &   b   &  r  & A &  $\varsigma$ & t & $\epsilon$ & h & m \\
\end{tabular}
\end{center}
The underscore ``\_'' indicates a space, while ``$\epsilon$'' denotes
an empty string. 
The following operations are extracted from the alignment:
\begin{center}
b+ $\rightarrow$ b, \_b $\rightarrow$ b, r $\rightarrow$ r, A $\rightarrow$ A, $\varsigma$ $\rightarrow$ $\varsigma$, \she\_ $\rightarrow$ t, \mbox{+ $\rightarrow \epsilon$}, h $\rightarrow$ h, m $\rightarrow$ m
\end{center}
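Replaying these operations left to right reconstructs the surface form (a sketch in ASCII transliteration, with 'E' for Ayn, 'p' for Ta Marbuta, and '\_' for the space):

```python
# Operations extracted from the alignment, as (source, target) pairs.
OPS = [("b+", "b"), ("_b", "b"), ("r", "r"), ("A", "A"),
       ("E", "E"), ("p_", "t"), ("+", ""), ("h", "h"), ("m", "m")]

def apply_ops(segmented, ops):
    """Apply a sequence of substring operations that tiles the input."""
    out, i = [], 0
    for src, tgt in ops:
        assert segmented.startswith(src, i), "operations must tile the input"
        out.append(tgt)
        i += len(src)
    return "".join(out)
```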
During training, weights are assigned to features
that associate operations with context.
In our running example,
the weight assigned to the b+ $\rightarrow$ b operation
accounts for the operation itself,
for the fact that the operation appears at the beginning of a word,
and for the fact that it is followed by an underscore;
in fact, we employ a context window of 5 characters
to the left or right of the source substring ``b+'',
creating a feature for each $n$-gram in that window. 
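The context features for a single operation can be sketched as follows (hypothetical code in the spirit of {\DTL}'s source-side features; the real system also employs joint $n$-grams and target transitions):

```python
def context_features(word, start, end, window=5):
    """All left- and right-context n-grams, up to `window` characters,
    around the source substring word[start:end] of one operation."""
    left = word[max(0, start - window):start]
    right = word[end:end + window]
    feats = []
    for n in range(1, len(left) + 1):
        feats.append(("L", left[-n:]))    # left n-gram ending at the op
    for n in range(1, len(right) + 1):
        feats.append(("R", right[:n]))    # right n-gram starting at the op
    return feats
```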

%CC: Not sure we need this anymore with the example
%
%We adopt the {\DTL} framework for detokenization task to transform segmented word input to word surface form output.
%Based on the alignments, {\DTL} generates a model that can predict the character original form before they went through orthographic or phonological normalizations during tokenization.
%We start by colecting tokens of tokenized words. In a tokenized text, prefixes and suffixes tokens ends or begins with token marker respectively while word stems do not. We consider any sequence of tokens of the form optional prefix,stem, or optional suffix as a segmented form of a word. These sequences are collected and given as input to {\DTL}. For example, the following sequence of tokens is given as an input to {\DTL}: \textit{AdArp +km}   (your management). Again , We also treat spaces as characters for {\DTL}.  {\DTL} removes all spaces and prefix and suffix markers in its detokenization process. It also intelligently replaces characters that have changed due to tokenization process into their original form.

%{\MM} generates a 2-1 alignment where \textbar separates between sequences and :  separates between subsequences. \_ indicates that the substring \textit{+}is aligned to null
%\begin{center}
%b:+\textbar F:b\textbar r\textbar A\textbar E\textbar p:F\textbar +\textbar h\textbar m\textbar 
%$\rightarrow$ b\textbar b\textbar r\textbar A\textbar E\textbar t\textbar \_\textbar h\textbar m\textbar
%\end{center}

%CC: Seems redundant
%{\DTL} generates a model that can predict the character original form before they went through 
%orthographic or phonological normalizations during tokenization.

Modeling the detokenization problem as string transduction
has several advantages.
The approach is completely language-independent.
The context-sensitive rules are learned automatically from examples,
without human intervention.
%Furthermore, the rules are associated with character context. 
The rules and features can be represented in a more compact way
than the full mapping table required by table-based approaches, 
while still elegantly handling words that were not seen during training.
%CC: We don't test this claim
Also, since the model generalizes from its training data
rather than simply memorizing complete tokenized-detokenized pairs,
less training data should be needed to achieve good accuracy.
%that is the case with the table-based approaches.
%CC: Redundant
%Having a language independent tool eliminates the manual effort needed to extract detokenization transformation rules.

\section{Experiments}
\label{experiments}

%then experiments and results; discuss the evaluation metrics and put the table of results in the paper 
This section presents two experiments that evaluate the effect of the detokenization schemes on both naturally occurring Arabic and SMT output.
	
\subsection{Data}

%data used to train the detokenizers and data used to train the SMT system
%For training, %the {\DTL} system,
To build our data-driven detokenizers,
we use the Arabic part of 4 Arabic-English parallel datasets from the Linguistic Data Consortium as training data. 
The data sets are: Arabic News (LDC2004T17), eTIRR (LDC2004E72), English translation of Arabic Treebank (LDC2005E46), and Ummah (LDC2004T18). 
The training data has 107K sentences. 
The Arabic part of the training data constitutes around 2.8 million words, 3.3 million tokens after tokenization, 
and 122K word types after filtering punctuation marks, Latin words and numbers (see Table~\ref{tab:datasets} for detailed counts).

For training the SMT system's translation and re-ordering models, we use the same 4 datasets from LDC.
We also use 200 million words from the LDC Arabic Gigaword corpus (LDC2011T11) to generate a 5-gram language model with the SRILM toolkit \cite{Stolcke02srilm-}. 

We use the NIST MT 2004 evaluation set for tuning (1075 sentences) and the NIST MT 2005 evaluation set for testing (1056 sentences). 
Both MT04 and MT05 have multiple English references for evaluating Arabic-to-English translation. 
As we are translating into Arabic, we take the first English translation to be our source in each case.
%CC: I don't think we actually spell this out anywhere before now
We also use the Arabic halves of MT04 and MT05 as development and test sets for our experiments on naturally occurring Arabic.
The tokenized Arabic is our input, with the original Arabic as our gold-standard detokenization.

The Arabic text of the training, development and testing sets, as well as the language-model data, is tokenized using MADA 3.2 \cite{Habash:2009} 
with the Penn Arabic Treebank tokenization scheme. 
The English text in the parallel corpus is lower-cased and tokenized in the traditional sense to strip punctuation marks.

\subsection{Experimental Setup}

To train the detokenization systems, we generate a table of mappings from tokenized forms to surface forms based on %37 million words from Arabic Gigaword and
the Arabic part of our 4 parallel datasets, giving us complete coverage of the output vocabulary of our SMT system.
In the table-based approach, if a tokenized form is mapped to more than one surface form, 
we use the most frequent surface form.
For out-of-table words, we can either fall back on concatenation (Table) or rules (T+R).
For SRILM-Disambig detokenization, we maintain ambiguous table entries along with their frequencies, and we introduce
%CC: No longer sure where this 5-gram LM was trained
a 5-gram language model to disambiguate detokenization choices in context. % trained on the untokenized form of same 37 Million word corpus. 
%We conduct 2 experiments with the Disambig utility.  
%The first uses the 5-gram language model and the mapping table(T+LM),
%and the second is the same but backs-off to the set of rules(T+R+LM). 
Like the table-based approaches, the Disambig approach can back off to either simple concatenation (T+LM) or rules (T+R+LM) for missing entries.
The latter is a re-implementation of the state-of-the-art system presented by \newcite{ElKholy:2012:OMP:2159073.2159149}.

%For %our {\DTL}
%the detokenizaton experiment,
We train our discriminative string transducer using word types from the 4 LDC catalogs. 
%CC: How do we handle cases where the same input maps to multiple outputs?
We use {\MM} to generate 2-to-1 character alignments between tokenized forms and surface forms. 
For the decoder, we set Markov order to one, joint $n$-gram features
to 5, $n$-gram size to 11 and context size to 5. % for {\DTL}. 
This means the decoder can use features up to 11 characters long, allowing it to effectively memorize many words.
We found these settings using grid search on the development set, NIST MT04.

For the SMT experiment, we use GIZA++ for the alignment between English and tokenized Arabic,
and perform the translation using Moses phrase-based SMT system \cite{Hoang07moses:open},
with a maximum phrase length of 5. 
We apply each detokenization scheme on the SMT tokenized Arabic output test set,
and evaluate using the BLEU score \cite{Papineni02bleu:a}.

\begin{table}[tb]
\begin{center}
\begin{tabular}{|l|c|c|}
\hline \bf Data set & \bf types & \bf segmented types \\ \hline
training set & 122,720 & 61,943 \\
MT04 & 8,201 & 2,542 \\
MT05 & 7,719 & 2,429\\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:datasets} Type counts by data set, before and after tokenization.}
\end{table}

%\begin{table}
%\begin{center}
%\begin{tabular}{|l|c|c|}
%\hline \bf Detokenization & \bf WER & \bf SER\\ 
%\hline
%Simple & 1.869\%  & 36\%\\
%Rule & 0.227\%  & 5.4\%\\
%Table & 0.511\%  & 12.5\%\\
%Disambig & 0.301\%  & 12.5\%\\
%{\DTL} & 0.077\%  & 1.8\%\\
%\hline
%\end{tabular}
%\end{center}
%\caption{\label{tab:ser} Comparison of different Detokenization 
%schemes on naturally occurring Arabic text of MT05 using word 
%error rate and sentence error rate measures}
%\end{table}



%\begin{table}
%\begin{center}
%\begin{tabular}{|l|c|}
%\hline \bf Detokenization & \bf BLEU SCORE \\ 
%\hline
%Simple & 26.68\\
%Rule & 27.66\\
%Table & 27.51\\
%Disambig & 27.71\\
%{\DTL} & 27.75\\
%\hline
%\end{tabular}
%\end{center}
%\caption{\label{tab:smt} Comparison of different Detokenization schemes when applied on a segmented Arabic SMT output using BLEU scores}
%\end{table}

%CC: Updating table to use 4LDC results from Mohammad's Mar 11 e-mail
\begin{table}[tb]
\begin{center}
\begin{tabular}{lrrr}
\hline
\bf Detokenization & \bf WER & \bf SER & \bf BLEU\\
\hline
Baseline & 1.710  & 34.3  & 26.30\\
Rule & 0.590  & 14.0  & 28.32\\
Table & 0.192  & 4.9   & 28.54\\
T+R & 0.122  & 3.2   & 28.55\\
Disambig(T+LM) & 0.164  & 4.1   & 28.53\\
Disambig(T+R+LM)& 0.094  & 2.4   & 28.54\\
{\DTL} & 0.087  & 2.1  & 28.55\\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:ser}Word error rate and sentence error rate measure the accuracy of desegmenting the Arabic reference text of NIST MT05. 
BLEU score is based on English-Arabic SMT output, also for MT05.}
\end{table}

\subsection{Results and Analysis}

In this section, we measure the performance of several detokenization schemes,
and analyze the errors caused by each detokenization scheme. 
We use the sentence error rate and word error rate metrics to measure performance on naturally occurring Arabic text, 
and the BLEU score to measure performance on the tokenized Arabic output of the SMT system.  

Looking at the sentence and word error rates, we see that
the baseline scheme, which is a simple concatenation of morphemes, performs the worst by a large margin.
The table-based approach performs much better than the rule-based approach, indicating that there are frequent exceptions to the rules in Table~\ref{tab:rules-table} that require memorization.
Their combination (T+R) yields further improvement, leveraging the strengths of both approaches. 
%CC: I still think the following setence feels very inconsistent given our results. If disambiguation was important, DirecTL would be failing.
%CC: In fact, disambiguation cannot be necessary for more than 1.8% of the sentences, as DTL is able to get all but 1.8% right without disambiguation.
%CC: So, how is Disambig able to fix 11.6-6.5=5.1% of sentences with respect to T+R? I still don't feel that we have a good answer to this question.
SRILM-Disambig improves over all of the preceding approaches,
as it uses language-model context to choose the correct detokenized word form.
Our system %{\DTL}
outperforms SRILM-Disambig by a very slight margin, indicating that the two systems are roughly equal.
This is interesting, as our system manages to do so using only features derived from the tokenized word itself;
unlike SRILM-Disambig, it does not use surrounding words to inform its decisions.
It also achieves this level of performance without any manually constructed rules.
%CC: All this discussion becomes irrelevant in light of the new experiments where everyone is trained on the same data
%Our system{\DTL} performs the best overall with a significant decrease in WER and SER.
%This is remarkable, as it does so despite using only features derived from the tokenized word itself; unlike SRILM-Disambig, it does not use surrounding words to inform its decisions.
%%CC: What follows is my best attempt at an explanation
%We are still studying this result to understand it better, but it is possible that our training regime, which determines operation weights based on types 
%({\DTL}'s training examples are not weighted by their frequency in the text),
%may be better suited to detokenization than methods based on token-level frequencies, such as the table-based approaches.
%This would be consistent with some results in the unsupervised segmentation literature~\cite{Goldwater2006}.

Improvements in detokenization do contribute to the BLEU score of our SMT system, but only up to a point.
Table~\ref{tab:ser} shows three tiers of performance: no detokenization is the worst, the rule-based scheme is better,
and the various data-driven approaches perform best.
Once WER dips below 0.2, further improvements no longer appear to affect SMT quality.
Note that the BLEU scores are much lower overall than one would expect for translation in the reverse direction,
because of the morphological complexity of Arabic, and because we evaluate with one (as opposed to four) references.
%Using SRILM-Disambig and {\DTL} results with at least 0.2 BLEU points improvement over the other systems.

%CC: Camera Read concern - what are we going to do about these error analysis paragraphs????
%CC: Just had a whole paragraph about this
%Table ~\ref{tab:ser} shows that {\DTL} detokenization outperforms the other approaches.
{\DTL}'s SER of 2.1 corresponds to only 21 errors.
Among these, 11 errors are caused by changing \textit{p} to \textit{h} and vice versa,
as the two characters are often written interchangeably.
%CC: I still don't know how this Hamza issue affects detokenization
For example, \textit{``AjmAly+h''} was detokenized as \textit{``AjmAly\she''} <ajmAlyT> instead of \textit{``AjmAlyh''} <ajmAlyh>.
Another 4 errors are caused by the lack of diacritization, which affects the choice of the Hamza form.
For example, \textit{``bnA\^{w}h''} <bnA'uh>, \textit{``bnA\^{y}h''} <bnA'ih>, and \textit{``bnA'h''} <bnA'h> (``its building'')
are three different forms of the same word, where the choice of the Hamza <-'> depends on its diacritical mark or on the mark of the preceding character.
Another 3 errors are caused by the grammatical case of the nominal,
which can only be determined from the context of the noun, to which {\DTL} has no access.
For example, \textit{``mfkry+hm''} (``thinkers''/Dual-Accusative) was detokenized as \textit{``mfkrAhm''} <mfkrAhm> (Dual-Nominative) instead of \textit{``mfkryhm''} <mfkryhm>.
The last 3 errors are special cases of \textit{``An +y''}, which can be correctly detokenized as either \textit{``Any''} <any> or \textit{``Anny''} <anany>.

%CC: Not sure this paragraph adds anything, I would consider cutting it
%MS: the below is cut by me
%The 148 errors of table-based approaches can be roughly divided into two categories. 
%The first category are errors occurring where there is no match for the tokenized word in the table,
%and the back-off to simple concatenation results in an incorrect surface form.
%The second category involves choosing the frequent surface word among the different choices,
%while the correct one is among the less frequent.

The table-based detokenization scheme fails in 54 cases.
In 44 of these, the tokenized word is absent from the mapping table,
and backing off to simple concatenation results in an error.
Our transduction approach %{\DTL}
succeeds in detokenizing 42 of the 54 cases.
The majority of these cases involve changing \textit{p} to \textit{h} and vice versa, or changing \textit{l+Al} to \textit{ll}.
There are only 2 instances where the tokenized word is in the mapping table but {\DTL} detokenizes it incorrectly; these are due to the Hamza and \textit{p}-to-\textit{h} issues described above.
In another 4 instances of the same word, both the table scheme and {\DTL} fail because of a MADA tokenization error, in which the proper name \textit{``qwh''} <qwh> is erroneously tokenized as \textit{``qw+p''}.
Overall, this shows that {\DTL} handles words that are outside the table's vocabulary correctly.
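The back-off behavior discussed above can be sketched as follows (a hypothetical illustration: the table entry and the single \textit{l+Al} contraction rule shown are examples only, not the full rule set of the T+R scheme):

```python
import re

def detokenize_tr(tokenized, table):
    """T+R detokenization sketch: look the tokenized form up in a
    mapping table first; for words absent from the table, apply
    deterministic adjustment rules and then fall back to simple
    concatenation of the morphemes.

    table: maps tokenized strings (e.g. "ktAb+y") to the surface
    forms observed in training data.
    """
    if tokenized in table:
        return table[tokenized]
    surface = tokenized
    # Illustrative rule: the definite article Al after the clitic
    # preposition l+ contracts to ll (e.g. l+Alktab -> llktab).
    surface = re.sub(r"\bl\+Al", "ll", surface)
    # Back-off: drop any remaining segmentation markers.
    return surface.replace("+", "")
```

A table hit always overrides the rules, which is how the scheme memorizes the exceptions discussed earlier; errors arise precisely when an exceptional word is missing from the table and the rules or plain concatenation produce the wrong surface form.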
%CC Discussed above
%{\DTL} detokenization approach results in 0.2 BLEU points improvement when applied on SMT output. 
%The improvement is achieved without the use of any mapping table or a language model compared to SRILM-disambig.
%Since the direction of translation is into Arabic, which has complex morphology compared to English, BLEU scores are always lower than Arabic to English translation.

The Disambig(T+R+LM) scheme erroneously detokenizes 27 instances, 21 of which are correctly detokenized by {\DTL}.
Most of these errors are due to the Hamza and \textit{p}-to-\textit{h} issues.
It seems that, even with a large language model, the SRILM disambiguation utility needs a large mapping table to perform well.
Only 4 instances were erroneously detokenized by both Disambig and {\DTL}, due to the Hamza and nominal-case issues.

This analysis shows that, even with a small training set, {\DTL} achieves slightly better accuracy than the SRILM-Disambig scheme.
{\DTL} also overcomes the limitations of the table-based and rule-based approaches, as it is able to memorize a richer set of transformation rules.
%CC: I already talked about the SMT results earlier, and I refuse to report 0.01 BLEU as anything more than noise
%These limitations have minimal effect on SMT when tables and rules are combined, with an increase of 0.01 in BLEU score.
%When comparing {\DTL}  detokenization with table detokenization, 
%the table seems to correctly detokenize a few instances which involve changing ``\she'' to ``h'' where {\DTL} fails. 
%For example, {\DTL} incorrectly detokenize \textit{''tslym+h''} <tslym +h>  as \textit{``tslym\she''} <tslymT> rather than \textit{``tslymh''} <tslymh>. 
%CC: Are we sure on this analysis? DTL is very complex, how did you determine that the mp bigram was the deciding factor?
%MS: the analysis above is cut by me
%CC: Is this one of the cases where the table has multiple entries?
%MS:I commented the 2 lines below
%This is due to {\DTL} giving high probability to the ``mp'' bigram. 
%Table-based detokenization correctly tokenizes it since it has memorized this case in its records. 
%Compared to rule-based tokenization, {\DTL} correctly detokenized all cases where rule-based approach succeeded in. 
%CC: Does this sentence contradict the one above it?
%GK: I commented it out
%The only cases when
%{\DTL} fails and rule-based approach succeeds are when detokenization can be done by simple concatenation.

\section{Conclusion and Future Work}

In this paper, we addressed the detokenization problem for Arabic using {\DTL},
a discriminative training model for string transduction. 
Our system performs best among the detokenization schemes we compared,
resolving problems caused by the limitations of table-based and rule-based systems.
%CC: Do we really want to call that a better score??
%Unlike SRILM-disambig approach, {\DTL} achieves better score without using a language model.
It matches the performance of the SRILM-disambig approach
without using a language model or hand-crafted rules.
%CC: I don't think our current results support this claim
%Our results shows that we can train our system on a much smaller training set than table based approaches and yield better results. 
%CC: Kind of already covered this, and I'm not really comfortable with "much better"
%Also, our system is able to generate its own rules that can handle the problem better than the manually generated rules. 
In the future, we plan to test our language-independent approach on other languages whose morphological characteristics are similar to those of Arabic.


%\section*{Acknowledgments}

%Do not number the acknowledgment section.

\bibliography{naaclhlt2012}
\bibliographystyle{naaclhlt2012}

\end{document}
