\documentclass{article} % For LaTeX2e
\usepackage{nips12submit_e,times}
%\documentstyle[nips12submit_09,times,art10]{article} % For LaTeX 2.09

%\usepackage[vlined,algoruled,titlenumbered,noend]{algorithm2e}
\usepackage{amsmath,amsfonts,amssymb,amsthm}
\usepackage{array}
\usepackage{amsmath,amssymb}
\usepackage{epsfig,subfigure}
\usepackage{pgfplots}
\usepackage{enumerate}
\usepackage{hyperref}


\renewcommand{\labelitemi}{$\bullet$}
\renewcommand{\labelitemii}{$\cdot$}
\renewcommand{\labelitemiii}{$\diamond$}
\renewcommand{\labelitemiv}{$\ast$}



\title{It Takes Two to Tango: Coupled Dictionary Learning for Cross-Lingual Information Retrieval}


\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}

%\nipsfinalcopy % Uncomment for camera-ready version
\begin{document}



\maketitle


%\begin{abstract}
%\end{abstract}

%\section{Introduction}
%\label{intro}
%\begin{itemize}
%\item Usefulness of DL methods these days.



%\item Related work of DL methods and the various variations proposed in recent literature: very very briefly.

%Some important dictionary learning variants supervised dictionary learning http://www.di.ens.fr/sierra/pdfs/nips08b.pdf


%Proximal Methods for Sparse Hierarchical Dictionary Learning http://snowbird.djvuzone.org/2010/abstracts/103.pdf

%Online dictionary learning for sparse coding http://www.di.ens.fr/sierra/pdfs/icml09.pdf

%Learning a discriminative dictionary for sparse coding via label consistent K-SVD http://www.umiacs.umd.edu/~lsd/papers/CVPR2011_LCKSVD_final.pdf

%Regularized Dictionary Learning for Sparse Approximation http://www.see.ed.ac.uk/~s0574225/rdl_eusipco08_final.pdf

%\item Introduction to the problem of CLIR: mention of usefulness of DL methods in Domain Adaptation.
%\item What questions we intend to answer
%\item Our contributions
%\end{itemize}


\section{Introduction}
\label{intro}


Automatic text understanding has been an unsolved research problem for many years. This partially results from the dynamic and diverging nature of human languages, which ultimately leads to many varieties of natural language. These variations range from the individual level, to regional and social dialects, and up to seemingly separate languages and language families. However, in recent years there have been considerable achievements in data-driven approaches to computational linguistics, which exploit the redundancy in the encoded information and the structures used. Most of these approaches are not specific to a particular language and are capable of finding the commonalities across languages. Representing documents by vectors that are independent of language enhances the performance of cross-lingual tasks such as \textit{comparable document retrieval} and \textit{mate retrieval}.

When tackling the task of retrieving documents across languages, there are essentially two main paradigms:
\begin{enumerate}
\item Translation-based approaches, which rely on translating either the documents or the queries. For the translation of queries, one typically relies on bilingual dictionaries.

\item Mapping of queries and documents into a multilingual space in which similarity between queries and documents can be computed uniformly across languages.

\end{enumerate}
In this paper, we explore the use of dictionary-based approaches for cross-lingual information retrieval. We propose a new dictionary learning algorithm, Coupled Dictionary Learning (CDL), which learns a pair of coupled dictionaries representing basis atoms in a pair of languages, alongside two mapping functions that transform representations learnt in one language into the other. Such transformations are necessary for finding similar documents in a different language and hence find immediate application in various cross-lingual information retrieval tasks. We present an optimization procedure that iterates between two objectives and uses the K-SVD formulation to efficiently compute the parameters involved. We evaluate our algorithm on the task of cross-lingual comparable document retrieval and compare our results with existing approaches; the results highlight the efficacy of our method.


%Papers: 

%Learning Discriminative Projections for Text Similarity Measures (https://www.aclweb.org/anthology-new/W/W11/W11-0329.pdf)

%CLIR using HMM ( reference.kfupm.edu.sa/content/c/r/cross_lingual_information_retrieval_usin_70740.pdf  )

%CLIR using explicit semantic analysis (http://people.aifb.kit.edu/pso/publications/sorg_paperCLEF2008.pdf )

%From Bilingual Dictionaries to Interlingual Document Representations ( http://www.mt-archive.info/ACL-2011-Jagarlamudi.pdf)


%improving bilingual projections via sparse covariance matrices ( http://acl.eldoc.ub.rug.nl/mirror/D/D11/D11-1086.pdf )


%In this section we would talk about:
%\begin{itemize}
%\item The importance of CLIR- picked up from the abstracts from the various CLIR conferences.
%\item Discuss recent work in top conferences and alongside highlight what all is missing in their approaches.
%\item Discuss about Unsupervised features, Self Taught learning, etc etc...
%\item Discuss our work very very briefly and in a line mention how our method is better than each one of the existing work.
%\item In the end set the path for the DL method.
%\end{itemize}


%The version of the paper submitted for review should have ``Anonymous Author(s)'' as the author of the paper.

\vspace{-2mm}
\section{Coupled Dictionary Learning}
\label{scdl}
\vspace{-2mm}
The linear decomposition of a signal using a few atoms of a {\it learned} dictionary has led to state-of-the-art performance in many computer vision and pattern recognition tasks. Recently it has been shown that learning dictionary-based representations to model text corpora helps improve classification performance [3] as well as learn hierarchies of topics [2]. In this paper we present a coupled dictionary learning algorithm which simultaneously learns a pair of dictionaries and a pair of mapping functions to solve cross-lingual information retrieval problems. Specifically targeting resource-scarce languages, we propose the use of a coupled dictionary learning algorithm for cross-lingual document representation, wherein the dictionary pair characterizes the corpora of the two languages while the mapping functions reveal the intrinsic relationship between the language pair.
 
\vspace{-1mm}
\subsection{Problem Formulation}
\label{pf}
\vspace{-1mm}
The cross-lingual document representation problem can be formulated as follows: given a parallel corpus of a language pair $\langle l_1,l_2\rangle$, can we learn document representations in each of the languages ($Y_{L_1}$ and $Y_{L_2}$) and their corresponding mappings ($T_{Y_{L_1}\rightarrow Y_{L_2}}$ and $T_{Y_{L_2}\rightarrow Y_{L_1}}$) so as to perform well on the challenging task of retrieving documents relevant to queries posed in the other language?
Since we deal with a parallel corpus in our setting, it is reasonable to assume that there exists a latent space in which these representations can be mapped to each other. \\

Most existing approaches use manually aligned document pairs to find a common subspace in which the aligned document pairs are maximally correlated. The subspace can be found using either generative approaches based on topic modeling [5][6][7] or discriminative approaches based on variants of Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) [8].
Unlike existing methods, our framework makes use of abundantly available {\it unlabelled} data in each of the languages and learns meaningful intermediate representations and concepts which better capture the variations in naturally occurring data. Using these representations, learnt in an unsupervised fashion, to initialize the respective dictionaries, our algorithm then couples the learning of the two dictionaries and alongside learns mappings from each representation to the other. The scarcity of data in a resource-scarce language motivates the use of this mapping to perform computations in the transformed representation.

\vspace{-1mm}
\subsection{CDL Algorithm}
\vspace{-1mm}
We denote by $Y_{L_1} \in R^{n\times N}$ and $Y_{L_2} \in R^{n\times N}$ the training datasets formed by documents in the parallel corpora of the two languages. The corresponding dictionaries are denoted $D_{L_1} \in R^{n\times K}$ and $D_{L_2} \in R^{n\times K}$, with mapping functions $T_{Y_{L_1}\rightarrow Y_{L_2}} \in R^{K\times K}$ and $T_{Y_{L_2}\rightarrow Y_{L_1}} \in R^{K\times K}$, where $K$ is the number of dictionary atoms. Our framework is based on the Semi-Coupled Dictionary Learning algorithm proposed in [9]. We propose to minimize the following dictionary learning objective:

$\langle D_{L_1},D_{L_2},X_{L_1},X_{L_2},T_{Y_{L_1}\rightarrow Y_{L_2}},T_{Y_{L_2}\rightarrow Y_{L_1}}\rangle \hspace{2mm} = $ \\
\begin{center}
\vspace{-4mm}
$\min_{\{D_{L_1},D_{L_2},X_{L_1},X_{L_2},T_{Y_{L_1}\rightarrow Y_{L_2}},T_{Y_{L_2}\rightarrow Y_{L_1}}\}} \parallel Y_{L_1}-D_{L_1}X_{L_1}\parallel_2^2 + \parallel Y_{L_2}-D_{L_2}X_{L_2}\parallel_2^2 $ \\ $+\hspace{2mm} \alpha\parallel X_{L_2}-T_{Y_{L_1}\rightarrow Y_{L_2}}X_{L_1}\parallel_2^2 \hspace{2mm} + \hspace{2mm} \beta\parallel X_{L_1}-T_{Y_{L_2}\rightarrow Y_{L_1}}X_{L_2}\parallel_2^2$
\end{center}
s.t. $\forall i, \|x_i\|_0 \leq T$ and $\|x_i^{'}\|_0 \leq T$,
where $X_{L_1}=\left[ x_1,x_2,\ldots,x_N\right] \in R^{K\times N}$ are the sparse codes of the input data of language $l_1$, $X_{L_2}=\left[ x_1^{'},x_2^{'},\ldots,x_N^{'}\right] \in R^{K\times N}$ are the sparse codes of the input data of language $l_2$, and $T$ is the sparsity constraint factor.\\\\
The term $\parallel Y_{L_i}-D_{L_i}X_{L_i}\parallel_2^2$ for $i \in \{1,2\}$ represents the reconstruction error for the documents of each language, i.e., how well the learnt representations reconstruct the original documents. $\alpha$ and $\beta$ control the relative contribution of the reconstructive and mapping regularization terms. By minimizing $\parallel X_{L_2}-T_{Y_{L_1}\rightarrow Y_{L_2}}X_{L_1}\parallel_2^2$ we penalize the mapping error between the transformed sparse codes of a document in language $l_1$ and the sparse codes of the corresponding document in $l_2$, while $\parallel X_{L_1}-T_{Y_{L_2}\rightarrow Y_{L_1}}X_{L_2}\parallel_2^2$ penalizes the mapping error in the reverse direction. This is the main contribution of our paper: when both languages are resource scarce, we should ideally penalize errors in both mapping functions $T_{Y_{L_1}\rightarrow Y_{L_2}}$ and $T_{Y_{L_2}\rightarrow Y_{L_1}}$. \\

Even when both languages are resource scarce, unlabelled data may be more readily available in one language than in the other; the dictionary initialized with more data will then yield better representations, so we may want to transform documents from the other language into it before performing retrieval. This is our main motivation for learning two separate transformation functions instead of a single one: we penalize the mapping errors in both directions and hence obtain optimized transformations for both languages. Note that in the proposed model, the coding coefficients $X_{L_1}$ and $X_{L_2}$ are related by the mapping functions $T_{Y_{L_1}\rightarrow Y_{L_2}}$ and $T_{Y_{L_2}\rightarrow Y_{L_1}}$, using which we can transform a document representation in language $l_1$ to its corresponding representation in $l_2$ and vice versa.
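The objective above decomposes into two reconstruction terms and two mapping terms. A minimal numpy sketch that evaluates it, with toy random matrices standing in for the real corpora (all names and sizes here are illustrative, not from our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, K = 50, 20, 10          # vocabulary size, number of documents, dictionary atoms

# Toy stand-ins for the real quantities (dense random matrices).
Y1, Y2 = rng.standard_normal((n, N)), rng.standard_normal((n, N))
D1, D2 = rng.standard_normal((n, K)), rng.standard_normal((n, K))
X1, X2 = rng.standard_normal((K, N)), rng.standard_normal((K, N))
T12, T21 = rng.standard_normal((K, K)), rng.standard_normal((K, K))
alpha, beta = 0.5, 0.5

def cdl_objective(Y1, Y2, D1, D2, X1, X2, T12, T21, alpha, beta):
    """Two reconstruction errors plus the two coupled mapping penalties."""
    sq = lambda M: np.sum(M ** 2)          # squared Frobenius norm
    return (sq(Y1 - D1 @ X1) + sq(Y2 - D2 @ X2)
            + alpha * sq(X2 - T12 @ X1)
            + beta * sq(X1 - T21 @ X2))

val = cdl_objective(Y1, Y2, D1, D2, X1, X2, T12, T21, alpha, beta)
print(val)
```

As a sanity check, the objective is zero when the data is exactly reconstructed ($Y_{L_i}=D_{L_i}X_{L_i}$) and the mapping penalties are switched off ($\alpha=\beta=0$).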


%In this section we talk about:
%\begin{itemize}
%\item Formulate the CLIR problem in terms of X, Y etc....
%\item Introduce the notations.
%\item Present the optimization equation
%\end{itemize}
\vspace{-1mm}
\subsection{Dictionary Initialization}
\label{di}
\vspace{-1mm}
We need to initialize the parameters $D_{L_1}$, $D_{L_2}$, $T_{Y_{L_1}\rightarrow Y_{L_2}}$ and $T_{Y_{L_2}\rightarrow Y_{L_1}}$. For $D_{L_i}$, $i \in \{1,2\}$, we run several iterations of K-SVD for each dictionary using unlabelled data from the corresponding language. This is in the spirit of the Self-Taught Learning framework, wherein unlabelled data is used to learn an initial representation in an unsupervised manner. To the best of our knowledge, none of the existing approaches for cross-lingual information retrieval makes use of unlabelled data to improve performance. Given the initialized dictionaries, we apply the original K-SVD algorithm to compute the sparse codes $X_{L_i}$ of the training data $Y_{L_i}$, $i \in \{1,2\}$, and use them to initialize the mapping parameters.
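A common way to seed the K-SVD iterations used in this initialization is to pick $K$ documents at random as initial atoms and $L_2$-normalize them. The sketch below shows only this seeding step (it is a simplified stand-in, not our full initialization):

```python
import numpy as np

def init_dictionary(Y, K, rng):
    """Seed a dictionary with K randomly chosen documents from the
    unlabelled corpus (a common K-SVD starting point), with each
    column rescaled to unit L2 norm."""
    idx = rng.choice(Y.shape[1], size=K, replace=False)
    D = Y[:, idx].astype(float)
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D

rng = np.random.default_rng(0)
Y_unlabelled = rng.standard_normal((50, 200))   # toy unlabelled corpus
D0 = init_dictionary(Y_unlabelled, K=10, rng=rng)
print(D0.shape)  # (50, 10)
```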

\vspace{-1mm}
\subsection{Optimization}
\label{opt}
\vspace{-1mm}
We use the efficient K-SVD algorithm to find the optimal solution for all parameters simultaneously. This differs from the approach adopted in [9]: since the objective is not jointly convex in all the parameters, the authors of [9] use an iterative algorithm to alternately optimize them. Instead, we iterate between the representations of the two languages using a K-SVD based implementation to find the optimal solutions for all parameters. With the dictionary pair $D_{L_1}$ and $D_{L_2}$ and the mapping functions $T_{Y_{L_1}\rightarrow Y_{L_2}}$ and $T_{Y_{L_2}\rightarrow Y_{L_1}}$ initialized, we iterate between the solutions of the following two equations:
\begin{center}
	$\langle D_{new}^1,X^1\rangle = \arg\min_{\{D_{new}^1,X^1\}} \parallel Y_{new}^1 -D_{new}^1 X^1 \parallel_2^2$\\[1mm]
	and\\[1mm]
	$\langle D_{new}^2,X^2\rangle = \arg\min_{\{D_{new}^2,X^2\}} \parallel Y_{new}^2 - D_{new}^2 X^2 \parallel_2^2$
\end{center}
s.t. $\forall i, \|x_i^1\|_0 \leq T$ and $\|x_i^2\|_0 \leq T$,
where:
\begin{center}
$Y_{new}^1 = \left(
\begin{array}{c}
	Y_{L_1} \\ \sqrt{\alpha}\hspace{2mm} X_{L_2}
\end{array}
\right)$; \hspace{2mm}
$D_{new}^1 = \left(
\begin{array}{c}
	D_{L_1} \\ \sqrt{\alpha}\hspace{2mm} T_{Y_{L_1}\rightarrow Y_{L_2}}
\end{array}
\right)$; \hspace{2mm}
$Y_{new}^2 = \left(
\begin{array}{c}
	Y_{L_2} \\ \sqrt{\beta}\hspace{2mm} X_{L_1}
\end{array}
\right)$; \hspace{2mm}
$D_{new}^2 = \left(
\begin{array}{c}
	D_{L_2} \\ \sqrt{\beta}\hspace{2mm} T_{Y_{L_2}\rightarrow Y_{L_1}}
\end{array}
\right)$
\end{center}
The matrices $D_{new}^1$ and $D_{new}^2$ are $L_2$-normalized column-wise. The equations presented above are exactly the problem that K-SVD [4] solves. Our algorithm learns a pair of dictionaries alongside mapping functions, using which we can represent documents in both languages and map representations from one language to the other so as to solve cross-lingual information retrieval tasks. We next discuss the application of the proposed algorithm to the tasks of cross-lingual document retrieval and mate retrieval.
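In implementation terms, each subproblem is obtained by stacking the data with the scaled sparse codes of the other language, and the dictionary with the scaled mapping matrix. A minimal numpy sketch (toy random matrices, illustrative only) verifying that the stacked residual reproduces the sum of the reconstruction term and the mapping penalty:

```python
import numpy as np

def stack_coupled(Y1, X2, D1, T12, alpha):
    """Build the augmented data/dictionary of the first subproblem:
    Y_new^1 = [Y1; sqrt(alpha) X2] and D_new^1 = [D1; sqrt(alpha) T12],
    so that ||Y_new^1 - D_new^1 X||^2 couples reconstruction with the
    mapping penalty in a single K-SVD-style problem."""
    s = np.sqrt(alpha)
    return np.vstack([Y1, s * X2]), np.vstack([D1, s * T12])

rng = np.random.default_rng(1)
n, N, K, alpha = 30, 15, 8, 0.5
Y1, D1 = rng.standard_normal((n, N)), rng.standard_normal((n, K))
X1, X2 = rng.standard_normal((K, N)), rng.standard_normal((K, N))
T12 = rng.standard_normal((K, K))

Y_new, D_new = stack_coupled(Y1, X2, D1, T12, alpha)

# The stacked residual equals the sum of the two original objective terms:
r_stacked = np.sum((Y_new - D_new @ X1) ** 2)
r_split = np.sum((Y1 - D1 @ X1) ** 2) + alpha * np.sum((X2 - T12 @ X1) ** 2)
print(np.isclose(r_stacked, r_split))  # True
```

The second subproblem is built symmetrically from $Y_{L_2}$, $X_{L_1}$, $D_{L_2}$, $T_{Y_{L_2}\rightarrow Y_{L_1}}$ and $\beta$.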

\vspace{-2mm}
\section{Cross-Lingual Document Retrieval}
\label{drmr}
\vspace{-2mm}

We obtain $D_{L_i} = [d_1^i, d_2^i, \ldots , d_k^i]$, $i \in \{1,2\}$, $T_{Y_{L_1}\rightarrow Y_{L_2}} = [t_1^1, t_2^1, \ldots , t_k^1]$ and $T_{Y_{L_2}\rightarrow Y_{L_1}} = [t_1^2, t_2^2, \ldots , t_k^2]$ by applying the K-SVD algorithm iteratively to the equations presented above. We cannot use these directly at test time, since they are $L_2$-normalized jointly within $D_{new}^i$ in our algorithm. Hence, we compute the desired dictionaries and mapping transformations as follows:\\

\begin{center}
$	D_{L_1} = \left\lbrace \dfrac{d_1^1}{\parallel d_1^1 \parallel_2} , \dfrac{d_2^1}{\parallel d_2^1 \parallel_2}, ..., \dfrac{d_k^1}{\parallel d_k^1 \parallel_2} \right\rbrace  ;  D_{L_2} = \left\lbrace \dfrac{d_1^2}{\parallel d_1^2 \parallel_2} , \dfrac{d_2^2}{\parallel d_2^2 \parallel_2}, ..., \dfrac{d_k^2}{\parallel d_k^2 \parallel_2} \right\rbrace $

$T_{Y_{L_1}\rightarrow Y_{L_2}} = \left\lbrace \dfrac{t_1^1}{\parallel t_1^1 \parallel_2} , \dfrac{t_2^1}{\parallel t_2^1 \parallel_2}, ..., \dfrac{t_k^1}{\parallel t_k^1 \parallel_2} \right\rbrace  ; T_{Y_{L_2}\rightarrow Y_{L_1}} = \left\lbrace \dfrac{t_1^2}{\parallel t_1^2 \parallel_2} , \dfrac{t_2^2}{\parallel t_2^2 \parallel_2}, ..., \dfrac{t_k^2}{\parallel t_k^2 \parallel_2} \right\rbrace$

\end{center}
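Recovering unit-norm atoms from the jointly normalized blocks is a per-column rescaling; a short numpy sketch (illustrative matrix sizes):

```python
import numpy as np

def renormalize_columns(M):
    """Rescale each column to unit L2 norm, as done above for D_L1, D_L2
    and both mapping matrices after the coupled K-SVD step."""
    return M / (np.linalg.norm(M, axis=0, keepdims=True) + 1e-12)

rng = np.random.default_rng(2)
D = rng.standard_normal((40, 8))       # toy dictionary block
D_hat = renormalize_columns(D)
print(np.linalg.norm(D_hat, axis=0))   # each column norm is ~1.0
```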

For a test document $y_i^j$ in language $l_j$, $j \in \{1,2\}$, we compute its sparse representation $x_i^j$ by solving the optimization problem:
%\begin{center}
	$x_i^j = \arg\min_{x_i^j} \parallel y_i^j - D_{L_j} x_i^j \parallel_2^2 \hspace{2mm} s.t. \hspace{2mm} \|x_i^j\|_0 \leq T $
%\end{center}

Specifically, for the task of cross-lingual document retrieval, given a query document $y_i^1$ in language $l_1$ (say), we find its representation $x_i^1$ and then use the mapping $T_{Y_{L_1}\rightarrow Y_{L_2}}$ to transform this representation into the target language domain, where we compare it against all documents using a cosine similarity score to find the most similar document in the corpus of the other language.
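The retrieval step can be sketched end to end: sparse-code the query, map the code, and rank by cosine similarity. The greedy pursuit below is a simple stand-in for the sparse coder used with K-SVD, and the orthonormal dictionary and identity mapping in the toy check are assumptions made only so the example has a verifiable answer:

```python
import numpy as np

def omp(D, y, T):
    """Greedy Orthogonal Matching Pursuit with at most T nonzero
    coefficients -- a simple stand-in for the K-SVD sparse coder."""
    support, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(T):
        j = int(np.argmax(np.abs(D.T @ r)))        # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef               # update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def retrieve(y_query, D1, T12, X2_corpus, T=3):
    """Sparse-code the query in language l1, map the code with T12,
    and rank the corpus sparse codes by cosine similarity."""
    x_mapped = T12 @ omp(D1, y_query, T)
    sims = (X2_corpus.T @ x_mapped) / (
        np.linalg.norm(X2_corpus, axis=0) * np.linalg.norm(x_mapped) + 1e-12)
    return np.argsort(-sims)                       # most similar first

# Toy check: orthonormal atoms, 3-sparse corpus codes, identity mapping.
rng = np.random.default_rng(3)
n, K, N = 40, 12, 25
D1 = np.eye(n)[:, :K]
X2 = np.zeros((K, N))
for i in range(N):
    X2[rng.choice(K, 3, replace=False), i] = rng.standard_normal(3)
y = D1 @ X2[:, 7]                  # query whose true mate is document 7
best = retrieve(y, D1, np.eye(K), X2)[0]
print(best)
```

With exact sparse recovery and an identity map, the true mate's cosine similarity is 1 and it ranks first.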


\begin{figure*}[t!]
\begin{center}
\vspace{-15mm}
\includegraphics[width=250pt]{plotres1.pdf}
\end{center}
\vspace{-4mm}
\caption{ \footnotesize Comparison of MRR scores. The proposed algorithm is termed CDL.}
\vspace{-2mm} \label{fig-1}
\end{figure*}



%\begin{description}
%\item [Experimental Evaluation] :\\
\textbf{Experimental Evaluation}:\\
In this cross-lingual document retrieval task, given a query document in one language, the goal is to find the most similar document from the corpus in another language. We followed the comparable document retrieval setting described in [1] and evaluated our algorithm on the Wikipedia dataset used in that paper. This dataset consists of Wikipedia documents in two languages, English and Spanish. An article in English is paired with a Spanish article if they are identified as comparable across languages by the Wikipedia community. To conduct a fair comparison, we use the same term vectors and data split as in the previous study. The numbers of document pairs in the training/development/testing sets are 43,380, 8,675 and 8,675, respectively. The dimensionality of the raw term vectors is 20,000. The models are evaluated by using each English document as a query against all documents in Spanish and vice versa; the results from the two directions are averaged. Performance is measured by the Mean Reciprocal Rank (MRR) of the true comparable. Our approach is compared with most of the methods studied in [1], including the best performing ones: CL-LSI, OPCA, CCA, JPLSA and CPLSA. Figure~\ref{fig-1} summarizes our results.
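The MRR metric used here averages the reciprocal of the (1-based) rank at which the true comparable document appears; a minimal sketch:

```python
def mean_reciprocal_rank(ranked_lists, true_ids):
    """MRR: average of 1/rank of the true comparable document,
    where rank is its 1-based position in each ranked result list."""
    total = 0.0
    for ranking, true_id in zip(ranked_lists, true_ids):
        rank = ranking.index(true_id) + 1
        total += 1.0 / rank
    return total / len(true_ids)

# Toy check: true documents ranked 1st, 2nd and 4th -> (1 + 1/2 + 1/4) / 3
mrr = mean_reciprocal_rank([[3, 1], [0, 5, 2], [8, 9, 6, 7]], [3, 5, 7])
print(mrr)
```

In the bidirectional setting above, this score would be computed once per query direction and the two values averaged.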

%\end{description}


%\section{Conclusion \& Possible Extensions}
%\label{conclude}

%\begin{itemize}
%\item We highlight that we achieved what we intended to achieve.
%\item Briefly mention our contributions.
%\item Discuss about the possible extensions : future of DL for text based applications.
%\end{itemize}


%Indicate footnotes with a number\footnote{Sample of the first footnote} in the text. Place the footnotes at the bottom of the page on which they appear. Precede the footnote with a horizontal rule of 2~inches (12~picas).\footnote{Sample of the second footnote}

%\subsection{Figures}

%\begin{figure}[h]
%\begin{center}
%\framebox[4.0in]{$\;$}
%\fbox{\rule[-.5cm]{0cm}{4cm} \rule[-.5cm]{4cm}{0cm}}
%\end{center}
%\caption{Sample figure caption.}
%\end{figure}

%\bibliographystyle{apa}
%\bibliography{colingbiblio}
\vspace{-3mm}
\subsubsection*{References}
\label{ref}
\vspace{-1mm}
\small{
[1] John Platt, Kristina Toutanova, and Wen-tau Yih. 2010.
Translingual document representations from discriminative projections. In EMNLP.

[2] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. 2010. Proximal methods for sparse hierarchical dictionary learning. In International Conference on Machine Learning.

[3] R. Mehrotra, R. Agrawal, S.A. Haider. 2012. Dictionary based Sparse Representation for Domain Adaptation. In Proceedings of 21st ACM International Conference on Information and Knowledge Management.

[4] M. Aharon, M. Elad, and A. M. Bruckstein. 2006. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4311–4322.

[5] Jagadeesh Jagarlamudi and Hal Daumé III. 2010. Extracting multilingual topics from unaligned comparable corpora. In Advances in Information Retrieval, 32nd European Conference on IR Research (ECIR), volume 5993, pages 444–456, Milton Keynes, UK. Springer.

[6] Jagadeesh Jagarlamudi, Hal Daumé III, and Raghavendra Udupa. 2011. From bilingual dictionaries to interlingual document representations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 147–152, Portland, Oregon, USA.

[7] Duo Zhang, Qiaozhu Mei, and ChengXiang Zhai. 2010.
Cross-lingual latent topic extraction. In Proceedings of the 48th Annual Meeting of the Association
for Computational Linguistics, pages 1128–1137, Uppsala, Sweden, July. Association for Computational
Linguistics.

[8] John C. Platt, Kristina Toutanova, and Wen-tau Yih.
2010. Translingual document representations from
discriminative projections. In Proceedings of the
2010 Conference on Empirical Methods in Natural
Language Processing, EMNLP ’10, pages 251–261,
Stroudsburg, PA, USA. Association for Computational
Linguistics.

[9] S. Wang, L. Zhang, Y. Liang, and Q. Pan. 2012. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

}

\end{document}
