\documentclass[ENG,PhD]{lti-tech-report}
%\documentclass[ENG,MSc,DRAFT]{cinvestav}
\usepackage{color}
\usepackage{algorithmic}
\usepackage{latexsym, amssymb}
\usepackage{xcolor}
\usepackage{pstricks}
\usepackage{epsfig}
\usepackage{ucs}
\usepackage{lettrine}
\usepackage[utf8x]{inputenc}
\usepackage{url}
\usepackage{pseudocode}
\usepackage{longtable}
\RequirePackage{hyperref}
\RequirePackage{algorithm}
\usepackage{subfigure}
\usepackage{fancybox}
\usepackage{rotating}
\usepackage{pdflscape} 
\usepackage{multirow}
\usepackage{textcomp}
\newcommand{\V}[1]{\mathbf{#1}}
\renewcommand{\thefigure}{\arabic{figure}\alph{subfigure}}
\renewcommand{\thetable}{\arabic{table}}
\graphicspath{{figs/}}
%\usepackage[english]{babel}\selectlanguage{english}
%\usepackage[spanish]{babel}\selectlanguage{spanish}
\title          {Context experimentation for unsupervised Word Sense Disambiguation on a specific domain}%Agreguen el titulo de su reporte
\shorttitle {Unsupervised approach for WSD}
\author         {Franco Rojas Lopez, Ivan Lopez-Arevalo}
\adscripcion{CINVESTAV UNIDAD TAMAULIPAS. LABORATORIO DE TECNOLOGÍAS DE INFORMACIÓN. Parque Científico y Tecnológico TECNOTAM -- Km. 5.5 carretera Cd. Victoria-Soto La Marina. C.P. 87130 Cd. Victoria, Tamps. }
\city {Cd. Victoria, Tamaulipas, México.}
%\date{December, 2010}
\techreportnumber{08}
\publishedmonth{December} %mes de la publicación
\publishedday{5} %día de la publicación
\publishedyear{2011} %año de la publicación
\keywords{semantic relations, context, semantic graph}
\correspondingauthor{Franco Rojas Lopez $<$frojas@tamps.cinvestav.mx$>$, Ivan Lopez-Arevalo $<$ilopez@tamps.cinvestav.mx$>$}
\grants{\tiny .}
\dateofsubmission{December 5, 2011}
\tobecited{Franco Rojas-Lopez and Ivan Lopez-Arevalo, Unsupervised approach for WSD, Technical Report. Cinvestav - Tamaulipas, Mexico. December, 2011.
}
\placeanddateofpublication{Ciudad Victoria, Tamaulipas, MEXICO.}
\abstract{
This report presents the progress of the last year of thesis work. We investigate the problem of Word Sense Disambiguation. To this end, the context in which an ambiguous word occurs and several semantic similarity measures have been analyzed. The research work focuses on an unsupervised approach to Word Sense Disambiguation on a specific domain that automatically assigns the right sense to a given ambiguous word. The proposed approach relies on the integration of two information sources: context and semantic similarity information. We consider this problem to be quite relevant because an effective approach to this task would be useful for a number of natural language processing applications. For comparison purposes, the experiments were carried out on the English test data of SemEval 2010 and evaluated with a variety of measures that analyze the connectivity of the graph structure. The preliminary results were evaluated using the precision and recall measures and compared with the results of SemEval 2010.
}
\begin{document}
\makeintropages
\section{Introduction}%Su introducción y el desarrollo de todo su tema.
For structural and cognitive reasons, natural language is inherently ambiguous; for example, a single lexical unit can have different meanings, a phenomenon called polysemy\footnote{The association of one word with two or more distinct meanings}. When a word is polysemous, an algorithm is needed to select the most appropriate meaning\footnote{Hereafter the terms meaning, sense and synset are used interchangeably} for the given word in relation to the given context. The problem of assigning concepts to words in texts is known as Word Sense Disambiguation (WSD); formally, it is defined as the task of selecting the correct sense for a given ambiguous word in a given context. A word is ambiguous when its meaning varies depending on the context in which it occurs. Several approaches have been proposed for WSD. In general, the literature distinguishes two main families: supervised and unsupervised. Supervised approaches rely on the availability of sense-labeled data from which the relevant sense distinctions are learned; unfortunately, the manual creation of knowledge resources is an expensive and time-consuming effort \cite{gale92}. Unsupervised approaches, in contrast, disambiguate word senses without the use of sense-tagged corpora. Most of the unsupervised approaches proposed in the WSD literature are knowledge-based, i.e. they exploit only the information provided by a Machine Readable Dictionary (MRD). Some unsupervised WSD systems also use unlabeled data together with the dictionary information to disambiguate words. An effective approach to this task would be useful for a number of Natural Language Processing (NLP) applications, for example Information Retrieval (IR), Content Analysis (CA), and Information Extraction (IE).\\
Our main objective is to obtain an unsupervised, domain-specific WSD algorithm for English able to outperform the state of the art.
This report presents the advances of the thesis work, the algorithms for analyzing graph structures, and the implemented semantic similarity measures. The method relies on WordNet as the source of lexical meanings.

 The WSD task is an open research area that in recent years has been stimulated by the SemEval\footnote{http://semeval2.fbk.eu/semeval2.php?location=} competition, where different systems can evaluate their performance. The purpose of SemEval is to perform a comparative evaluation of WSD systems on several kinds of tasks. In particular, the results obtained in task \#17 (\textit{All-words WSD on a Specific Domain}) are reported in this document. The document is organized as follows: Section \ref{workpreliminary} presents relevant works on WSD. Section \ref{motivation} explains our motivation for an unsupervised algorithm on a specific domain. Section \ref{definicion} gives the problem definition. Section \ref{experiments} presents the experiments carried out, and finally, the conclusions and further work are given in Section \ref{discussion}.
\section{Background}\label{workpreliminary}
The field of WSD is a well-studied research area \cite{surveyWSD, bookeneko}, mainly because WSD is essential to the success of several tasks. \textnormal{Sch\"utze} and Pedersen \cite{pedersen95} demonstrated that WSD can be used to improve IR performance, enhancing precision by about 4.3\%. In the IE area, Malin \textit{et al.} \cite{malin05} proposed the application of a link analysis method based on random walks to resolve the ambiguity of named entities\footnote{Named entities are atomic elements in text such as the names of persons, organizations, locations, expressions of times, quantities, etc.}. 

The list of unsupervised WSD methods is long and comprises:
\begin{itemize}
 \item \textit{Context clustering}: each occurrence of an ambiguous word in a corpus is represented as a context vector. The vectors are then clustered into groups, each identifying a sense of the ambiguous word \cite{schutze92}.
\item \textit{Word clustering}: methods that aim at clustering words which are semantically similar and can thus convey a specific meaning \cite{lin98}.
% in this report a graph-based approach on specific domain is proposed which demonstrate a good performance and seem to be a promising solution for unsupervised WSD.
\item \textit{Graph-based methods} \cite{lapata07, rada, silva, navigli, siva2010}: these rely on the construction of a semantic graph from text, then explore the structure and links of the graph underlying a particular lexical knowledge base. 
\end{itemize}
Some important works on graph-based methods are the following. Navigli and Lapata \cite{lapata07} explore several measures for analyzing the connectivity of the semantic graph structure at local and global level; they conclude that local measures perform better than global ones. Sinha and Mihalcea \cite{rada} propose an unsupervised graph-based method for WSD based on an algorithm that computes the graph centrality of nodes in the constructed semantic graph. Agirre and Soroa \cite{Aguirre09} proposed a graph-based approach whose main contribution was to personalize the PageRank algorithm for undirected graphs. Siva Reddy \textit{et al.} \cite{siva2010} proposed an unsupervised approach on a specific domain; the aim was to use sense distributions collected from specific-domain corpora, obtained with the method described by Lin \cite{lin98}, as a knowledge source to expand the context, and they evaluated it using the personalized PageRank algorithm \cite{Aguirre09}.
 
%Reddy \textit{et. al.} \cite{silva}, and Navigli \cite{navigli}. In these approaches a graph representation for senses (vertices) and relation (edges) is first build from a lexical knowledge base.
 In terms of performance, similar works reported in the literature are based on clustering techniques, as mentioned before. For example, Agirre and L\'opez \cite{aguirre} proposed a method to group fine-grained word senses into coarse-grained ones to reduce polysemy. Pedersen \textit{et al.} \cite{pedersen05} proposed an unsupervised approach that resolves name ambiguity by clustering the instances of a given name into groups, each of which is associated with a distinct underlying entity; given a name, the actual contexts are grouped to represent the meanings of a word.\\ In this work we are interested in obtaining an unsupervised WSD method for a specific domain. To achieve this goal, this study is based on the concept of \textit{second order sense vectors}\footnote{Given an ambiguous word, its senses are retrieved from WordNet; each recovered sense is again Part-of-Speech tagged to recover the additional senses of each word within the first sense}, and we investigate the performance obtained by integrating two information sources: the local context together with terms related to the ambiguous word, for which different semantic similarity measures were employed.
The preliminary experiments show promising results.
%\clearpage
%\appendix
%\newappendix{}
%Agreguen su apéndice
%\section{Con secciones si así lo quieren}
%aquí va el texto
\section{Motivation}\label{motivation}
Although different works and proposals have been published on the WSD task, this remains an open research area, mainly because it is a difficult task to solve, described as an Artificial Intelligence-complete problem according to Mallery \cite{mallery98}: WSD is a problem whose difficulty is equivalent to that of central problems of Artificial Intelligence such as the Turing Test. Many researchers have tackled the problem using different approaches based on several kinds of corpora \cite{lesk86, banerje03, aguirre-06, rada}. According to SemEval 2010, \textit{WSD systems trained on general corpora are known to perform worse when moved to specific domains}, so the motivation of this investigation is to develop an algorithm to solve the problem of specific-domain WSD. The research is based on WordNet $3.0$ as lexical database; therefore, we match word senses against the synsets of WordNet. For this, a graph-based approach is proposed to assign the right sense to an ambiguous word by obtaining and merging context information and semantic similarity information; the main idea is the mutual reinforcement of both techniques.
\section{Problem definition}\label{definicion}
%In this section a problem definition is given
In NLP, WSD is the problem of determining the meaning of each word in a given context. WSD is essentially a classification task: word senses are the classes, the context provides the evidence, and each occurrence of a word is assigned to one or more of its possible classes based on that evidence.\\This section presents the proposed methodology, the graph-based measures, and the dataset used in this study. The methodology initially proposed is briefly described first.
\subsection{Approach}\label{approach}
The methodology initially proposed was an approach based on a directed graph, which recovered second order sense vectors from WordNet and incorporated hypernyms, hyponyms, meronyms, and holonyms given the context of the ambiguous word. The graph structure was evaluated using the PageRank algorithm \cite{page98}. Unfortunately, the results were not what we expected; we can argue that WordNet's semantic relations were noisy because they did not belong to the domain of interest. Hence, in the first quarter of this year a slightly modified methodology was designed, which is described in the next section.\\
\subsection{Proposed methodology}\label{methodologyPrposed} %In this section the proposed methodology is described. 
Unlike the work reported in the state of the art, a graph-based representation is proposed that relies on the combination of two techniques to select the right sense for a given ambiguous word: the context and semantic similarities computed over a specific-domain corpus; both techniques use information from WordNet. Figure \ref{meto} illustrates the proposed methodology; the complete description of the involved procedures is given in the following sections.
\begin{figure}
\centering
\caption{Proposed methodology} \label{meto}
\includegraphics[width=0.7\textwidth]{metodologia.eps} %width=6cm
\end{figure}
\subsection{Data preparation}\label{context}
This section describes the \textit{lexical resource} and \textit{data processing} (see step 1 of the methodology) used to extract the context in which an ambiguous word occurs. The experiments were performed using the \textit{all-words} dataset on a specific domain of SemEval 2010. The input file consists of several instances of ambiguous words; each instance is a context in which a particular ambiguous word appears. Two techniques to extract the context of an ambiguous word have been investigated: \textit{bag-of-words} and \textit{grammatical relations}.
\begin{enumerate}
 \item Bag-of-words: in this technique the input file is first tagged\footnote{The assignment of a Part-of-Speech to each word in the document} (step 1 of the methodology); for this task, the Stanford tagger\footnote{http://nlp.stanford.edu/software/tagger.shtml} is used to Part-of-Speech tag the test data. In step 2, the context window size is defined; different window sizes were tested in the experiments to determine how many words before and after an ambiguous word \textit{w} must be included in the context. The best-performing window size was $2\beta + 1$, with $\beta = 1$.
\item In the second technique, instead of representing contexts as a \textit{bag-of-words}, we experimented with the features returned by a dependency parser. For this, the Stanford parser\footnote{http://nlp.stanford.edu/software/lex-parser.shtml} is used, which provides a simple description of the grammatical relations in a sentence. For example, given the sentence \textit{``what are the five lessons learnt from the evaluation and what do these mean for the nature policy?''}, the Stanford dependencies are represented as a directed graph (see Figure \ref{SD}) in which the words of the sentence are nodes and the grammatical relations are edge labels. For example, in Figure \ref{SD}, given the word ``policy'', its neighbors (mean, nature) were recovered to form the context.
\end{enumerate}
Using these two techniques, the context of each ambiguous word was extracted and used in the experiments described in Section \ref{experiments}.
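The $2\beta + 1$ bag-of-words window described above can be sketched as follows (a minimal illustration; the function name and example tokens are ours, not part of the SemEval tooling):

```python
def context_window(tokens, target_index, beta=1):
    """Return the 2*beta + 1 token window centred on the ambiguous word.

    beta = 1 was the best-performing value in the experiments
    (one word before and one word after the target).
    """
    start = max(0, target_index - beta)          # clip at sentence start
    end = min(len(tokens), target_index + beta + 1)  # clip at sentence end
    return tokens[start:end]

tokens = "the five lessons learnt from the evaluation".split()
print(context_window(tokens, tokens.index("lessons"), beta=1))
# → ['five', 'lessons', 'learnt']
```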
\begin{figure}
\centering
\caption{Graphical representation of the Stanford dependencies} \label{SD}
\includegraphics[width=0.5\textwidth]{parser.eps} %width=6cm
\end{figure}
\subsection{Querying the Web}
This section describes two techniques to automatically extract key terms from a domain corpus (the environment domain); these terms are used to build web queries that retrieve documents from the Internet, which in turn are used to extract semantically related terms for each ambiguous word. An important aspect of specific-domain WSD is to determine the semantic space in which an ambiguous word occurs. In this case, an untagged corpus (114 documents) from the \textit{environment} domain provided by SemEval $2010$ was used to extract keywords of the domain. Term Frequency--Inverse Document Frequency (TF-IDF) and raw frequency of occurrence were used, after stopword removal and lemmatization. Thus, in this report, TF-IDF and a frequency list were explored for domain detection; both are explained in the following subsections. 
\subsubsection{TF-IDF}\label{tfidf}
TF-IDF is a statistical weighting measure often used in natural language processing to quantify how important a word is to a document in a corpus, using a vectorial representation. The importance increases proportionally to the number of times a word appears in the document, but is offset by the frequency of the word in the corpus. In this model, each document
is represented as a vector whose entries are weights of the vocabulary obtained from the corpus. Specifically, given a text collection $\{D_1, D_2, \cdots, D_M\}$ with vocabulary $V=\{w_1, w_2, \cdots, w_n\}$, the vector $\overrightarrow{D_i}$ of dimension $n$, corresponding to the document $D_i$, has entries $d_{ij}$ representing the weight of the term $w_j$ in $D_i$ as:
\begin{equation}\label{peso}
  d_{ij}=tf_{ij}\cdot idf_j, 
\end{equation}
where $tf_{ij}$ is the frequency of term $w_j$ in document $D_i$, $idf_j=\log_2(\frac{2M}{df_j})$, and $df_j$ is the number of documents in which $w_j$ appears.
With this model, relevant terms of the environment domain were retrieved; the top ten keywords in the corpus, in descending order, are: \textit{bluefin, bioscore, orangutan, subregion, amazon, roundwood, fleet, sawnwood, peen, and gibbon}.
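Equation \ref{peso} can be sketched directly over tokenized documents (an illustrative implementation of $d_{ij}=tf_{ij}\cdot\log_2(2M/df_j)$; function name and toy documents are ours):

```python
import math
from collections import Counter

def tfidf_weights(documents):
    """Compute d_ij = tf_ij * idf_j with idf_j = log2(2M / df_j).

    `documents` is a list of token lists; returns one {term: weight}
    dict per document.
    """
    M = len(documents)
    df = Counter()                      # document frequency of each term
    for doc in documents:
        df.update(set(doc))
    weights = []
    for doc in documents:
        tf = Counter(doc)               # term frequency inside this document
        weights.append({w: tf[w] * math.log2(2 * M / df[w]) for w in tf})
    return weights

docs = [["species", "habitat", "species"], ["habitat", "forest"]]
w = tfidf_weights(docs)
# "species": tf = 2, df = 1, M = 2  →  2 * log2(4/1) = 4.0
```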
\subsubsection{Frequency list}\label{frequencyList}
Another simple and fast way to retrieve keywords is through a frequency list, i.e. a word-frequency list computed from the corpus. The top ten keywords in descending order are: \textit{species, biodiversity, conservation, areas, management, european, project, nature, water, and forest}. 
\subsubsection{Web query}
The retrieved keywords (Subsections \ref{tfidf} and \ref{frequencyList}) were used to retrieve documents from the Web to enlarge the set of background documents. The first $20$ words of both techniques (TF-IDF, frequency list), in descending order according to their frequency, were selected and combined in pairs to create web queries of length two, following Iosif and Potamianos \cite{Elias}; for example, ``\textit{specie and biodiversity}''. The web queries were sent to several search engines (Google, Yahoo, Bing, HotBot, and MetaCrawler), according to the study of Aguilar \cite{aguilar}, to retrieve documents. The enlarged corpus (603 documents) is also Part-of-Speech tagged and stemmed using the Stanford tagger. After this pre-processing phase, the semantically similar terms for each ambiguous word are retrieved using Mutual Information (MI) \cite{Church,diana} (see Equation \ref{im}). The context window size was defined as $2\beta + 1$, with $\beta=5$, according to Islam and Inkpen \cite{diana}. 
Let $w$ be the ambiguous word and $t_i$ a term; $MI(w, t_i)$ measures the strength of association between $t_i$ and $w$, so a list of words sorted in descending order by their MI is retrieved.
%MI compares the probability of observing $X$ and $Y$ together ($f(X,Y)$) with the probabilities of observing $X$ and $Y$ independently ($f(X), f(Y)$).
\begin{equation}\label{im}
MI(w,t_i)=\log_2\frac{f(w,t_i)}{f(w)\, f(t_i)} 
\end{equation}
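The MI ranking of Equation \ref{im} can be sketched as follows (an illustrative frequency-based implementation; co-occurrence is counted inside the $2\beta+1$ window, and the function name and toy corpus are ours):

```python
import math
from collections import Counter

def mi_ranking(sentences, w, beta=5):
    """Rank terms t_i by MI(w, t_i) = log2(f(w, t_i) / (f(w) * f(t_i)))."""
    f = Counter()        # unigram frequencies
    fpair = Counter()    # co-occurrence frequencies with w
    for tokens in sentences:
        f.update(tokens)
        for i, tok in enumerate(tokens):
            if tok == w:
                # terms inside the 2*beta + 1 window around w
                for t in tokens[max(0, i - beta):i + beta + 1]:
                    if t != w:
                        fpair[t] += 1
    scores = {t: math.log2(fpair[t] / (f[w] * f[t])) for t in fpair}
    return sorted(scores, key=scores.get, reverse=True)

sents = [["water", "species"], ["water", "species"],
         ["forest", "species"], ["forest", "lake"]]
print(mi_ranking(sents, "species"))
# → ['water', 'forest']
```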
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Distributional similarity}
In the previous section a \textit{bag-of-words} model was implemented; it assumes that word order has no significance (the term \textit{``climate change''} has the same probability as \textit{``change climate''}). However, some studies \cite{phd} have shown that a semantic representation has greater potential for several applications; in particular, word-context matrices are best suited for measuring the semantic similarity of word pairs and patterns \cite{turney}. In this section several semantic measures are tested.
\subsubsection{Context}
Context plays an important role in many NLP applications; in particular, WSD depends on the quality of the contextual information that taggers or parsers can extract from training data. In this work, the Stanford parser was first used to extract 1,378,972 contextual relations. Formally, a context relation, or context, is a tuple $\textless w, r, w'\textgreater$ where $w$ is a headword occurring in some relation of type $r$ with another word $w'$ in one or more sentences (for example, $\textless$do, dobj, mean$\textgreater$ indicates that \textit{mean} is the direct object of \textit{do}). Each occurrence extracted from raw text is an instance of a context; in this case the tuple ($r, w'$) is an attribute of $w$. In the reported results, all the grammatical relations defined by de Marneffe \textit{et al.} \cite{SD} were used. 
\subsubsection{Weights}
Once the contextual relations of each headword have been extracted from the raw text (corpus), a word-context matrix is used as representation (see Table \ref{wordContextMatrix}), where ($w_1, w_2, \cdots, w_n$) is the vocabulary of the corpus and ($c_1, c_2, \cdots, c_c$) are the features or attributes of the words; each pair ($c_c$, $w_n$) has an associated frequency. The frequency is used by weight functions to assign higher values to contexts that are more indicative of the meaning of a word. Equation \ref{tf-idf} was tested in the experiments; it uses the notation proposed by Curran \cite{phd}, where $f(w,r,w')$ is the total number of instances of the context that $w$ appears in and $n(*, r, w')$ is the number of attributes that $r, w'$ appears with. Another weight function implemented was Pointwise Mutual Information \cite{lin04} (see Equation \ref{pmiPeso}), which measures the strength of association between an attribute $f_i$ and a word $w$. Thus, by means of these equations it is possible to assign an importance to each pair ($c_c, w_n$) based on its frequency. 
\begin{table} % assign a measure of the informativeness of the attribute and its frequency
  \begin{center} % the weight function gives the most weight to the data
   \caption{Frequency matrix}
   \begin{tabular}{l|llll}      
    context & $w_1$ & $w_2$ & $\cdots$ & $w_n$\\ 
    \hline   
   $c_1$ & $f_{11}$ & $f_{12}$ & $\cdots$ & $f_{1n}$\\
   $c_2$ & $f_{21}$ & $f_{22}$ & $\cdots$ & $f_{2n}$\\
   $\vdots$ & $\vdots$ & $\vdots$ & $\ddots$ & $\vdots$ \\
   $c_c$ & $f_{c1}$ & $f_{c2}$ & $\cdots$ & $f_{cn}$\\
   \end{tabular}\label{wordContextMatrix}
  \end{center}
 \end{table}
\begin{equation}\label{tf-idf}
  \mathit{TF\mbox{-}IDF} = \frac{f(w,r,w')} {n(*,r,w')} 
\end{equation}
\begin{equation}\label{pmiPeso}
 pmi(f_i, w) = \log\frac{P(f_i, w)}{P(f_i)\, P(w)}
\end{equation}
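The PMI weighting of Equation \ref{pmiPeso} can be sketched over the extracted $\textless w, r, w'\textgreater$ tuples (an illustrative sketch estimating the probabilities by relative frequency; function name and toy tuples are ours):

```python
import math
from collections import Counter

def pmi_weights(instances):
    """Weight each (word, attribute) cell with pointwise mutual information,
    pmi(f_i, w) = log(P(f_i, w) / (P(f_i) * P(w))).

    `instances` is a list of (w, r, w') context tuples; the attribute of w
    is the pair (r, w'). Probabilities are relative frequencies over N tuples.
    """
    N = len(instances)
    word_count, attr_count, pair_count = Counter(), Counter(), Counter()
    for w, r, wp in instances:
        word_count[w] += 1
        attr_count[(r, wp)] += 1
        pair_count[(w, (r, wp))] += 1
    return {
        (w, a): math.log((c / N) / ((word_count[w] / N) * (attr_count[a] / N)))
        for (w, a), c in pair_count.items()
    }

tuples = [("nature", "conj_and", "house"),
          ("nature", "prep_with", "farming"),
          ("area", "prep_in", "house")]
weights = pmi_weights(tuples)
```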
\subsubsection{Semantic similarity measures}
\begin{table} % assign a measure of the informativeness of the attribute and its frequency
  \begin{center} % the weight function gives the most weight to the data
   \caption{Weight matrix}
   \begin{tabular}{l|llll}      
    context & $nature$ & $conservation$ & $area$ & $\cdots$ \\ 
    \hline   
   $prep\_with-farming$ & $1.9030$ & $0$     & $0$ & $\cdots$\\
   $prep\_for-farming$  & $0$      & $3.144$ & $0$ & $\cdots$\\
   $conj\_and-house$    & $2.070$  & $0$     & $0$ & $\cdots$\\
   $prep\_in-house$     & $0$      & $0$     & $3.149$ & $\cdots$\\
   $\cdots$ \\ %& $\cdots$  &  &  & $\vdots$ \\      
   \end{tabular}\label{weightContex}
  \end{center}
 \end{table}
Once the weight matrix is defined (see Table \ref{weightContex}) using Equations \ref{tf-idf} and \ref{pmiPeso}, the context of a word is represented as a feature vector, and the similarity between two words ($w_1$, $w_2$) can be computed from their context vectors. Equation \ref{cosine} computes the cosine between their feature vectors using the weights of Equation \ref{tf-idf}; here a subscripted asterisk indicates that the variables are bound together, as defined by Curran \cite{phd}. The similarity between two words using the cosine of Pointwise Mutual Information (PMI) weights is defined by Equation \ref{cosPMI}.
\begin{equation}\label{cosine}
  Cosine(w_1, w_2) = \frac{\sum wgt(w_1, *_r, *_{w'}) \, wgt(w_2, *_r, *_{w'})} {\sqrt{\sum wgt(w_1,*,*)^2  \sum wgt(w_2,*,*)^2}} 
\end{equation}
\begin{equation}\label{cosPMI}
 Sim_{cosPMI}(w_1, w_2) = \frac{\sum_{i=1}^{n}pmi(f_i, w_1) pmi(f_i, w_2)}{\sqrt{\sum_{i=1}^{n}pmi(f_i, w_1)^2}\sqrt{\sum_{i=1}^{n} pmi(f_i, w_2)^2}}
\end{equation}
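Both cosine formulas reduce to the same computation over sparse feature vectors, which can be sketched as follows (an illustrative implementation; vectors are dicts mapping attributes to weights, and the function name is ours):

```python
import math

def cos_sim(vec1, vec2):
    """Cosine of two weighted feature vectors (Equations 5 and 6).

    `vec1` and `vec2` are sparse dicts {attribute: weight}; only shared
    attributes contribute to the numerator.
    """
    shared = set(vec1) & set(vec2)
    num = sum(vec1[f] * vec2[f] for f in shared)
    den = (math.sqrt(sum(v * v for v in vec1.values())) *
           math.sqrt(sum(v * v for v in vec2.values())))
    return num / den if den else 0.0
```

Feeding it the PMI weights of the previous subsection yields $Sim_{cosPMI}$; feeding it the TF-IDF-style weights yields Equation \ref{cosine}.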
\subsection{Graph construction}
Hereafter the term ``context'' is used to denote both related words and the actual occurrence context. In the literature, several semantic similarity measures have been implemented to quantify the degree of similarity between two words using information drawn from the WordNet hierarchy (see Pedersen \textit{et al.} \cite{pedersen}). In particular, the Lin and Vector measures were taken into account in this research because they perform well on the WordNet hierarchy. Once the contexts are recovered, the senses of each word in the context are retrieved from WordNet and weighted by a semantic similarity score, using the WordNet::Similarity\footnote{\textit{This is a Perl module that implements a variety of semantic similarity and relatedness measures based on information from WordNet.}} score between the senses of word \textit{w} and the senses of each word in the context. These measures return a real value indicating the degree of semantic similarity between a pair of concepts.\\
Formally, let $C_w=\{c_1, c_2, \cdots, c_n\}$ be the set of words in the context related to an ambiguous word \textit{w}. Let \textit{senses(w)} be the set of senses of \textit{w} and let \textit{senses($c_n$)} be the set of senses of a word in the context; a ranked list is returned in descending order of semantic similarity between \textit{w} and $c_n$, and the items that maximize this score are filtered according to the statistical mean. Another experiment used a threshold $\theta=0.4$; the senses that exceed this value were retrieved. These items constitute the so-called \textit{first order vectors}. For each ambiguous word, two graphs are built (see Figure \ref{meto}). In this representation, $G=(V, E, W)$, where $V$ is the set of vertices (concepts), $E$ the set of edges (semantic relations), and $W$ the edge weights (the strength of the link between two concepts). Each recovered sense is again Part-of-Speech tagged to recover the additional senses of each word within the first sense; these semantic relations between senses constitute the connections in the graph. Once the semantic graph is built, its structure and links are analyzed by applying the algorithms described in Section \ref{measures}.
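The thresholded graph construction can be sketched as follows (a minimal sketch: `similarity` stands for any WordNet-based score such as Lin, the sense labels are invented, and the graph is a plain adjacency dict):

```python
def build_sense_graph(sense_pairs, similarity, theta=0.4):
    """Build an undirected weighted graph G = (V, E, W) over senses.

    `sense_pairs` enumerates candidate (sense_i, sense_j) pairs and
    `similarity` is any semantic similarity score; edges whose score does
    not exceed `theta` are discarded, as in the thresholded experiment.
    """
    graph = {}
    for s1, s2 in sense_pairs:
        w = similarity(s1, s2)
        if w > theta:                       # keep only strong links
            graph.setdefault(s1, {})[s2] = w
            graph.setdefault(s2, {})[s1] = w
    return graph
```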
\subsection{Graph-based measures}\label{measures}
Vertex-based centrality measures the importance of a vertex in the graph; a vertex with a high centrality score is usually considered more influential than the other vertices. In the experiments, four algorithms were implemented to determine which node is the most important by examining the graph structure: in-degree, Key Player Problem, Jaccard, and Personalized PageRank, which are described below.\\
\textbf{Indegree} \cite{lapata07}: the simplest and most popular measure is degree centrality. In an undirected graph, the degree of a vertex is the number of links attached to it; it is a simple but effective measure of nodal importance. A node is important in a graph if many links converge to it. In the implementation, $V$ is the set of vertices of the graph and $v$ a vertex, see Equation \ref{degree}.
\begin{equation}\label{degree}
score(v)=\frac{indegree(v)}{\mid V \mid-1} 
\end{equation}
\textbf{Key Player Problem} \cite{lapata07}: consists in finding a set of nodes that is maximally connected to all other nodes. Here, a vertex (denoted by $v$ and $u$, with $V$ the set of vertices) is considered important if it is relatively close to all other vertices, see Equation \ref{kpp}.
\begin{equation}\label{kpp}
kpp(v)=\frac{ \displaystyle\sum_{u \in V : u \neq v}\frac{1}{d(u,v)}}{\mid V \mid-1} 
\end{equation}
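Equations \ref{degree} and \ref{kpp} can be sketched over an adjacency dict (an illustrative sketch; `dist` is assumed to be a precomputed shortest-path function, e.g. obtained by BFS, and the toy graph is ours):

```python
def indegree_score(graph, v):
    """Normalized degree centrality (Equation 8): degree(v) / (|V| - 1)."""
    return len(graph[v]) / (len(graph) - 1)

def kpp_score(graph, v, dist):
    """Key Player Problem score (Equation 9): mean inverse distance from v
    to every other vertex, where dist(u, v) gives d(u, v)."""
    return sum(1.0 / dist(u, v) for u in graph if u != v) / (len(graph) - 1)

# Toy star graph: "a" is linked to "b" and "c".
graph = {"a": {"b", "c"}, "b": {"a"}, "c": {"a"}}
dist = lambda u, v: 1 if v in graph[u] else 2   # hypothetical shortest paths
```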
\textbf{Jaccard coefficient}: computes the probability that two vertices $i$ and $j$ have a common neighbor $k$. According to Granovetter \cite{jaccard}, the link strength between two vertices depends on the overlap of their neighborhoods: if the overlap between the neighborhoods of vertices $i$ and $j$ is large, $i$ and $j$ are considered to have a strong tie; otherwise, they are considered to have a weak link, see Equation \ref{jaccard}. 
\begin{equation}\label{jaccard}
Jaccard(i, j)=\frac{\mid N_i \cap N_j\mid}{\mid N_i \cup N_j\mid} 
\end{equation}
where $N_i$ and $N_j$ denote the neighborhoods of vertices $i$ and $j$, respectively.\\\\
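Equation \ref{jaccard} is a direct set operation on the two neighborhoods (an illustrative one-liner; the toy graph is ours):

```python
def jaccard(graph, i, j):
    """Jaccard coefficient (Equation 10): |N_i ∩ N_j| / |N_i ∪ N_j|."""
    ni, nj = set(graph[i]), set(graph[j])
    return len(ni & nj) / len(ni | nj)

graph = {"a": {"b", "c"}, "d": {"b", "c", "e"}}
# "a" and "d" share 2 of their 3 distinct neighbors → 2/3
```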
\textbf{PageRank}: a link analysis algorithm traditionally applied to directed graphs; it can also be applied to undirected graphs, in which case the out-degree of a vertex equals its in-degree. For this, an adaptation of the PageRank algorithm has been proposed, the Personalized PageRank (PPRank) algorithm \cite{Aguirre09}. After running the algorithm, a score is associated with each vertex, as shown in Equation \ref{pp}.
\begin{equation}
PR(v_i)=(1-\alpha) + \alpha \displaystyle\sum_{v_j \in In(v_i)} \frac{w_{ji}}{\sum_{v_k \in Out(v_j)}w_{jk}}\, PR(v_j) \label{pp}
\end{equation}
According to the literature, $\alpha$ is a damping factor usually set to 0.85, which is the value used in the evaluation of the implemented WSD prototype.\\\\
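The weighted iteration of Equation \ref{pp} can be sketched as follows (an illustrative power-iteration sketch on an undirected weighted adjacency dict, where each edge's out-weight equals its in-weight; the function name and iteration count are ours):

```python
def pagerank(graph, alpha=0.85, iters=50):
    """Weighted PageRank (Equation 11) on a graph stored as {v: {u: w_uv}}.

    Each vertex receives (1 - alpha) plus alpha times the weighted share
    of its neighbors' scores, w_ji / sum_k(w_jk).
    """
    pr = {v: 1.0 for v in graph}
    for _ in range(iters):
        new = {}
        for v in graph:
            s = sum(pr[u] * graph[u][v] / sum(graph[u].values())
                    for u in graph[v])
            new[v] = (1 - alpha) + alpha * s
        pr = new
    return pr
```

On a star graph the hub accumulates the highest score, matching the intuition that the most connected sense wins.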
Finally, the context and semantic similarity scores are combined (see step $4$ in Figure \ref{meto}) using Equation \ref{combining} to get a list ranked in descending order of relevance; the node with the highest value is selected as the right sense for the ambiguous word in question. Several experiments were carried out with different values of $\delta$; the best result was obtained with $\delta = 0.6$, thus giving more importance to semantic similarity because, surprisingly, the best results were obtained using the background documents.
\begin{equation}
  Score(v_i) = \frac{(1-\delta) Result(context) + \delta Result(corpus)} {2} \label{combining}
\end{equation}
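Equation \ref{combining} itself is a one-line combination (an illustrative sketch; the function name is ours, and $\delta = 0.6$ is the best-performing value reported above):

```python
def combined_score(context_score, corpus_score, delta=0.6):
    """Combine context and corpus evidence (Equation 12).

    delta = 0.6 favours the semantic-similarity (corpus) component.
    """
    return ((1 - delta) * context_score + delta * corpus_score) / 2
```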
%Two words are distributionally similar if one could be substituted for the other in a sentence with a hight probability of preserving the plausibility of seeing the sentence in real text \cite{brian}.
\section{Experiments and Results} \label{experiments}
The purpose of this evaluation is to show whether the combination of contextual semantic relationships and the semantic similarity of a domain contributes to unsupervised WSD; usually only the context, or an expanded context, has been used for WSD. Therefore, in this approach the context and semantic similarity information were integrated and then used to assign the right sense to an ambiguous word. To evaluate the performance of the WSD approach and to compare it with other algorithms, the experiments were carried out on the English test data of SemEval 2010 \cite{semeval10}. Precision (percentage of words tagged correctly, out of the words addressed by the system) and Recall (percentage of words tagged correctly, out of all words in the test set) were used as evaluation measures. The dataset contains 1398 ambiguous words: 366 verbs and 1032 nouns. The WSD approach used WordNet 3.0 as lexical database. The top $N$ terms related to an ambiguous word were selected from the ranked list, with $N=3$. Table \ref{tableBagofwords} shows the results obtained with each algorithm using the \textit{bag-of-words} technique, Table \ref{dstable} shows the results of testing \textit{distributional similarity} with each algorithm, and Table \ref{resulSemEval} shows the results obtained in the last WSD competition. With the first technique (\textit{bag-of-words}), the results show that the performance of the proposed approach is low: comparable to Yoan Gutierrez's system and far from Anup Kulkarni's system when evaluated using the PPRank algorithm. Unlike the other ranking algorithms, PPRank takes edge weights into account when computing the score associated with each vertex; the other algorithms only use content or link information, which could explain their worse performance. The results using distributional similarity are shown in Table \ref{dstable}; these preliminary results are still being analyzed, and although they look good it is too early to draw a final conclusion.
The results obtained by both approaches were worse than those reported in the literature, but the preliminary results of these algorithms are promising; enriching the background corpus and retrieving the semantically most similar words for an ambiguous word could help improve the disambiguation process.
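The two evaluation measures defined above can be made concrete with a small sketch; the dictionary-based representation of system answers and gold annotations is our own illustrative assumption, not the SemEval scorer itself.

```python
def precision_recall(answers, gold):
    """Precision and recall for a WSD run.

    answers: dict instance -> predicted sense (the system may skip instances)
    gold:    dict instance -> correct sense, covering the full test set
    """
    correct = sum(1 for inst, sense in answers.items() if gold.get(inst) == sense)
    precision = correct / len(answers) if answers else 0.0  # over attempted words
    recall = correct / len(gold) if gold else 0.0           # over all test words
    return precision, recall
```

When a system attempts every instance, precision and recall coincide, which is why some rows in Table \ref{resulSemEval} report identical values for both.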
\begin{table}
\centering
\caption{Performance of the connectivity measures of the proposed approach on the SemEval 2010 all-words dataset}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Algorithm & Precision (\%) & Recall (\%) & Nouns (\%) & Verbs (\%)\\ 
\noalign{\smallskip}
\hline
KPP & $33.94$ & $33.11$ & $33.52$ & $36.33$\\
Indegree & $33.87$ & $33.04$ & $31.68$ & $36.88$\\
Jaccard & $34.38$ & $33.54$ & $32.94$ & $35.24$\\
PPRank &$\textbf{35.11}$& $\textbf{34.26}$ & $\textbf{31.78}$ & $\textbf{36.88}$\\
\hline
\end{tabular}\label{tableBagofwords}
\end{table}
\begin{table}
\centering
\caption{Performance of distributional similarity, $\theta=0.4$}
\begin{tabular}{c|ccccc}
\hline
  & Algorithm & Precision (\%) & Recall (\%) & Nouns (\%) & Verbs (\%)\\ \hline  
\multirow{4}{*}{$Sim_{cosPMI}$} & \multirow{1}{*} {KPP} & 29.76 & 29.04 &  28.87 & 29.5 \\ %\cline{2-3}
& \textbf{Indegree} & \textbf{38.26} & \textbf{37.33} & \textbf{39.05} & \textbf{32.51} \\ % \cline{2-3} 
& Jaccard & 6.08 & 5.93 & 5.91 & 6.01 \\ % \cline{2-3}
& \multirow{1}{*}{PPRank} & 16.34 & 15.95 & 13.56 & 22.66  \\ 
\hline
\multirow{4}{*}{$Cosine$} & \multirow{1}{*} {KPP} & 32.18 & 31.4 &  28.68 & 32.36 \\ %\cline{2-3}
& \textbf{Indegree} & \textbf{38.92} & \textbf{37.98} & \textbf{29.89} & \textbf{30.87} \\ % \cline{2-3} 
& Jaccard & 9.38 & 9.15 & 10.46 & 5.46 \\ % \cline{2-3}
& \multirow{1}{*}{PPRank} & 18.69 & 18.24 & 16.95 & 21.85  \\ \hline
\end{tabular}\label{dstable}
\end{table}
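The two distributional similarity measures compared in Table \ref{dstable} can be sketched as follows. The sparse-vector representation and the positive-PMI weighting shown here are illustrative assumptions; the exact vector construction of the prototype is not reproduced.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts word -> weight)."""
    dot = sum(w * v[k] for k, w in u.items() if k in v)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def pmi_vector(counts, total, word_total, ctx_totals):
    """Turn raw co-occurrence counts of one word into a positive-PMI vector."""
    vec = {}
    for ctx, n in counts.items():
        pmi = math.log((n * total) / (word_total * ctx_totals[ctx]))
        if pmi > 0:          # keep only positively associated contexts
            vec[ctx] = pmi
    return vec
```

Plain $Cosine$ compares raw co-occurrence vectors directly, whereas $Sim_{cosPMI}$ first reweights each vector with pointwise mutual information before taking the cosine.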
\begin{table}
\centering
\caption{Overall results for the domain WSD of SemEval 2010}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Algorithm & Precision (\%) & Recall (\%) & Nouns (\%) & Verbs (\%)\\ 
\noalign{\smallskip}
\hline
Anup Kulkarni & $51.2$ & $49.5$ & $51.6$ & $43.4$\\
Andrew Tran & $50.6$ & $49.3$ & $51.6$ & $42.6$\\
Andrew Tran & $50.4$ & $49.1$ & $51.5$ & $42.5$\\
Aitor Soroa & $48.1$ & $48.1$ & $48.7$ & $46.2$\\
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$\\
Aitor Soroa & $38.4$ & $38.4$ & $38.2$ & $39.1$ \\
Davide Buscaldi & $38.1$ & $35.6$ & $35.7$ & $35.2$ \\
Radu Ion & $35.1$ & $35.0$ & $34.4$ & $36.8$ \\
Yoan Gutierrez & $31.2$ & $30.3$ & $30.4$ & $30.1$\\
\textit{Random baseline} & $23.2$ & $23.2$ & $25.3$ & $17.2$\\
\hline
\end{tabular}\label{resulSemEval}
\end{table}
\section{Conclusions and Future Work}\label{discussion}
This report describes an approach aimed at tackling the WSD problem on specific domains. The adaptation and integration of the tested techniques have been implemented in a first prototype. With this prototype, a semantic graph corresponding to a specific ambiguous word is obtained using \textit{second order vectors} of senses retrieved from WordNet. Thus, two semantic graphs were obtained and evaluated, given the context and the terms related to an ambiguous word. The approach has been tested on a standard benchmark dataset released by SemEval 2010 for the domain-specific all-words WSD task, and its results were presented.\\
As future work we mainly plan to enrich the auxiliary documents following Navigli \cite{surveyWSD}. A problem in the construction of context vectors is that a large amount of (unlabeled) training data is required to determine a significant distribution of word co-occurrences. Therefore, we plan to further explore the adaptation of several mechanisms:
\begin{enumerate}
 %\item At this moment the semantic similarity measure (Equation \ref{cosine}) is been evaluated
 \item Investigate other techniques for extracting keywords from the corpus to enrich the background documents from the Web
 \item Integrate another measure for semantic similarity
 \item Perform a large-scale semantic evaluation, increasing the number of semantic terms
 \item Test on other datasets, for example in the medicine or tourism domains
\end{enumerate}
%\bibliography{MY}
\input{referenc}
\end{document}

