

A wide range of works address the semantic similarity between elements such as words, sets of words, or documents using explicit semantic information. The structured and continuously growing information of Wikipedia is an interesting source of semantic knowledge \cite{eps267792}, used both in NLP and in semantic similarity analysis \cite{Milne_2007}. DBpedia \cite{dbpedia} is a resource that builds a general-purpose ontology from Wikipedia categorization and ``infobox'' information, and it is used in several ontology-based works \cite{DBpediaRelationshipFinder,eps267792}. We next review the most relevant related work:
\begin{description}
\item[DBpedia Relationship Finder] \cite{DBpediaRelationshipFinder} is a tool that finds the relations between two DBpedia instances. It identifies relevant relations between terms and defines the concepts of minimum and maximum distance between elements based on path lengths. However, its time-consuming algorithm is not suitable for finding semantic similarities across larger collections of data, and therefore it cannot be used for search.
\item[Tag disambiguation using DBpedia] \cite{eps267792} proposes the use of DBpedia semantic information to disambiguate the words in a set of tags defining an element. The work claims that associating ontology terms to tags helps tasks such as search or tagging recommendation. It relies on word frequency vectors associated with the DBpedia taxonomy. Unlike the previous work, it does not use ontology relationships between terms, which makes it fast and suitable for search but does not fully exploit the advantages of the semantic web.
\item[Explicit Semantic Analysis] \cite{Gabrilovich07computingsemantic} uses TF-IDF schemes with Wikipedia articles as semantic units to characterize text fragments, and computes their semantic similarity by comparing the resulting vectors with the cosine metric. Semantic similarity is thus computed by conventional vector-space methods, instead of using explicit semantic relations as DBpedia Relationship Finder or our proposal do.
\item[WikiRelate!] \cite{Milne_2007} proposes two similarity measures between words using Wikipedia categorization. One of the proposed measures uses the shortest path between the categories of the Wikipedia articles related to the compared words; the other selects, among the common subsumers of a category from each concept, the one with maximum information content \cite{Seco04anintrinsic}. The idea behind information-content-based measures is that infrequent words are more informative than frequent ones \cite{Milne_2007}.

Like our proposal, WikiRelate! uses the semantic information of Wikipedia to compute the similarity between concepts. Its techniques rely on algorithms that require more resources than our proposal, making it less convenient for search, but they obtain normalized results, which allows the similarity of one pair of concepts to be compared with that of a different pair.
 
\end{description}
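As an illustration of the path-based and information-content measures discussed for WikiRelate!, two standard formulations from the literature (in our notation; not necessarily the exact variants evaluated in that work) are:
%
\begin{equation*}
\mathrm{sim}_{\mathrm{path}}(c_1,c_2) = -\log\frac{\mathrm{length}(c_1,c_2)}{2D},
\qquad
\mathrm{IC}(c) = -\log p(c),
\end{equation*}
%
where $\mathrm{length}(c_1,c_2)$ is the shortest path between the categories associated with $c_1$ and $c_2$, $D$ is the depth of the category taxonomy, and $p(c)$ is the probability of encountering concept $c$ in a reference corpus. Both measures require traversing or precomputing statistics over the category graph, which is the source of the extra cost noted above.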
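To make the vector-space comparison used by Explicit Semantic Analysis concrete, the following is a minimal sketch: each text is represented as a weighted vector over Wikipedia concepts, and two texts are compared with the cosine metric. The concept names and weights below are invented for illustration only; a real ESA implementation derives them from TF-IDF scores computed over the full Wikipedia corpus.

```python
import math

def cosine(u, v):
    # u, v: sparse vectors as dicts mapping concept name -> weight
    common = set(u) & set(v)
    dot = sum(u[c] * v[c] for c in common)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical ESA concept vectors for two short texts
text_a = {"Jaguar": 0.8, "Car": 0.5, "Engine": 0.2}
text_b = {"Jaguar": 0.6, "Felidae": 0.7, "Zoo": 0.3}

print(round(cosine(text_a, text_b), 3))
```

Because only the sparse vectors are compared, this step is cheap at query time; the costly part of ESA, as noted above, is the preparation of the concept vectors from Wikipedia.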



Our work tries to merge the rich semantic relations exploited by tools like DBpedia Relationship Finder or WikiRelate! with the search capabilities of the other proposals, which stem from the low complexity of their comparison algorithms.

