
%%%%%%%%%%%%%%%%%%%%%%% file typeinst.tex %%%%%%%%%%%%%%%%%%%%%%%%%
%
% This is the LaTeX source for the instructions to authors using
% the LaTeX document class 'llncs.cls' for contributions to
% the Lecture Notes in Computer Sciences series.
% http://www.springer.com/lncs       Springer Heidelberg 2006/05/04
%
% It may be used as a template for your own input - copy it
% to a new file with a new name and use it as the basis
% for your article.
%
% NB: the document class 'llncs' has its own and detailed documentation, see
% ftp://ftp.springer.de/data/pubftp/pub/tex/latex/llncs/latex2e/llncsdoc.pdf
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


\documentclass[runningheads,a4paper]{llncs}
\setcounter{tocdepth}{3}
\usepackage{graphicx}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{amsmath}
\usepackage{url}
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}}

\urldef{\mailsa}\path|{wenlei.zhouwl, whfcarter, jiansong.chao, |
\urldef{\mailsc}\path| wnzhang, yyu}@apex.sjtu.edu.cn|
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}
\setcounter{secnumdepth}{3}

\begin{document}

\mainmatter  % start of an individual contribution

% first the title is needed
\title{LODDO: Using Linked Open Data Description Overlap to Measure Semantic Relatedness Between Named Entities}

% a short form should be given in case it is too long for the running head
\titlerunning{LODDO: Measuring Semantic Relatedness Between Named Entities}

% the name(s) of the author(s) follow(s) next
%
% NB: Chinese authors should write their first names(s) in front of
% their surnames. This ensures that the names appear correctly in
% the running heads and the author index.
%
\author{Wenlei Zhou
\and Haofen Wang\and Jiansong Chao\and \\
 Weinan Zhang\and Yong Yu}
%
\authorrunning{W. Zhou et al.}
% (feature abused for this document to repeat the title also on left hand pages)

% the affiliations are given next; don't give your e-mail address
% unless you accept that it will be published
\institute{APEX Data \& Knowledge Management Lab\\
Department of Computer Science and Engineering\\
Shanghai Jiao Tong University, Shanghai, China\\
\mailsa\\
\mailsc\\}

%
% NB: a more complex sample for affiliations and the mapping to the
% corresponding authors can be found in the file "llncs.dem"
% (search for the string "\mainmatter" where a contribution starts).
% "llncs.dem" accompanies the document class "llncs.cls".
%

\maketitle


\begin{abstract}
Measuring semantic relatedness plays an important role in information retrieval and natural language processing. However, little attention has been paid to measuring semantic relatedness between named entities, even though it is equally significant. As the existing knowledge based approaches suffer from an entity coverage issue and the statistical based approaches produce unreliable results for low-frequency entities, we propose a more comprehensive approach that leverages Linked Open Data (LOD) to solve these problems. LOD consists of many data sources from different domains and provides rich a priori knowledge about the entities in the world. By exploiting the semantic associations in LOD, we propose a novel algorithm, called LODDO, to measure the semantic relatedness between named entities. The experimental results show the high performance and robustness of our approach.
\keywords{Named Entity, Semantic Relatedness, Linked Open Data}
\end{abstract}


\section{Introduction}

Semantic relatedness measuring plays an important role in natural language processing (e.g., word sense disambiguation \cite{Siddharth}) and information retrieval. With the advance of the Semantic Web, more and more documents are annotated with real world entities. Hence, measuring semantic relatedness between these named entities can be regarded as an effective means to capture semantic associations between documents, which can further be used for semantic search.

In recent years, there have been abundant research studies on measuring semantic relatedness between words. They try to solve the following two challenges:
\begin{itemize}
\item Word Ambiguity. A word might refer to different meanings or can represent different entities.
\item Different Representations of a Single Entity. Even for a unique entity, it may have different representations, which requires us to collect all synonyms of a given word.
\end{itemize}

The existing work can be divided into two types: knowledge based approaches and statistical based approaches. The former basically leverage a high-quality knowledge source like WordNet \cite{Wordnet} or Wikipedia\footnote{http://www.wikipedia.org/}. The main limitation of this kind of work is the coverage issue. While Wikipedia is the world's largest domain-independent knowledge base, it still misses a number of entities in specific domains. On the other hand, statistical based approaches mainly exploit the Web for this task. However, they fail to provide reliable semantic relatedness between words of low frequency.

In this paper, we propose a novel approach that overcomes the previous problems by leveraging Linked Open Data (LOD) \cite{LOD}. LOD is an abundant Web of data which contains a vast number of named entities. By September 2010, 203 data sources consisting of over 25 billion Resource Description Framework (RDF) triples, interlinked by around 395 million RDF links, had been added to LOD. As the data sources cover many domains, given a named entity, it is highly likely that LOD contains some description of it. Thus the entity coverage problem can be eased by using LOD. On the other hand, while the statistical based approaches regard named entities which have the same name in all documents as the same entity, LOD represents them as different entities. As a result, each entity in LOD has its own description and is distinguished from other entities of the same name.

The contributions of this paper are threefold. First, we build an efficient LOD index mechanism to solve the two challenges: word ambiguity and different representations of a single entity. Second, we propose a novel approach, LODDO, to accurately measure the semantic relatedness between named entities by exploiting the semantic associations in LOD. Third, the experimental results show that our approach outperforms the existing semantic relatedness measuring approaches by at least 39.6\%.

The remainder of the paper is organized as follows. In Section 2 we discuss previous work related to measuring semantic relatedness between named entities. The methodology is presented in Section 3. The conducted experiments, the benchmark dataset, and the evaluation results are presented in Section 4. In Section 5 we conclude the paper and discuss future work.


\section{Related Work}

The existing semantic relatedness measuring approaches can be grouped into two types according to the sources they use: knowledge based approaches and statistical based approaches. The knowledge based approaches take advantage of a high-quality knowledge source such as WordNet, Roget's thesaurus, or Wikipedia. The statistical based approaches calculate the statistical information of words by using the Web corpus as their source.

Regarding the knowledge source as a graph of interconnected concepts, a straightforward approach to calculating semantic relatedness between two words (concepts) is to find the length of the path connecting the two words in the graph \cite{Rada,Jarmasz,HSO,Wubben}. Based on the intuition that the relatedness of two words (concepts) can be measured by the amount of information they share, Strube and Ponzetto \cite{Strube,Ponzetto} applied intrinsic information content to the Wikipedia category graph. Resnik \cite{Resnik} used information content based on WordNet to measure semantic similarity. Hypothesizing that the higher the word overlap in two concepts' glosses, the stronger the semantic relatedness of these two concepts, Lesk \cite{Lesk} and Banerjee \cite{Banerjee} introduced a measure based on the amount of word overlap in the glosses of two concepts. Strube \cite{Strube} regarded the first paragraph of a concept's Wikipedia article as the concept's gloss. Patwardhan \cite{Patwardhan} calculated the cosine of the second-order gloss vectors which represented the corresponding words by using WordNet glosses. Gabrilovich \cite{Gabrilovich} introduced ESA, which constructed concept vectors from Wikipedia articles where each vector element represents an article. Milne \cite{Milne} constructed the vectors by using the interlinked articles.

For measuring semantic relatedness between abstract concepts, a single domain-independent knowledge source may be enough to cover all the concepts. However, when dealing with the hundreds of millions of named entities in real life, the coverage problem arises. Research \cite{wisdom} has also shown that accuracy differs depending on the choice of knowledge source, and there is no conclusion as to which knowledge source is superior to the others. It seems that each knowledge source may have its own preferences in describing data, and thus it is unreliable to use a single knowledge source when measuring semantic relatedness.

Bollegala \cite{Bollegala} used four popular co-occurrence measures to calculate page-count-based similarity metrics for pairs of single words and automatically extracted lexico-syntactic patterns about the pairs based on the title, snippet, and URL of the Web search results. Spanakis \cite{Spanakis} modified Bollegala's method by also considering the ``Bag of Words'' representation of the Web search result text for each single word. Since a named entity usually contains more than one word, the lexico-syntactic pattern extraction cannot be used directly. Gracia \cite{Gracia} proposed a transformation of the Normalized Google Distance \cite{Cilibrasi} into a word relatedness measure based on a Web search engine.

The statistical based approaches have some shortcomings. Without the help of human knowledge, they effectively regard all occurrences of a word in all documents as having the same meaning when calculating the word's statistical information. This leads to ineffectiveness when measuring the semantic relatedness of two low-frequency words. In addition, these approaches also depend on the effectiveness and efficiency of the Web search engine.


\section{Methodology}

In recent years, the amount of structured data available on the Web has been increasing rapidly, making it possible to address complex information needs in new ways by combining data from different sources. LOD aims to link the existing data sources using RDF; by September 2010, 203 data sources in different domains consisting of over 25 billion RDF triples had been added to the LOD cloud. This inspires us to measure named entity semantic relatedness based on LOD. As LOD consists of many data sources from different domains, the named entity coverage problem can be overcome by leveraging it, and it offers a possible way to synthesize multiple sources. While the statistical based approaches regard named entities which have the same name in all documents as the same entity, LOD represents them as different entities. So even a low-frequency entity has its own description, which distinguishes it from other entities of the same name.

Figure \ref{figure_1} shows the architecture of our approach LODDO, which measures named entity semantic relatedness based on LOD. There are two components in the architecture: an offline component and an online component. The offline component builds an index over the various LOD sources which can be used to quickly find the entities corresponding to a specific entity name. In the online component, Description Retrieval retrieves all the description information of a given entity name from the data sources by leveraging the LOD Index, and Description Overlap Measuring uses the description information of two named entities to calculate the semantic relatedness between them.


\begin{figure}
\centering
\includegraphics[height=7cm]{figure_1.pdf}
\caption{The Architecture of LODDO}
\label{figure_1}
\end{figure}

\subsection{LOD Index Builder}

LOD identifies an entity via an HTTP-scheme-based Uniform Resource Identifier (URI). The URI not only serves as a unique global identifier but also provides access to a structured data representation of the identified entity. LOD organizes the data using RDF, a generic, graph based data framework that represents information as triples of the form (\textit{subject}, \textit{predicate}, \textit{object}).

It is not trivial to directly find the entities which have a specific name. For example, two data sources, or even the same data source, may represent one target entity by different uris. So it becomes very important to find all the uris which refer to the same entity. Moreover, several name variants may correspond to one entity, which cannot be solved effectively just by leveraging string similarity. To solve these problems, we leverage the name properties, uri format, and certain relationships in LOD to enumerate all possible name variants and uris of an entity, which can be represented as an entity triple (\textit{entity\_id}, \textit{uri\_set}, \textit{name\_set}). Here, \textit{entity\_id} is an automatically generated Universally Unique Identifier (UUID) of the entity.
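As an illustration, the entity triple can be represented by a small data structure; the following Python sketch (class and field names are our own, not from the paper's implementation) shows the idea:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class EntityTriple:
    """One entity triple (entity_id, uri_set, name_set);
    entity_id is an automatically generated UUID."""
    uri_set: set = field(default_factory=set)
    name_set: set = field(default_factory=set)
    entity_id: str = field(default_factory=lambda: str(uuid.uuid4()))

et = EntityTriple(uri_set={"dbpedia:United_Kingdom"},
                  name_set={"United Kingdom", "UK"})
```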


\subsubsection{Name Extraction for URI}

Thanks to the broad coverage of LOD, most name variants of an entity can be discovered by mining the diverse data sources. In this subsection, we focus on name extraction for each uri. Unfortunately, different data sources may have different representations for the names of an entity, and a predicate may be used in different ways in different sources. For example, in RDF Schema, the predicate rdfs\footnote{http://www.w3.org/2000/01/rdf-schema\#}:label is defined to provide a human-readable version of a resource's name. However, DBpedia uses it in a different way. Here is an example of rdfs:label in DBpedia: \\
\hspace*{10mm}(\textit{dbpedia\footnote{The dbpedia: stands for the prefix for URI from DBpedia}:The\_World\_Health\_Organization}, \textit{rdfs:label},
``\textit{The}''). \\
Obviously it is not right to regard ``\textit{The}'' as the name. Therefore, we need to analyze the LOD data sources respectively and identify the ways in which each describes the name information. In this way, we can get all the name variants of a uri and, by automatically generating a unique entity id corresponding to the uri, we obtain the initial entity triple space $\Gamma$.

\begin{equation}
\Gamma = \{ \left( \textit{entity\_id},  \textit{uri}, \textit{name\_set}\right)  \mid \textit{uri} \in \textit{LOD} \} \;
\end{equation}

Here we present the name schema of several data sources: DBpedia, Musicbrainz \cite{Musicbrainz} (DBtune), and Freebase \cite{Freebase}.
\begin{itemize}
\item For DBpedia, we find that there is no specific \textit{predicate} which gives the name of a DBpedia uri. As a solution, we extract the name by deleting the ``\_'' and ``()'' components from the tail of the uri. For example:\\
\hspace*{15mm}\textit{dbpedia:James\_Sikes} has the name ``\textit{James Sikes}''.\\
\hspace*{15mm}\textit{dbpedia:Think\_Again\_(band)} has the name ``\textit{Think Again}''.\\
\item Musicbrainz (DBtune) represents a uri's name by \textit{predicate}: foaf\footnote{http://xmlns.com/foaf/0.1/}:name,  mo\footnote{http://purl.org/ontology/mo/}:title and skos\footnote{http://www.w3.org/2004/02/skos/core\#}:altLabel.
\item Freebase uses fb\footnote{http://rdf.freebase.com/ns/}:type.object.name as the \textit{predicate} of a uri's name.
\end{itemize}
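The DBpedia rule above can be sketched in a few lines of Python; this is our own illustration of the stated rule, not the authors' code, and the function name is hypothetical:

```python
import re

def dbpedia_name(uri: str) -> str:
    """Extract a name from a DBpedia URI tail: drop the prefix,
    turn '_' into spaces, and strip a trailing parenthesised
    disambiguator such as '(band)'."""
    tail = uri.rsplit("/", 1)[-1].split(":", 1)[-1]
    tail = tail.replace("_", " ")
    return re.sub(r"\s*\([^)]*\)\s*$", "", tail)

print(dbpedia_name("dbpedia:James_Sikes"))         # James Sikes
print(dbpedia_name("dbpedia:Think_Again_(band)"))  # Think Again
```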


\subsubsection{Integrate Entity Triples}

We have mentioned that different data sources, or even the same data source, may represent one target entity by different uris in LOD. However, there exist some relationships connecting uris which actually refer to the same entity. We have identified three such relationships and make use of them to integrate the entity triples.

\textbf{DBpedia:disambiguates Relationship.} Disambiguation in DBpedia is the process of resolving the conflicts that arise when a name is ambiguous---when it refers to more than one topic covered by DBpedia. A disambiguation uri is linked with other different uris which have the same name. For example, there are two disambiguation triples in DBpedia:\\
\hspace*{10mm}(\textit{dbpedia:Bell}, \textit{dbpedia:disambiguates}, \textit{dbpedia:Bell\_Island})\\
\hspace*{10mm}(\textit{dbpedia:Bell}, \textit{dbpedia:disambiguates}, \textit{dbpedia:Bell\_Labs})\\
which means \textit{dbpedia:Bell\_Island} has ``\textit{Bell}'' and ``\textit{Bell Island}'' as its name variants. And \textit{dbpedia:Bell\_Labs} has ``\textit{Bell}'' and ``\textit{Bell Labs}'' as its name variants.

\begin{algorithm}
\caption{Entity Triples Integration}
\label{alg1}
\begin{algorithmic}[1]
\REQUIRE Initial entity triple (\emph{entity\_id}, \emph{uri\_set}, \emph{name\_set}) space $\Gamma$ got from subsection ``Name Extraction for URI''; LOD triple (\emph{subject}, \emph{predicate}, \emph{object}) space $\Sigma$.
\ENSURE Entity triple space $\Gamma$.
\FORALL{$x$ in $\Sigma$}
\IF{$x$ is a dbpedia:disambiguates or dbpedia:redirect triple}
\STATE $et1 \leftarrow$ entity triple whose \emph{uri\_set} contains $x.subject$
\STATE $et2 \leftarrow$ entity triple whose \emph{uri\_set} contains $x.object$
\STATE $et2.name\_set = et1.name\_set \cup et2.name\_set$
\ENDIF
\ENDFOR
\FORALL{$x$ in $\Sigma$}
\IF{$x$ is a dbpedia:disambiguates or dbpedia:redirect triple}
\STATE $et \leftarrow$ entity triple whose \emph{uri\_set} contains $x.subject$
\STATE $\Gamma = \Gamma - et$
\ELSIF{$x$ is a owl:sameAs triple}
\STATE $et1 \leftarrow$ entity triple whose \emph{uri\_set} contains $x.subject$
\STATE $et2 \leftarrow$ entity triple whose \emph{uri\_set} contains $x.object$
\STATE $entity\_id \leftarrow UUID\_Generation()$
\STATE $\Gamma = \Gamma \cup \{(entity\_id, et1.uri\_set \cup et2.uri\_set, et1.name\_set \cup et2.name\_set)\}$
\STATE $\Gamma = \Gamma - et1$
\STATE $\Gamma = \Gamma - et2$
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}

\textbf{DBpedia:redirect Relationship.} DBpedia may use a redirect relationship to link one uri, which has no description, to another uri which has a description. The reasons for creating and maintaining such redirects include alternative names, alternative spellings or punctuation, abbreviations, etc. \cite{redirect}.
If uri$_1$ redirects to uri$_2$, then uri$_2$ should also have the name of uri$_1$ as a name variant. For example, we have such a triple in DBpedia:\\
\hspace*{10mm}(\textit{dbpedia:UK}, \textit{dbpedia:redirect}, \textit{dbpedia:United\_Kingdom})\\
which means \textit{dbpedia:United\_Kingdom} has ``UK'' and ``United Kingdom'' as its name variants.

\textbf{Owl:sameAs Relationship.} By common agreement, Linked Data publishers use the link type owl\footnote{http://www.w3.org/2002/07/owl\#}:sameAs to state that two URI aliases refer to the same resource. Therefore, if uri$_1$ owl:sameAs uri$_2$, their entity triples should be integrated.

If two uris have an \textit{owl:sameAs} relationship, their uris and names are integrated under the same \textit{entity\_id}. For the \textit{dbpedia:disambiguates} and \textit{dbpedia:redirect} relationships, we integrate only their names, not their uris. The detailed algorithm of Entity Triples Integration is shown in Algorithm \ref{alg1}.
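Algorithm \ref{alg1} can be sketched in Python as follows; the dictionary representation of an entity triple is our own assumption, not the paper's actual data structure:

```python
import uuid

def integrate(triples, entities):
    """Sketch of Entity Triples Integration. `triples` holds
    (subject, predicate, object) tuples; `entities` is a list of
    dicts {"id": str, "uris": set, "names": set}."""
    def find(uri):
        return next((e for e in entities if uri in e["uris"]), None)

    MERGE = ("dbpedia:disambiguates", "dbpedia:redirect")
    # Pass 1: copy names along disambiguates/redirect links.
    for s, p, o in triples:
        if p in MERGE:
            src, dst = find(s), find(o)
            if src and dst:
                dst["names"] |= src["names"]
    # Pass 2: drop redirect/disambiguation subjects; merge owl:sameAs pairs.
    for s, p, o in triples:
        if p in MERGE:
            e = find(s)
            if e:
                entities.remove(e)
        elif p == "owl:sameAs":
            e1, e2 = find(s), find(o)
            if e1 and e2 and e1 is not e2:
                entities.remove(e1)
                entities.remove(e2)
                entities.append({"id": str(uuid.uuid4()),
                                 "uris": e1["uris"] | e2["uris"],
                                 "names": e1["names"] | e2["names"]})
    return entities

merged = integrate(
    [("dbpedia:UK", "dbpedia:redirect", "dbpedia:United_Kingdom")],
    [{"id": "1", "uris": {"dbpedia:UK"}, "names": {"UK"}},
     {"id": "2", "uris": {"dbpedia:United_Kingdom"},
      "names": {"United Kingdom"}}])
```

After integration, the redirect source \textit{dbpedia:UK} is dropped and its name ``UK'' survives as a variant of \textit{dbpedia:United\_Kingdom}.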


\subsubsection{Index Storage}

After getting all the entity triples, we need a mechanism to store and index them in order to guarantee efficient retrieval for online semantic relatedness measuring. Considering the existence of different forms of a word, such as \textit{apple} and \textit{Apples}, which may refer to the same entity, we first need to normalize the names. The rules are as follows:
(1) convert the names to lowercase; (2) perform word stemming on the names; (3) remove any articles from the names.
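The three normalization rules can be sketched as follows; the naive plural stripping is only a stand-in for a real stemmer (e.g. Porter), since the paper does not specify which stemmer is used:

```python
def normalize_name(name: str) -> str:
    """Apply the three normalization rules in order:
    lowercase, stem each word, drop articles."""
    out = []
    for w in name.lower().split():          # rule 1: lowercase
        if w in ("a", "an", "the"):         # rule 3: remove articles
            continue
        if w.endswith("s") and not w.endswith("ss") and len(w) > 3:
            w = w[:-1]                      # rule 2: crude stemming
        out.append(w)
    return " ".join(out)

print(normalize_name("Apples"))     # apple
print(normalize_name("The Apple"))  # apple
```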

Then, an inverted list is utilized to store this information. The storage mechanism is shown in Figure \ref{figure_3} and the corresponding notation is described in Table \ref{tab:indexNotations}.

After Entity Triple Integration, all the name variants and uris of an entity have been extracted, which means that the first challenge, different representations of a single entity, has been solved.
By using the LOD Index, we can find all the entities of a given name, which means that the word ambiguity challenge has also been solved.
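The inverted list of Figure \ref{figure_3} can be sketched as a map from each normalized name $N_i$ to the entities $S^i_j$ carrying that name; the dictionary representation of an entity is our own assumption:

```python
from collections import defaultdict

def build_index(entities):
    """Build an inverted list: normalized name -> list of entities
    that have this name as a variant; each entity keeps its uris."""
    index = defaultdict(list)
    for entity in entities:
        for name in entity["names"]:
            index[name].append(entity)
    return index

index = build_index([
    {"id": "e1", "names": {"bell", "bell island"},
     "uris": {"dbpedia:Bell_Island"}},
    {"id": "e2", "names": {"bell", "bell lab"},
     "uris": {"dbpedia:Bell_Labs"}},
])
```

A lookup such as `index["bell"]` then returns every candidate entity for the ambiguous name ``bell'', which is exactly the word ambiguity resolution step described above.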


\begin{figure}
\centering
\includegraphics[height=2.7cm,width=12cm]{figure_3.pdf}
\caption{LOD Index mechanism}
\label{figure_3}
\end{figure}

\begin{table}
\centering
\caption{Notations for Figure \ref{figure_3}}
\label{tab:indexNotations}
\begin{tabular}{cl}
  \hline
  Notation & Description \\
  \hline
  N$_i$  & name string \\
  S$^i_j$ & $j$th entity with a name variant of \textit{N$_i$} \\
u$^i_{jk}$ & $k$th uri referring to entity \textit{S$^i_j$}\\
$n$ & total number of name strings\\
$ p(i)$ & number of entities with \textit{N$_i$} as a name variant\\
$ q(ij)$ & number of uris corresponding to entity \textit{S$^i_j$}\\
 \hline
\end{tabular}
\end{table}


\subsection{Semantic Relatedness Measuring}

Given an entity name, normalization should be performed first. Then we can retrieve all the entities with such a normalized name variant by leveraging the LOD Index. As there is a large variety of descriptions of an entity in LOD, we adopt the heuristic that the more common description two entities have, the stronger their semantic relatedness. In the following sections, we describe Description Retrieval and Description Overlap Measuring in detail.



\subsubsection{Description Retrieval}

Since an entity is represented as a set of uris, the description of the entity can be constructed by accumulating the descriptions of the uris in its \textit{uri\_set}. The description of a uri is defined as a vector of the subjects and objects that form RDF triples with the uri. In an LOD triple, if \textit{uri$_i$} is the subject, then the object is inserted into the description of \textit{uri$_i$}; otherwise, if \textit{uri$_i$} is the object, the subject is inserted into \textit{uri$_i$}'s description. In LOD, an entity uri is very likely to have type assertions. However, some of these assertions are too loose; for example, almost every entity uri in DBpedia has the type \textit{owl:Thing}. To avoid such noise, we ignore type assertions when generating the description.

\subsubsection{Description Overlap Measuring}

Based on the heuristic that two related named entities may have many common related things, we leverage the LOD Description Overlap, named LODDO, to calculate the semantic relatedness between two named entities.

In the real world, the following situation may arise: entity \textit{p} has many related entities including entity \textit{q}, which suggests a weak semantic relatedness between \textit{p} and \textit{q}, whereas \textit{q} has only a few related entities including \textit{p}, which suggests a stronger relatedness.
So it becomes an issue how to determine the final semantic relatedness between \textit{p} and \textit{q}. We use the following two strategies: \\
(1) LODJaccard: treats both named entities equally when measuring the semantic relatedness. It is defined as follows:
\begin{equation}
\begin{split}
CommonDescription(p, q) = & \left | Description(p) \cap Description(q) \right | \\
Denominator(p, q) = &\left | Description(p) \right | + \left | Description(q) \right |
\\ &- \left | Description(p) \cap Description(q) \right | \\
LODJaccard(p, q) = &\frac{CommonDescription(p, q)}{ Denominator(p, q) }
\end{split}
\end{equation}
(2) LODOverlap: biases towards the named entity with the smaller description when measuring the semantic relatedness. It is defined as follows:
\begin{equation}
\begin{split}
CommonDescription(p, q) = & \left | Description(p) \cap Description(q) \right | \\
Denominator(p, q) = & \min(\left | Description(p) \right |, \left | Description(q) \right |) \\
LODOverlap(p, q) = & \frac{CommonDescription(p, q)}{ Denominator(p, q) }
\end{split}
\end{equation}
where \textit{Description(p)} means the description of entity \textit{p}.
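Both measures follow directly from the definitions above; the toy description sets below are for illustration only:

```python
def lod_jaccard(dp, dq):
    """LODJaccard: symmetric Jaccard coefficient over the two
    description sets (Equation 2)."""
    common = len(dp & dq)
    denom = len(dp) + len(dq) - common
    return common / denom if denom else 0.0

def lod_overlap(dp, dq):
    """LODOverlap: overlap coefficient, biased towards the entity
    with the smaller description (Equation 3)."""
    common = len(dp & dq)
    denom = min(len(dp), len(dq))
    return common / denom if denom else 0.0

dp = {"a", "b", "c", "d"}   # toy descriptions, not real LOD data
dq = {"c", "d"}
print(lod_jaccard(dp, dq))  # 0.5
print(lod_overlap(dp, dq))  # 1.0
```

The example shows the bias: because \textit{dq} is entirely contained in \textit{dp}, LODOverlap reports full relatedness while LODJaccard penalizes the larger description.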

\begin{table}
\centering
\caption{Four strategies to measure LOD Description Overlap}
\label{tab:strategies}
\begin{tabular}{|c|c|c|}
  \hline
  label & strategy name & description \\
  \hline
  1 & LODJaccard\_L & \tabincell{c}{Choose LODJaccard to determine semantic \\relatedness. And choose largest LODJaccard \\to deal with multi-pairs problem.} \\
  \hline
  2 & LODOverlap\_L & \tabincell{c}{Choose LODOverlap to determine semantic \\relatedness. And choose largest LODOverlap \\to deal with multi-pairs problem.} \\
  \hline
  3 & LODJaccard\_LC & \tabincell{c}{Choose LODJaccard to determine semantic \\relatedness. And choose largest CommonDescription \\to deal with multi-pairs problem.} \\
  \hline
  4 & LODOverlap\_LC & \tabincell{c}{Choose LODOverlap to determine semantic \\relatedness. And choose largest CommonDescription \\to deal with multi-pairs problem.} \\
  \hline
\end{tabular}
\end{table}

As there may be several entities with the same name variant, given two entity names, multiple pairs may be generated. So we must determine which two entities should be chosen to calculate the semantic relatedness. Because of the lack of context around the given entity names, we should choose the entity pair that best agrees with common human sense. There are two strategies: (1) Largest LODJaccard or LODOverlap: choose the pair which has the strongest semantic relatedness. This strategy has been adopted by many semantic relatedness measuring approaches, such as \cite{Jarmasz,Resnik}. (2) Largest CommonDescription: choose the pair which has the most related things in common. If several pairs have the same largest CommonDescription, the one with the smallest Denominator is chosen.
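The Largest-CommonDescription selection with its smallest-Denominator tie-break can be sketched as follows; the pair representation is our own assumption, and the denominator shown is the LODOverlap one:

```python
def pick_pair_lc(pairs):
    """Among candidate entity pairs, given as (description_p,
    description_q) set pairs, pick the pair with the largest common
    description; break ties by the smallest denominator
    min(|description_p|, |description_q|)."""
    def key(pair):
        dp, dq = pair
        return (len(dp & dq), -min(len(dp), len(dq)))
    return max(pairs, key=key)

best = pick_pair_lc([
    ({"a", "b", "x", "y"}, {"a", "b", "z"}),  # common 2, denominator 3
    ({"a", "b"},           {"a", "b", "q"}),  # common 2, denominator 2
])
```

Both candidates share two description elements, so the tie-break selects the pair with the smaller denominator.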

In all, four strategies can be used to determine the semantic relatedness between two named entities. They are described in detail in Table \ref{tab:strategies}. An experimental study comparing the four strategies is provided in Section \ref{sec:exp}.


\section{Experiments}
\label{sec:exp}
In this section, we conduct experiments to demonstrate the effectiveness of our proposed approach for measuring named entity semantic relatedness. The experimental results show that our approach greatly outperforms the previous semantic relatedness measuring approaches. Extensive experiments were also carried out to demonstrate the robustness of our approach.

\subsection{Experimental Setup}
\subsubsection{LOD Data Sources}

In our work, we select two cross-domain data sources, DBpedia and Freebase, and a domain-specific data source, Musicbrainz (DBtune). In future work, we will consider other domains and conduct more comprehensive experiments. We have generated a LOD Index which includes DBpedia, Musicbrainz (DBtune), and Freebase. The scale statistics are shown in Table \ref{tab:numberstatistics}.

\begin{table}
\centering
\caption{LOD scale statistics}
\label{tab:numberstatistics}
\begin{tabular}{|c|c|c|c|}
  \hline
  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
  Data Source & DBpedia & Musicbrainz (DBtune) & Freebase \\
  \hline
  Entity Number (million) & 3.9 & 23.2 & 29 \\
  \hline
\end{tabular}
\end{table}

From Table \ref{tab:numberstatistics} we see that the entity numbers of Musicbrainz (DBtune) and Freebase greatly exceed that of DBpedia. As DBpedia is extracted from Wikipedia, and Wikipedia has larger coverage than WordNet, we can conclude that LOD enlarges entity coverage tremendously compared with Wikipedia and WordNet. So by leveraging LOD, the entity coverage problem which appears in traditional knowledge based approaches can be solved.


\subsubsection{Evaluation Measure}

Two different correlation measures have been used for evaluating semantic relatedness measuring. The Pearson product-moment correlation coefficient $\gamma$ correlates the scores computed by a semantic relatedness measuring approach with the numeric judgements of semantic relatedness provided by humans. The Spearman rank order correlation coefficient $\rho$ correlates named entity pair rankings. Zesch \cite{wisdom} compared these two measures and recommended using Spearman rank correlation to evaluate semantic relatedness measuring, so in our experiments we use Spearman rank correlation $\rho$ as the evaluation measure.
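For reference, the Spearman rank correlation used in the evaluation can be computed as below; this is a minimal sketch assuming no tied ranks (in practice a library routine such as SciPy's `spearmanr` handles ties):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation via the classic formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), no tied values."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Identical rankings give rho = 1, reversed rankings give rho = -1.
print(spearman_rho([0.1, 0.4, 0.9], [1, 2, 4]))  # 1.0
```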

\subsubsection{Dataset}

Unfortunately, there is no benchmark data set for measuring named entity semantic relatedness. In our experiment, we construct our own data set and offer it as a standard for testing named entity semantic relatedness. In our work, we have generated a LOD Index which includes DBpedia, Musicbrainz (DBtune), and Freebase. Musicbrainz (DBtune) mainly focuses on the music domain, while DBpedia and Freebase are cross-domain data sources. We randomly select 60 music-related entity pairs from last.fm\footnote{http://www.last.fm/} and 60 entity pairs from other domains from Wikipedia, giving a total of 120 pairs of named entities.

In the evaluation work, the semantic relatedness of each pair is rated by six subjects with the following instructions:

\textit{Indicate how strongly these named entities are related using integers from 0 to 4. The description and an example corresponding to each number are given as follows, and if you think some pairs fall in between two of these categories you must push it up or down (no halves or decimals).}\\
\hspace*{10mm}\textit{0: not at all related; ``Linux'' and ``Beijing''} \\
\hspace*{10mm}\textit{1: vaguely related; ``China'' and ``Tokyo''} \\
\hspace*{10mm}\textit{2: indirectly related; ``Backstreet Boys'' and ``Britney Spears''} \\
\hspace*{10mm}\textit{3: strongly related; ``Backstreet Boys'' and ``As Long as You Love Me''} \\
\hspace*{10mm}\textit{4: inseparably related;  ``Gate of Heavenly Peace'' and ``Tiananmen''} \\

The named entity pairs were sorted in descending order by average score, and 100 pairs were selected so as to balance the rating distribution from 0 to 4. The average Spearman rank correlation $\rho$ among the six subjects is 0.9617, which indicates that the ratings are objective. Moreover, 0.9617 can also be used as an upper bound on the achievable performance.

\subsection{Description Overlap Strategy Comparison}

In this section, we compare the performance of the four strategies in Description Overlap Measuring.
The results are shown in Figure \ref{fig:performance}.

\begin{figure}
\centering
\includegraphics[width=10cm]{figure_5.pdf}
\caption{Four Description Overlap Measuring strategies' performance; Spearman rank correlation $\rho$ with humans}
\label{fig:performance}
\end{figure}

From Figure \ref{fig:performance}, we can see that LODOverlap\_L outperforms LODJaccard\_L, and LODOverlap\_LC outperforms LODJaccard\_LC. This tells us that when dealing with the semantic relatedness between named entities, it is more reasonable to focus on the entity with less description. From the results, we also find that LODOverlap\_LC is better than LODOverlap\_L, and LODJaccard\_LC is better than LODJaccard\_L. This is mainly caused by the noise in LOD when handling the multi-pair problem. In LOD, there exist some obsolete and incomplete URIs. They have little or even wrong description, which leads to high overlap between two unrelated named entities and thus reduces the performance. Leveraging the largest common description pair has two advantages:
(1) the largest common description pair is probably well described in LOD, which reduces the influence of noise in LOD;
(2) it is more likely to have an objective semantic relatedness which conforms to the human sense.
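To make the contrast between the two strategy families concrete, the following sketch compares the overlap coefficient (normalized by the smaller set) with the Jaccard coefficient (normalized by the union) on two description sets. The sets and labels are illustrative, not taken from the actual index:

```python
# Overlap coefficient vs. Jaccard coefficient on two description sets.
# Overlap normalizes by the smaller set, biasing the score toward the
# entity with less description; Jaccard normalizes by the union.
# The description sets below are hypothetical.

def overlap(a, b):
    return len(a & b) / min(len(a), len(b))

def jaccard(a, b):
    return len(a & b) / len(a | b)

# A richly described entity vs. a sparsely described one:
band = {"pop", "boy band", "Orlando", "Millennium", "As Long as You Love Me",
        "Backstreet's Back", "1993", "Jive Records"}
song = {"pop", "Backstreet's Back", "As Long as You Love Me"}

print(overlap(band, song))  # 3/3 = 1.0: the song's description is fully covered
print(jaccard(band, song))  # 3/8 = 0.375: diluted by the band's rich description
```

The Jaccard score is dragged down by the well-described entity's many extra descriptions, which is exactly the asymmetry the Overlap strategies avoid.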

In the following experiments, we choose LODOverlap\_LC as the strategy of our approach LODDO. Table \ref{tab:examples} shows some example results of LODDO.

\begin{table}
\centering
\caption{Result examples of LODDO}
\label{tab:examples}
\begin{tabular}{|c|c|}
  \hline
  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
  Named entity pair & LOD Description Overlap \\
  \hline
  ``\textit{Gate of Heavenly Peace}'' and ``\textit{Tiananmen}'' & 1 \\
  \hline
  ``\textit{Backstreet Boys}'' and ``\textit{As Long as You Love Me}'' & 0.3758 \\
  \hline
  ``\textit{Backstreet Boys}'' and ``\textit{Britney Spears}'' & 0.1538 \\
  \hline
  ``\textit{China}'' and ``\textit{Tokyo}'' & 0.0556 \\
  \hline
  ``\textit{Linux}'' and ``\textit{Beijing}'' & 0.0047 \\
  \hline
\end{tabular}
\end{table}

\subsection{Semantic Relatedness Measuring Performance}

We compare our proposed approach with six previous semantic relatedness approaches.
\begin{itemize}
\item Rad \cite{Rada} regards WordNet as a graph: concepts are vertices and all types of relationships are edges. Given two concepts, the semantic relatedness is represented by the length of the shortest path between them: the larger the path length, the weaker the semantic relatedness.
\item GlossOverlap \cite{Strube} calculates the text overlap of two concepts' glosses, which are the first paragraph of their Wikipedia articles, to measure the semantic relatedness. GlossOverlap is defined as follows:
\begin{equation}GlossOverlap(p, q) = \tanh \left ( \frac{overlap(Gloss(p), Gloss(q))}{length(Gloss(p)) + length(Gloss(q))} \right )\end{equation}
\item Intrinsic Information Content (IIC) \cite{Ponzetto} applies an intrinsic information content measure relying on the hierarchical structure of the Wikipedia category tree. It is defined as follows:
\begin{equation}
IIC(p, q) = 1 - \frac{\log(hypo(lcs(p, q)) + 1)}{\log(C)}
\end{equation}
where \textit{lcs(p, q)} denotes the least common subsumer of \textit{p} and \textit{q} in the Wikipedia category tree, \textit{hypo(lcs(p, q))} is the number of hyponyms of the node \textit{lcs(p, q)}, and \textit{C} equals the total number of conceptual nodes in the hierarchy.
\item ESA \cite{Gabrilovich} first constructs a weighted vector of Wikipedia concepts for each input text. Then, to compute the semantic relatedness of a pair of texts, it compares their vectors using the cosine metric.
\item WebJaccard and WebOverlap \cite{Bollegala} are two popular co-occurrence measures to compute semantic similarity using page counts. They are defined as follows:
\begin{equation}WebJaccard(p, q) = \frac{H(p \cap q)}{H(p) + H(q) - H(p \cap q)}\end{equation}
\begin{equation}WebOverlap(p, q) = \frac{H(p \cap q)}{\min(H(p), H(q))}\end{equation}
Here \textit{H(p)} denotes the page counts for the query \textit{p} in a search engine. In our
experiment, we choose Google\footnote{http://www.google.com} to get page counts.
\end{itemize}
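Under the definitions above, the text- and count-based baselines are straightforward to sketch in Python. The glosses and page counts below are hypothetical stand-ins for Wikipedia text and Google hit counts:

```python
import math

def gloss_overlap(gloss_p, gloss_q):
    """GlossOverlap: tanh of the word overlap normalized by total gloss length."""
    wp, wq = gloss_p.split(), gloss_q.split()
    common = len(set(wp) & set(wq))
    return math.tanh(common / (len(wp) + len(wq)))

def web_jaccard(h_p, h_q, h_pq):
    """WebJaccard from page counts H(p), H(q), H(p AND q)."""
    return h_pq / (h_p + h_q - h_pq)

def web_overlap(h_p, h_q, h_pq):
    """WebOverlap: normalizes by the entity with fewer hits."""
    return h_pq / min(h_p, h_q)

# Hypothetical page counts: H(p)=1,000,000, H(q)=20,000, H(p AND q)=15,000
print(web_jaccard(1_000_000, 20_000, 15_000))  # ~0.0149
print(web_overlap(1_000_000, 20_000, 15_000))  # 0.75
```

Note how WebJaccard is dominated by the high-count query while WebOverlap is not, mirroring the discussion of the results below.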
Figure \ref{fig:comparedAlgo} shows the results of these approaches on the test dataset.

\begin{figure}
\centering
\includegraphics[width=10cm]{figure_4.pdf}
\caption{Different approaches' performance; Spearman rank correlation $\rho$ with humans}
\label{fig:comparedAlgo}
\end{figure}

From Figure \ref{fig:comparedAlgo}, we can find that our proposed approach significantly improves the performance of named entity semantic relatedness measuring. Even compared with ESA, the second best performing approach, we obtain an improvement of 39.6\%. As WordNet has limited entity coverage and 75 pairs in the test dataset cannot be measured because of misses in WordNet, Rad achieves a low Spearman rank correlation. ESA, GlossOverlap and IIC obtain better performance than Rad, because Wikipedia has larger coverage and richer descriptions than WordNet; in Wikipedia, only 6 pairs in the test dataset are missed. However, ESA considers the words in a name independently, and thus may misunderstand the meaning of the name. GlossOverlap treats uncritical words in the gloss equally with critical words, so its effectiveness may be reduced by the uncritical words. Since IIC only takes into account the category hierarchy relation without considering other meaningful relations, its performance is limited. WebJaccard and WebOverlap use Google search statistics to measure the semantic relatedness between named entities. As they assume that a name has the same meaning in all documents, their effectiveness can be reduced. Since WebJaccard considers the two named entities equally, the entity with the larger hit count brings more noise, which greatly influences the accuracy. Furthermore, WebOverlap provides better performance than WebJaccard, which supports the heuristic that semantic relatedness should be biased toward the entity with less description.

\subsection{LOD Data Source Selection}

In this section, we investigate the influence of selecting different LOD data sources, and how the performance changes if we merge the data sources rather than using them individually. Table \ref{tab:sourDiff} gives the results of using different data sources.

\begin{table}
\centering
\caption{Performance of selecting different data sources}
\label{tab:sourDiff}
\begin{tabular}{|c|c|c|c|}
  \hline
  % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
  Data Source & \tabincell{c}{average description \\ number} & missed pairs number & \tabincell{c}{Spearman rank \\ correlation $\rho$ \\ with humans} \\
  \hline
  Musicbrainz (DBtune) & 35.79 & 26 & 0.0128 \\
  \hline
  Freebase & 10468.4 & 16 & 0.4217 \\
  \hline
  DBpedia & 11658.5 & 6 & 0.7668 \\
  \hline
  \tabincell{c}{Musicbrainz (DBtune) \& \\ Freebase \& DBpedia} & 26076.1 & 0 & 0.8114 \\
  \hline
\end{tabular}
\end{table}

Note that the Spearman rank correlation is calculated without considering the pairs which cannot be found in the corresponding data sources. There are two reasons why Musicbrainz (DBtune) achieves such a low performance:
(1) the description of an entity is insufficient (only 35.79 descriptions on average) compared with the other data sources (more than 10k descriptions on average); (2) the entity corresponding to a name in Musicbrainz (DBtune) sometimes does not carry the sense of our daily experience; for example, ``Ferrari'' is a song in Musicbrainz (DBtune) rather than the automobile brand of common sense. From the column ``\textit{missed pairs number}'' we can see that using a single data source also leads to an entity coverage problem; however, by merging the data sources together, the coverage problem can be alleviated. Although the average description numbers of Freebase and DBpedia are similar, their performances differ. So we can conclude that different data sources may have different constructions and qualities, which contributes to the different semantic relatedness measuring performances. In addition, having more descriptions is likely to lead to better performance. This verifies that with more data sources, the performance can be improved steadily, which demonstrates the robustness of our approach.
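The effect of merging sources on coverage can be sketched as pooling each entity's descriptions across sources before measuring overlap. The source dictionaries and descriptions below are hypothetical, not the actual index contents:

```python
# Pool an entity name's descriptions over several LOD sources.
# A name covered only poorly by one source (e.g. only a song sense
# in Musicbrainz) can still be well covered by the merged set.
# All entries below are hypothetical.

sources = {
    "DBpedia":  {"Ferrari": {"automaker", "Italy", "Formula One"}},
    "Freebase": {"Ferrari": {"automaker", "sports car", "Maranello"}},
    "Musicbrainz (DBtune)": {"Ferrari": {"song"}},
}

def merged_descriptions(name):
    """Union of the name's descriptions over every source that covers it."""
    merged = set()
    for descriptions in sources.values():
        merged |= descriptions.get(name, set())
    return merged

print(sorted(merged_descriptions("Ferrari")))
```

Because the union can only grow as sources are added, merging never reduces coverage, which is consistent with the zero missed pairs in the merged setting.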


\section{Conclusion}

In this paper, we target the task of measuring semantic relatedness between named entities. As the existing knowledge-based approaches suffer from the entity coverage issue and the statistics-based approaches produce unreliable results for low-frequency entities, we propose a more comprehensive approach leveraging LOD to solve these problems. By exploiting the semantic associations in LOD, we propose a novel algorithm, called LODDO, to measure the semantic relatedness between named entities. Specifically, we first propose a mechanism to index the various LOD sources, which can be used to quickly find the entities corresponding to a specific entity name. Then, we bring forward LOD Description Overlap to measure named entity semantic relatedness. The experimental results show that our approach greatly outperforms the previous semantic relatedness measuring approaches. Moreover, it is robust: leveraging more data sources in LOD provides better performance.

In the future, we plan to investigate more data sources from LOD in order to extend the coverage and improve the performance. We will also try to find a uniform approach to measure the semantic relatedness between abstract concepts and named entities.


\bibliographystyle{splncs03}
\bibliography{myrefs}
\end{document} 