{\rtf1\ansi\ansicpg1252\cocoartf949\cocoasubrtf540
{\fonttbl\f0\froman\fcharset0 TimesNewRomanPSMT;\f1\fmodern\fcharset0 CourierNewPSMT;}
{\colortbl;\red255\green255\blue255;}
{\info
{\title Original file was paper.tex}
{\doccomm Created using latex2rtf 2.3.3 r1230 (released Feb 26, 2013) on Thu May 23 16:33:36 2013}}\paperw12280\paperh15900\margl2680\margr2700\margb1760\margt2540\vieww18760\viewh13860\viewkind0
\deftab720
\pard\pardeftab720\qj

\f0\fs20 \cf0  \
\pard\pardeftab720\fi300\sb240\sa240\qc

\b\fs36 \cf0  
\b0\fs20  \
\pard\pardeftab720\li1024\fi300\ri1024\qj
\cf0 We present a vision of what needs to be addressed to boost the interlinking in the Linked Data. We propose a paradigm shift in the way data integration is done, one that can greatly improve the accuracy and facilitate the building of interlinks in the Web of Data. We envision an Organic Linked Data in which each dataset behaves as an independent organism capable of interacting with the others to achieve a desirable level of connectivity. We adapt principles from biological systems to support this vision. \
\pard\pardeftab720\li1024\fi300\ri1024\qj

\b \cf0 Keywords
\b0 : data integration, RDF interlinking, instance matching, candidate selection, linked data. \
\pard\pardeftab720\sb360\sa120

\b\fs32 \cf0 1  Introduction\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 The vision of the Semantic Web [3], undoubtedly powerful, promises a structured web of data that would greatly improve access to data by humans and machines. Currently, initiatives such as the Linked Open Data project [4] have published and interlinked hundreds of structured datasets following Semantic Web standards (e.g. RDF, OWL). The set of these interlinked datasets forms a web of data called Linked Data. Mainly, two datasets are interlinked by connecting their objects through semantic links called 
\i RDF links
\i0 . Theoretically, through these links data users (humans and machines) can navigate from one dataset to another, potentially exploring the entire Linked Data. Unfortunately, in practice, the establishment of these RDF links has proven to be a non-trivial task. So far, this issue is one of the factors [10] that have considerably limited the development of a global-scale interlinked dataspace: the Semantic Web. \
\pard\pardeftab720\fi300\qj
\cf0 Tummarello et al. [15] discussed the interlinking issue before, proposing a centralized data integration architecture to solve the poor interlinking in the Linked Data. Their work warns us that this issue is still alive three years later. We go further, proposing a decentralized data integration method for the Linked Data that can co-exist with their centralized vision, consequently helping to solve the problem. Additionally, we propose concrete components to be added to the Linked Data that would make this approach a reality.\
To a large extent, RDF links are established by connecting two data objects that refer to the same real-world entity using the semantic predicate 
\f1 owl:sameAs
\f0 . This 
\i data integration process
\i0  is known as 
\i instance matching
\i0  or 
\i schema matching
\i0 , depending on whether the process is applied at the instance or schema level, respectively. Instance and schema matching have been studied extensively by the database community for the last 30 years. However, the heterogeneous and decentralized nature of the Linked Data poses additional challenges for data integration in this setting, where assumptions embodied in the existing methods no longer apply. \
We argue that adopting existing data integration paradigms in the scenario of Linked Data considerably limits the interlinking of its datasets, especially regarding the new ones added to the cloud. These methods require the data to be available locally, a characteristic of a centralized and off-line dataspace; in the Linked Data, by contrast, data are decentralized and scattered among many servers, in some cases only accessible via Semantic Web protocols such as the SPARQL protocol. This decentralized architecture requires that we develop ways of thinking about integration that are as rigorous as the existing paradigm but different: they should incorporate characteristics that exist only in the Linked Data and are relevant for building the interlinks. To a large extent, it requires that we change our philosophy about data integration, in the full sense of the term. \
We envision a more organic interpretation of the Linked Data architecture where each dataset in the cloud behaves as an independent organism having, as one of its functions, the ability to self-interlink to other datasets in the cloud. This self-interlinking behavior is our solution to the poor interlinking problem. A dataset in the Linked Data can be seen as a cell in a living organism. Analogous to 
\i cell signaling
\i0  in a living cell (a communication mechanism that governs basic cellular activities and coordinates cell actions), a communication mechanism could orchestrate the interlinking of datasets (like a cellular activity), which would happen independently of human intervention. Notice that although a living cell behaves independently, intra-cellular structures guide the cell's behavior. Analogously, we propose here structures that should be part of a dataset in the Linked Data to guide the self-interlinking behavior. In practice, we expect a dataset to be automatically interlinked as soon as it is published (becomes alive in the cloud), without any human intervention. The ideas proposed here will play a crucial role in making this vision a reality. Fig. 1 shows some of the structures that we will introduce.\
\pard\pardeftab720\fi300\sb240\qc
\cf0  \
\pard\pardeftab720\sb120\sa120\qc

\fs24 \cf0  \
\pard\pardeftab720\sb120\sa120
\cf0 Figure 1: Overview of the structures in a self-linking Linked Data.\
\pard\pardeftab720\fi300\sb240\qj

\fs20 \cf0 In this paper, we describe a possible architecture that can support our vision. It can boost the interlinking in the Linked Data, greatly contributing to speeding up the realization of the Semantic Web vision. We not only describe the components of this architecture but also present a prototype tool as a proof of concept. We evaluated this new paradigm on reference benchmarks in the field, and the results show that this architecture is feasible and, under certain conditions, more time-efficient than traditional data integration methods. In conclusion, this indicates that a self-linking behavior can be part of the Linked Data environment. \
\pard\pardeftab720\sb240\sa120

\b\fs32 \cf0 2  A General Architecture\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 There are three main research questions regarding the building of a self-linking Linked Data: How can we boost the interlinking between datasets in the Linked Data? How can we fully automate the process of generating these RDF links? How can we represent the generated RDF links? \
\pard\pardeftab720\sb120\sa120

\b\fs32 \cf0 2.1  Building a Self-Linking Linked Data\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 The first issue is to tackle the problem of boosting interlinking in the Linked Data. We argue that, to create a truly linked data, datasets should be capable of self-linking as soon as they are published. For that, the Linked Data architecture must include components that allow it to happen. At the foundation of these components lies a different data integration paradigm, which we discuss next.\
\pard\pardeftab720\fi300\qj
\cf0 As we mentioned in the introduction, there is a predominant school of data integration that proposes an off-line approach to the problem. As a principle, the source and target data to be integrated must be available locally so the data can be processed and links established. To use this method, datasets in the Linked Data have to be downloaded to a central server (or cluster) so that the interlinking can be computed. Many authors [5] have applied this paradigm to the Linked Data successfully; however, they paid the price of having to allocate a large amount of computer resources and human labor to accomplish this task. More importantly, this strategy cannot be used for all datasets in the cloud, given that some are not available for download.\
Part of the problem with existing data integration paradigms lies in the fact that the Linked Data was designed to be accessed by querying a remote SPARQL endpoint, while traditional data integration methods require the data to be fully available locally so it can be indexed and processed [9][11][12]. Although data can be obtained by querying a remote endpoint, downloading large datasets through this method is inefficient and quite often hits the timeouts imposed by the remote endpoints. Apart from that, dataset sizes vary largely on the Linked Data; consequently, interlinking a small source dataset to a large target dataset requires only part of the target data. An efficient selection of the necessary target data has not been considered an important issue so far because the data is assumed to be available for local processing, which is not always true in the Linked Data scenario.\
For these reasons, we argue that the Linked Data requires a different paradigm of data integration, one that operates directly over the SPARQL endpoint in an on-the-fly and pay-as-you-go fashion. To that end, we propose a set of principles to support this vision, which would consequently boost its interlinks. \
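To make the pay-as-you-go idea concrete, the sketch below shows how a single on-demand request against a SPARQL protocol endpoint could be assembled, instead of downloading a full dump; the endpoint URL and the JSON result format are illustrative assumptions, not tied to any specific dataset.

```python
# Sketch of pay-as-you-go access (assumed endpoint URL): each piece of
# data is fetched on demand through the SPARQL protocol's HTTP
# interface rather than via a bulk download.
from urllib.parse import urlencode

def sparql_request_url(endpoint, query):
    """Build the GET request URL for a SPARQL protocol endpoint,
    asking for JSON-formatted results."""
    params = {"query": query, "format": "application/sparql-results+json"}
    return endpoint + "?" + urlencode(params)

url = sparql_request_url("http://example.org/sparql",
                         "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10")
print(url)
```

In practice the returned URL would be fetched over HTTP and the JSON bindings parsed; only the fragment of the target dataset that the query names ever leaves the remote server.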
\pard\pardeftab720\sb120\sa120

\b\fs32 \cf0 2.2  Interlinking as a Query Problem\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 We argue that interlinking can be done directly by querying the source and target endpoints, instead of fully downloading the data and processing it locally. So far, this strategy has been underestimated or neglected as an alternative to the massive data downloads currently in wide use. \
\pard\pardeftab720\fi300\qj
\cf0 From now on, we will mostly focus on interlinking at the instance level; however, the architecture that we propose here can also be used for schema integration. We pose the problem of building interlinks as a query problem on the Linked Data. Thus, finding a target instance that matches a source instance translates into issuing a SPARQL query against the target endpoint, looking for a target instance with attributes similar to those of the source instance. Instances are interlinked when their similarity is above a threshold. Previous studies show that, by analyzing the data, the attributes to be compared, the similarity functions, and the thresholds can be determined automatically, in an unsupervised fashion. \
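As an illustration of posing matching as a query problem, the following sketch builds such a query for a single source label; the choice of rdfs:label and a regex filter is an assumption made for the example, as real datasets may expose other predicates.

```python
# Illustrative only: build a SPARQL query that retrieves candidate
# target instances whose rdfs:label approximately contains the source
# instance's label (case-insensitive containment via FILTER regex).
import re

def candidate_query(source_label, limit=50):
    """Return a SPARQL SELECT query for candidate matches of `source_label`."""
    escaped = re.escape(source_label)
    return (
        "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n"
        "SELECT ?s ?label WHERE {\n"
        "  ?s rdfs:label ?label .\n"
        f'  FILTER regex(str(?label), "{escaped}", "i")\n'
        f"}} LIMIT {limit}"
    )

print(candidate_query("Metallica"))
```

The threshold-based similarity decision mentioned above would then be applied to the labels this query returns.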
By approaching the problem in this fashion, we benefit from the fact that the data is already processed in each endpoint, avoiding much of the pre-processing (e.g., indexing, data cleaning) necessary in the traditional scenario. Accessing the data via endpoints saves human labor and computer resources, consequently facilitating the implementation of a self-linking mechanism in the Linked Data.\
Basically, at the instance level, two data objects are considered the same if they share some common data attributes. Mainly, instance matching requires two data objects to be directly compared, in a process called 
\i direct matching
\i0 . As the number of data objects may be large, to speed up the comparison process, most matching approaches split the problem of instance matching into two steps: candidate selection and match refinement. The candidate selection step uses a low-cost method for fast retrieval of possible candidate matches for the source instances, while the match refinement step uses more elaborate methods to detect, among the candidates, the correct target matches for a source instance. In this way, it reduces the number of comparisons necessary to find the matches, which would initially require 
\i S
\i0 \'d7
\i T
\i0  comparisons, where 
\i S
\i0  and 
\i T
\i0  are the numbers of instances in the source and target datasets, respectively. \
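The two-step process just described can be sketched as follows; the token-overlap filter and the edit-distance refinement are simple stand-ins chosen for illustration, not the actual algorithms of any published system.

```python
# Minimal two-step matcher sketch: a cheap candidate selection pass
# (token overlap) followed by a costlier match refinement pass
# (normalized edit-distance similarity above a threshold).
from difflib import SequenceMatcher

def tokens(s):
    return set(s.lower().split())

def select_candidates(source, targets, min_shared=1):
    """Cheap filter: keep targets sharing at least `min_shared` tokens."""
    st = tokens(source)
    return [t for t in targets if len(st & tokens(t)) >= min_shared]

def refine(source, candidates, threshold=0.8):
    """Costlier step: keep candidates whose similarity passes the threshold."""
    return [c for c in candidates
            if SequenceMatcher(None, source.lower(), c.lower()).ratio() >= threshold]

targets = ["Metallica", "Metallica (band)", "Jackson Michael", "Iron Maiden"]
cands = select_candidates("Metallica", targets)   # 2 of 4 survive
matches = refine("Metallica", cands)              # 1 survives refinement
print(matches)
```

Only the surviving candidates pay the cost of the expensive comparison, which is how the S x T comparison count is reduced.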
In the architecture that we propose, candidate selection is done over the SPARQL endpoint. Once the candidates are selected, they are treated as the target dataset, and the match refinement can be done using any of the data integration methods available in the literature. However, as this method evolves, the candidates obtained during the selection step may become so precise that the refinement step will be unnecessary. The challenge is to build SPARQL queries that obtain the candidate matches, or the correct matches in the optimistic scenario. As a requirement, these queries must be effective in retrieving all the correct candidates but must also execute efficiently. \
Recently, we proposed a SPARQL-based candidate selection method and demonstrated that it produces good candidate matches, with high recall and precision. Apart from that, we have shown that this mechanism is more efficient than downloading the entire data when a certain condition holds, i.e., when the dataset sizes vary largely.\
Fig. 2 illustrates the configuration of the candidate selection and match refinement components in the Linked Data architecture.\
\pard\pardeftab720\fi300\sb240\qc
\cf0  \
\pard\pardeftab720\sb120\sa120\qc

\fs24 \cf0  \
\pard\pardeftab720\sb120\sa120
\cf0 Figure 2: Interlinking Apparatus inside a Linked Data Endpoint. \
\pard\pardeftab720\sb360\sa120

\b\fs32 \cf0 2.3  Self-linking Policies\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 On top of the proposed components, data integration policies are applied to enable the endpoint to self-maintain its RDF links. For example, the interlinking could be triggered at four distinct moments: by an explicit user command, automatically as soon as the data is published, every time the data is updated in the endpoint, and when a new dataset is published in the cloud. These are basic policies that would allow the endpoints to self-maintain their RDF links, responding to any internal or external data update in the Linked Data. Additional policies could be defined to optimize the computation of the interlinks, for example, controlling the query load during the generation of the candidate matches. Practically, these policies should be described in a vocabulary to be specified and integrated in the design of RDF stores. A standard policy must be established to guarantee the self-linking behavior in the cloud.\
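The four triggering moments could be captured in a machine-readable policy description; the sketch below is purely hypothetical, and its vocabulary terms are our own invention rather than an existing standard.

```python
# Hypothetical policy description for a self-linking endpoint: the four
# triggers from the text plus a query-load throttle. All keys and
# values here are invented for illustration.
SELF_LINKING_POLICY = {
    "triggers": [
        "user-command",      # explicit request by the data owner
        "on-publish",        # as soon as the dataset goes online
        "on-local-update",   # whenever the endpoint's own data changes
        "on-cloud-publish",  # when a new dataset appears in the cloud
    ],
    "query_load": {"max_queries_per_minute": 60},  # throttle candidate queries
}

def should_interlink(event, policy=SELF_LINKING_POLICY):
    """Return True when the given event matches an enabled trigger."""
    return event in policy["triggers"]

print(should_interlink("on-publish"))
```

A vocabulary along these lines, once standardized, is what would let an RDF store decide autonomously when to fire its interlinking apparatus.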
\pard\pardeftab720\sb120\sa120

\b\fs32 \cf0 2.4  SPARQL Extensions to Support Self-linking\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 Basically, the SPARQL language would have to be extended with a few primitives to support 
\i approximate string matching
\i0 , which is fundamental during candidate selection, given that exact queries do not account for the syntactical differences that exist between datasets (e.g. Michael Jackson vs. Jackson Michael). Although the current SPARQL specification supports filters and regular expressions that can simulate approximate matching queries, many of the available implementations do not support an efficient computation of these queries. For this reason, we argue that it would help to add to the current specification new language operators (e.g., like, ilike) that require an efficient implementation by future RDF stores. Currently, some RDF stores, such as OpenLink Virtuoso, support an efficient computation of such approximate matching queries, using non-standard notations. Nevertheless, this support has to exist across the entire Linked Data in a standard way so that the proposed method can be truly functional.\
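A small example of why approximate matching matters: exact comparison fails on word-order variations, while even a simple token-set similarity (one possible flavor of approximate string matching, chosen here only for illustration) recognizes them as the same name.

```python
# "Michael Jackson" vs "Jackson Michael" differ syntactically but name
# the same entity. A Jaccard similarity over word sets is insensitive
# to word order; this is an illustrative measure, not the like/ilike
# operator proposed in the text.
def token_jaccard(a, b):
    """Jaccard similarity between the word sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

print("Michael Jackson" == "Jackson Michael")               # exact match: False
print(token_jaccard("Michael Jackson", "Jackson Michael"))  # 1.0
```

An efficient, standardized operator with this kind of tolerance is what the proposed SPARQL extension would provide at the query-engine level.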
\pard\pardeftab720\sb120\sa120

\b\fs32 \cf0 2.5  Standardization of Interlinking Algorithms\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 We argue that a candidate selection and a match refinement algorithm must be designed as a standard, so that all RDF stores would have the expected behavior implemented. Most of the algorithms that target the Linked Data are converging. In their foundations, they all exploit the same meta-properties of the data (e.g. discriminative power and coverage of predicates) to determine RDF links. Currently, the state-of-the-art interlinking methods can be formalized into a unique and default way of doing interlinking. This is a fundamental step toward embedding the self-linking behavior in the datasets of the Linked Data. Of course, other interlinking methods would continue to co-exist with the standard method.\
\pard\pardeftab720\sb240\sa120

\b\fs32 \cf0 3  Proof of Concept\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 We illustrate a realistic scenario to show the benefit of a self-linking Linked Data for an ordinary data owner. \
\pard\pardeftab720\fi300\qj

\b \cf0 Scenario.
\b0  John has a collection of 5000 band names whose origins he would like to know. He decides to make use of the Linked Data because he heard that it contains datasets that could be used to enrich his own data. He then generates a single RDF triple for every band name. For example, 
\f1 example:band1
\f0  
\f1 rdfs:label
\f0  
\f1 "Metallica"
\f0 . He publishes these data using an RDF store that supports the self-linking behavior standard. As soon as the data is published, the RDF engine starts looking for possible target interlinks in the cloud. It finds the MusicBrainz dataset as a good candidate for interlinking because all of the band names (strings) in John\'92s data also occur in this dataset. Immediately, it starts to interlink John\'92s data to the MusicBrainz data, without John even being aware of it. A few minutes later, John decides to query his RDF store to check his data, and he notices that it is already interlinked to MusicBrainz. Navigating through the just-created RDF links, he goes from his data to MusicBrainz and finally to DBpedia, where he unexpectedly encounters not only information about the origin of the bands but also their members, discography, etc. Thanks to the self-linking Linked Data, John can now build a richer application over his collection than the one he had initially conceived.\
To implement this task, we use a subset of Linked Data datasets loaded into the OpenLink Virtuoso server. To emulate the self-linking behavior, we use SONDA as the candidate selection module and SERIMI as the match refinement module, two recently published techniques that follow the paradigm discussed here. Using these tools, a single command was issued to obtain the RDF links, namely, 
\f1 ruby
\f0  
\f1 sonda.rb
\f0  
\f1 -source http://localhost:8890/sparql
\f0  
\f1 -target
\f0  
\f1 http://localhost:8891
\f0 \
where the two URIs represent John\'92s data and the MusicBrainz data, respectively.\
The data used in this proof of concept are available for download at GitHub.\
To measure the quality of the interlinks produced, we sampled the data and checked it manually. We obtained recall, precision, and F1 values of 0.93, 0.86, and 0.90, respectively. No configuration other than that reported here was given. Although this exercise does not operate over a complete implementation of the presented concept, it shows that the vision of a self-linking Linked Data is feasible and brings immediate benefits. \
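For reference, the reported figures follow the standard definitions of precision, recall, and their harmonic mean F1 over a manually checked sample; the sketch below computes them for a small invented sample (the link pairs are made up purely for illustration).

```python
# Standard quality metrics over sampled links. `truth` is the manually
# verified gold set and `found` the links the system produced; both
# samples below are invented for this example.
def precision_recall_f1(true_links, found_links):
    """Precision, recall, and F1 of found_links against true_links."""
    tp = len(true_links & found_links)
    precision = tp / len(found_links) if found_links else 0.0
    recall = tp / len(true_links) if true_links else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

truth = {("band1", "mb:metallica"), ("band2", "mb:ironmaiden"), ("band3", "mb:nirvana")}
found = {("band1", "mb:metallica"), ("band2", "mb:ironmaiden"), ("band4", "mb:queen")}
print(precision_recall_f1(truth, found))
```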
\pard\pardeftab720\sb240\sa120

\b\fs32 \cf0 4  Related Work\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 Due to the nature of this position paper, an exhaustive overview of related work cannot be given here. Instead, we focus on identifying some specific problems that help clarify how we can weave the Linked Data into a self-linking system.\
\pard\pardeftab720\fi300\qj
\cf0 In our previous work [2], we have shown that interlinking can be done via SPARQL endpoints. However, at large scale, considering the whole of the Linked Data, many other problems have to be addressed, for instance, the problem of selecting the correct dataset in the cloud to be interlinked. LDIF is a framework that addresses some of these problems [13]. Although quite elaborate, the framework is still designed to be operated manually, as it requires matching rules to be supplied by the data designer. Although elements existing in these approaches are necessary to build our vision, much more has to be considered. \
Automatic and unsupervised techniques and algorithms to produce matching rules are available in the literature [8][14][7][6]. To accomplish our vision, and since they share the same principles, a research agenda should be defined aiming to converge these techniques into a basic, acceptable standard approach. Vocabularies to describe endpoints have already been proposed [1]; they could be extended to support self-linking policies. \
\pard\pardeftab720\sb240\sa120

\b\fs32 \cf0 5  Conclusion\
\pard\pardeftab720\sb60\qj

\b0\fs20 \cf0 We have motivated our work with general considerations about the use of traditional data integration techniques in the building of the Linked Data. While there are many open problems, we focus on a specific family of interrelated problems centered on the notion of datasets self-linking in an on-the-fly fashion. As a result, we propose a novel data integration architecture to boost the interlinking in the Linked Data, contributing to speeding up the realization of the Semantic Web vision.\
\pard\pardeftab720\fi300\qj
\cf0 Data in the Linked Data are only functional if they are interlinked. Its success depends on pragmatic design decisions that put the self-linking behavior at the foundation of the ideology. \
\pard\pardeftab720\sb120\sa120

\b\fs32 \cf0  }