\chapter{Conclusions}
\label{chapter:conclusions}
\lhead{Chapter 7. \emph{Conclusions}}
 
This chapter reflects on the research questions posed in Chapter \ref{chapter:introduction} and discusses the status of each in light of the results we obtained. We then discuss future research directions.

\section{Research Questions}

\subsection{Towards Self-Linking Linked Data}
The vision of Self-Linking Linked Data introduced in Chapter \ref{chapter:vision} appears feasible by deploying the results of Chapter \ref{chapter:serimi} and Chapter \ref{chapter:sonda}. The components described in this thesis can be deployed immediately to attain a limited form of self-linking behavior. However, we acknowledge that many research questions need to be answered before the proposed self-linking behavior is implemented in the Linked Data. In particular, a community effort would be needed to integrate these components into the standard Linked Data tool suite.

\subsection{SERIMI: Class-based Matching for Instance Matching Across Heterogeneous Datasets}
\textit{How can we obtain correct matches for a set of source instances when there is no overlap between the source and target schemas? (Chapter \ref{chapter:serimi})}

First, it is important to note that cases with limited overlap between schemas do occur in real-world matching tasks. In Chapter \ref{chapter:vision}, we discussed a scenario in which only the entities' labels overlap, and we showed that the proposed method can satisfactorily solve the instance matching problem in this case. In this scenario, there were no properties in the source data besides the labels of the entities (band names); consequently, no property overlap (in the schema) with the target dataset (MusicBrainz) could exist. We also observed only marginal schema overlap between the source and target datasets in the OAEI 2011 benchmark, especially in the New York Times collection. In the person and organization datasets of this collection, only a label and a type property overlapped with the target schemas. This shows that the lack of schema overlap also appears in the reference benchmark of the field; it is not an isolated case. These observations reinforce that the problem we tackled in Chapter \ref{chapter:serimi} is indeed relevant.

We have shown that it is possible to match instances when the schemas do not overlap, by using our newly created class-based matching method. We observed that this method is more effective when there are instances in the target dataset that share the same or a similar label, as we observed in GeoNames and DBpedia. In those cases, and when there is no schema overlap between the source and target datasets, class-based matching is the approach of choice for integrating resources.
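The intuition behind class-based matching can be illustrated with a small sketch. This is a deliberately simplified toy version, not the actual SERIMI algorithm: the dataset, the labels, and the simple majority vote over classes are invented for illustration. The idea it captures is that when only labels overlap, the collective agreement of all source instances on a target class can resolve individually ambiguous candidates.

```python
from collections import Counter

def class_based_match(source_labels, target, label_match):
    """Toy sketch of class-based matching (not the exact SERIMI algorithm):
    candidates are first selected by label alone, then disambiguated by the
    target class that the candidate sets of ALL source instances favour."""
    # 1. Candidate selection: target instances whose label matches.
    candidates = {s: [t for t in target if label_match(s, t["label"])]
                  for s in source_labels}
    # 2. Vote: each source instance votes once for every class that
    #    appears among its candidates; the winning class disambiguates.
    votes = Counter()
    for cands in candidates.values():
        for cls in {c for t in cands for c in t["classes"]}:
            votes[cls] += 1
    winner = votes.most_common(1)[0][0]
    # 3. Keep only candidates belonging to the winning class.
    return {s: [t for t in cands if winner in t["classes"]]
            for s, cands in candidates.items()}, winner
```

For a source list of band names, an ambiguous label such as "Berlin" (a city and a band) is resolved because the other source instances pull the vote toward the band class.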

The results lead us to conclude that class-based matching should be combined with direct matching to obtain further improvements in instance matching accuracy. Direct matching performs better than class-based matching when there is enough schema overlap in the data, i.e., when the overlapping predicates can identify the correct target instance for a source instance. Class-based matching and direct matching complement each other, because no single method performs optimally on all matching tasks. In the future, these two approaches should be combined with newly developed approaches to cover a larger set of matching tasks.
 

\subsection{Efficient and Effective On-the-fly Candidate Selection over SPARQL Endpoints}

\textit{How can we obtain candidate matches for a set of source instances in an effective and time-efficient way, by querying a remote target endpoint? (Chapter \ref{chapter:sonda})}

The querying solution that we propose in Chapter \ref{chapter:sonda} is on average 10 times faster than the straightforward (but naive) solution that we considered for this problem at the beginning of our research. This can be observed by comparing the two implementations of SERIMI on GitHub, one using the discussed method\footnote{https://github.com/samuraraujo/SondaSerimi} and the other a naive querying approach\footnote{https://github.com/samuraraujo/SERIMI-RDF-Interlinking}. The gain in efficiency compared to the alternative approaches discussed in Chapter \ref{chapter:sonda} is not as large as the gain observed between these two implementations, but the performance increase is still considerable. The experimental results indicate that it is indeed possible to obtain candidate matches via querying. As we showed, for the cases proposed in the OAEI benchmark, even if integration is performed only once, the proposed strategy is more efficient than today's default of downloading the data and processing it locally. We have tested this approach in many other real-world scenarios, and it produced results comparable to those discussed in this thesis. The proposed strategy is therefore a viable alternative for instance matching over Linked Data.
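To make the querying idea concrete, the following sketch builds a candidate-selection query that pushes the label lookup to the remote endpoint instead of downloading the target dataset. It is illustrative only: the choice of `rdfs:label` as the matching predicate, the exact-match filter, and the result limit are assumptions for this example, not the query plan of Chapter \ref{chapter:sonda}.

```python
def candidate_query(label, limit=20):
    """Build a SPARQL candidate query for one source label (illustrative
    sketch; the thesis method uses a more elaborate query strategy)."""
    # An exact string comparison is cheapest for the endpoint; a fallback
    # could relax it to a case-insensitive CONTAINS at a higher cost.
    escaped = label.replace('"', '\\"')
    return (
        'PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n'
        'SELECT DISTINCT ?s WHERE {\n'
        '  ?s rdfs:label ?l .\n'
        f'  FILTER (str(?l) = "{escaped}")\n'
        f'}} LIMIT {limit}'
    )
```

The resulting query string would then be sent to the target endpoint once per source instance, so the per-query cost and the endpoint's throughput dominate the overall running time.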

The utility of our method depends on the throughput of the remote endpoints, which can be measured beforehand. In general, our method should be preferred when integrating a few thousand third-party resources against online remote endpoints with reasonable response times. In our studies on DBpedia and GeoNames, two central hubs in the LOD network \citep{DBLP:conf/semweb/AuerBKLCI07, DBLP:journals/ws/BizerLKABCH09, DBLP:conf/esws/KobilarovSROSSBL09}, it performed faster, with satisfactory accuracy, than approaches that download and index these datasets for further processing.

We acknowledge that a long path of research lies ahead in this field. For instance, query engines could be optimized to answer specific matching queries (e.g., those that include geographic similarity) by using dedicated query operators (e.g.\ geo-like) and by tuning their internal index structures to this end. A related direction to explore is to use the algorithm designed in Chapter \ref{chapter:string} (discussed below) to learn the correct formatting of strings in the target dataset. This would allow building exact queries as opposed to approximate queries. Exact queries are more precise and more efficient to compute, as we observed in our experiments. However, a deeper investigation of this problem is necessary to draw a convincing conclusion.


\subsection{Learning Edit-Distance Based String Transformation Rules From Examples}
\textit{How can we learn string transformation rules from a limited set of examples that can correctly transform a large amount of unseen strings similar to the examples? (Chapter \ref{chapter:string})}

The results obtained in Chapter \ref{chapter:string} demonstrate how a learning approach can infer rules with high coverage from only a few unique examples. The good results presented are partly due to the distribution of the data in the datasets that we evaluated, and partly due to the novel algorithm that we proposed. The method works so well because most human-readable strings, or strings produced by human language, are quite homogeneous. No algorithm can transform data if there is no regularity (pattern) between the data and the given example transformations used to learn the rules. We expect that it will be hard to improve the current coverage of the rule learner without drastically compromising its efficiency. In contrast, the rule selector can easily be improved upon by applying state-of-the-art machine learning techniques to select a specific rule from the set of learned ones.

In conclusion, the proposed method can learn string transformation rules from a limited set of examples and correctly transform a large amount of unseen strings similar to those examples, assuming that the examples are representative. The proposed algorithm can be easily incorporated into information processing tools such as spreadsheets and data cleaning engines. Microsoft has recently released a version of its spreadsheet tool (Excel) implementing an algorithm to support the tasks discussed in our work. The algorithm can likewise be incorporated into related open-source tools to provide competitive functionality. Among its applications, we have used this algorithm to clean and normalize data in databases, to extract information from HTML tables and XML files, and to modify text in repetitive string transformation tasks.
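The flavor of learning a transformation rule from examples can be sketched as follows. This sketch makes a strong simplifying assumption, namely that the transformation only strips or adds a literal prefix and suffix; the algorithm of Chapter \ref{chapter:string} is far more general, handling edit-distance based rules, and the example strings here are invented.

```python
def learn_affix_rule(example_src, example_dst):
    """Learn a transformation rule from ONE (input, output) example,
    assuming the change is limited to a literal prefix and/or suffix.
    A deliberately simplified sketch, not the thesis algorithm."""
    if example_dst in example_src:
        # Deletion rule: remember exactly which prefix/suffix was cut.
        i = example_src.index(example_dst)
        pre, suf = example_src[:i], example_src[i + len(example_dst):]

        def rule(s):
            if s.startswith(pre) and s.endswith(suf) and len(s) > len(pre) + len(suf):
                return s[len(pre):len(s) - len(suf)]
            return s  # the rule does not cover this string
        return rule
    if example_src in example_dst:
        # Insertion rule: remember which prefix/suffix was added.
        i = example_dst.index(example_src)
        pre, suf = example_dst[:i], example_dst[i + len(example_src):]
        return lambda s: pre + s + suf
    raise ValueError("example not covered by this affix-only sketch")
```

From the single example ("IBM Corp.", "IBM"), the learned rule strips the suffix " Corp." and therefore also transforms unseen strings such as "Oracle Corp.", which is exactly the generalization-from-few-examples behavior discussed above.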


\subsection{Exercises on Knowledge Base Acceleration}
\textit{How can we design a model of centrality for news documents? (Chapter \ref{chapter:trec})}

The study done in this work indicates that the model of centrality is highly subjective, depending on the explicit decisions made by the human annotators. Using information retrieval techniques or semantics in the modeling of centrality produces only marginal improvement over an elementary baseline, because these techniques do not capture the decisions of the human annotators.

A significant improvement in this area would require a better understanding of the human annotators' judgment of centrality. It seems reasonable that new facts appearing in the news about an entity for the first time are relevant for this entity. Determining whether a new fact is central or important for an entity, and worth mentioning in Wikipedia, requires more discussion. Concrete questions that may help to understand this issue are: What classes of facts are worth mentioning in Wikipedia? Are an entity's daily habits important? Is there any temporal aspect of a fact that can determine its importance? Is there any meta-property (e.g., time, space) of a fact that can be used to determine its importance? Whether the answers to these questions lead to decisions that converge to a human annotator's decisions is also an interesting subject of study. It is hard to say whether determining the centrality of a fact is a matter of decision theory or of computer science.


\section{Future Research}
In this thesis, we studied a variety of topics mostly focused on instance matching of heterogeneous and distributed data. Within each research theme, several perspectives are not fully addressed, and open issues remain that are worth further investigation.

We start with Chapter \ref{chapter:vision}. The self-linking architecture we proposed requires further research in many areas. Research on federated querying is necessary to improve the discovery of datasets by self-linking engines. Research on self-linking policies is a brand new area to be explored. Unsupervised approaches for instance matching should be reviewed when considering their application in Linked Data. Progress on indexing is needed to support the self-linking behavior; for example, as matching queries are quite specific, an index structure could be designed to support quicker evaluation of these types of queries. Finally, research on the SPARQL language and protocol should investigate new primitives to support signaling between datasets in the Linked Data, specifically focusing on the self-linking behavior.

In Chapter \ref{chapter:serimi}, our results on combining class-based matching and direct matching are preliminary. The results that we obtained raise a new research question: \textit{How can we select the best matching strategy for a matching task?} The benefit of selecting the most appropriate matching strategy is a gain in efficiency, avoiding unnecessary computation. Efficiency is another area of study, given that at large scale it may make sense to consider pruning strategies, for example, avoiding the computation of scores for candidate instances that are identified as false positives early in the process.

In Chapter \ref{chapter:sonda}, we concluded that exact queries are faster than other queries but do not work in all cases, mainly due to variations in string formatting between the source and target instances. This raises a new question: \textit{How can we formulate exact instance matching queries?} The challenge is twofold. First, finding a function that transforms a source string (instance label) into the correct target format is problematic; as the data are heterogeneous, many formats may exist, and how to select the correct one is an issue to be investigated. Second, there is no guarantee that the time to learn the transformation functions plus the time to execute the exact queries will be smaller than the time to execute the method proposed in Chapter \ref{chapter:sonda}. The efficiency of this new approach needs further investigation. Given the improvements it may bring, this is definitely an interesting direction to be explored.

In Chapter \ref{chapter:string}, we obtained promising results using the proposed algorithm. Considering that this algorithm has a large range of applications, the identification of a rule selection method that performs optimally for a specific data collection is an interesting research area.

In Chapter \ref{chapter:trec}, we studied the particular problem proposed by the Knowledge Base Acceleration track of TREC 2012. The research area of building a model of centrality for a news stream is still in its infancy. It would be interesting to investigate, by interviewing human annotators, how they define centrality. Further investigation of a centrality model based on these reported aspects would be of great interest to this community.

Overall, the results in this thesis merit further investigation on how to adapt the proposed components to the Linked Data. Our findings also prompt the question of whether instance matching has a different nature in Linked Data settings. Our studies and results indicate that it does, which might stimulate further research into the purpose of instance matching for the Semantic Web.

This thesis sheds more light on instance matching of heterogeneous and distributed data, broadens its relevance for Linked Data, and opens up a range of topics for future research.