\documentclass{article}
%\documentclass[journal]{IEEEtran}
%\documentclass{ieee4paper}

\usepackage[a4paper, bottom=4cm, top=4cm]{geometry}
\usepackage{amsmath,amsfonts,amsthm} % everything needed for typesetting formulas
\usepackage{booktabs}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{verbatim}  
\usepackage{graphicx}
\usepackage{float}
\usepackage{appendix} 
\usepackage{fix-cm} % to fix errors caused by wrong font sizes
\usepackage[hang,footnotesize,bf]{caption} % hang sets the indent, footnotesize the size of the caption
\usepackage{tikz}

\makeatletter 
\renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}%
  {-3.25ex\@plus -1ex \@minus -.2ex}%
  {1.5ex \@plus .2ex}%
  {\normalfont\normalsize\bfseries}}
\makeatother

\usepackage[english]{babel} 
\title{Assignment B - Web \& Semantic Web Engineering 2011 - Design and Implementation of DBTube}
\author{S.P. Hoogendijk - 1379046 \and Armin Parnia - }
\date{\today}
\pagestyle{empty}

\begin{document}
\maketitle
\thispagestyle{empty}
\pagestyle{empty}
\section*{I. Introduction}
The goal of this assignment is to design and implement a Semantic Web application for the WSWE course. Our application, called DBTube, collects data from different news agencies and represents this data in RDF. It also enriches the data by linking it to existing RDF data from DBpedia. The aim of the application is to create enriched news items based on RSS feeds and to present two viewpoints on the news: that of the news agencies and that of the people uploading videos to Youtube. In other words, the application combines news from the RSS feeds of different news agencies with related videos from Youtube. These videos are the most relevant videos posted by Youtube users and are used to enrich the RSS data, while the background information is delivered by DBpedia. The result is an application that offers up-to-date news, a related video showing the public's point of view, and background information about the news. In this way users can easily access news headlines by just watching a related video, getting the news as quickly and attractively as possible.

The application also offers several options. Users can choose their preferred news agency, so that they receive news only from that agency. They can also select news from a variety of categories, such as sports and finance. It should furthermore be possible for users to change the related video, for instance by switching from most recently uploaded to most viewed. The main goal of this project is to provide users with all the news in a single package as fast as possible, saving them valuable time.

\section*{II. Methodology}
In order to implement this application we need to collect data from the different data sources introduced above. First we needed news, which we decided to obtain from RSS feeds. RSS (Rich Site Summary, Really Simple Syndication, or RDF Site Summary) is a family of web feed formats used to publish frequently updated works, such as blog entries, news headlines, audio and video, in a standardized format. An RSS document is provided as an XML document and includes full or summarized text, plus metadata such as publishing dates and authorship. We used the Rome Java library to collect the title, content and related URL from the RSS feeds of different news agencies. The crucial next step was to generate keywords from the news title. We decided to do this with DBpedia Spotlight, a tool for automatically annotating mentions of DBpedia resources in text. To query DBpedia Spotlight we used the REST API available at its site; this way the related background information for the application could be retrieved through simple HTTP requests. The last piece of data needed is a video related to the news. The keywords found through DBpedia Spotlight are used to retrieve related videos from Youtube. To obtain these videos, Google's Data API was used. Through this API we were able to send search requests to the Google servers, which return a list of videos that best match the given keywords. If no video is found, no RDF item is created.
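The Spotlight call boils down to a single HTTP GET request per news title. The sketch below shows how such a request URL can be built in Java; the endpoint address and the confidence parameter are assumptions based on the public Spotlight service, not necessarily the exact values used in DBTube.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class SpotlightRequest {
    // Base URL of the DBpedia Spotlight REST endpoint (assumed; the
    // actual service location may differ).
    static final String ENDPOINT = "http://spotlight.dbpedia.org/rest/annotate";

    // Build the GET request URL that asks Spotlight to annotate a news title.
    static String buildRequestUrl(String title) {
        try {
            return ENDPOINT
                + "?text=" + URLEncoder.encode(title, "UTF-8")
                + "&confidence=0.4"; // hypothetical confidence threshold
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(buildRequestUrl("Obama visits Europe"));
    }
}
```

The response is an XML (or JSON) document listing the DBpedia resources found in the title, which then serve both as search keywords for Youtube and as links to background information.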

The next part of the implementation was to design a schema (ontology) to represent the items in our application. We used Prot\'eg\'e for designing the schema, as was suggested in the assignment text. In Prot\'eg\'e, ontologies can be exported to a variety of formats, including RDF(S) and OWL.

In Prot\'eg\'e we first defined classes, subclasses and sibling classes. We then needed to make connections between the created classes, so we defined object and data properties. Creating the ontology was a hard part, because we could not figure out how to include already existing ontologies. Once finished, we exported the schema to an OWL/XML file, from which we generated a Java class that we imported into the application. This Java class was generated with the help of the Jena library. With the same library, we generated the RDF/XML file containing all our items. In the end we uploaded the RDF data to a server and used a general SPARQL endpoint to query it. The endpoint used is located at \texttt{http://www.sparql.org/sparql.html}. The data is located at \texttt{http://www.sijmenhoogendijk.nl/dbtube/DBTubeItems3.rdf}, and the ontology at \texttt{http://www.sijmenhoogendijk.nl/dbtube/DBTube.owl}.
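As an illustration, a single generated item could look roughly like the following RDF/XML fragment. The class and property names used here are hypothetical placeholders; the actual names are the ones defined in \texttt{DBTube.owl}.

\begin{verbatim}
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dbt="http://www.sijmenhoogendijk.nl/dbtube/DBTube.owl#">
  <dbt:NewsItem rdf:about="http://www.sijmenhoogendijk.nl/dbtube/item/1">
    <dbt:title>Example headline</dbt:title>
    <dbt:content>Summary text taken from the RSS feed.</dbt:content>
    <dbt:hasVideo rdf:resource="http://www.youtube.com/watch?v=example"/>
    <dbt:hasTopic rdf:resource="http://dbpedia.org/resource/Example"/>
  </dbt:NewsItem>
</rdf:RDF>
\end{verbatim}

Each item thus ties together the three data sources: the RSS feed (title and content), Youtube (the video link) and DBpedia (the topic resource).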

\section*{III. Design Accuracy}
One of the design problems was how to make sure that the Youtube videos are related to the news. First of all, this depends on the keyword generation by DBpedia Spotlight: the keywords must capture the topic correctly. Given good keywords, the next problem was how to find the most related video. We discussed this issue at length. At first we expected that the most recently uploaded videos would be the most related to the news, but when we tested this method we noticed that they often are not: because so many people upload videos, the most recent upload matching the keywords is seldom relevant to the news. After much discussion we therefore decided to retrieve the most relevant video instead, which gave better results.

In order to obtain a more closely related video it is possible to restrict the search further, for example by limiting the upload date range to the last week or the last month and then ordering by relevance. Another idea to get more relevant results is to limit the number of keywords in the search. This approach is somewhat risky, though, because it is hard to determine which keywords capture the news best.
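Such restrictions map onto request parameters of the legacy Youtube Data API. A request combining a relevance ordering with a recency limit might look roughly as follows (the exact parameter names and values are assumptions based on the GData documentation of that time):

\begin{verbatim}
http://gdata.youtube.com/feeds/api/videos?q=keyword1+keyword2
    &orderby=relevance&time=this_week&max-results=1
\end{verbatim}

Here \texttt{orderby} selects the ranking criterion, \texttt{time} restricts the upload date range, and \texttt{max-results} limits the response to the single best match.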

\section*{IV. Queries}
We created the following SPARQL queries to illustrate the `intelligence' in the data. Some of these queries are fairly simple, but we had little time to really explore how SPARQL queries are constructed, because the implementation and the creation of the ontology took most of the available time. Moreover, a SPARQL query could only be tested after data had been generated. The queries are listed here in human-readable form; the actual queries are included in the zip file.

\begin{enumerate}
	\item List all titles and contents of the news items.
	\item List all titles and their related Youtube videos.
	\item List all titles and related videos for a specific topic/Wikipedia item.
	\item List all titles and contents that contain a specific word.
	\item List all titles and contents released after a certain date.
\end{enumerate}
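As a sketch, the second query could look roughly as follows in SPARQL; the property names are placeholders for the ones actually defined in \texttt{DBTube.owl}.

\begin{verbatim}
PREFIX dbt: <http://www.sijmenhoogendijk.nl/dbtube/DBTube.owl#>

SELECT ?title ?video
WHERE {
  ?item dbt:title    ?title .
  ?item dbt:hasVideo ?video .
}
\end{verbatim}

The other queries follow the same pattern, adding a \texttt{FILTER} clause for the word and date restrictions and an extra triple pattern for the topic restriction.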
\end{document}