\subsection{Design of the experiment}
Our evaluation relies on volunteers' judgements of how similar two initiatives are. It is a blind experiment: for a given initiative, called the target, the volunteer is asked how similar each of two other initiatives is to the target. One of these two initiatives is proposed by our tool, chosen from the five initiatives closest to the target; the other is chosen at random. The positions in which the two initiatives are displayed are also randomized, so the volunteer cannot tell which one is the tool's suggestion and which one is random. For example, the test involves a total of 88 initiatives: we first pick one of them as the target, order the remaining 87 initiatives by their similarity to the target, select one initiative from the first 5 of that list, and then select another initiative at random from the full set of 87.
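The selection procedure above can be sketched in Python. This is an illustrative sketch, not the site's actual PHP code; the function name and the similarity dictionary are assumptions.

```python
import random

def build_question(target, similarities):
    """Pick the two candidate initiatives for one blind question.

    `similarities` maps every other initiative to its similarity
    score with respect to `target`. One candidate is drawn from the
    five closest initiatives (the tool's suggestion), the other is
    drawn at random from all the rest; the display order is then
    shuffled so the volunteer cannot tell which is which.
    """
    # Rank the other initiatives by decreasing similarity to the target.
    ranked = sorted(similarities, key=similarities.get, reverse=True)
    suggested = random.choice(ranked[:5])      # one of the 5 closest
    rest = [i for i in ranked if i != suggested]
    random_pick = random.choice(rest)          # any other initiative
    pair = [("tool", suggested), ("random", random_pick)]
    random.shuffle(pair)                       # randomize screen position
    return target, pair
```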

The evaluation is carried out on a website that automatically loads three initiatives as described in the previous paragraph and asks the volunteer to rate the similarity of both the tool-suggested and the random initiative to the target on a 1-to-5 scale, where:\\


\begin{tabular}[c]{|c|l|}
\hline
5 & The initiative is very similar to the target.\\
4 & The initiative is similar to the target.\\
3 & The initiative is somewhat similar to the target.\\
2 & The initiative is different from the target.\\
1 & The initiative is very different from the target.\\
\hline  
\end{tabular}\\


The volunteer can click on the link of each initiative on the web page, which leads to that initiative's introduction page on the p2pfoundation website, so that the volunteer can check the introduction text and keywords of every initiative to support the evaluation.

We implemented the evaluation website in PHP + MySQL; it runs at http://tagsonomy.ourproject.org, and several volunteers have already answered our questionnaires.

\subsection{Development of the experiment}\label{sec:res}

\subsubsection{Words to concepts}\label{sec:w1}
In the process of mapping a word to a Wikipedia/DBpedia concept, some keywords are lost. Of the 120 initiatives with keywords, the tool was unable to map even one keyword to a concept for 11 initiatives, which were therefore not used in further steps.

\subsubsection{Concept categorization (similarity measurement)}\label{sec:w2}
Of the 109 initiatives with keywords assigned to concepts, 21 did not have any concept classified in a Wikipedia category. These 21 initiatives were not used in further analysis. As a result, we have a total of 88 initiatives for our evaluation, as mentioned in the previous subsection.
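The two filtering steps above (120 $\to$ 109 $\to$ 88 initiatives) can be sketched as follows. The field names (\texttt{concepts}, \texttt{categories}) are illustrative assumptions, not the tool's actual data model.

```python
def filter_initiatives(initiatives):
    """Apply the two pipeline filters: drop initiatives whose keywords
    map to no concept at all, then drop those whose concepts carry no
    Wikipedia category. Returns the initiatives usable for evaluation.
    """
    # Step 1: keep only initiatives with at least one mapped concept.
    with_concepts = [i for i in initiatives if i["concepts"]]
    # Step 2: keep only those where some concept has a Wikipedia category.
    with_categories = [i for i in with_concepts
                       if any(c["categories"] for c in i["concepts"])]
    return with_categories
```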

\subsubsection{Similarity}
\label{sec} 
From the 62 questionnaire answers stored in our MySQL database, we obtained the following per-initiative average ratings: \\

\begin{tabularx}{\textwidth}[c]{|X|l|l|}
\hline
Initiative (target) & Tool & Random\\
\hline
15M Oviedo Grupo de Urbanismo, Barrios y Medio Ambiente del/es & 3.0000 & 3.5000\\
15Mpedia.org/es & 2.7778 & 2.5556\\
ABCdeCRIMI/es & 1.5000 & 1.5000\\
Agronautas/es & 2.8333 & 2.1667\\
Alertux/es & 3.0000 & 2.0000\\
Alg-a Lab/es & 4.0000 & 2.5000\\
Almanaque Azul Panamá, Guía de Viajes/es & 1.6667 & 1.6667\\
Amical Viquipèdia/es & 2.4000 & 1.6000\\
Articultores/es & 2.5556 & 2.4444\\
\hline
Average & 2.6111 & 2.1667\\
\hline  
\end{tabularx}\\

The average rating of the application-suggested similar initiatives is 2.6111, while the average rating of the random initiatives is 2.1667. This suggests that the initiatives proposed by our application are perceived as closer to the target than randomly chosen ones.
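The per-initiative averages in the table above can be computed with a simple aggregation. This is a minimal sketch assuming the stored answers are available as (target, tool rating, random rating) tuples; it is not the site's actual PHP/SQL code.

```python
from collections import defaultdict

def average_ratings(answers):
    """Average the 1-5 ratings per target initiative, separately for
    the tool-suggested and the random candidate. `answers` is a list
    of (target, tool_rating, random_rating) tuples, a stand-in for
    the rows stored in the MySQL database.
    """
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # tool sum, random sum, count
    for target, tool, rnd in answers:
        s = sums[target]
        s[0] += tool
        s[1] += rnd
        s[2] += 1
    # Round to four decimals, matching the table's presentation.
    return {t: (round(s[0] / s[2], 4), round(s[1] / s[2], 4))
            for t, s in sums.items()}
```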

