% Appendix A - Collecting the CiteULike Data Set

\chapter{Collecting the CiteULike Data Set}
\label{app:collecting-citeulike}
\lhead{Appendix A. \emph{Collecting the CiteULike Data Set}}



\section{Extending the Public Data Dump}

We described in Chapter \ref{chapter3} that our \cul\ data set is based on the November 2, 2007 data dump that is made available publicly\footnote{Available from \url{http://www.citeulike.org/faq/data.adp}.} by \cul. This dump contains all information on which articles were posted by whom, with which tags, and at what point in time. Figure \ref{figure:cul-data-dump} shows a tiny subset of this data dump.

\shrinky

  \begin{figure}[h]
    \centering
      \includegraphics[scale=0.56,angle=90]{./figures/data-dump-before.pdf}
    \caption[A small subset of a CiteULike data dump]{A small subset of a CiteULike data dump. The columns from left to right contain the article IDs, user IDs, time stamps, and tags.}
    \label{figure:cul-data-dump}
  \end{figure}

Each line represents a user-item-tag triple with the associated timestamp of the posting, so if a user posted an article with $n$ tags, this resulted in $n$ rows in the file for that article-user pair. If a user added no tags, the article-user pair is represented by a single row with the tag \tag{no-tag}. On the \cul\ website, users can pick their own user names, but in the data dumps these were hashed with the MD5 cryptographic hash function for privacy reasons. Unfortunately, most of the interesting features on the \cul\ website are linked to the user names and not to the hashed IDs. For instance, each article has a separate page for every user that added it, which contains that user's personal metadata such as reviews and comments. Because these article pages display only the \cul\ user names, linking the data dump to this personal metadata required matching each user name to its corresponding MD5 hash. 
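The row format described above can be illustrated with a short Python sketch that groups dump rows back into posts. This is a minimal illustration: the pipe delimiter, field order, and example values are assumptions for the purpose of the sketch; the actual layout is the one shown in Figure \ref{figure:cul-data-dump}.

```python
from collections import defaultdict

def parse_dump(lines, delimiter="|"):
    """Group dump rows into posts: one (article, user) pair with all its tags.

    Assumes the field order article_id | user_hash | timestamp | tag;
    the delimiter is an assumption and may differ in the real dump.
    """
    posts = defaultdict(set)  # (article_id, user_hash) -> set of tags
    for line in lines:
        article_id, user_hash, timestamp, tag = line.rstrip("\n").split(delimiter)
        posts[(article_id, user_hash)].add(tag)
    return posts

# A user who posted an article with n tags yields n rows for that pair;
# an untagged post yields a single row with the tag "no-tag".
rows = [
    "42|61baaeba8de136d9c1aa9c18ec3860e8|2007-11-02 10:00:00|tagging",
    "42|61baaeba8de136d9c1aa9c18ec3860e8|2007-11-02 10:00:00|recommender",
    "99|61baaeba8de136d9c1aa9c18ec3860e8|2007-11-02 10:05:00|no-tag",
]
posts = parse_dump(rows)
```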

Directly applying the MD5 hash function to the user names did not result in any matching hashes, which suggests that \cul\ uses a {\it salt} to make reverse lookup practically impossible\footnote{A {\it salt} is a series of random bits used as additional input to the hashing function. It makes reverse lookup using dictionary attacks more complicated by effectively extending the length and potentially the complexity of the password.}. We therefore had to find another way of matching the user names to the hashes. First, we crawled all generic article pages. Each article on \cul\ has its own generic page, which can be accessed by inserting the article ID from the \cul\ data dump in a URL of the form \textcolor{blue}{{\tt http://www.citeulike.org/article/{\bf ARTICLE\_ID}}}. Each article page contains metadata and also lists, by user name, all users that have added the article to their personal library. After acquiring the names of the users that added an article, we also crawled the user-specific versions of these pages, which can be accessed at a URL of the form \textcolor{blue}{{\tt http://www.citeulike.org/user/{\bf USER\_NAME}/article/{\bf ARTICLE\_ID}}}. From these pages we collected the personal metadata added by each separate user, such as tags and comments. 
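The two URL patterns can be generated programmatically during the crawl; a minimal Python sketch, in which the helper functions are our own and only the URL patterns themselves come from \cul:

```python
ARTICLE_URL = "http://www.citeulike.org/article/{article_id}"
USER_ARTICLE_URL = "http://www.citeulike.org/user/{user_name}/article/{article_id}"

def article_url(article_id):
    """URL of the generic article page, which lists the article metadata
    and the user names of everyone who added the article."""
    return ARTICLE_URL.format(article_id=article_id)

def user_article_url(user_name, article_id):
    """URL of the user-specific article page, which holds that user's
    personal metadata such as tags and comments."""
    return USER_ARTICLE_URL.format(user_name=user_name, article_id=article_id)
```

In the actual crawl, each generated URL would then be fetched with an appropriate delay between requests.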

By collecting these article-user name-tag triples from the \cul\ article pages, we were then able to match user names to MD5 hashes\footnote{Matching user names and hashes on the specific timestamps when articles were added would have been easier and more specific, but this was not possible: the timestamps were not present on the crawled Web pages.}. This turned the alignment problem into a matter of simple constraint satisfaction. For a single article $A$ there might be multiple users who added the same tag(s) to $A$, but perhaps only one of those users added another article $B$ with their own unique tags. By identifying those posts that matched uniquely on articles and tags, we were able to align that user's name with the hashed ID. We ran this alignment process for seven iterations, each round resolving more ambiguous matches, until no more users could be matched. \mbox{Table \ref{table:alignment}} shows statistics of our progress in aligning the user names and hashes. 
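The iterative alignment can be sketched as follows on hypothetical toy data; this is an illustration of the constraint-satisfaction idea, not our actual implementation. Each user, hashed or named, is represented by the set of (article, tags) posts they made, and every round aligns any hash and user name that are uniquely identified by a shared post:

```python
from collections import defaultdict

def align(hashed_posts, named_posts, max_rounds=7):
    """Match hashed user IDs to user names by constraint satisfaction.

    Both arguments map a user (hash or name) to a set of posts, where a
    post is a tuple (article_id, frozenset_of_tags). In each round, any
    post shared by exactly one unmatched hash and one unmatched name
    aligns the two; later rounds resolve ties left by earlier ones.
    """
    matches, matched_names = {}, set()
    for _ in range(max_rounds):
        # Index the still-unmatched users by the individual posts they made.
        by_post_hash, by_post_name = defaultdict(set), defaultdict(set)
        for h, posts in hashed_posts.items():
            if h not in matches:
                for post in posts:
                    by_post_hash[post].add(h)
        for name, posts in named_posts.items():
            if name not in matched_names:
                for post in posts:
                    by_post_name[post].add(name)
        new = {}
        for post, hashes in by_post_hash.items():
            names = by_post_name.get(post, set())
            if len(hashes) == 1 and len(names) == 1:
                new[next(iter(hashes))] = next(iter(names))
        if not new:
            break  # no more users can be matched
        matches.update(new)
        matched_names.update(new.values())
    return matches
```

For example, if two users both added article $A$ with the tag {\tt x}, but only one of them also added article $B$ with the tag {\tt y}, the post on $B$ aligns that user in the first round, and the post on $A$ then becomes unambiguous in the second.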

\begin{table}[h]
  \caption[Statistics of the user alignment process]{Statistics of the user alignment process. Each column lists how many users, and how many of their associated articles and tags, were matched after each of the seven steps.}
  \begin{center}
  \begin{footnotesize}
  \begin{tabular}{l||c||c|c|c|c|c|c|c}
  \hline
     ~               & {\bf Data dump} & {\bf Step 1}  & {\bf Step 2}  & {\bf Step 3}  & {\bf Step 4}  & {\bf Step 5}  & {\bf Step 6}  & {\bf Step 7}  \\
  \hline
  \hline
    {\bf Users}      &  27,133        &  24,591       &  25,182       &  25,260  &  25,284       &  25,292       &  25,352      &  25,375   \\
    {\bf Articles}   & 813,186        & 763,999       & 780,244       & 780,837   & 780,851       & 780,851       & 803,278      & 803,521   \\
    {\bf Tags}       & 235,199        & 224,061       & 227,678       & 227,844  & 227,848       & 227,848       & 232,794      & 232,837   \\ 
  \hline
  \end{tabular}
  \end{footnotesize}
  \end{center}
  \label{table:alignment}
\end{table}

After seven rounds, 93.5\% of the initial 27,133 user names were correctly identified. Furthermore, 98.8\% of all articles and 99.0\% of all tags were retained. We believe that this represents a substantial enough subset of the original data dump to proceed with further experiments. \mbox{Figure \ref{figure:cul-data-dump-aligned}} shows the same extract of the \cul\ data dump of \mbox{Figure \ref{figure:cul-data-dump}} with the resolved user names. For instance, we were able to resolve the user ID hash {\small {\tt 61baaeba8de136d9c1aa9c18ec3860e8}} to the user name {\small {\tt camster}}.

  \begin{figure}[h]
    \centering
      \includegraphics[scale=0.56,angle=90]{./figures/data-dump-after.pdf}
    \caption[A small subset of a CiteULike data dump with the proper user names]{The same small subset of the CiteULike data dump from Figure \ref{figure:cul-data-dump} after aligning the user names with the hashed IDs. The columns from left to right contain the article IDs, user names, time stamps, and tags.}
    \label{figure:cul-data-dump-aligned}
  \end{figure}





\section{Spam Annotation}
\label{A:sec:spam-annotation}

\mbox{Figure \ref{fig:screenshot-spam-annotation}} illustrates the straightforward interface we created for the spam annotation process. For each user, it randomly selects up to five of that user's articles and displays the article title (if available) and the associated tags. It also shows a link to the \cul\ page of each article. Preliminary analysis showed that articles that were clearly spam had usually already been removed by \cul\ and returned a {\em 404 Not Found}\/ error. We therefore instructed our judges to check the \cul\ links if a user's spam status was not obvious from the displayed articles; if the article pages were missing, the user was to be marked as a spammer. In this process, we assumed that although spam users might add real articles to their profile in an attempt to evade detection, genuine, dedicated \cul\ users would never willingly add spam articles to their profile. Finally, we noticed that spam content was injected into \cul\ in many different languages. From the experience of the annotators, most spam was in English, but considerable portions were in Spanish, Swedish, and German. Other languages in which spam content was found were, among others, Dutch, Finnish, Chinese, and Italian.
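The per-user sampling behind the interface can be sketched in a few lines of Python; this is a minimal illustration, with function and parameter names of our own choosing:

```python
import random

def sample_posts(user_posts, k=5, seed=None):
    """Return up to k randomly selected posts from a user's library for
    display in the annotation interface. Users with k posts or fewer
    have their entire library shown."""
    if len(user_posts) <= k:
        return list(user_posts)
    return random.Random(seed).sample(user_posts, k)
```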

Of the 5,200 users in our subset, 1,475 (or 28.4\%) were spam users, a much smaller proportion than in the \bib\ system, but still a considerable part of the entire data set. The numbers in \mbox{Table \ref{table:statistics-spam-datasets}} are reported for this 20\% sample of \cul\ users. It is likely that such a sizable chunk of spam has a significant effect on recommendation performance. We test this hypothesis about the influence of spam in Section \ref{8:sec:influence-on-recommendation}.

  \begin{figure}[t!]
    \centering
    \includegraphics[scale=0.38,viewport=520 30 550 820]{./figures/screenshot-spam-annotation.pdf}
    \caption[Screenshot of the spam annotation interface for CiteULike]{A screenshot of the interface used to annotate a subset of CiteULike users as possible spam users. The user in the screenshot is a typical example of a spammer. For every user, we display up to five randomly selected posts with the article title and the assigned tags.}
    \label{fig:screenshot-spam-annotation}
    \vspace{30em}
  \end{figure}


