\section{Dataset Preprocessing}
\label{sec:dataset}
\label{sec:preprocess}
In this section, we explain how we preprocess the data for training 
our classifiers and the motivation behind these preprocessing steps. 

To train and validate our models, we use the Profiles in Terror dataset
created by the University of Maryland \cite{orgpitlink} 
and edited by Eric Doi and Ke Tang \cite{doipaper}. 
The dataset consists of $851$ rows, each describing a pair of suspected terrorists
with a known link between them, together with $612$ binary features
for each individual. Each link is also labeled to indicate
whether the two individuals are members of the same organization or 
share a different type of link.

\paragraph{Network Structure}Our first goal is to determine which of the $29,646$ potential undirected
links between the $244$ nodes exist. Our second task is to
distinguish the type of each of the $851$ known links, as described above. 
For each of these two tasks, we apply the singular value decomposition (SVD) 
to capture the properties of the network structure around the nodes, followed
by logistic regression. 

To apply these techniques,
we first convert the original dataset into an adjacency matrix denoting the
presence of links between nodes. This conversion is 
done differently for the two tasks. For the first task,
since we only have to predict the existence of a link, the type of each 
link does not matter. Hence, we set all known or positive
links between nodes to $1$ and unknown or negative links to $0$ in the adjacency
matrix. For the second task, we treat the organizational links as positive
and set these entries in the adjacency matrix to $2$, non-organizational 
links as negative entries set to $1$, and denote missing links by $0$. 
We then apply singular value decomposition to this modified adjacency 
matrix to capture the network structure between the nodes and subsequently use
it in building our models.
The way we use these adjacency matrices is explained in Section \ref{sec:experiments}.
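As a concrete sketch of this construction (illustrative NumPy code rather than our actual Rapidminer process; the link list and the rank of the truncation are hypothetical), the two adjacency-matrix encodings and the subsequent SVD might look like:

```python
import numpy as np

n_nodes = 244   # number of individuals in the dataset
rank = 10       # number of singular vectors to keep (illustrative choice)

# Hypothetical list of known links: (node_i, node_j, is_same_organization)
known_links = [(0, 1, True), (1, 5, False), (2, 3, True)]

# Task 1: link existence -- known (positive) links are 1, all others 0.
A_exist = np.zeros((n_nodes, n_nodes))
for i, j, _ in known_links:
    A_exist[i, j] = A_exist[j, i] = 1

# Task 2: link type -- organizational links 2, other known links 1, missing 0.
A_type = np.zeros((n_nodes, n_nodes))
for i, j, same_org in known_links:
    A_type[i, j] = A_type[j, i] = 2 if same_org else 1

# SVD of the adjacency matrix; the leading singular vectors, scaled by
# their singular values, serve as low-dimensional node representations.
U, s, Vt = np.linalg.svd(A_exist)
node_repr = U[:, :rank] * s[:rank]   # one rank-dimensional vector per node
```

These per-node vectors can then be fed, together with the node features, into the logistic regression models described later.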

\paragraph{Node Properties}We perform some initial clean-up on the given dataset in order to make it
easier to use in our Rapidminer process. There are $612$ features for each
node, with the features of two nodes per row of the given dataset. We construct 
a separate nodes dataset which consists of these features for every node. We employ various
combinations of the features of the two nodes in a pair, such as pointwise feature vector 
multiplication and concatenation. How the per-node information is used is explained in Section 
\ref{sec:experiments}.
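The two feature combinations mentioned above can be sketched as follows (a minimal example with randomly generated binary vectors standing in for the real $612$ node features):

```python
import numpy as np

# Hypothetical binary feature vectors for the two nodes of a linked pair.
rng = np.random.default_rng(0)
x_a = rng.integers(0, 2, size=612)
x_b = rng.integers(0, 2, size=612)

# Pointwise multiplication: 1 only where both nodes share the feature.
pointwise = x_a * x_b                 # length 612

# Concatenation: keeps each node's features separately.
concat = np.concatenate([x_a, x_b])   # length 1224
```

Pointwise multiplication emphasizes features the pair has in common, while concatenation preserves each node's individual profile at the cost of doubling the dimensionality.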
