\section{Discussion}
\label{sec:discussion}
We take into account the fact that some edges may exist in reality but are
not present in the dataset. We do this by considering the adjacency
matrix over all $244$ nodes for both tasks 1 and 2, and performing
singular value decomposition (SVD) on the entire adjacency matrix to learn the
network structure.
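As a minimal sketch of this step (using NumPy; the adjacency matrix below is randomly generated for illustration, and the embedding rank $k=16$ is an illustrative choice, not a value from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 244                      # number of nodes, as in the dataset
# Symmetric 0/1 adjacency matrix (random here, purely for illustration)
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T

k = 16                       # embedding rank (illustrative choice)
U, s, Vt = np.linalg.svd(A)  # SVD of the full adjacency matrix
# Rank-k node embeddings: left singular vectors scaled by singular values
emb = U[:, :k] * s[:k]
print(emb.shape)             # (244, 16)
```

Each row of `emb` is a low-rank vector representation of one node, capturing its position in the network structure.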
\paragraph{Pitfall 1: Dataset Construction}\
The given dataset contains no information that distinguishes
terrorists from non-terrorists. Hence, we assume that all
$244$ nodes are terrorists for tasks 1 and 2.
\paragraph{Pitfall 2: Example Representation}\
We avoid the propensity problem described above by combining node
vectors with pointwise multiplication. Although we also report results
for experiments that concatenate vectors, we find that pointwise
multiplication performs best. We concatenate the pointwise product of
the vectors learned from singular value decomposition to the original
set of features and learn the model on the result, thereby capturing
network structure.
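A sketch of this feature construction for a candidate edge $(i, j)$ (the feature dimensions, the random placeholder data, and the exact layout of the pair representation are assumptions for illustration; the paper does not specify them):

```python
import numpy as np

rng = np.random.default_rng(1)
feats = rng.random((244, 10))   # hypothetical original node attributes
emb = rng.random((244, 16))     # node embeddings from the adjacency SVD

def pair_features(i, j):
    """Feature vector for candidate edge (i, j): the pointwise product
    of the two nodes' original attributes, concatenated with the
    pointwise product of their SVD embeddings."""
    return np.concatenate([feats[i] * feats[j], emb[i] * emb[j]])

x = pair_features(3, 7)
print(x.shape)  # (26,)
```

Because both parts use pointwise products, `pair_features(i, j)` equals `pair_features(j, i)`, as an undirected link requires.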
\paragraph{Pitfall 3: Measure of Performance}\
We do not undersample negative examples in either training or testing;
instead, we use stratified 10-fold cross-validation, which preserves
the class ratio in every fold and thus avoids under-representing
negative examples.
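The stratification idea can be sketched as follows (a minimal NumPy implementation; the 90/10 label split is an illustrative example, not the dataset's actual class balance):

```python
import numpy as np

def stratified_folds(y, k=10, seed=0):
    """Split indices into k folds, preserving the class ratio in
    each fold (so negatives stay proportionally represented)."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        rng.shuffle(idx)
        for f, chunk in enumerate(np.array_split(idx, k)):
            folds[f].extend(chunk.tolist())
    return [np.array(sorted(f)) for f in folds]

y = np.array([0] * 90 + [1] * 10)   # imbalanced labels (illustrative)
folds = stratified_folds(y, k=10)
# every fold keeps the 9:1 negative/positive ratio
print([int(y[f].sum()) for f in folds])  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

In practice a library routine such as scikit-learn's `StratifiedKFold` does the same job.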

\paragraph{Limitations of Feature Combination} 
Concatenating the attributes of the two nodes preserves information
about how the properties of individual nodes influence the existence
of a link, but it also has drawbacks. Since the links are undirected,
the order of the combined attributes should not matter; concatenation,
however, is order-dependent, which misrepresents the data. This can
degrade prediction performance, since the representation differs when
the nodes are concatenated in the opposite order. Pointwise
multiplication is symmetric and therefore combines the data of the two
nodes without this issue.
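The order-dependence of concatenation, and the symmetry of pointwise multiplication, can be checked directly (the vectors below are arbitrary illustrative values):

```python
import numpy as np

u = np.array([0.2, 1.0, 3.0])   # attributes of node i (illustrative)
v = np.array([4.0, 0.5, 2.0])   # attributes of node j (illustrative)

# Concatenation depends on which node comes first
print(np.array_equal(np.concatenate([u, v]),
                     np.concatenate([v, u])))  # False

# Pointwise multiplication is symmetric in the two nodes
print(np.array_equal(u * v, v * u))            # True
```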


