\section*{Clustering}
A lot of information about the reviews can be extracted by mining the data, for example by grouping it into communities with similar properties. Based on these communities it is possible to predict the likelihood of certain outcomes by looking at the behavior of the actors within them. Groups can be divided into two kinds: explicit and implicit. Explicit groups are formed by actors actively joining some sort of community with a shared interest. Implicit groups, on the other hand, consist of actors that are grouped together by discovering behavior patterns or similar properties. Finding these implicit groups is the goal of community detection.

When working with data in this context, it is usually represented as a graph where the nodes are the actors and the edges between them are the relationships, often expressed as an edge weight. How the weights between nodes are determined is very important for constructing communities. For weighting, one can use structural equivalence, which measures to what extent two nodes are connected to the same nodes, or node similarity, for example \emph{k}\textbf{NN} (\emph{k} nearest neighbors).
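Structural equivalence is commonly computed as the cosine similarity of two nodes' adjacency rows. The following is a minimal sketch of that idea; the graph and node indices are invented purely for illustration:

```python
import numpy as np

# Toy undirected graph of 5 actors: A[i, j] = 1 if i and j are connected.
# The graph is made up for illustration only.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

def structural_equivalence(u, v):
    """Cosine similarity of two adjacency rows: the extent to which
    two nodes share the same neighbors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom > 0 else 0.0

# Nodes 0 and 1 share neighbor 2, so their similarity is positive;
# nodes 0 and 4 share no neighbors, so it is zero.
sim_01 = structural_equivalence(A[0], A[1])
sim_04 = structural_equivalence(A[0], A[4])
```

Such pairwise similarities could then serve as edge weights when building the community graph.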

For grouping data into clusters or communities, several methods exist, including member-based, group-based, and hierarchical clustering.

One member-based clustering method, clique percolation, centers on finding \emph{k}-cliques in the graph and joining adjacent cliques, i.e. cliques sharing \emph{k}-1 nodes, into subgraphs. The nodes in each of these subgraphs together represent a community. Finding strict cliques, maximal subgraphs in which all nodes are adjacent to each other, is computationally expensive, and therefore one generally works with relaxed cliques. A relaxed clique is a subgraph in which the nodes do not necessarily all have degree \emph{k}.
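The clique percolation idea can be sketched in pure Python for a toy graph: enumerate all \emph{k}-cliques by brute force, then merge cliques that share \emph{k}-1 nodes. The graph below is invented for illustration, and the brute-force enumeration is only feasible for small graphs:

```python
from itertools import combinations

# Toy undirected graph as an edge set (made up for illustration):
# two triangle groups {1,2,3,4} and {5,6,7} joined by the lone edge (4, 5).
edges = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5), (5, 6), (6, 7), (5, 7)}
nodes = {n for e in edges for n in e}
adj = {n: set() for n in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def clique_percolation(k):
    """Communities as unions of adjacent k-cliques (sharing k-1 nodes)."""
    # 1. Brute-force every k-clique: a k-subset whose nodes are all adjacent.
    cliques = [frozenset(c) for c in combinations(sorted(nodes), k)
               if all(b in adj[a] for a, b in combinations(c, 2))]
    # 2. Find connected components of the "clique graph" with a simple DFS,
    #    where two cliques are linked if they overlap in k-1 nodes.
    communities = []
    unseen = set(range(len(cliques)))
    while unseen:
        stack = [unseen.pop()]
        component = set()
        while stack:
            i = stack.pop()
            component |= cliques[i]
            for j in list(unseen):
                if len(cliques[i] & cliques[j]) >= k - 1:
                    unseen.remove(j)
                    stack.append(j)
        communities.append(component)
    return communities

communities = clique_percolation(3)
```

Note that the bridge edge (4, 5) belongs to no triangle, so the two groups stay separate communities.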

Another, similar approach is reachability clustering, where communities are defined by distance: a \emph{k}-clique is a subgraph in which every pair of nodes is within distance \emph{k} of each other, where the connecting paths may pass outside the subgraph; a \emph{k}-club adds the constraint that these shortest paths must stay inside the subgraph; and a \emph{k}-clan combines the two, being a \emph{k}-clique whose internal shortest paths are also of length \emph{k} or less.

Group-based clustering works by cutting, or removing, the weakest edges and thereby splitting the graph into a number of subgraphs. A weak edge is an edge that connects one densely connected group of nodes to another. Spectral clustering implements the principle of the ratio cut function, a variant of the minimum cut problem that takes community size into consideration. Spectral clustering works by finding the eigenvalues and eigenvectors of a Laplacian matrix. The Laplacian is constructed from a symmetric square matrix $A$, representing the weighted edges between nodes, and a diagonal matrix $D$, containing the sum of edge weights for each node. The Laplacian matrix is then $L = D - A$.
After computing the eigendecomposition, the eigenvector belonging to the first eigenvalue greater than zero (the second-smallest, since the smallest is always zero) is the one to use. After sorting this eigenvector, you find the largest jump between consecutive values and split the graph at that point.
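The steps above can be sketched in Python with NumPy. The graph and weights are invented for illustration (the report's actual implementation uses the ILNumerics library instead):

```python
import numpy as np

# Toy symmetric adjacency matrix A: two loosely connected friend groups,
# {0,1,2} and {3,4,5}, bridged only by the edge 2-3. All weights are 1.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # diagonal matrix of weight sums per node
L = D - A                    # unnormalized graph Laplacian

# eigh is for symmetric matrices; eigenvalues come back in ascending order.
eigvals, eigvecs = np.linalg.eigh(L)

# The smallest eigenvalue is 0; the eigenvector of the second-smallest
# eigenvalue (the Fiedler vector) encodes the bisection.
fiedler = eigvecs[:, 1]

# Sort the Fiedler vector, locate the largest jump between consecutive
# entries, and split the node set there.
order = np.argsort(fiedler)
gaps = np.diff(fiedler[order])
split = int(np.argmax(gaps)) + 1
group_a = sorted(order[:split].tolist())
group_b = sorted(order[split:].tolist())
```

Repeating the procedure on each subgraph would yield further splits, which is how more than two clusters are obtained.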

Hierarchical clustering works from the notion of nested communities and cuts the weak ties between them. The Girvan-Newman algorithm is a method for finding communities by discovering weak ties and cutting them. This is done by calculating the betweenness of edges, measured as the number of shortest paths that pass through an edge.
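A single Girvan-Newman step, computing edge betweenness and identifying the weakest tie, can be sketched in pure Python. The graph is invented for illustration, and the naive all-pairs path enumeration is only suitable for toy inputs (real implementations use Brandes' algorithm):

```python
from collections import deque
from itertools import combinations

# Toy graph (illustrative only): two triangles bridged by the edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def all_shortest_paths(s, t):
    """Enumerate every shortest path s -> t via BFS predecessor lists."""
    dist, preds = {s: 0}, {s: []}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                preds[v] = [u]
                q.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)

    def walk(v):
        if v == s:
            yield [s]
        else:
            for p in preds[v]:
                for path in walk(p):
                    yield path + [v]

    return list(walk(t))

# Edge betweenness: each node pair contributes one unit of credit,
# split evenly over its shortest paths and accumulated on their edges.
betweenness = {tuple(sorted(e)): 0.0 for e in edges}
for s, t in combinations(adj, 2):
    paths = all_shortest_paths(s, t)
    for path in paths:
        for u, v in zip(path, path[1:]):
            betweenness[tuple(sorted((u, v)))] += 1.0 / len(paths)

# The bridge (2, 3) carries every cross-triangle shortest path, so
# Girvan-Newman would remove it first.
weakest = max(betweenness, key=betweenness.get)
```

Repeatedly removing the highest-betweenness edge and recomputing yields the nested community hierarchy the method is known for.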


We chose to implement spectral clustering because we wanted to separate the graph into clusters of friends. Looking for nested clusters was unnecessary because we were only interested in the larger clusters of friends, so spectral clustering seemed the best fit. Another advantage of spectral clustering is that friendships are mutual, meaning we get a symmetric matrix.

We implemented spectral clustering by first loading each person into a node and constructing edges to that person's friends. We chose to store this graph in an inverted index table to save space, representing only the edges that exist, i.e. the edges with a weight. We gave every edge a weight of one, because every friendship is considered of equal importance. During the construction of the inverted index table we utilize the fact that friendships are mutual, and thereby save a lot of computation by simply reflecting the triangular matrix. To calculate the eigendecomposition, the eigenvalues and eigenvectors, we use a library called ``ILNumerics'' which provides the algorithms for it. We then construct the Laplacian matrix so it fits the requirements of the library. After finding the largest gap in the sorted eigenvector, we separate the nodes into two subgraphs and remove the edges that cross between them.

We spent a lot of time actually understanding how spectral clustering works and finding a library that could calculate the eigendecomposition within reasonable time, but after these struggles it gave us results that we could accept. We are able to find four reasonable clusters; after a further split, the clusters begin to be very sparse.