\section{Ranking}
Now that we have a set of crawled pages and an inverted index containing the observed terms, we need to rank the pages against a search query specified by the user. For this we created a \textit{ranker} class which calculates a score for each page and presents the most relevant pages with regard to the search query. The score is based on the content fetched from the web pages, and it rewards terms that occur frequently within a page but are rare across the crawled collection.

The \textit{ranker} class contains an \textit{indexer} object to access the indexed \textit{terms}. The \textit{ranker} also contains a list of \textit{Calculation} objects, each holding a term's frequency, as well as two lists holding the term weights and the normalized term weights.

The term frequency of each term on each page is calculated and stored in a \textit{Calculation} object, as shown in code snippet \ref{lst:cal}.

\begin{code}{lst:cal}{The calculation method which calculates the term frequency for each term.}
\begin{lstlisting}
public void calculateTF()
{
    // Compute a log-scaled term frequency for every term in the index.
    foreach (Term term in index.termIndex)
    {
        Calculation calc = new Calculation(term.termName);

        // frequence maps page ID -> raw count of the term on that page.
        foreach (KeyValuePair<int, int> freq in term.frequence)
        {
            // Log-frequency weighting: 1 + log10(tf).
            double result = 1 + Math.Log10(freq.Value);
            calc.addToTf(freq.Key, result);
        }
        this.calculations.Add(calc);
    }
}
\end{lstlisting}
\end{code}
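Written as a formula, the method above applies the standard logarithmic term-frequency weighting, where $\mathrm{tf}_{t,d}$ denotes the raw count of term $t$ on page $d$:
\begin{equation}
w_{t,d} = 1 + \log_{10}\left(\mathrm{tf}_{t,d}\right)
\end{equation}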

The search query is split into tokens and stemmed, following the same procedure as for the \textit{terms}. These stemmed tokens are then used to look up the pages in which the terms appear, by fetching the \textit{term} objects that contain the page lists and adding them to the \textit{result} list. Each element in the \textit{result} list is iterated through, and for each element the term frequency, inverse document frequency, and term weight are calculated, as shown in code snippet \ref{lst:termwt}. The squared weights are accumulated in the variable \textit{wtTemp}, which is later used to calculate the normalized term weights.

\begin{code}{lst:termwt}{Code illustrating the calculation of a query term weight.}
\begin{lstlisting}
foreach (Term t in results)
{
    // Raw frequency of the term in the query itself.
    int tf = queryTokens.Count(x => x.Equals(t.termName));

    // Number of crawled pages the term appears on.
    int pageCount = t.frequence.Count();

    // Inverse document frequency; the cast avoids integer
    // division when crawlCount is an int.
    double idf = Math.Log10((double)crawlCount / pageCount);

    double wt = tf * idf;

    wtResultList.Add(new KeyValuePair<Term, double>(t, wt));

    // Accumulate the squared weights for later normalization.
    wtTemp = wtTemp + Math.Pow(wt, 2);
}
\end{lstlisting}
\end{code}
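The quantities in the snippet follow the usual tf-idf weighting of the query. With $N$ the total number of crawled pages (\textit{crawlCount}) and $\mathrm{df}_t$ the number of pages containing term $t$ (\textit{pageCount}), the weight of a query term $t$ is
\begin{equation}
w_{t,q} = \mathrm{tf}_{t,q} \cdot \log_{10}\frac{N}{\mathrm{df}_t}
\end{equation}
Assuming the usual cosine normalization, the squared weights accumulated in \textit{wtTemp} yield the normalized query weight
\begin{equation}
w^{\mathrm{norm}}_{t,q} = \frac{w_{t,q}}{\sqrt{\sum_{t'} w_{t',q}^2}}
\end{equation}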

For each page the term weights are calculated and stored in a \textit{resultHolder} object, as shown in code snippet \ref{lst:pagewt}. The \textit{resultHolder} class contains a method called \textit{getNormalizedList} which calculates the normalized term weight of each term on a specific page.

\begin{code}{lst:pagewt}{Code for calculating each page's term weights.}
\begin{lstlisting}
foreach (KeyValuePair<int, int> freq in t.frequence)
{
    int pageID = freq.Key;
    int termFreq = freq.Value;

    // Log-frequency weight of the term on this page.
    double temp_wt = 1.0 + Math.Log10(termFreq);

    // Look for an existing result holder for this page.
    List<resultHolder> tmp = docWrtList.Where(x => x.pageName.Equals(index.crawledPages[pageID])).ToList();

    if (tmp.Count == 0)
    {
        resultHolder res = new resultHolder();
        res.pageName = index.crawledPages[pageID];
        res.wtResult.Add(new KeyValuePair<Term, double>(t, temp_wt));
        docWrtList.Add(res);
    }
    else
    {
        tmp[0].wtResult.Add(new KeyValuePair<Term, double>(t, temp_wt));
    }
}
\end{lstlisting}
\end{code}
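The body of \textit{getNormalizedList} is not shown here; assuming it performs the same cosine normalization as on the query side, each stored weight is divided by the Euclidean length of the page's weight vector:
\begin{equation}
w^{\mathrm{norm}}_{t,d} = \frac{w_{t,d}}{\sqrt{\sum_{t'} w_{t',d}^2}}
\end{equation}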

With the two normalized term weights for each term, one from the query and one from the document, the score is calculated by multiplying the two values together. These products are summed over all searched terms, which gives a page's final score, as shown in code snippet \ref{lst:score}. The score of each page is then used to sort the list of pages containing terms from the search query, so that the most relevant web pages are ranked highest.

\begin{code}{lst:score}{Code for calculating the score for each page compared to the search query.}
\begin{lstlisting}
foreach (resultHolder res in docWrtList)
{
    // Normalized weights of the terms on this page.
    List<KeyValuePair<Term, double>> normPageWT = res.getNormalizedList();

    double score = 0;

    // Dot product of the normalized query and page weights.
    foreach (KeyValuePair<Term, double> normPage in normPageWT)
    {
        score += wtNormalizedList.First(x => x.Key.Equals(normPage.Key)).Value * normPage.Value;
    }

    res.score = score;
}
\end{lstlisting}
\end{code}
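Summing the products of the normalized query weights and page weights amounts to computing the cosine similarity between the query vector and the page vector:
\begin{equation}
\mathrm{score}(q, d) = \sum_{t \in q} w^{\mathrm{norm}}_{t,q} \cdot w^{\mathrm{norm}}_{t,d}
\end{equation}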