\paragraph{Discounted Cumulative Gain - DCG}
Discounted Cumulative Gain is a popular measure for evaluating web search engines and related systems. DCG rests on two assumptions \cite{Jarvelin:2002:CGE:582415.582418}: highly relevant documents are more valuable than marginally relevant documents, and the lower a relevant document appears in the ranking, the less valuable it is for the user, because the user is less likely to ever examine it. DCG is defined as the sum of the ``gain'' of presenting a particular document multiplied by a ``discount'' for presenting it at a particular rank $i$, up to some maximum rank $l$.

\begin{displaymath}
 DCG_l = \displaystyle\sum\limits_{i=1}^l gain_i \times discount_i
\end{displaymath}

\noindent For web search, ``gain'' is typically a relevance score determined by human judgment, and ``discount'' is the reciprocal of the logarithm of the rank. Therefore, placing a document with a high relevance score far down the ranking yields a much lower DCG than placing the same document near the top.

\begin{displaymath}
 DCG_l = rel_1 + \displaystyle\sum\limits_{i=2}^l \frac{rel_i}{\log_2 i}
\end{displaymath}

\noindent $rel_i$ are the relevance scores: scalar values derived from human relevance judgments with respect to a test query. Given the test query, a human judge assesses the relevance of each document in the collection with respect to that query. An example of a relevance scale is $rel_i \in \{ 0, 1, 3 \}$, where $0$ means ``not relevant'', $1$ ``relevant'', and $3$ ``very relevant''.

The values obtained by the previous formula can be plotted in a chart with DCG values on the vertical axis and rank positions $i$ on the horizontal axis. The DCG curve is then compared to the IDCG curve, that is, the Ideal Discounted Cumulative Gain curve. The IDCG curve is obtained by computing the DCG on the perfect ranking, i.e. the ranking where the most relevant documents come first.
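As an illustration, the two formulas above can be sketched in Python. This is a minimal sketch under the discount used in the formula above ($1/\log_2 i$ from rank 2 onward); the function names are hypothetical, and relevance scores are assumed to be given as a plain list in rank order.

```python
from math import log2

def dcg(rels):
    """DCG of a ranked list of relevance scores:
    rel_1 + sum_{i>=2} rel_i / log2(i)."""
    if not rels:
        return 0.0
    return rels[0] + sum(r / log2(i) for i, r in enumerate(rels[1:], start=2))

def idcg(rels):
    """Ideal DCG: the DCG of the perfect ranking, i.e. the same
    relevance scores sorted in decreasing order."""
    return dcg(sorted(rels, reverse=True))

# Example with the 0/1/3 scale: a "very relevant" document first,
# a "relevant" one second, and another "relevant" one at rank 4.
print(dcg([3, 1, 0, 1]))   # 3 + 1/log2(2) + 0 + 1/log2(4) = 4.5
print(idcg([3, 1, 0, 1]))  # DCG of [3, 1, 1, 0]
```

The ratio $DCG_l / IDCG_l$ is the widely used normalized DCG (nDCG), which maps the curve into $[0, 1]$ and makes queries with different numbers of relevant documents comparable.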

Unlike other measures such as Precision and Recall, DCG also takes into account the positions of the relevant documents among the top $l$ results.

The DCG curves can also be averaged over a set of test queries to obtain a more precise assessment.

\paragraph{Precision at k - P@k}
Precision and Recall are the most popular measures in the Information Retrieval field. They require that a human judge (or another trustworthy system) performs a binary evaluation of each retrieved document as ``relevant'' or ``not relevant''. Moreover, they require knowledge of the complete set of relevant documents within the collection being searched:

\begin{displaymath}
 Recall = \frac{number \, of \, relevant \, documents \, retrieved}{number \, of \, relevant \, documents}
\end{displaymath}

\begin{displaymath}
 Precision = \frac{number \, of \, relevant \, documents \, retrieved}{number \, of \, retrieved \, documents}
\end{displaymath}
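The two definitions above can be sketched directly in Python. This is a minimal sketch, assuming the retrieved list and the relevant set are given as document identifiers; the function name is hypothetical.

```python
def precision_recall(retrieved, relevant):
    """retrieved: ranked list of returned document ids.
    relevant: set of ALL relevant document ids in the collection."""
    hits = sum(1 for d in retrieved if d in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 2 of the 3 retrieved documents are relevant, out of 4 relevant overall.
p, r = precision_recall(["d1", "d2", "d3"], {"d1", "d3", "d4", "d5"})
print(p, r)  # 2/3 and 0.5
```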

\noindent One way of reporting precision is to measure it at a fixed cutoff position in the ranking, that is, ``Precision at k'', or P@k. For ranked list assessment, $k$ can be set to the number of results we expect users to look at. P@k is the fraction of relevant documents among the top $k$ retrieved; if none of the top $k$ documents is relevant, P@k equals zero.
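P@k is straightforward to compute; the sketch below assumes the binary judgments for a ranked list are given as a 0/1 list, with position 0 corresponding to rank 1 (the function name is hypothetical).

```python
def precision_at_k(relevance, k):
    """Precision at cutoff k.
    relevance: binary judgments (0/1) of the ranked list, rank 1 first."""
    if k <= 0:
        raise ValueError("k must be positive")
    return sum(relevance[:k]) / k

# Ranks 1 and 3 hold relevant documents: P@3 = 2/3, P@5 = 3/5.
print(precision_at_k([1, 0, 1, 1, 0], 3))
print(precision_at_k([1, 0, 1, 1, 0], 5))
```

Note that if the list has fewer than $k$ results, the missing positions count as non-relevant, since the denominator stays $k$.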

P@k can be averaged over a set of test queries to obtain a unique curve for the assessment of the system.

\paragraph{11-point Interpolated Average Precision - 11pIAP}
The 11-point Interpolated Average Precision builds on the Interpolated Precision $p_{interp}$. Precision-recall curves have a distinctive saw-tooth shape: if the $(k+1)^{th}$ retrieved document is non-relevant, recall is the same as for the previous $k$ documents, but precision drops. If the $(k+1)^{th}$ document is relevant, both precision and recall increase, and the curve moves toward the top right of the precision-recall chart. This produces jiggles in the curve that are often useful to remove in order to ease the comparison of different precision-recall curves. This is done by computing the interpolated precision curve. $p_{interp}$ at a certain recall level $r$ is defined as the highest precision found for any recall level $r' \geq r$:

\begin{displaymath}
 p_{interp}\,(r) = \displaystyle\max\limits_{r' \geq r} p(r')
\end{displaymath}

\noindent The Interpolated Precision curve describes one ranked list only (i.e. one query). To evaluate the quality of a search engine, one often needs to obtain a single curve from the curves of several test queries. The traditional way of doing this is the 11-point Interpolated Average Precision. For each information need (i.e. each test query), the interpolated precision is measured at the 11 recall levels $0.0, 0.1, 0.2, \ldots, 1.0$. Then, for each recall level, the arithmetic mean of the interpolated precision values of the individual queries at that level is calculated.
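Both steps, the $\max$-based interpolation and the averaging over the 11 recall levels, can be sketched as follows. This is a minimal sketch assuming each query's precision-recall curve is given as a list of $(recall, precision)$ pairs; the function names are hypothetical.

```python
def interpolated_precision(pr_points, r):
    """p_interp(r) = max precision over all points with recall >= r.
    pr_points: list of (recall, precision) pairs for one ranked list."""
    candidates = [p for rec, p in pr_points if rec >= r]
    return max(candidates) if candidates else 0.0

def eleven_point_iap(per_query_pr):
    """Average the interpolated precision of each query at the
    11 recall levels 0.0, 0.1, ..., 1.0."""
    levels = [i / 10 for i in range(11)]
    n = len(per_query_pr)
    return [
        sum(interpolated_precision(pr, r) for pr in per_query_pr) / n
        for r in levels
    ]

# One query whose curve reaches precision 1.0 at recall 0.5
# and precision 0.5 at recall 1.0.
curve = eleven_point_iap([[(0.5, 1.0), (1.0, 0.5)]])
print(curve)  # 1.0 for levels up to 0.5, then 0.5 up to 1.0
```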

\paragraph{Mean Average Precision - MAP}
Mean Average Precision derives from Average Precision (AP). AP provides a single number instead of a curve. It measures the quality of the system at all recall levels by averaging, for a single query, the precision observed at the rank of each relevant document:

\begin{displaymath}
 AP = \frac{1}{RDN} \times \displaystyle\sum\limits_{k=1}^{RDN} (Precision \, at \, rank \, of \, k^{th} \, relevant \, document)
\end{displaymath}

\noindent where $RDN$ is the number of relevant documents in the collection.
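The AP formula can be sketched with a single pass over the ranked list. This is a minimal sketch assuming binary judgments in rank order and a separately known $RDN$; the function name is hypothetical. Relevant documents that are never retrieved contribute zero, since the sum runs over retrieved positions while the denominator is $RDN$.

```python
def average_precision(relevance, total_relevant):
    """relevance: binary judgments (0/1) of the ranked list, rank 1 first.
    total_relevant: RDN, the number of relevant documents in the collection."""
    score, hits = 0.0, 0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / rank  # precision at the rank of this relevant doc
    return score / total_relevant if total_relevant else 0.0

# Relevant docs at ranks 1 and 3, RDN = 2:
# AP = (1/1 + 2/3) / 2 = 5/6
print(average_precision([1, 0, 1], 2))
```

MAP is then just the arithmetic mean of this value over the test queries.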

Mean Average Precision (MAP) is the mean of Average Precision over all queries. Most frequently, arithmetic mean is used over the query set.

\paragraph{Mean Reciprocal Rank - MRR}
Mean Reciprocal Rank (MRR) is the reciprocal of the rank of the first relevant result, averaged over the test queries:

\begin{displaymath}
 MRR = \frac{1}{Q} \times \displaystyle\sum\limits_{q=1}^{Q} \frac{1}{rank(1^{st} \, relevant \, result \, of \, query \, q)}
\end{displaymath}

\noindent where $Q$ is the number of test queries.
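The MRR formula above can be sketched as follows. This is a minimal sketch assuming the 1-based rank of the first relevant result is known for each query; the function name is hypothetical, and the common convention of counting a query with no relevant result as zero is an assumption.

```python
def mean_reciprocal_rank(first_relevant_ranks):
    """first_relevant_ranks: 1-based rank of the first relevant result
    per query; None when a query returned no relevant result
    (assumed to contribute 0 to the mean)."""
    rr = [1 / r if r else 0.0 for r in first_relevant_ranks]
    return sum(rr) / len(rr)

# Query 1 answered at rank 1, query 2 at rank 3:
# MRR = (1/1 + 1/3) / 2 = 2/3
print(mean_reciprocal_rank([1, 3]))
```

MRR is especially suited to tasks with a single correct answer per query (e.g. known-item search or question answering), where only the first relevant result matters.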
