\section{Evaluation}
\label{sec:eval}
\subsection{Non-expert agreement with experts}
One natural way to evaluate the non-expert performance is to measure
the agreement between the non-expert labels and the expert labels.
Again, we use Cohen's kappa~\cite{Cohen60}.
Recall from Section~\ref{sec:expt_label} that the marine biologists often
disagree among themselves on the species names for a given image.
If the experts themselves cannot reach full agreement, it is unreasonable
to require the non-experts to achieve an extremely high agreement with the
experts. We therefore ask: how does the agreement between non-experts and
experts compare to the agreement among the experts?

We measure the agreement between the aggregated non-expert labels and
each of the three experts, and compare it to the pairwise agreement
among the experts. When aggregating by majority voting, we take the
top-ranked candidate as the chosen label. In the case of ties, we use
the same approach as described in Section~\ref{sec:expt_label} to
calculate the agreement.
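As a minimal sketch of this computation, assuming hypothetical species names and vote counts (and arbitrary tie-breaking, whereas the paper resolves ties as in Section~\ref{sec:expt_label}):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators over the same images."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability both pick the same category independently.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

def majority_vote(candidate_votes):
    """Top-1 candidate from non-expert vote counts; ties broken arbitrarily here."""
    return max(candidate_votes, key=candidate_votes.get)

# Hypothetical non-expert vote counts for four images, and one expert's labels.
non_expert = [majority_vote(v) for v in (
    {"sp_a": 5, "sp_b": 1}, {"sp_b": 3, "sp_c": 3},
    {"sp_c": 4}, {"sp_a": 2, "sp_c": 1},
)]
expert = ["sp_a", "sp_b", "sp_c", "sp_c"]
print(round(cohens_kappa(non_expert, expert), 3))  # → 0.636
```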

\subsection{Non-expert performance in terms of NDCG}
\label{subsec:eval_ndcg}
While the agreement analysis provides insight into the alignment
between non-experts and experts, it does not give an intuitive
indication of how correct the labels obtained from the non-experts
are. Further, we do not have a principled way to handle the
multi-label situation with Cohen's $\kappa$.

We therefore also evaluate the non-expert labels using
NDCG~\cite{Jarvelin02:ndcg}, which handles multi-label situations and
provides a more intuitive interpretation of the correctness of the
labels. For a query image, each candidate label is rated 0, 1, 2, or
3, according to the number of biologists who assigned that label. The
ranked list of candidates generated by the (majority) voting
aggregation method is then evaluated against these graded expert
judgements.
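The paper does not state which NDCG variant is used; a sketch with the common linear-gain, $\log_2$-discount formulation, on hypothetical vote counts:

```python
import math

def dcg(gains):
    """Discounted cumulative gain with the standard log2 rank discount."""
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(ranked_gains):
    """NDCG of a ranked candidate list; gains are expert votes in {0,...,3}."""
    ideal = dcg(sorted(ranked_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal > 0 else 0.0

# Hypothetical: candidates ranked by aggregated non-expert votes;
# each gain is the number of experts (0-3) who assigned that label.
print(round(ndcg([2, 3, 0, 1]), 3))  # → 0.908
```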

\subsection{Learning behaviour}

We investigate two types of learning behaviour: 1)~memorization:
whether a player's performance improves over time when the same image
is shown repeatedly (but not consecutively); and 2)~generalization:
whether a player's performance improves across different images that
belong to the same species.

We measure the performance of a single label as follows.
Let $L=\{l_k\}_{k=1}^K$ be the candidate labels for an image,
$J(l) \in \{0, 1\}$ be the judgement given by a player,
and $E(l) \in \{0, 1, 2, 3\}$ be the number of expert votes for label $l$ on the image.
The performance of a single judgement is computed as
%
\begin{equation}
 s =  \frac{\sum_{l \in L} J(l) \cdot E(l)}{\max_{l \in L} E(l)}. 
 \label{eq:score}
\end{equation}
%
That is, $s$ is the number of expert votes for the selected candidate,
normalized by the maximum number of votes achievable within the
candidate set $L$.
%
Since scores achieved at a certain time point can be sensitive to the
players' random errors, we smooth the score at each time point by
averaging over the scores achieved so far, i.e.,
$s_t = \frac{1}{t}\sum_{i=1}^{t} s_i$.
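A sketch of the per-judgement score and the running-mean smoothing, with hypothetical candidate names and expert vote counts:

```python
def label_score(selected, expert_votes):
    """Score of the equation above: expert votes received by the selected
    candidate(s), normalized by the maximum votes in the candidate set."""
    top = max(expert_votes.values())
    hit = sum(expert_votes[l] for l in selected)
    return hit / top if top > 0 else 0.0

def smoothed(scores):
    """Running mean: s_t = (1/t) * sum_{i=1..t} s_i."""
    out, total = [], 0.0
    for t, s in enumerate(scores, start=1):
        total += s
        out.append(total / t)
    return out

# Hypothetical expert votes for one image's candidate list.
votes = {"sp_a": 3, "sp_b": 1, "sp_c": 0}
raw = [label_score(["sp_b"], votes), label_score(["sp_a"], votes)]
print(smoothed(raw))  # → [0.333..., 0.666...]
```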

In practice, $t$ refers to the $t$-th time a player labels the same
image (or, for generalization, a different image of the same species).
If a player has labeled an image $i$ (or images from species $i$)
$t$ times, we call it a repetition (or generalized repetition) with
$t$ labels. By comparing the scores defined above at different $t$,
we can observe whether the non-expert performance improves over time.

In order to have images shown multiple times to a player, in each
session we randomly select the first 12 images without repetition.
After that, with probability 0.5, we select an image from those
already labeled in the current session.
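The session-level selection procedure could be sketched as follows; the pool size and image ids are hypothetical, and the paper does not specify how the non-repeated images are drawn:

```python
import random

def next_image(pool, seen, rng):
    """Session sampling: the first 12 images are drawn without repetition;
    afterwards, with probability 0.5 an already-labeled image is repeated."""
    if len(seen) >= 12 and rng.random() < 0.5:
        return rng.choice(seen)                      # repetition
    unseen = [img for img in pool if img not in seen]
    return rng.choice(unseen)

rng = random.Random(0)
pool = list(range(40))                               # hypothetical image ids
seen = []
for _ in range(20):
    seen.append(next_image(pool, seen, rng))
print(len(set(seen[:12])))  # → 12 (no repetition early in the session)
```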
Since images are selected randomly, repetitions and generalized
repetitions do not occur the same number of times. E.g., in Expr.~1 we
have 325 cases of repetition with 2 labels, but only 14 cases of
repetition with 5 labels. In order to conduct reliable statistical
testing for comparison (the Wilcoxon rank-sum test~\cite{Wilcoxon45}
in this case), we only consider repetitions with more than 30 cases.
We set $t = 1, \ldots, 4$ for memorization in both experiments;
$t = 1, \ldots, 25$ for generalization in Expr.~1 and $t = 1, \ldots,
10$ in Expr.~2, as fewer sessions were played in Expr.~2 and therefore
fewer repetitions are available.
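In practice one would use an existing implementation such as \texttt{scipy.stats.ranksums}; the statistic itself, together with the more-than-30-cases filter, can be sketched as follows (all scores below are simulated, not the paper's data):

```python
import random

def ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def rank_sum(x, y):
    """Wilcoxon rank-sum statistic: sum of the pooled ranks of sample x."""
    r = ranks(list(x) + list(y))
    return sum(r[:len(x)])

# Simulated smoothed scores grouped by repetition count t;
# group sizes mirror the counts reported for Expr. 1.
rng = random.Random(0)
groups = {2: [rng.random() for _ in range(325)],
          5: [rng.random() for _ in range(14)]}
usable = {t: s for t, s in groups.items() if len(s) > 30}  # keeps only t=2
print(sorted(usable))  # → [2]
```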
