%\textcolor{red}{Our 12x subsampling helps generate networks that are more different from one another than we could achieve by simply using the first fifth, etc.}
%\textcolor{red}{Write that we used the test set to stop training and optimize performance, and that we used all the data without overfitting by using all 5 evaluation sets.}

Ensemble methods derive their power from combining many different
predictors rather than relying on a single one. The more independent the predictors are, the
better the ensemble performs on average. The diversity of the neural networks can be increased by random subsampling of the training
data, as well as by varying combinations of encoding
scheme, random seed, and number of hidden neurons. This is evident from the broad correlation density in fig. \ref{fig:correlation_density}.
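As a rough sketch, the diversity-generating combinations described above can be enumerated as a cross product of hyperparameters plus a random subsample per ensemble member. All names, value lists, and the subsampling fraction below are illustrative, not the exact configuration used in this work:

```python
import itertools
import random

# Illustrative hyperparameter grid (not the exact setup used in this work):
# members differ by encoding scheme, hidden-layer size, and random seed.
ENCODINGS = ["sparse", "blosum"]
HIDDEN_NEURONS = [1, 2, 5, 10]
SEEDS = [0, 1, 2]

def member_configs():
    """Yield one (encoding, n_hidden, seed) tuple per ensemble member."""
    return itertools.product(ENCODINGS, HIDDEN_NEURONS, SEEDS)

def subsample(data, fraction, seed):
    """Draw a reproducible random subsample of the training data."""
    rng = random.Random(seed)
    return rng.sample(data, max(1, int(len(data) * fraction)))

configs = list(member_configs())  # 2 encodings x 4 sizes x 3 seeds
```

Random subsampling, unlike taking a fixed fifth of the data, gives each member its own view of the training set, which is what drives the decorrelation the ensemble relies on.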

Using one fifth of the data as evaluation data, an unbiased estimate of the
predictive performance can be calculated. Using each holdout of the data
as the evaluation set in turn allows all data to be used for evaluation and gives an
estimate of the variance of the predictive performance.
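The rotation of holdouts can be sketched as follows; the fold contents are dummy data standing in for the five fifths of the binding data:

```python
def rotate_holdouts(folds):
    """For k folds, yield (evaluation_fold, training_folds) pairs so that
    every fold serves as the evaluation set exactly once."""
    for i, fold in enumerate(folds):
        yield fold, [f for j, f in enumerate(folds) if j != i]

# Five dummy folds; in practice each would hold one fifth of the peptides.
folds = [list(range(i * 10, (i + 1) * 10)) for i in range(5)]
splits = list(rotate_holdouts(folds))
```

Because every data point appears in exactly one evaluation fold, the five per-fold performance estimates are computed on disjoint data, which is what makes the variance estimate meaningful.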

Using cross-validation with a separate test set greatly reduces model overfitting, since training is stopped when the test set predictions no longer improve. The high evaluation set PCC in fig. \ref{fig:test_eval_cor}, compared to the test set PCC, suggests that the networks are not overfitted: overfitting would yield a decrease in evaluation correlation relative to the best test set correlation found during training.
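The early-stopping rule can be sketched as below. The patience value and the `test_score` callback are illustrative stand-ins (evaluating the current weights on the test set after each epoch), not documented parameters of this work:

```python
def train_with_early_stopping(max_epochs, test_score, patience=150):
    """Stop training once the test-set score has not improved for
    `patience` epochs; return the best epoch and its score, i.e. the
    point whose weights one would keep.  `test_score(epoch)` stands in
    for evaluating the network on the test set after that epoch."""
    best_epoch, best_score = 0, float("-inf")
    for epoch in range(max_epochs):
        score = test_score(epoch)  # e.g. PCC on the test set
        if score > best_score:
            best_epoch, best_score = epoch, score
        elif epoch - best_epoch >= patience:
            break  # no improvement seen for `patience` epochs
    return best_epoch, best_score
```

The key point is that the weights kept are those from the best test-set epoch, not the last epoch trained, so continued training past the peak cannot hurt the stored model.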



%\textcolor{red}{Add a comment on why 1 hidden neuron performs worse, and on why sparse encoding is best in some sets while BLOSUM is better in others.}
In fig. \ref{fig:neurons_effect} it can be seen that increasing the number of
hidden neurons has little effect on the predictive performance of the
network. Networks with a single hidden neuron perform slightly worse than multi-neuron networks, reflecting their inability to model higher-order
correlations. As can be seen in fig. \ref{fig:encoding_evalcor}, the
encoding has a large effect. Although the difference between the encoding
schemes varies, sparse encoding outperforms BLOSUM encoding in 4 out of 5 runs. BLOSUM encoding generally performs better on limited datasets \citep{nielsen2003reliable}; however, this study contains enough data for that advantage not to apply.
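For concreteness, sparse encoding maps each residue to a 20-dimensional indicator vector, so a 9-mer becomes a 180-dimensional network input; BLOSUM encoding would instead use the residue's row of (scaled) substitution scores, omitted here for brevity. The peptide below is an arbitrary illustrative 9-mer:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def sparse_encode(peptide):
    """Sparse (one-hot) encoding: each residue becomes a 20-dim indicator
    vector, so a 9-mer peptide maps to a 180-dim network input."""
    vec = []
    for aa in peptide:
        vec.extend(1.0 if aa == ref else 0.0 for ref in AMINO_ACIDS)
    return vec

x = sparse_encode("ILKEPVHGV")  # arbitrary 9-mer for illustration
```

Sparse encoding treats every amino acid as equally distinct, whereas BLOSUM encoding builds in prior knowledge of residue similarity; with enough training data the network can learn such relationships itself, consistent with the observation above.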

%\textcolor{red}{Bias in the data. The choice of evaluation set strongly affects performance (0.75 - 0.81 correlation). Our ensemble is good in all of them.}
The performance of both the ensembles and the individual networks varies
with the holdout dataset, as can be seen in fig. \ref{fig:correlation_density}. The level of predictive performance most likely depends on how
representative the training and test set data are of the evaluation set. The
correlations are not Gaussian distributed, most likely because of
differences caused by the encoding, which result in two peaks.


%\textcolor{red}{Comment on the correlation between test set performance and evaluation set performance, generally not that good. Mention the light blue outlier in holdout 5.}

One way to improve the ensemble could be to remove networks that
consistently perform poorly. Networks with only one hidden neuron would be
the first candidates for removal. Furthermore, one could envision removing networks that
perform poorly on the test set. However, as seen in fig. \ref{fig:test_eval_cor}, the
correlation between test set performance and evaluation set performance is
highly dependent on the holdout used, and while some positive correlation is seen in all cases, such a cut-off would likely remove good networks as well. We therefore do not expect this measure to improve the ensemble dramatically; validating it would require further testing.
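Such a cut-off could be sketched as below; the member list and threshold are made up, and, as argued above, any fixed threshold on test-set PCC risks discarding networks that would have generalised well:

```python
def prune_by_test_pcc(members, cutoff):
    """Keep only ensemble members whose test-set PCC meets the cutoff.
    `members` is a list of (name, test_pcc) pairs."""
    return [(name, pcc) for name, pcc in members if pcc >= cutoff]

# Hypothetical members and scores, purely for illustration.
members = [("net_a", 0.78), ("net_b", 0.41), ("net_c", 0.66)]
kept = prune_by_test_pcc(members, cutoff=0.6)
```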

One curious observation is the distinct cloud of points in subset 9
of holdout 5. Its high test set correlation is most likely caused by the
training set being very similar to the test set, but this
does not translate into a high correlation with the evaluation set.

%\textcolor{red}{Looking at test set correlation as a function of the number of neurons, encoding and amount of training. We see that 2 neurons with sparse encoding is enough.}
%\textcolor{red}{We choose the condensed network based on performance on the test set; above we just concluded that this is not such a good idea, cf. the correlation coefficients... Better ways to do it: generate sequences on 3/5 training, stop weight updates on 1/5, select networks on 1/5. We did not do this for time reasons. Alternatively, do not train on real data, but test on it?}

Choosing the number of neurons in the condensed network based on fig. \ref{fig:con_testset} is not the optimal solution, but was done due to time constraints: as discussed above, performance on the test set is a poor indicator of performance on the evaluation set.

To evaluate whether a larger exploration of the solution space would help in constructing the condensed network, a library of 1,000,000 naturally occurring 9-mer peptides was used as the training set, with 4/5 of the HLA-A*02:02 peptide binding data used as the test set. This resulted in a condensed network able to predict the original evaluation set with PCC values identical (to three decimals) to those of the condensed network trained on real binding data.

A superior method for choosing the best condensed network would be to generate a new pseudo-dataset containing thousands of peptides. Predictions for these peptides can then be generated by both the ensemble and each candidate condensed network, and compared to find the condensed network closest to the ensemble. This procedure can be repeated as many times as necessary until a single best condensed network has been isolated, without fear of overfitting.
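A minimal sketch of this selection step, assuming each candidate's predictions on the same pseudo-peptides are already available, picks the candidate whose predictions correlate best with the ensemble's:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_condensed(ensemble_preds, candidates):
    """Return the name of the candidate condensed network whose
    pseudo-peptide predictions agree best with the ensemble's.
    `candidates` maps name -> predictions on the same peptides."""
    return max(candidates,
               key=lambda name: pearson(candidates[name], ensemble_preds))
```

Since the comparison only reuses the ensemble's own predictions rather than measured binding data, repeating it does not consume or overfit the real labels.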