\section{Motivation:}
Ensemble neural network methods can achieve superior predictive performance relative to a single neural network trained on the same data set. The price is high computation time at prediction, since every ensemble member must be evaluated. A large pool of ensemble predictions on unrelated data can be used to train a single network to mimic the ensemble, which could drastically reduce computation time without sacrificing predictive power.
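The condensation idea above can be sketched minimally: query the ensemble on unlabeled inputs, then fit a single model to the ensemble's averaged outputs. The sketch below is a hypothetical illustration using linear predictors in place of neural networks; the variable names and the least-squares fit are assumptions for brevity, not the method used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled inputs on which the ensemble is queried (stand-in data).
X = rng.normal(size=(500, 8))

# An ensemble of randomly perturbed linear predictors
# (linear models stand in for trained neural networks).
base_w = rng.normal(size=8)
ensemble = [base_w + 0.1 * rng.normal(size=8) for _ in range(10)]

# The ensemble prediction is the average over all members.
y_ens = np.mean([X @ w for w in ensemble], axis=0)

# "Condense" the ensemble: fit one model to mimic its predictions.
w_cond, *_ = np.linalg.lstsq(X, y_ens, rcond=None)

# The condensed model reproduces the ensemble output at a fraction of
# the cost: one matrix product instead of ten.
err = np.max(np.abs(X @ w_cond - y_ens))
print(f"max deviation from ensemble: {err:.2e}")
```

Because an average of linear models is itself linear, the condensed model here matches the ensemble exactly; with neural networks the mimic is only approximate, which is why the condensed network's performance can fall slightly below the ensemble's.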

\section{Results:}
The predictive performance of the condensed network was slightly lower than that of the ensemble, and on par with that of the online prediction server EasyPred. An improved method for selecting the final condensed network could raise performance further, closing the gap between the ensemble and the condensed network.