\section{Transfer learning}
\label{sec:transferLearning}


Following the notation of \cite{Arnold:08}, in a traditional supervised machine learning setting, a classifier assigns to an example $x$ a probability $p(y|x)$ of belonging to class $y$.
In a binary classification setting, the labels are $y \in \{0,1\}$ and each example $x_i$ is represented
as a vector of binary features $(f_1(x_i), \ldots, f_F(x_i))$, where $F$ is the number of features.
The data consist of two disjoint subsets: the training set
$(X_{train}, Y_{train}) = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, which is used
to learn a model, and a test set $X_{test} = (x_1, \ldots, x_M)$, which is used to
evaluate the performance of the model.
In this setting, it is assumed that $X_{train}$ and $X_{test}$ are drawn from the same
distribution.
In transfer learning, the trained classifier is applied to examples drawn from a distribution different from the one it was trained on.
We therefore assume two different distributions, $D^{source}$ and $D^{target}$.
Given this notation, transfer learning is stated as the task of assigning labels $Y^{target}_{test}$ to test data $X^{target}_{test}$ drawn from $D^{target}$, given
training data $(X^{source}_{train}, Y^{source}_{train})$ drawn from $D^{source}$.
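As a minimal illustration of this setting (a toy sketch, not an experiment from this work), the following NumPy snippet trains a one-dimensional logistic regression on data sampled from a hypothetical $D^{source}$ and evaluates it on a mean-shifted $D^{target}$; all distribution parameters and function names here are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample (x, y) pairs: class means at -1 and +1, optionally shifted.
# The shift parameter simulates the change from D^source to D^target.
def sample(mean_shift, n):
    y = rng.integers(0, 2, n)                    # binary labels in {0, 1}
    x = rng.normal(loc=2.0 * y - 1.0 + mean_shift, scale=0.7, size=n)
    return x.reshape(-1, 1), y

X_src, y_src = sample(0.0, 500)   # training data drawn from D^source
X_tgt, y_tgt = sample(1.5, 500)   # test data drawn from a shifted D^target

# Fit logistic regression on the source data by gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    z = X_src[:, 0] * w + b
    p = 1.0 / (1.0 + np.exp(-z))                 # p(y=1 | x)
    w -= 0.5 * np.mean((p - y_src) * X_src[:, 0])
    b -= 0.5 * np.mean(p - y_src)

def accuracy(X, y):
    return np.mean(((X[:, 0] * w + b) > 0).astype(int) == y)

print(f"source accuracy: {accuracy(X_src, y_src):.2f}")
print(f"target accuracy: {accuracy(X_tgt, y_tgt):.2f}")
```

The decision boundary learned on $D^{source}$ no longer matches the shifted target classes, so accuracy drops on the target test set; this gap is what transfer learning methods try to close.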

As stated by \newcite{Yosinski:2014}, in a deep learning architecture, the usual transfer approach is to train a \textit{source network} and then copy its first \textit{n} layers to the first \textit{n} layers of a \textit{target network}.
The remaining layers are then randomly initialized and trained on the target task.
The transferred feature layers can be \textit{fine-tuned} to the new task by back-propagating the errors from the new task into the copied features, or they can remain \textit{frozen}, which means that they do not change during training on the new task.

The intuition is that, when the target data set is small and the number of parameters is large, fine-tuning may result in over-fitting, so in this setting the transferred features should remain \textit{frozen}.
When the target data set is large and the number of parameters
is small, over-fitting is not a problem and the features can be fine-tuned to the new task.
Finally, if the target data set is very large, there is little need for transfer, because the network can be trained from scratch on the target training data.
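The copy-then-freeze-or-fine-tune scheme above can be sketched with a tiny two-layer network in plain NumPy; the weights, shapes, and the \texttt{freeze\_transferred} flag are all hypothetical choices for illustration, not part of any specific architecture from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained source network: two dense layers.
# In practice these weights would come from training on the source task.
source_W1 = rng.normal(scale=0.1, size=(4, 8))
source_W2 = rng.normal(scale=0.1, size=(8, 2))

# Transfer: copy the first layer, randomly re-initialize the output layer.
W1 = source_W1.copy()                     # transferred feature layer
W2 = rng.normal(scale=0.1, size=(8, 2))   # new task-specific head
freeze_transferred = True                 # False would mean fine-tuning

# A toy target-task batch.
x = rng.normal(size=(16, 4))
y = rng.integers(0, 2, 16)

# Forward pass.
h_pre = x @ W1
h = np.maximum(h_pre, 0.0)                # ReLU
logits = h @ W2
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)         # softmax probabilities

# Backward pass (softmax cross-entropy gradients).
grad_logits = p.copy()
grad_logits[np.arange(len(y)), y] -= 1.0
grad_logits /= len(y)
grad_W2 = h.T @ grad_logits
grad_W1 = x.T @ ((grad_logits @ W2.T) * (h_pre > 0))

# Update: the new head always trains; the transferred layer is updated
# only when fine-tuning, i.e. when errors are back-propagated into it.
lr = 0.1
W2 -= lr * grad_W2
if not freeze_transferred:
    W1 -= lr * grad_W1
```

With \texttt{freeze\_transferred = True} the copied layer stays byte-identical to the source weights after the update, which is exactly what "frozen" means here; flipping the flag turns the same step into fine-tuning.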



















