\section{Related work}
\label{sec:relwork}

Early work on transfer learning \cite{Thrun95lifelonglearning,Thrun:96,Baxter:97}
highlighted the possibility of exploiting knowledge acquired in a previous learning task to solve a new task, as a way to overcome the limitations of machine learning algorithms, which cannot generalize from small training sets.
%object recognition domain

Transfer learning is considered supervised when the data in the target domain is labelled, for instance, \cite{•}, and unsupervised when the data in the target domain is unlabelled, for example, \cite{Arnold:07}.
Other methods that deal with the lack of labelled data are semi-supervised and unsupervised classification; however, most of them assume that the labelled and unlabelled data come from the same distribution.
Another related, but different, learning approach is multi-task learning \cite{Caruana:97}, in which the aim is to learn the \textit{source} and \textit{target} tasks simultaneously, and to which transfer learning techniques can also be applied \cite{Arnold:08}.


Recently, \newcite{Yosinski:2014} investigated how transferable the features in a deep neural network trained on the ImageNet data set are.
In their work, they proposed a methodology to distinguish general from specific features
and tested several transfer approaches (joint training, cross-training and pre-training), with frozen vs. fine-tuned feature weights.
Their most remarkable result is that pre-training on a source data set and then training the network on the target data set gives the best performance.

%joint training of a network  leads to co-adaptation if too many layers are kept frozen, so that performance decreases even if the same part of the training set is used to re-train.

%Cross-training between two different training sets and keeping too much of the original network leads to even more loss of performance when too many layers are kept.

%And an exciting result is that pre-training on another training set, then training the whole network on the final training set, gives the best performance of all.

We consider our method to be a pre-training approach to transfer learning, in contrast to the cross-training and joint-training methods also studied by \cite{Yosinski:2014}.


%\subsection{Transfer learning for NER}
The main scenario where transfer learning has been applied
for NER is domain adaptation \cite{Arnold:08,Maynard:01,Chiticariu:2010}, where it is assumed that the set of possible labels \textit{Y} is the same for both the \textit{source corpora} and the \textit{target corpora}, while the corpora themselves are allowed to vary between domains.
\newcite{Arnold:08} used a hierarchical prior structure to help transfer learning for
domain adaptation and multi-task learning. In the multi-task setting, the authors learn to recognize person names by adding additional labelled data with person names, and additional data with protein names. In other words, a model is tuned on one domain's data, using a prior trained on a different, but related, domain, following the method proposed by \newcite{Chelba:04} for transfer learning in Maximum Entropy models.
In contrast to our work, \newcite{Arnold:08} aim to improve the performance of person-name identification by exploiting other corpora, whereas our aim is to learn a set of entity types by using the feature weights from a different, but related, set of entity types.

Interestingly, \newcite{Sutton:05} investigated how the \textit{target} task affects the \textit{source} task. The authors trained a cascade of models independently on various training sets and, at test time, combined them into a single model in which decoding is performed jointly. In their experiments, they trained a NER system on the ACE data set, using the output of a NER system trained on the \Conll data set, and demonstrated that decoding with transfer is better than no transfer, and that joint decoding is better than cascaded decoding, achieving performance comparable with the state of the art \cite{Florian:04}.



\gabi{Do we want to talk about NER system that also used word embeddings?}


\iffalse
% More about Domain adaptation
\cite{Maynard:01} proposed a multi-purpose rule-based NER system.
Their system is designed to process multiple types of text
(books, emails, periodicals, dialogue and miscellaneous) and is composed
of three modules: a tokenizer, gazetteers and a grammar (hand-crafted rules describing patterns that match NEs).
Their aim is an adaptable NER system that, given the domain of the text, is able to switch specific features, such as gazetteers,
on and off for that particular domain.

Similarly, \cite{Chiticariu:2010} present a rule-based NER system that can easily be adapted to a new domain.
\fi
