\section{Introduction}
\label{sec:intro}
There are two main approaches to building models for named entity recognition (NER): i) training sequence labelling models on a large manually labelled corpus~\cite{}; ii) exploiting knowledge bases to recognise entity mentions in text~\cite{}, usually as part of named entity disambiguation. The former is restricted to the set of manually labelled named entity types, while the latter requires the target named entities to exist in a knowledge base. For many privacy-sensitive or social media applications, no knowledge base covers most of the target named entities, and constructing a large training corpus is too time-consuming and expensive. Therefore, in this paper, we aim to leverage large training corpora in source domains to learn new named entity types in target domains, where only a small amount of training data is available.

%The challenges of supervised machine learning for Information Extraction on the $21^{th}$ century are scalability and domain adaptation.
%With scalability we refer to the ability to recognized new categories (or labels), not only the ones that were pre-defined before hand, but also new categories not seen during training.   Flexibility refers to the ability to adapt to new domains, with a reasonable effort.
%This is the case, for example, of Named Entity Recognition (NER) systems, which are usually domain dependent and task specific.  
%Named Entities Recognition deals with finding sequences of words in a text,
%which are usually not found in dictionaries, and with labelling those sequences 
%according to a pre-defined typology of entity types such as person and organization names, diseases, dates, quantities, among others.
%The state-of-the-art Named Entity Recognition systems relies in supervised machine learning techniques, thus utilizing a data set annotated with a finite set of Named Entity (NE) types. 
%The drawbacks of this approach are:

%\begin{description}
%\item Training set annotation: having a data set annotated with NEs is time consuming;
%% and require the expertise of a linguist;
%\item Training set size: in order to be useful, the annotated data set have to contain a big number of NE instances;
%\item Domain dependent: the performance of a NER based on supervised machine learning heavily depends on the data used during training (e.g., genre, domain, language, etc.); 
%\item Pre-defined typology: the entity types are classify according to a pre-defined typology and only the entity types seen during training will be recognized;
%\item Scalability: the recognition of a new entity type required the human annotation; 
%\end{description}

While the problems of domain adaptation and the lack of annotated resources for NER have recently been studied \cite{Maynard:01,TjongKimSang:2002,jiang:2007,Arnold:08,Chiticariu:2010,Collins:99,Kim:2002,Liao:2009}, little effort has been devoted to extending NER systems to new entity types.
%Why and how NE types change across domains and tasks?}
%Besides the motivation to overcome the drawbacks of NER based on supervised learning, there are two facts that encourages the need of learning new NE types:
%
%\begin{description}
%\item NE types changes across domains: a NE type might not be considered in a particular domain, for example, names of genes might not be considered in a NER applied to tweets or news-wire articles. 
%\item NE types changes across tasks: NE types are considered to be hierarchical (e.g., location $>$ building $>$ room) and a given application might require a particular granularity. 
%\end{description}

In this paper, we propose a deep-learning-based model, \textit{DNNSeq}, for learning new named entity types in target domains using only a small amount of labelled training data. We first train a deep-learning-based model on the source domain. We then transfer the knowledge encoded in the trained model to models for the new entity types by exploiting the semantic relatedness between the known and the new named entity types.
%Transfer learning have been seen as a means to overcome some limitations of fully-supervised learning, by transferring the information gained in a learning task \textit{A} to solve another learning task \textit{B}. 
%In our research, we trained a deep neural network for NER that comprises a set of NE types (\textit{source corpora}) and transfer the learned features to a new network that comprises a different or new set of NE types (\textit{target corpora}).
%The intuition behind this idea is that, features learned in the upper layers of the neural network are general features that can be used in a transfer learning set-up, thus alleviating the drawbacks of supervised machine learning approaches, which are traditionally domain and task specific.

%In a transfer learning setting, the feature weights from the source model can be transfer to the target model by using different label matching strategies. 
%In our research, we explored four different matching strategies: random matching, manual matching, closest class weight vector matching, and word embedding cluster center matching. 

%Along the experiments, we also study the influence of the deep learning architecture, which is evaluated by incrementally adding hidden layers to the neural network.

%The contribution of the paper are the following:
%\begin{itemize}
%\item 
%\item 
%\end{itemize}

%Transfer learning can be the key for the development of applications for language without resources


