\section{\textit{DNNSeq} for Learning New NE Types}
\label{sec:DNNSeq}

In this section, we present our approaches to learning new named entity types from a small amount of training data by leveraging a large training corpus annotated with related but different named entity types. We denote the set of known named entity types by $K$ and the set of new named entity types by $N$. The large training corpora annotated with known named entity types are referred to as \textit{source corpora}, while the small training corpora for the new named entity types are called \textit{target corpora}. 
There could be a variety of relationships between the entities in the source and target corpora. In this paper, we focus on the following relationships:

\begin{enumerate}
\item Exact match: A named entity type $t \in N$ is an exact match of a type $t' \in K$, i.e.\ $t = t'$.

\item Subsumption: A named entity type $t \in N$ is a subclass of some $t' \in K$, such as \textit{company} to \textit{organisation}. We do not consider the reverse relationship because it is trivial: if several types $t' \in K$ are subclasses of $t \in N$, any constituent recognised as such a $t'$ can simply be relabelled with its supertype $t$.

\item Co-occurring NE types: A named entity type $t \in N$ and a named entity type $t' \in K$ are frequently mentioned in the same sentence; for example, \textit{Dr.} frequently co-occurs with \textit{person}.

\item New named entity types $t \in N$ with numerical patterns: such named entity types, for example \textit{date} and \textit{time}, can be characterised by a set of regular expressions.
\end{enumerate}
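For the last relationship, a minimal sketch of how numerical-pattern NE types might be characterised by regular expressions is given below. The patterns and function names are illustrative assumptions, not the expression sets used in our experiments:

```python
import re

# Illustrative patterns only; a real system would need a larger,
# corpus-specific pattern set per NE type.
NUMERIC_NE_PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b|\b\d{4}-\d{2}-\d{2}\b"),
    "time": re.compile(r"\b\d{1,2}:\d{2}(:\d{2})?\b"),
}

def match_numeric_ne(token):
    """Return the first numeric NE type whose pattern matches, else None."""
    for ne_type, pattern in NUMERIC_NE_PATTERNS.items():
        if pattern.search(token):
            return ne_type
    return None
```

In this sketch a token such as \texttt{2016-03-01} would be labelled \textit{date} and \texttt{12:30} would be labelled \textit{time}, without any annotated training examples for those types.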


In the following, we introduce a deep sequence model, \textit{DNNSeq}, to tackle the problem when the relationships between known and new named entity types are given. If the relationship between a new named entity type and the known named entity types is unknown, we propose two methods to find the known named entity type that has the strongest relationship to the new type.



\subsection{Transfer Learning with \textit{DNNSeq}}
Our model \textit{DNNSeq} aims to maximise the use of unlabelled data and of all available labelled training data. At the core of this model is a deep neural network, which we first train on the source corpus and then adapt on the target corpus to the new NE types and their new features.

\subsection{Variants of Deep NNs}
We start by experimenting with a graph transformer with three layers. At the bottom is a layer consisting of word representations. 
For each word position $i$ in a sentence, considering $k$ words to the left and $k$ words to the right of the current word, this layer is the concatenation of word embeddings $\mathbf{X}_i = [\mathbf{x}_{i - k}, \ldots , \mathbf{x}_{i}, \ldots , \mathbf{x}_{i + k} ]$. The top layer is a linear-chain CRF, which takes the output of the hidden layer and a feature vector of hand-crafted features as input. So far, we have explored three kinds of hidden layers:
\begin{itemize}
\item \textbf{Linear feature map A}. Co-occurrence patterns of the word embedding features within a local context window can provide evidence of named entity types. 
We apply a linear feature map $\mathbf{W}$ with $\mathbf{W} \in $ $R^{(2k + 1)m \times (2k + 1)m}$ to the concatenation $\mathbf{X}_i$ of all word embeddings in the local context, where $m$ is the size of the word embeddings.
\item \textbf{Linear feature map B}. Recent work on deep learning suggests that compositions of word embedding features may be easier to learn. Moreover, the word at the current position and the words surrounding it should have different characteristics. Therefore, we apply position-based linear feature maps $\mathbf{W}_j$ with $\mathbf{W}_j \in $ $R^{m \times m}$, one for each position $j$ in the local context. This results in $2k + 1$ weight matrices, one per position.
\item \textbf{Context-based LSTM}. Another option is to treat the context words to the left and to the right of the current word as two sequences of length $k$. We then apply LSTM models with different parameters to the left-context and right-context sequences, respectively. For the word in the middle, we apply a linear map $\mathbf{W}$ with $\mathbf{W} \in $ $R^{m \times m}$.
\end{itemize}
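As a concrete illustration, the two linear feature maps can be sketched with toy dimensions. All sizes and the random weights below are illustrative assumptions, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 2, 5                  # context radius and embedding size (toy values)
window = 2 * k + 1

# Concatenated window of word embeddings X_i = [x_{i-k}, ..., x_{i+k}]
X_i = rng.standard_normal(window * m)

# Linear feature map A: a single (2k+1)m x (2k+1)m matrix over the window.
W_a = rng.standard_normal((window * m, window * m))
h_a = np.tanh(W_a @ X_i)

# Linear feature map B: a separate m x m matrix for each window position j.
W_b = rng.standard_normal((window, m, m))
h_b = np.tanh(np.concatenate(
    [W_b[j] @ X_i[j * m:(j + 1) * m] for j in range(window)]))
```

Both variants produce a hidden vector of size $(2k+1)m$; map A couples all positions through one dense matrix, while map B transforms each position independently with far fewer parameters.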
Apart from these options, we are implementing neural networks with four or more layers, which yield low-dimensional feature representations for the CRF. We expect these new models to require less training data from the target domains.

\subsection{Named Entity Type Matching}
\label{sec:matching}

As already mentioned, the entity types of the \textit{source} and \textit{target} corpora are different but related. 
We need to take advantage of these relationships so that the weights learned for the \textit{source types}
can be transferred to their most related \textit{target types}. 
We have defined three type matching methods, one manual and two automatic, which are described as follows:

\begin{description}
\item \textbf{Manual matching}: The manual matching is made by a human annotator who is a native English speaker.  

\item \textbf{Closest class weight vector matching}: Find the most related pairs of named entity types as those with the closest class weight vectors. We obtain the class weight vectors from a classifier pre-trained on the \Conll training set and 10\% of the validation set of a target corpus. The distance between the weight vectors is calculated with the Euclidean distance. 

\item \textbf{Word embedding cluster centre matching}: Find the most related 
pairs of named entity types by using word embedding cluster centres. 
First, for each named entity type, we collect the corresponding word embedding representations from the \Conll training set and 10\% of the validation set of a target corpus. Second, we average the word embedding representations for each type and use the Euclidean distance between these averages to find the most similar pairs of NE types.
\end{description}
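The cluster centre matching can be sketched as follows. The function and variable names are hypothetical, and the embeddings would in practice come from the corpora described above rather than being passed in directly:

```python
import numpy as np

def match_types_by_centres(source_embs, target_embs):
    """Match each target NE type to the source NE type whose mean embedding
    (cluster centre) is closest in Euclidean distance.

    source_embs / target_embs: dict mapping a type name to an array of
    shape (n, m) holding word embeddings collected for that type.
    """
    source_centres = {t: e.mean(axis=0) for t, e in source_embs.items()}
    matching = {}
    for t, e in target_embs.items():
        centre = e.mean(axis=0)
        matching[t] = min(
            source_centres,
            key=lambda s: np.linalg.norm(source_centres[s] - centre))
    return matching
```

The closest class weight vector matching follows the same scheme, with the per-type mean embeddings replaced by the class weight vectors of the pre-trained classifier.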

It is worth highlighting that the manual mapping is not always straightforward, since some cases are ambiguous; for example, \hospital can be categorised as \location or \org.
Another disadvantage of the manual mapping is the subjectivity of the annotator; for example, \ids can be categorised as either \misc or \person.
Furthermore, while the manual mappings carried out in our experiments are feasible in terms
of the number of mapping items and mapping schemas, other applications might require more complex mappings for which manual annotation would be impractical.  
Hence, we want to compare the performance of the proposed automatic mapping methods against the manual one and to study whether the automatic methods can overcome the drawbacks of the manual one. 

The automatic methods can be seen as a hard clustering assignment: each new named entity type is assigned to exactly one known named entity type, namely the one at the smallest Euclidean distance.















