\section{Approach}
\label{sec:approach}

Our approach to learning a named entity recogniser for novel types is based on linear-chain Conditional Random Fields (CRFs)~\cite{}, which assign a named entity type $y \in Y$ to each word $x \in X$ in a sentence. In a low resource setting, we reduce the need for large training data by i) transferring the parameters of a CRF trained for known named entity types to the model for novel types; and ii) using cross-domain word embeddings as textual features.

In a transfer learning setting, we assume there are two kinds of training datasets available. 
\begin{itemize}
\item Source corpora: large corpora annotated with mentions of \textit{known} named entity types.
\item Target corpora: small corpora annotated with mentions of \textit{new} named entity types.
\end{itemize}
We denote the set of known named entity types by $K$ and the set of new named entity types by $N$.

A variety of relationships may hold between the known and the new named entity types. In this paper, we focus on the following:

\begin{enumerate}
\item Subsumption: A named entity type $t \in N$ is a subclass of $t' \in K$, such as \textit{company} to \textit{organisation}. We do not consider the reverse relationship because it is trivial: if several $t' \in K$ are subclasses of $t \in N$, any constituent recognised as $t'$ can simply be relabelled as its supertype $t$.

\item Co-occurring NE types: A named entity type $t \in N$ and a named entity type $t' \in K$ are frequently mentioned in the same sentence; for example, \textit{Dr.} frequently co-occurs with \textit{person}.

\item New named entity types $t \in N$ with numerical patterns: Such named entity types can be characterised by a set of regular expressions, such as \textit{date} and \textit{time}.
\end{enumerate}
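For the third relationship, recognising such types amounts to matching a handful of patterns. The sketch below illustrates the idea for \textit{date} and \textit{time}; the regular expressions and the helper name are illustrative placeholders, not the patterns used in our experiments.

```python
import re

# Hypothetical patterns characterising two numerical NE types; real
# patterns would be tuned to the formats found in the target corpus.
PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "time": re.compile(r"\b\d{1,2}:\d{2}(:\d{2})?\b"),
}

def tag_numerical_entities(sentence):
    """Return (mention text, NE type) pairs for every pattern match."""
    mentions = []
    for ne_type, pattern in PATTERNS.items():
        for m in pattern.finditer(sentence):
            mentions.append((m.group(0), ne_type))
    return mentions

print(tag_numerical_entities("The meeting on 12/03/2015 starts at 9:30."))
# [('12/03/2015', 'date'), ('9:30', 'time')]
```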

%\item Exact match: A name entity type $t \in N$ is an exact match of $t\prime \in K$ such that $t = t\prime$.
To leverage the knowledge learned from source corpora, the core of our transfer step is the smart reuse of the CRF parameters trained for the known named entity types. Given a sentence $\mathbf{x}$ of length $T$ and its label sequence $\mathbf{y}$, a first order linear-chain CRF is a conditional distribution $p(\mathbf{y} | \mathbf{x})$ that takes the form:
\begin{equation}
p(\mathbf{y} | \mathbf{x}) = \frac{1}{Z(\mathbf{x})} \exp \Big\{ \sum_{t = 1}^T\big[ \sum_{j = 1}^J w_j \Phi_j(y_t, \mathbf{x}) + w_{y_t, y_{t-1}}\Phi'(y_t, y_{t-1}) \big] \Big\}
\end{equation}
where the $w_j$ denote the parameters of the feature functions $\Phi_j(y_t, \mathbf{x})$ at each word position, and the feature $\Phi'(y_t, y_{t-1})$ indicates the transition between token-level labels. The distribution is normalised by the partition function $Z(\mathbf{x})$:
\[
Z(\mathbf{x}) = \sum_{\mathbf{y}'} \exp \Big\{ \sum_{t = 1}^T\big[ \sum_{j = 1}^J w_j \Phi_j(y'_t, \mathbf{x}) + w_{y'_t, y'_{t-1}}\Phi'(y'_t, y'_{t-1}) \big] \Big\}
\]
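As a toy illustration of the two equations above, the following sketch computes $\log Z(\mathbf{x})$ with the standard forward algorithm and checks that $p(\mathbf{y} | \mathbf{x})$ normalises over all label sequences. The emission and transition scores are random placeholders, not learned parameters.

```python
import numpy as np

# Toy scores for a sentence of T = 3 tokens and |Y| = 2 labels.
# emit[t, y]      stands in for sum_j w_j * Phi_j(y, x) at position t
# trans[y', y]    stands in for w_{y, y'} * Phi'(y, y')
rng = np.random.default_rng(0)
emit = rng.normal(size=(3, 2))
trans = rng.normal(size=(2, 2))

def log_partition(emit, trans):
    """log Z(x) via the forward algorithm, in log space for stability."""
    alpha = emit[0]
    for t in range(1, emit.shape[0]):
        # log-sum-exp over the previous label, elementwise over the next one
        alpha = emit[t] + np.logaddexp(alpha[0] + trans[0], alpha[1] + trans[1])
    return np.logaddexp(alpha[0], alpha[1])

def log_prob(y, emit, trans):
    """log p(y | x) for a single label sequence y."""
    score = emit[0, y[0]]
    for t in range(1, len(y)):
        score += emit[t, y[t]] + trans[y[t - 1], y[t]]
    return score - log_partition(emit, trans)

# The probabilities of all 2^3 label sequences sum to one.
total = sum(np.exp(log_prob((a, b, c), emit, trans))
            for a in (0, 1) for b in (0, 1) for c in (0, 1))
print(round(total, 6))  # 1.0
```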
In the above equation, we can reorganise the parameters of $\Phi_j(y_t, \mathbf{x})$ so that there is a weight vector $\mathbf{w}_y$ for each $y \in Y$. These weight vectors serve as the major components of hyperplanes that separate data points into regions with constant labels; in our case, each region contains words with the same named entity type. The size of $\mathbf{w}_y$ is often huge because it equals the size of the feature space of $\Phi_j(y_t, \mathbf{x})$. If the training data is large enough, we can train a CRF model with any convex optimisation algorithm. However, in a low resource setting, the available training data are often too sparse to place the hyperplanes at the right positions. In this case, if we know that some hyperplanes of known NE types are close to those of new NE types, we can use the corresponding weight vectors to initialise those of the new NE types. We then fine-tune them with the available training data and stop when the error increases on a validation set. Here, we ignore the parameters of $\Phi'(y_t, y_{t-1})$ for label transitions, because their number is $|Y|^2$, which is usually small enough to be trained on the available target corpora.
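The initialisation step can be sketched as follows. The helper name, the fallback initialisation scale, and the toy weight vectors are hypothetical choices for illustration; in practice the copied vectors would then be fine-tuned on the target corpus with early stopping.

```python
import numpy as np

def transfer_weights(source_weights, type_mapping, new_types, dim, seed=0):
    """Initialise class weight vectors for new NE types.

    source_weights: dict of known type -> trained weight vector
    type_mapping:   dict of new type -> matched known type (or absent)
    New types with a matched known type are warm-started from its
    hyperplane; the rest fall back to a small random initialisation.
    """
    rng = np.random.default_rng(seed)
    target_weights = {}
    for t in new_types:
        matched = type_mapping.get(t)
        if matched is not None and matched in source_weights:
            # Warm start: reuse the hyperplane of the related known type.
            target_weights[t] = source_weights[matched].copy()
        else:
            # No related known type: small random initialisation.
            target_weights[t] = rng.normal(scale=0.01, size=dim)
    return target_weights

src = {"organisation": np.ones(4), "person": np.full(4, 2.0)}
tgt = transfer_weights(src, {"company": "organisation"}, ["company", "date"], dim=4)
print(tgt["company"])  # [1. 1. 1. 1.]
```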

\subsection{Named Entity Type Matching}
\label{sec:matching}

As already mentioned, the entity types of the \textit{source} and \textit{target} corpora are different, but related.
We need to take advantage of those relationships so that the weights of the \textit{source types}
can be transferred to their most related \textit{target types}.
We have defined three type matching methods: a manual matching and two automatic matching methods, which are described as follows:

\begin{description}
\item \textbf{Manual Matching}: The manual type alignment is designed by a domain expert.  

\item \textbf{Closest class weight vector matching}: Find the most related pairs of named entity types, i.e.\ those with the closest class weight vectors. We obtain the class weight vectors from a model trained on a combination of the validation sets of both the source and the target corpora. The similarity between two weight vectors is measured by cosine similarity.
\end{description}
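The closest class weight vector matching can be sketched as below; the function names and the toy three-dimensional vectors are illustrative, and real class weight vectors would live in the full feature space of the CRF.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two weight vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_types(source_weights, target_weights):
    """Map each new NE type to the known type with the closest weight vector."""
    mapping = {}
    for t, wt in target_weights.items():
        mapping[t] = max(source_weights,
                         key=lambda s: cosine(source_weights[s], wt))
    return mapping

# Toy class weight vectors extracted from a model trained on the
# combined validation sets (made-up numbers).
src = {"organisation": np.array([1.0, 0.1, 0.0]),
       "person": np.array([0.0, 1.0, 0.2])}
tgt = {"company": np.array([0.9, 0.2, 0.0])}
print(match_types(src, tgt))  # {'company': 'organisation'}
```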

It is worth highlighting that the manual mapping is not always straightforward, since some cases are ambiguous; for example, \hospital can be categorised as \location or \org .
Another disadvantage of the manual mapping is the subjectivity of the annotator; for example, \ids can be categorised as either \misc or \person .
Furthermore, while the manual mappings carried out in our experiments are feasible in terms of the number of mapping items and mapping schemas, other applications might require more complex mappings for which manual annotation would be impractical.
Hence, we want to compare the performance of the proposed automatic mapping methods against the manual one, and to study whether the automatic methods can overcome the drawbacks of the manual one.

The automatic methods can be seen as a hard clustering assignment. 















