% arara: xelatex
% arara: biber
% arara: xelatex

% !TEX encoding = UTF-8

\documentclass[paper=a4, fontsize=12.55pt, numbers=endperiod]{scrartcl}
\usepackage
	[backend=biber, style=authoryear-comp, maxbibnames=3,
	 isbn=false, doi=false, eprint=false, dashed=false]{biblatex}
\ExecuteBibliographyOptions{maxcitenames=2}


% \usepackage{csquotes}
\usepackage[autostyle]{csquotes}


\renewcommand{\postnotedelim}{: }%
\DeclareFieldFormat{postnote}{#1}%
\DeclareFieldFormat{page}{#1}
\DeclareFieldFormat{pages}{#1}
\bibliography{Bachelor.bib}

% Wichtiges
\usepackage{xltxtra}
\usepackage{blindtext}
\PassOptionsToPackage{hyphens}{url}\usepackage{hyperref}
\hypersetup{
colorlinks=false, linktocpage=false, pdfborder={0 0 0}, pdfstartview=FitV, 
urlcolor=Black, linkcolor=Black, citecolor=Black, %pdfstartpage=3, 
pdftitle={Article}, pdfauthor={Christopher Michels},
}

% Sprache
\usepackage{polyglossia}
\setmainlanguage	[variant=british]	{english}
\setotherlanguage	[spelling=new]	{german}

% Fonts
\usepackage{unicode-math}
\usepackage{amsmath}
\setmainfont{Calibri}
\setsansfont{Cambria}
\setmathfont{Cambria Math}

% Darstellung
\usepackage[left=1.15in, right=1.15in, top=1.1in, bottom=1in]{geometry}
\usepackage{setspace}
\usepackage{eso-pic}
\usepackage{fancyhdr}
\usepackage{tabto}
\usepackage{expex}
\usepackage{changepage}
\usepackage{verbatimbox}
\usepackage{listings}
\usepackage{lstautogobble}
\usepackage{enumerate}

\lstset
{
basicstyle=\ttfamily,
literate={\$}{\$}1,
autogobble
}


    \makeatletter
    \renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}%
      {-3.25ex\@plus -1ex \@minus -.2ex}%
      {1.5ex \@plus .2ex}%
      {\normalfont\normalsize\bfseries}}
    \makeatother

\begin{document}
% ***************************************************************************************************
% Titelseite ********************************************************************************************
% ***************************************************************************************************
\pagestyle{empty}
\setstretch{1}
\pagenumbering{roman}%
\setlength{\parindent}{0em}
\begin{titlepage}
\AddToShipoutPicture*
{
	\put(270,580)
	{
		\parbox[b][9.5cm]{20cm}
		{
			\vfill
			\includegraphics[width=9.5cm, height=20cm, keepaspectratio]{header.jpg}%
			\vfill 
		}
	}
}
\AddToShipoutPicture*
{
	\put(245,-5)
	{
		\parbox[b][12.5cm]{12.5cm}
		{
			\vfill
			 \includegraphics[width=12.5cm, height=12.5cm, keepaspectratio]{footer.jpg}%
			\vfill 
		}
	}
}
\small{
	Universität Trier \\ 
	Fachbereich II - Linguistische Datenverarbeitung \\ 
	Bachelor of Arts \\ 
	Computerlinguistik (HF), English Language and Linguistics (NF)
}
\vfill 
\begin{center}
	\LARGE\textbf{\textsf{A Heuristic Approach to Anaphora~Resolution}} \\
    	\vspace{0.75cm}
    	\large\textbf{\textsc{Bachelor-Arbeit}}\\
    	\vspace{0.15cm}
    	\normalsize
    	vorgelegt am: 15. November 2013 \\
\end{center}
\vfill 
\small{
    	\begin{tabular}{ll}
    		\\ 
    		\\
    		Name: & {Christopher Michels} \\
    		Matrikelnr.: & {1007830} \\ 
    		\\
		Adresse: & Universitätsring 8d \\
		& Zimmer 316 \\
		& 54296 Trier \\ \\
    		Telefon: & (06 51) 99 24 11 55\\
    		E-Mail: & s2chmich@uni-trier.de\\ 
    		\\
      		Erstprüfer: & {Dr. Sven Naumann} \\
      		Zweitprüfer: & {Prof. Dr. Reinhard Köhler} \\
    	\end{tabular}\\
}
\end{titlepage}
\newpage
%****************************************************************************************************
% Eidesstattliche Erklärung ********************************************************************************
%****************************************************************************************************
\setstretch{1.375}
\setlength{\parindent}{0.5in}
\pagestyle{fancy}%
\lhead{\footnotesize{Running Head: Heuristic Anaphora Resolution}} \chead{} \rhead{\footnotesize{\thepage}} \lfoot{} \cfoot{} \rfoot{}

%\setcounter{page}{2}

\section*{Eidesstattliche Erklärung}
Hiermit versichere ich, dass ich die vorliegende Arbeit selbständig verfasst und keine anderen als die angegebenen Hilfsmittel
benutzt habe. Aus fremden Quellen Übernommenes ist kenntlich gemacht. \vspace{2cm}\\
\noindent {Trier, 15. November 2013} \hfill \makebox[2.5in]{\hrulefill} \\
\newpage
%****************************************************************************************************
% Inhaltsverzeichnis *************************************************************************************
%****************************************************************************************************
\setstretch{1}
\lhead{\footnotesize{Heuristic Anaphora Resolution}} \chead{} \rhead{\footnotesize{\thepage}} \lfoot{} \cfoot{} \rfoot{}

\tableofcontents%
\newpage
%****************************************************************************************************
% Dokument *******************************************************************************************
%****************************************************************************************************
\renewcommand{\sectionmark}[1]{\markboth{\thesection.\enspace #1}{}}
\renewcommand{\subsectionmark}[1]{\markright{#1}}
\pagenumbering{arabic}
\chead{\footnotesize{\leftmark}}
% ##########################################################################
\setstretch{1.375}
\setcounter{excnt}{1}
\section{Introduction}
Anaphora \hyphenquote{UKenglish}{has given rise to a great deal of intellectual activity in several fields}, such as \hyphenquote{UKenglish}{linguistics, computational linguistics and cognitive science} \autocite[1]{bot}. Various problems are related to anaphora and the approaches concerning these problems are diverse \autocite[1]{bot}. Especially the problem of anaphora resolution is essential \hyphenquote{UKenglish}{for a number of applications in the field of natural language understanding} \autocite[123-124]{mit0}. These applications include question answering, summarisation of texts or sets of texts, and machine translation, for example \autocite[124]{mit0}. Most of the recent approaches to anaphora resolution aim at supporting one of these potential applications and usually they only rely on linguistic knowledge to a limited extent \autocite[125]{mit0}.

One of these so-called, \hyphenquote{UKenglish}{robust and knowledge-poor solutions} is described by Mitkov \autocite[145]{mit0}. He describes his algorithm \hyphenquote{UKenglish}{as an inexpensive, fast, and yet reliable alternative} which does not require linguistic knowledge to a considerable extent \autocite[145]{mit0}. Instead, it only makes use of a \hyphenquote{UKenglish}{part-of-speech tagger and an NP extractor}, and of \hyphenquote{UKenglish}{heuristics for antecedent identification} \autocite[135]{mit1} which are also called \hyphenquote{UKenglish}{antecedent indicators} \autocite[145]{mit0}. The genre of texts targeted by Mitkov's approach is limited to \hyphenquote{UKenglish}{user manuals} \autocite[151]{mit0}. Furthermore, the approach was \hyphenquote{UKenglish}{initially developed [...] for English} but it is described as only requiring \hyphenquote{UKenglish}{minimum modification} when it is adapted to another language. \autocite[153]{mit0} 

In order to find out in how far the features described for this algorithm hold, it is implemented in C\# with the application \emph{KPAR} and tested with manuals and also with scientific texts. The tools used for the pre-processing of these texts are the part-of-speech tagger \emph{TreeTagger} \autocites[]{santorini}[]{smid94}[]{smid95} and the on-line tool \emph{Textalyser} \autocite[]{texal}. \emph{Textalyser} was used to extract keyword nouns from the texts used for testing. The main focus of testing in how far the features in the description of the algorithm are fulfilled lies on the heuristic and knowledge-poor nature of the indicators mentioned above, and also on the question to what extent these can be implemented language-independently. The manual texts include a manual for a steam iron \autocite[]{iron}, a vacuum cleaner \autocite[]{vacu}, and a photo printer \autocite[]{print}. Although these texts are dated to some extent, they are suitable because they contain less graphic elements and thus the pre-processing for these manuals was less problematic especially for the \emph{TreeTagger}. The scientific texts were taken from the file \emph{BP2} from the domain of applied sciences and artificial intelligence within the \emph{British National Corpus (BNC)} \autocite[]{bncw}\footnote{The basic units in the \emph{BNC} files are so-called <s>-units and examples taken from these texts are referenced with both the file name and the <s>-unit numbers corresponding to the example as follows: \autocite[BP2:123]{bncw}.}.

In order to evaluate the features of Mitkov's approach, the basic concepts related to anaphora resolution are outlined first, including summaries of several other approaches to anaphora resolution. These approaches include both approaches differing from the one presented by Mitkov and similar, knowledge-poor approaches. Then, the essential steps of the implementation of the pre-processing, the twelve antecedent indicators, and of the basic steps of the algorithm are described. Lastly, the evaluation of the implementation follows, including both the tests conducted with the help of the manuals and the scientific texts and a summary of the main problems encountered. The source code of the application \emph{KPAR} is added on a storage medium in the attachment.
% ##########################################################################
\newpage
\setcounter{excnt}{1}
\section{Background}

\subsection{Anaphora Resolution}
In order to establish a basis for the discussion of the implementation of Mitkov's algorithm, the form of anaphora resolution Mitkov's knowledge-poor approach tackles is clarified. The following definition of anaphora by Graeme Hirst serves as an offset for this clarification because it utilises and thus introduces several concepts which are essential to anaphora resolution.

\begingroup
%\begin{small}
\setstretch{1.15}
\hyphenblockquote{UKenglish}{\small{ANAPHORA [...] is the device of making in discourse [...] an ABBREVIATED reference to some entity (or entities) [...]. The reference is called an ANAPHOR [...] and the entity to which it refers is its [...] ANTECEDENT. [...]  A reference and its referent are said to be COREFERENTIAL. The process of determining the [antecedent] of an anaphor is called RESOLUTION \autocite[4]{hirst}.}}
\endgroup

\noindent According to this definition, anaphora resolution has to deal with both abbreviation and coreference in discourse, i.e. in any \hyphenquote{UKenglish}{section of [coherent] text, either written or spoken} \autocite[4]{hirst}. Abbreviation does not refer to the quality of phonetic or lexical brevity but to the lack of \hyphenquote{UKenglish}{disambiguating information} \autocite[4-5]{hirst}. In other words, two of the basic problems involved in tracking the antecedent of an anaphor are coreference and ambiguity.

Unlike Hirst, who uses the terms \emph{anaphora} and \emph{anaphor}  in a lax way, the direction of the link is considered essential here and \emph{anaphor} is used only according to its etymological origin in Ancient Greek to describe a reference pointing back to its antecedent \autocite[4-5]{mit0}.  By analogy, \emph{cataphora} and \emph{cataphor} are used for instances of reference with the opposite direction, i.e. a cataphor points to an entity mentioned in the following text \autocite[19]{mit0}.

%---
\subsubsection{Ambiguity, Coreference, Deixis, and Inference} 
%---
In addition to ambiguity and coreference, deixis and inference also constitute important concepts in the background of anaphora resolution. However, these concepts are not related to anaphora in the sense of absolute identity \autocite[23]{mit0}. In order to elaborate on this, a discussion of the examples below follows.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em, % 
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN: 	aboveexskip=0.6ex	
% EINZELN:	aboveexskip=2.75ex
]
\emph{Nora} told \emph{Anne} \emph{she} hated Dave's present immediately. %\par\nobreak
% \hfill \autocite[9]{mit0}
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\emph{Each child} had to hide \emph{a present} for one of the other children. Dave hid \emph{one} below Nora's chair. Anne hid \emph{one} in Pete's backpack. %\par\nobreak
% \hfill \autocite[9]{mit0}
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex	
% EINZELN:	aboveexskip=2.75ex
]
\hyphenquote{UKenglish}{\emph{I} have told \emph{you} to be more creative a million times,} Nora obsessed about her disappointed hopes. 
% \hfill \autocite[9]{mit0}
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em, % 
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN: 	
aboveexskip=0.6ex	
% EINZELN:	aboveexskip=2.75ex
]
\emph{All their presents} were blue. \emph{Most ribbons} were red. This was a new trend for \emph{wrapping material}. 
% \hfill \autocite[9]{mit0}
\xe
%\end{small}
\endgroup

\noindent The first example above is an illustration of anaphora involving ambiguity. Both \emph{Nora} and \emph{Anne} can be considered potential antecedents for \emph{she}. In this example, semantic information conveyed by the verb \emph{to tell} can aid the resolution of the anaphora. However, knowledge which is not linguistic in nature is required sometimes to aid the disambiguation of the anaphora \autocite[22]{mit0}. 

The example (2.2) illustrates how anaphora does not necessarily involve coreference. The two instances of \emph{one} in these examples both refer to a present, but not to \emph{a present} in the first sentence in (2.1). In fact, \emph{a present} is understood as a set of presents by the reader, whereas each of the two noun phrases \emph{one} is understood as two different members of that set, which nevertheless are similar to some extent. Both presents belong to this specific set, but they are attributed to different children. Although each \emph{one} points back to the noun phrase \emph{a present} and is thus used anaphorically, the noun phrase \emph{a present} does not represent the referent of these two anaphors. Mitkov uses the term \hyphenquote{UKenglish}{identity-of-sense anaphora} to distinguish an anaphor and an antecedent which are not coreferential from anaphora (\hyphenquote{UKenglish}{identity-of-reference anaphora}) for which coreference holds \autocite[16]{mit0}.   

In (2.3), the pronouns \emph{I} and \emph{you} are used deictically. They point to two different participants of the conversation of the example. The situation constituting the background of this utterance could be Nora (\emph{I}) complaining to Dave (\emph{you}) about the disappointing present she got from him. Consequently, the use of the two second person pronouns is context-dependent. Thus, not all pronouns are usually used as anaphors \autocite[9, 20-21]{mit0}. 

The last example shows three noun phrases occurring in three subsequent sentences, illustrating the necessity of inference for a specific type of anaphora. In addition to their primary referent, \emph{Most ribbons} and \emph{wrapping material} indirectly refer to \emph{All their presents} because they are related to the concept of a present. This coreferential anaphoric link is indirect in nature because this relationship is understood, or inferred, which is not necessarily a trivial task \autocite[15]{mit0}.  

%---
\subsubsection{Different Types of Anaphora, Anaphor, and Antecedent} 
%---
In (2.1) to (2.4) above some of the possible realisations of anaphora have already been illustrated. Lexical noun phrases can be anaphors, for example (see 2.4 above), sometimes constituting indirect links instead of direct links when the relationship between the nouns involved goes beyond synonymy \autocite[10,15]{mit0}. Verbs such as \emph{did} in (2.5) below can also be anaphoric \autocite[12]{mit0}. Anaphors can also be elided entirely but are nevertheless understood, such as \emph{\O { (}\emph{she}{)}} in (2.6) below \autocite[12-13]{mit0}.  Similarly, the forms of possible antecedents are not restricted to the level of phrases. Clauses, sentences, as well as subsequent sentences preceding the anaphor may be its antecedent \autocite[17]{mit0}. As an illustration, \emph{This} in (2.7) refers to the preceding sentence and \emph{This} in the last sentence in (2.4) above refers to both preceding sentences.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em, % 
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN: 	aboveexskip=0.6ex	
% EINZELN:	aboveexskip=2.75ex
]
Dave thought his present was the best, and Pete \emph{did} as well.
% \hfill \autocite[9]{mit0}
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em, % 
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN: 	aboveexskip=0.6ex	
% EINZELN:	aboveexskip=2.75ex
]
\emph{Nora} loathed Dave's present and \O { (}\emph{she}{)} trashed it.
% \hfill \autocite[9]{mit0}
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em, % 
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN: 	
aboveexskip=0.6ex	
% EINZELN:	aboveexskip=2.75ex
]
It was not surprising that Pete forgot to bring his present. \emph{This} was his most annoying habit.
% \hfill \autocite[9]{mit0}
\xe
%\end{small}
\endgroup

\noindent The sentence-initial \emph{It} in (2.7) also introduces another problem involved in anaphora resolution. Mitkov uses the term \hyphenquote{UKenglish}{\textbf{pleonastic}} to describe the semantically empty, non-anaphoric use of \emph{it} in \hyphenquote{UKenglish}{it-clefts} \autocite[959]{biber} such as the one in (2.7) or similar constructions \autocite[9,25]{mit0}. The exclusion of such elements from the set of pronouns considered as anaphors \hyphenquote{UKenglish}{in English is not a trivial task} \autocite[10]{mit0}. This task of exclusion may also be applied to \hyphenquote{UKenglish}{non-referring [noun phrases]} \autocite[23]{byron}.

Among the different types of anaphora, Mitkov focuses on \hyphenquote{UKenglish}{\textbf{nominal anaphora}}, which occurs when the antecedent of an anaphor is a \hyphenquote{UKenglish}{non-pronominal noun phrase} \autocite[8]{mit0}. More specifically, the approach relevant for this paper tackles the resolution of \hyphenquote{UKenglish}{\textbf{pronominal anaphora}}, occurring when the element constituting the anaphor is an anaphoric pronoun \autocite[8]{mit0}. For English, personal, possessive, and reflexive pronouns in the third person belong to this set of pronouns, as well as demonstrative and relative pronouns. The pronouns \emph{when} and \emph{where} and also second person pronouns are excluded from that list since they are usually involved in context-dependent deictic constructions \autocite[9, 12]{mit0}. Consequently, the following list of anaphoric pronouns is considered relevant for the implementation of Mitkov's robust, knowledge poor algorithm:

\begingroup
%\begin{small}
\begin{quote}
\setstretch{1}
\TabPositions{5cm}
\begin{itemize}
\item personal pronouns:\tab\emph{he}, \emph{him}, \emph{she}, \emph{her}, \emph{it}, \emph{they}, \emph{them}
\item possessive pronouns:\tab\emph{his}, \emph{her}, \emph{hers}, \emph{its}, \emph{their}, \emph{theirs}
\item reflexive pronouns:\tab\emph{himself},  \emph{herself}, \emph{itself},  \emph{themselves}
\item demonstrative pronouns:\tab\emph{this},  \emph{that}, \emph{these},  \emph{those}
\item relative pronouns:\tab\emph{who},  \emph{whom}, \emph{which},  \emph{whose}
\end{itemize}
\hfill\autocite[9]{mit0}
\end{quote}
%\end{small}
\endgroup

\noindent Some pronouns among the ones listed above may be used attributively, such as \emph{his} in \emph{his present} (see (2.5)). The set of pronouns with this feature includes \emph{my}, \emph{your}, \emph{his}, \emph{her}, \emph{its}, \emph{our}, \emph{their}, and \emph{whose}. Noun phrases containing these pronouns can act both as an anaphor and as an antecedent candidate.

Obviously the terms introduced in bold face above, nominal anaphora and pronominal anaphora, restrict the number of relevant instances of anaphora based on the form of both the anaphor, which is required to be an anaphoric pronoun, and the antecedent, which is required to be a noun phrase which is non-pronominal. In a description of a reimplemented and enhanced version of Mitkov's knowledge-poor approach pronouns are also admitted as antecedent candidates because they are also identified as noun phrases possibly preceding the current anaphoric pronoun. As this is mentioned as a difference with the original approach discussed here, the implementation of the original approach only considers non-pronominal noun phrases as potential antecedents of an anaphor \autocite[165-166, 174]{mit0}.
%---
\subsubsection{Knowledge Involved in Anaphora Resolution} 
%---
In general, anaphora resolution may require morphological, lexical, syntactic, semantic, discourse and real-world knowledge \autocite[28-34]{mit0}. How these different forms of knowledge are integrated into a specific approach may vary depending on the type of approach. Some are considered suitable to implement constraints or preferences, i.e. indicators which rule out specific antecedent candidates, and others are considered indicators which strongly suggest to prefer a specific candidate over others \autocite[41]{mit0}. The following outline gives an overview of how these forms of knowledge are usually included in algorithms for the purpose of anaphora resolution.

Number and gender agreement constraints are usually employed to rule out antecedent candidates. This form of including morphological and lexical knowledge may even be sufficient to find the only suitable alternative. However, with (2.1) below an example is given of how gender and number agreement alone do not suffice to tackle ambiguous information that might occur in a text \autocite[28]{mit0}.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[exno=1,
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em, % 
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN: 	aboveexskip=0.6ex	
% EINZELN:	
aboveexskip=2.75ex
]
\emph{Nora} told \emph{Anne} \emph{she} hated Dave's present immediately. %\par\nobreak
% \hfill \autocite[9]{mit0}
\xe
%\end{small}
\endgroup

\noindent Syntactic knowledge is usually involved when an approach employs a preference for considering noun phrases as candidates which can be found in the preceding clause or act as a subject, for example \autocite[30]{mit0}. Thus, both form and function of the elements of which a sentence consists are relevant when syntactic knowledge is included. Obviously, the assembly of this type of information requires some form of parsing \autocite[30]{mit0}. For some approaches, shallow parsing is sufficient \autocite[21]{byron}, whereas others may even rely on a so-called \hyphenquote{UKenglish}{super-tagger} parser \autocite[165]{mit0}. The importance of the quality of the output of a parser is dependent on the extent to which the approach utilises syntactic knowledge and is essential when it comes to the improvement of \hyphenquote{UKenglish}{the accuracy of anaphora resolution systems} \autocite[165]{mit0}.  

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exno=2,
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	
aboveexskip=2.75ex
]
\emph{Each child} had to hide \emph{a present} for one of the other children. Dave hid \emph{one} below Nora's chair. Anne hid \emph{one} in Pete's backpack. %\par\nobreak
% \hfill \autocite[9]{mit0}
\xe
%\end{small}
\endgroup

\noindent Semantic knowledge is involved when an anaphora resolution system checks the animacy of antecedent candidates, for example \autocite[31]{mit0}. In (2.2) above , the noun phrase \emph{each child} could be excluded from the set of candidates for the anaphoric noun phrase \emph{one} because something that is hidden is usually inanimate. Furthermore, the \hyphenquote{UKenglish}{discourse entity associated} with  \emph{each child} is a group of children, not a single child \autocite[31]{mit0}. However, the latter type of information is not usually included in anaphora resolution systems \autocite[32]{mit0}.

Including discourse knowledge allows for the resolution process to take place against the background of the discourse associated with a given anaphor and its antecedent candidates. The \hyphenquote{UKenglish}{discourse segment} in which an anaphor occurs usually focuses on a \hyphenquote{UKenglish}{salient entity} that is frequently referred to \autocite[33]{mit0}. Whenever other types of included knowledge do not suffice, a preference for the most salient entity can be helpful \autocite[33]{mit0}.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exno=4,
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	
aboveexskip=2.75ex
]
\emph{All their presents} were blue. \emph{Most ribbons} were red. This was a new trend for \emph{wrapping material}. 
% \hfill \autocite[9]{mit0}
\xe
%\end{small}
\endgroup

\noindent In (2.4) above, it is illustrated how real-world knowledge can be involved when it comes to anaphora resolution. Such cases may occur across a variety of domains and with varying degrees of complexity, which is why \hyphenquote{UKenglish}{Incorporating extensive real-world knowledge into a practical anaphora resolution system is a very labour-intensive and time-consuming task}, which is why this type of knowledge is usually not included in such systems \autocite[34]{mit0}.   
%---
\subsubsection{The Steps of Anaphora Resolution} 
%---
Regardless of how the set of different types of knowledge involved in an anaphora resolution strategy is constituted, each strategy consists of the following three basic steps: \hyphenquote{UKenglish}{(1) identification of anaphors, (2) location of the candidates and (3) selection of} the best antecedent candidate \autocite[34]{mit0}. These steps usually also require some form of pre-processing \autocite[38, 39]{mit0}.

As simple as this three-step process may seem, each of these steps requires additional information. The first step usually also tries to identify non-anaphoric pronouns in order to exclude them from the search for candidates in the subsequent step. These non-anaphoric pronouns comprise pleonastic it, for example \autocite[34-36]{mit0}. For the second step, the question of where to search for candidates is crucial. The linguistic unit involved in the specification of this \hyphenquote{UKenglish}{search scope} is usually the sentence \autocite[39]{mit0}. The highest distance reported between an anaphor and its antecedent is 15 sentences \autocite[18]{mit0}. Obviously, these first two steps require the analysis or detection of linguistic units to some extent, with the help of a part-of-speech tagger or a parser, for example \autocite[38, 39]{mit0}. The last step makes use of some form of linguistic knowledge to make a sufficiently reliable decision, applying constraints or assigning some form of preference values \autocite[41 ff.]{mit0}, as the next section is going to illustrate.
% -----------------------------------------------------------------------------------------------------------------------------------------------------
\subsection{Other Approaches}
The approaches presented in this section illustrate both studies including several types of knowledge in anaphora resolution strategies and algorithms operating on the basis of limited knowledge. The first two of the following approaches both involve multiple sources of knowledge. These sources are modelled based on the information gained during specific analyses during the pre-processing required for the algorithm. The approach of Lappin and Leass \autocite*[]{lapplass} is basically rule-based, but it was enhanced by a statistical module later on. Then, the statistical approach of Ge, Hale and Charniak \autocite*[]{geetal} follows. Lastly, the approach of Nasukawa \autocite*[]{nasu} as well as the one of Dagan and Itai \autocite*[]{dagan} is presented. These approaches differ from Mitkov's knowledge-poor approach, but they also rely on some form of limited knowledge. Furthermore, the extent to which pre-processing as well as the modelling of knowledge are conducted is also limited for the latter two approaches.     
%---
\subsubsection{Differing Approaches}
%---
\paragraph{Lappin and Leass 1994}
Shalom Lappin and Herbert J. Leass present RAP (\hyphenquote{UKenglish}{Resolution of Anaphora Procedure}), \hyphenquote{UKenglish}{an algorithm for identifying the noun phrase antecedents of third person pronouns and lexical anaphors (reflexives and reciprocals)} \autocite[535]{lapplass}. The implementation of the algorithm was done in Prolog and it depends on the output of McCord's Slot Grammar parser for the syntactic analysis of a given text \autocite[535]{lapplass}.
 
Syntactic information is used to measure the salience of candidates in the resolution process. Although salience is closely related with discourse knowledge, Lappin and Leass repeatedly state that their salience measures are a \hyphenquote{UKenglish}{simple model of attentional state} based on syntactic information only, and not on any additional semantic, discourse, or real-world knowledge \autocite[535, 544, 552]{lapplass}. However, they describe RAP as relying on elements of centering theory, but these elements are not considered primary \autocite[557]{lapplass}. RAPSTAT, an enhanced version of RAP, also includes semantic and real-world information in the form of \hyphenquote{UKenglish}{statistically measured lexical preferences} in order to aid the initial version in cases when the salience measures favour the wrong candidate merely based on a slightly higher salience weight \autocite[554]{lapplass}.

The algorithm analyses a text sentence by sentence, searching for both anaphoric pronouns and noun phrases acting as discourse referents. For each referent salience factors are determined and an equivalence class is created unless the referent already belongs to an existing class. Then, a list of the most recent candidates from each equivalence class is compiled if a third person pronoun is retrieved. Cataphora results in a significant reduction of the salience weight of a candidate, and parallelism of grammatical roles leads to a small increase of the salience weight, for example. Candidates with a salience weight below a specific threshold are ruled out and syntactic as well as morphological filters are applied to the remaining candidates. The final step tries to determine the best, single candidate based on the highest salience weight. If more than one candidate fulfils this criterion, the most recent candidate is selected \autocite[542-544]{lapplass}.   

The following details of the resolution strategy are also important: The syntactic and morphological filters provide a reliable way of reducing the list of candidates \autocite[544]{lapplass}. The morphological filter is also able to handle ambiguity related to number and gender for pronouns in some languages. Furthermore, salience measures are based on \hyphenquote{UKenglish}{frequency of occurrence}, \hyphenquote{UKenglish}{hierarchy of grammatical roles, level of phrasal embedding, and parallelism of grammatical role} \autocite[544]{lapplass}. Lastly, as an element originating from centering theory, candidates within the same sentence as the pronoun are preferred over candidates in preceding or subsequent sentences \autocite[544, 557]{lapplass}.    

In a blind test, RAP was tested with 360 pronouns contained in manual texts and the resolution was successful for 86\% of these pronouns. With the inclusion of semantic and real-world knowledge, RAPSTAT was able to increase that rate by 2\%. RAPSTAT correctly disagreed with the candidate selected by RAP in about 61\% of all cases \autocite[535, 554]{lapplass}.

\paragraph{Ge, Hale, and Charniak 1998}
The basis of Ge, Hale, and Charniak's approach is a probabilistic model which includes four factors relevant to anaphora resolution. They describe their system as differing \hyphenquote{UKenglish}{from earlier work in its almost complete lack of hand-crafting, relying instead on a very small corpus of Penn Wall Street Journal Tree-bank text [...] that has been marked with co-reference information} \autocite[161]{geetal}. For the training of the algorithm, Ge, Hale, and Charniak used 90\% of the corpus and the rest of the corpus was used for the purpose of testing \autocite[165]{geetal}.

For the first factor, an anaphora resolution algorithm of Hobbs \autocite*[]{hobbs} is used to count how often the correct antecedent for a given pronoun occurs with a specific Hobbs distance in the training corpus \autocite[161, 164]{geetal}. Hobbs distance is the index for the antecedent within the list of all antecedents that are proposed by the Hobbs algorithm. Thus, syntactic information is included in this approach using Hobbs' algorithm. The parse trees of the corpus had to be modified to some extent for this purpose \autocite[164]{geetal}.

The second factor involves gender, number, and animacy information \autocite[161]{geetal}. The number of times a given antecedent occurs for a specific class of pronoun is counted for each of the different sets of pronouns according to their gender, number, and  animacy. If a complex noun phrase with multiple relevant words constitutes the antecedent, a likelihood test is employed in order to find \hyphenquote{UKenglish}{the most informative [word]} \autocite[164]{geetal}. 

Selectional restrictions are included in the algorithm as its third factor. For this factor, the frequency of how often a given candidate occurs in the training corpus with a specific type of parent constituent is calculated \autocite[162]{geetal}. These classes of parent constituents are also referred to as the cluster of the parent constituent of the current candidate. In a study concerning a statistical parser for the same corpus as the one of Ge, Hale, and Charniak the compilation of these clusters is already included \autocite[162,164]{geetal}.

Frequency of occurrence as an element related to centering and also to discourse knowledge is included as the last factor of Ge, Hale, and Charniak's probabilistic model \autocite[162]{geetal}. Furthermore, they also mention that \hyphenquote{UKenglish}{the nearer the end of the story a pronoun occurs, the more probable it is that its referent has been mentioned several times} \autocite[164]{geetal}. Consequently, the sentence number was taken into account when the frequency of occurrence for the antecedents was computed \autocite[164]{geetal}.

The algorithm chooses the candidate as the proposed antecedent for a given pronoun which maximises the product of the four factors above. The system was tested with third person personal pronouns occurring in the part of the corpus remaining after training for 82.9\% of these pronouns the proposed antecedent was correct \autocite[165]{geetal}. 
%---
\subsubsection{Similar Approaches}
%---
\paragraph{Nasukawa 1994}
Nasukawa's knowledge-poor approach includes both discourse knowledge and real-world knowledge, as far as these types of knowledge can be extracted heuristically from a given text. Three factors for the evaluation of the salience of candidates are comprised by these heuristics. This heuristically acquired knowledge is considered \hyphenquote{UKenglish}{world knowledge appropriate to the narrow domain of the source text} \autocite[1157, 1158]{nasu}. Nasukawa describes the algorithm as performing \hyphenquote{UKenglish}{quite well, especially in technical manuals} \autocite[1157]{nasu}.

The first factor utilises the collocation patterns retrieved in the source text based on the output of a simple parser \autocite[1159]{nasu}. Its purpose is to check whether \hyphenquote{UKenglish}{the candidate can be an argument of a predicate that dominates the pronoun} \autocite[1158]{nasu}. If a candidate matches such a pattern, multiple types of preference can be considered to hold: selectional restriction, \hyphenquote{UKenglish}{\emph{case role persistence}}, and \hyphenquote{UKenglish}{\emph{syntactic parallelism}} \autocite[1158]{nasu}. In case of a successful match, which is also possible with a synonym (retrieved with the help of an on-line dictionary) of the lemma of the candidate, a preference value of 3 is assigned to the current candidate \autocite[1160]{nasu}.

The second factor uses a simple string match for the lemma of a candidate in preceding sentences to compile its frequency of repetition \autocite[1159] {nasu}. The preference realised by this factor also takes the distance of the element repeating the lemma of the candidate into account \autocite[1160]{nasu}. Furthermore, it can be enhanced by the recognition of noun phrases in headings which introduce the focus of the subsequent text \autocite[1159]{nasu}. 

Lastly, preference values are assigned to candidates depending on their syntactic position, preferring subjects over objects. The distance between the pronoun and the candidate is also relevant for this factor \autocite[1159, 1160]{nasu}.

In order to select the best candidate, the preference values assigned by these three factors are added up and the candidate with the highest value is selected. For a set of 112 third person pronouns the algorithm achieved a success rate of 93.8\% \autocite[1160, 1162]{nasu}.

\paragraph{Dagan and Itai 1990}
Dagan and Itai try to avoid manual \hyphenquote{UKenglish}{acquisition of semantic constraints in broad domains} by acquiring selectional restrictions from large corpora both automatically and statistically \autocite[330]{dagan}. These restrictions are modelled with the help of collocation patterns retrieved in those corpora. The patterns considered are \hyphenquote{UKenglish}{subject-verb}, \hyphenquote{UKenglish}{verb-object}, \hyphenquote{UKenglish}{adjective-noun} \autocite[331]{dagan}.

These three types of collocation patterns are identified and counted based on the output of a parser. The resolution \hyphenquote{UKenglish}{algorithm has to map surface structures}, i.e. strings of candidate noun phrases, to these patterns, accepting only those patterns which occurred more frequently than the \hyphenquote{UKenglish}{threshold of 5 occurrences} \autocite[331]{dagan}.  

For the following example (2.8), the two occurrences of \emph{it} in the patterns \emph{it-collect} and \emph{collect-it} are replaced with each of the candidates \emph{money}, \emph{collection}, and \emph{government} to retrieve the frequency of these patterns. The most frequent patterns retrieved are \emph{government-collect} for the first \emph{it} and \emph{collect-money} for the second, which leads the algorithm to suggest \emph{government} and \emph{money} as the correct antecedent, respectively \autocite[330-331]{dagan}.  

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	
aboveexskip=2.75ex
]
They know fully well that \emph{the companies} held \emph{tax money} aside for \emph{collection} later on for the basis that \emph{the government} said \emph{it} was going to collect \emph{it} \autocite[adapted from:][330]{dagan}.
\xe
%\end{small}
\endgroup

\noindent In cases when the algorithm cannot suggest a single candidate based on selectional restrictions only, Dagan and Itai suggest \hyphenquote{UKenglish}{other means, such as syntactic heuristics or asking the user} \autocite[331]{dagan}. In general, the intention for their algorithm is not to operate on its own, but to cooperate with other \hyphenquote{UKenglish}{disambiguation means} \autocite[330]{dagan}.

For approximately 36\% of the 59 occurrences of it taken from the Hansard corpus, which was also used for knowledge acquisition, no relevant pattern occurred with a frequency surpassing the threshold. The candidate proposed by the algorithm was the correct antecedent for 87\% of the remaining 38 pronouns \autocite[331, 332]{dagan}.
% ##########################################################################
\newpage
\setcounter{excnt}{1}
\section{Mitkov's Algorithm}
The description of Mitkov's robust, knowledge-poor algorithm \autocite[145-176]{mit0} and its implementation in C\# for English technical manuals and scientific texts \autocite[135]{mit1} is divided into three essential parts. Firstly, the pre-processing steps are explained, illustrating the input and the output of the pre-processing phase with the help of an example sentence. Required language-specific and domain specific information is also presented. Then, the description of the core of the algorithm follows, comprising several heuristic \hyphenquote{UKenglish}{\emph{antecedent indicators}} which are described as avoiding the complex analysis of \hyphenquote{UKenglish}{sophisticated linguistic knowledge} \autocite[145]{mit0}. Furthermore, the basic steps of the algorithm, the classes of \emph{KPAR} relevant to these, as well as examples of the application of the indicators are described.
% -----------------------------------------------------------------------------------------------------------------------------------------------------
\subsection{Pre-Processing}
%---
\subsubsection{\emph{TreeTagger} Chunker}
%---
The main pre-processing task is done by the chunker function for English by the \emph{TreeTagger}, providing part-of-speech and lemma information, as well as sentence boundaries and the basic phrasal elements of the sentences of a text \autocites[]{smid94}[]{smid95}. The \emph{TreeTagger} is reported to reach an accuracy of up to 97.5\% \autocite[8]{smid95}. Although Mitkov describes his algorithm as not relying on any parsing, but merely on \hyphenquote{UKenglish}{simple noun phrase rules} \autocite[135]{mit1}, some indicators require more than merely on information about noun phrase boundaries, as the following sections are going to show. Thus, the use of a chunker can be considered an appropriate solution for pre-processing. Similar to Mitkov's noun phrase rules, the \emph{TreeTagger} chunker does not handle complex noun phrases \autocite[146]{mit0}.

Supposing that most manuals are accessible as PDF files, optical character recognition (OCR) might also be a necessary pre-processing task. Erroneously included graphic elements and redundant white-space characters might have to be removed before the text file is passed on to the \emph{TreeTagger} chunker. 

Furthermore, the \emph{TreeTagger} does not recognise headings and paragraphs by itself. However, the antecedent indicators rely on this type of information. Consequently, the text files have to be annotated with symbols that do not distort the chunker output. The symbols \emph{  \~} and \emph{§} were chosen in order to insert them before headings and paragraphs, respectively. Mitkov does not specify how these types of additional structural information are added.

As soon as the possibly necessary OCR-processing and the annotation with basic structural annotation are complete, the resulting text file can be passed on to the chunker. Figure 1 below shows an example of the output produced by the \emph{TreeTagger} chunker for a sentence taken from a manual for a vacuum cleaner. 
 
\begingroup
\setstretch{1}
%\begin{small}
\begin{figure}[ht]
\centering
\begin{tabular}{c}
\begin{lstlisting}
<VC>
Extend	VV	extend
</VC>
<NC>
the	DT	the
wand	NN	wand
</NC>
<PC>
to	TO	to
<NC>
its	PP$	its
full	JJ	full
length	NN	length
</NC>
</PC>
.	SENT	.
\end{lstlisting}
\end{tabular}
\caption
{\rightskip=2em\leftskip=2em An example of the \emph{TreeTagger} chunker output for a sentence taken from a vacuum cleaner manual \autocite[1]{vacu}}
\end{figure}
%\end{small}
\endgroup

\noindent As this example illustrates, the output of the chunker is a mixture of seemingly incomplete XML tags and of tabulator-separated values for the individual tokens of the text, containing the string, the part-of-speech information, and the lemma. Reading the output file line by line, a separate module of the implemented algorithm creates an XML file which is used as the basic input file for the algorithm. Furthermore, the results of the anaphora resolution are serialised into this file.

The XML file created during this step contains the following information. Firstly, the symbols used for headings and paragraphs also occur as separate lines in the chunker output file. These lines result in the creation of tags for headings and paragraphs. Furthermore, if a heading symbol occurs, a section tag is created if the previous heading or set of headings already has been followed by at least one paragraph. The seemingly incomplete tags contained in the chunker output have equivalent tags contained in either heading or sentence tags in the XML file. The most basic tag is the tag for individual tokens, containing the lemma and part-of-speech information as the values of corresponding attributes and the token string as its text value. The following example of the XML output in Figure 2 for the sentence already shown in its chunker-output format in Figure 1 illustrates additional details relevant to the anaphora resolution algorithm.

\begingroup
\setstretch{1}
%\begin{small}
\begin{figure}[ht]
\centering
\begin{tabular}{c}
\begin{small}
\begin{lstlisting}
<sentence id="30:64:118">
  <VC id="30:64:118:3212">
    <token id="30:64:118:3213" pos="VV"
           lemma="extend">Extend</token>
  </VC>
  <NC number="sg" gender="neut" prepositional="False" 
      corefInfo="21" id="30:64:118:3214">
    <token id="30:64:118:3215" pos="DT" 
           lemma="the">the</token>
    <token id="30:64:118:3216" pos="NN"
           lemma="wand">wand</token>
  </NC>
  <PC id="30:64:118:3217">
    <token id="30:64:118:3218" pos="TO" 
           lemma="to" prepositional="True">to</token>
    <NC number="sg:sg" gender="neut:neut" 
        prepositional="True" 
        corefInfo="21" id="30:64:118:3219">
      <token id="30:64:118:3220" pos="PP$" 
           lemma="its">its</token>
      <token id="30:64:118:3221" pos="JJ" 
           lemma="full">full</token>
      <token id="30:64:118:3222" pos="NN" 
           lemma="length">length</token>
    </NC>
  </PC>
  <token id="30:64:118:3223" pos="SENT" 
           lemma=".">.</token>
</sentence>
\end{lstlisting}
\end{small}
\end{tabular}
\caption
{\rightskip=2em\leftskip=2em Example of the XML output for the sentence in Figure 1 above \autocite[1]{mit0}}
\end{figure}
%\end{small}
\endgroup
\newpage
\noindent The composition of the identification attribute values is important for the retrieval of relevant information for anaphoric pronouns and their candidates. The value of the attribute \emph{id} can be simple or complex, depending on the level of embedding of the elements. For sections, this value is simple, it constitutes the number of the section within the text. For elements embedded in sections, headings and paragraphs, the \emph{id} value consists of 2 partial \emph{id} values. Partial \emph{id} values are separated by a colon (:). For headings, the second part is the identification for the heading, whereas the first part is the identification for the section of the heading. By analogy, the second part for paragraphs is the number for the current paragraph in a given text. Sentence tags are contained in paragraph tags and add their own partial \emph{id} value to the partial \emph{id} value of the current section and the current paragraph. Consequently, the sentence in Figure 2 constitutes the 118th sentence in the 64th paragraph of the 30th section of the vacuum-cleaner manual.
Regardless of further levels of embedding, all elements contained in sentences only add their own \emph{id} value as the last part for this attribute. Heading tags do not contain paragraph tags. Thus, elements contained within headings have the placeholder \emph{H} for the second partial value, which is usually the value for the number of the paragraph for elements contained in sentence tags. 

Furthermore, with the noun phrase \emph{its full length} the possibility of complex \emph{number} and \emph{gender} values is illustrated. The reason for this possibility is the attributive use of possessive pronouns such as \emph{its}. In such cases, the tags for noun phrases have 2 partial values for both \emph{number} and \emph{gender}. The first partial value belongs to the noun phrase, whereas the second partial value belongs to the pronoun. Thus, these noun phrases can be easily recognised and are not excluded from candidate lists of other anaphoric pronouns possibly occurring in the subsequent text. 

The anaphoric link between \emph{the wand} and the possessive pronoun \emph{its} is recorded with the help of the attribute \emph{corefInfo}. This attribute is also used to protocol the exclusion of pronouns, such as occurrences of pleonastic \emph{it} or of deictic second person pronouns, for example.
%---
\subsubsection{Language-Specific Lists}
%---
The implementation of Mitkov's algorithm depends on several language-specific lists of strings. These lists are discussed in three subsets because these subsets serve different purposes. The first set of language-specific lists is used to retrieve and filter pronouns from a given text, the second subset is related to gender and number information, and lastly, the third list is involved in the evaluation of antecedent indicators.

The first subset includes (a) a list of all personal, possessive, reflexive, demonstrative, and relative pronoun strings, (b) a list of all the deictic pronouns from list (a), and (c) a list of part-of-speech tags which are used to tag actual pronouns by the \emph{TreeTagger}.

\begingroup
%\renewcommand{\theenumi}{\alph{enumi}}
%\begin{small}
\begin{quote}
\setstretch{1}
\TabPositions{5cm}
\begin{enumerate}[(a)]
\item \textbf{personal, possessive, reflexive, demonstrative, and relative pronouns:}\\
\emph{i}, \emph{me}, \emph{my}, \emph{mine}, \emph{myself}, \emph{you}, \emph{your}, \emph{yours}, \emph{yourself}, \emph{he}, \emph{him}, \emph{his}, \\\emph{himself}, \emph{she}, \emph{her}, \emph{hers}, \emph{herself}, \emph{it}, \emph{its}, \emph{itself}, \emph{we}, \emph{us}, \emph{our}, \emph{ours}, \emph{ourselves}, \emph{your}, \emph{yours}, \emph{yourselves}, \emph{they}, \emph{them}, \emph{their}, \emph{theirs}, \emph{themselves}, \emph{this}, \emph{that}, \emph{these}, \emph{those}, \emph{who}, \emph{whom}, \emph{which}, \emph{whose}
\item \textbf{deictic pronouns:}\\
\emph{i}, \emph{me}, \emph{my}, \emph{mine}, \emph{myself}, \emph{you}, \emph{your}, \emph{yours}, \emph{yourself}, \emph{we}, \emph{us}, \emph{our}, \emph{ours}, \emph{ourselves}, \emph{your}, \emph{yours}, \emph{yourselves}
\item \textbf{part-of-speech tags for pronouns:}\\
\emph{PP}, \emph{PP\$}, \emph{WP}, \emph{WP\$}, \emph{WDT}, \emph{DT}
\end{enumerate}
\end{quote}
%\end{small}
\endgroup

\noindent The list (a) is used to retrieve all these pronouns from a given text using simple string matching. List (b) is used to exclude deictic pronouns from the list of pronouns which are passed on to the resolution step. Finally, list (c) forms the basis for a filter for the exclusion of certain pronoun strings, such as \emph{that} used as a subordinating conjunction. The tags \emph{PP} and \emph{PP\$} are used for personal, possessive and reflexive pronouns, whereas \emph{WP}, \emph{WP\$}, and \emph{WDT} are possible tags for relative pronouns \autocite[4-6]{santorini}. The tag \emph{DT} corresponds to determiners \autocite[2]{santorini} but demonstrative pronouns are also tagged this way if they occur without a modified noun.

The subset of the lists (d) to (j) is important for determining the values of the attributes \emph{gender} and \emph{number} for pronouns and also for specific candidates:

\begingroup
%\renewcommand{\theenumi}{\alph{enumi}}
%\begin{small}
\begin{quote}
\setstretch{1}
\TabPositions{5cm}
\begin{enumerate}[(a)]
\setcounter{enumi}{3}
\item \textbf{masculine pronouns:}\\
\emph{he}, \emph{him}, \emph{himself}, \emph{his}
\item \textbf{feminine pronouns:}\\
\emph{she}, \emph{hers}, \emph{herself}, \emph{her}
\item \textbf{neuter pronouns:}\\
\emph{it}, \emph{itself}, \emph{its}, \emph{this}, \emph{that}, \emph{which}
\item \textbf{singular pronouns:}\\
\emph{i}, \emph{me}, \emph{my}, \emph{mine}, \emph{myself}, \emph{yourself}, \emph{he}, \emph{him}, \emph{his}, \emph{himself}, \emph{she}, \emph{her}, \emph{hers}, \emph{herself}, \emph{it}, \emph{its}, \emph{itself}, \emph{this}
\item \textbf{plural pronouns:}\\
\emph{we}, \emph{us}, \emph{our}, \emph{ours}, \emph{ourselves}, \emph{yourselves}, \emph{they}, \emph{them}, \emph{their}, \emph{theirs}, \emph{themselves}, \emph{these}, \emph{those}
\item \textbf{possessive or relative pronouns which can be used attributively}\\
\emph{my}, \emph{your}, \emph{his}, \emph{her}, \emph{its}, \emph{our}, \emph{their}, \emph{whose}
\item \textbf{collective nouns (experimental):}\\
\emph{government}, \emph{team}, \emph{parliament}, \emph{data}
\end{enumerate}
\end{quote}
%\end{small}
\endgroup\noindent Lists (d) - (f) are used to assign the \emph{gender}-values (\emph{masc}, \emph{femi}, \emph{neut}, or \emph{indisc}) for specific pronouns because the \emph{TreeTagger} does not provide gender information. By analogy, (g) and (h) are used to assign \emph{number}-values (\emph{sg}, \emph{pl}, or \emph{indisc}). Pronoun strings not included in these five lists are assigned the value \emph{indisc} (indiscernible gender or number). For example, this is the case with \emph{you} concerning number, or \emph{they} concerning gender. List (i) is used to recognise possessive or relative pronouns used attributively. Noun phrases containing this type of pronoun involve two different referents, and each referent has its own number and gender (see \emph{its full length} in Figure 2, p. 18). Lastly, list (j) is an experimental list which serves the purpose of identifying collective nouns whose \emph{number}-value is set to \emph{indisc} because they \hyphenquote{UKenglish}{can be referred to by both \emph{they} and \emph{it}} \autocite[29]{mit0}. This list is experimental because it is obviously incomplete and not described in detail \autocite[871]{mit2}, and it is also not a trivial task to compile a complete list. 

Furthermore, during the creation of the XML file, the application \emph{} also tries to identify the \emph{gender}- and \emph{number}-values for non-pronominal noun phrases. Noun phrases which contain coordinating conjunctions or a plural head noun are identified as plural noun phrases with indiscernible gender. If a noun phrase is not a collective noun and does not contain a plural head, it is identified as a singular noun phrase. Noun phrases containing proper nouns are assigned the \emph{gender} value \emph{personal} and the application asks the user to check the created XML file for the purpose of replacing this placeholder value with the correct \emph{gender}-before it continues with anaphora resolution. In English, most of the remaining noun phrases are neuter, which is why the remaining pronouns are assigned the \emph{gender} value \emph{neut}. This is a very simplified solution which does not handle references to human beings or classes \autocite[29]{mit0}, such as with \hyphenquote{UKenglish}{\emph{the maintenance engineer}} in \hyphenquote{UKenglish}{Sometimes \emph{the maintenance engineer} is dealing with a type of failure he has never had before} \autocite[BP2:150]{bncw}. However, this solution can serve as a preliminary but sufficient basis for the gender and and number agreement tests of the algorithm.

The last subset, consisting of the lists (k) - (o), is involved in the heuristics for some of the antecedent indicators:

\begingroup
%\renewcommand{\theenumi}{\alph{enumi}}
%\begin{small}
\begin{quote}
\setstretch{1}
\TabPositions{5cm}
\begin{enumerate}[(a)]
\setcounter{enumi}{10}
\item \textbf{definite article list \autocite[148]{mit0}:}\\
\emph{the}, \emph{this}, \emph{that}, \emph{these}, \emph{those}, \emph{my}, \emph{your}, \emph{his}, \emph{her}, \emph{its}, \emph{our}, \emph{their}
\item \textbf{indefinite article list \autocite[137]{mit1}:}\\
\emph{a}, \emph{an}, \emph{other}, \emph{another}
\item \textbf{verb preference list \autocite[146]{mit0}:}\\
\emph{analyse}, \emph{assess}, \emph{check}, \emph{consider}, \emph{cover}, \emph{define}, \emph{describe}, \emph{develop}, \emph{discuss}, \emph{examine}, \emph{explore}, \emph{highlight}, \emph{identify}, \emph{illustrate}, \emph{investigate}, \emph{outline}, \emph{present}, \emph{report}, \emph{review}, \emph{show}, \emph{study}, \emph{summarise}, \emph{survey}, \emph{synthesise}
\item \textbf{\hyphenquote{UKenglish}{noun phrase} preference list \autocite[136]{mit1}:}\\
\emph{chapter}, \emph{section}, \emph{table}
\item \textbf{clause limiter tags:}\\
\emph{WP}, \emph{WP\$}, \emph{WRB}, \emph{WDT}, \emph{CC}
\end{enumerate}
\end{quote}
%\end{small}
\endgroup

\noindent The lists (k) and (l) serve the purpose of identifying indefinite noun phrases. Only candidates which contain any element from list (k) are considered definite. Lists (m) and (n) are employed to assign a positive score to candidates following any of the elements contained in these lists. List (o) helps to identify clause boundaries heuristically. \emph{WP}, \emph{WDT},  and  \emph{WP\$} are tags for relative pronouns which can indicate the start of relative clauses, \emph{WRB} includes, \emph{when}, \hyphenquote{UKenglish}{\emph{how}, \emph{where}, \emph{why}, etc.}, and \emph{CC} is used for coordinating conjunctions \autocite[2-6]{santorini}.
\subsubsection{One Domain-Specific List}
%---
There is one more language-specific list relevant to the pre-processing step of \emph{KPAR} which has to be acquired manually. Most importantly, however, this list is specific to the \hyphenquote{UKenglish}{domain} of the input text \autocite[136]{mit1}. This list contains all key-word nouns which are listed by the on-line tool \emph{Textalyser} as occurring more than ten times in the source text \autocite[]{texal}. The standard settings for English texts were used for \emph{Textalyser}. As an illustration, the output of \emph{Textalyser} for the steam iron manual \autocite[]{iron} resulted in the following list (p):

\begingroup
%\renewcommand{\theenumi}{\alph{enumi}}
%\begin{small}
\begin{quote}
\setstretch{1}
\TabPositions{5cm}
\begin{enumerate}[(a)]
\setcounter{enumi}{15}
\item \textbf{domain concepts for the steam iron manual:}\\
\emph{iron}, \emph{steam}, \emph{water}, \emph{cord}, \emph{button}, \emph{temperature}, \emph{fabric}, \emph{jet}, \emph{feature}, \emph{dial}, \emph{spray}, \emph{selector}
\end{enumerate}
\end{quote}
%\end{small}
\endgroup
% -----------------------------------------------------------------------------------------------------------------------------------------------------
\subsection{Antecedent Indicators}
The following indicators form the basis for the candidate selection of Mitkov's robust, knowledge-poor algorithm for anaphora resolution. Each candidate retrieved in the sentence of the current pronoun and in up to two preceding sentences are evaluated based on these indicators, and the values assigned with their help range from -1 over 0 and +1 to +2. Consequently, candidates can be either penalised or rewarded based on specific features \autocite[146-149]{mit0}. 

The penalising indicators are \emph{Indefiniteness} and \emph{Prepositional Noun Phrases}, whereas the rewarding indicators are \emph{Givenness}, \emph{Domain Concept Preference}, \emph{Verb Preference}, \emph{Noun Phrase Preference}, \emph{Section Heading Preference}, \emph{Collocation Pattern Preference}, \emph{Lexical Reiteration}, \emph{Immediate Reference}, and \emph{Antecedent-Pointing Constructions}. The indicator \emph{Referential Distance} can both penalise and reward candidates \autocites[146-148]{mit0}[135-138]{mit1}.The indicators and their implementation are discussed in the following with the help of results yielded by \emph{KPAR} during testing. 


%#####################
%#####################
%#####################


%---
\subsubsection{Indefiniteness and Definiteness}
%---
This indicator penalises candidate noun phrases which are \hyphenquote{UKenglish}{indefinite} \autocite[148]{mit0}, i.e. an affected candidate is assigned a score of -1 by the indefiniteness indicator. Mitkov's algorithm \hyphenquote{UKenglish}{regards a noun phrase as definite if the head noun is modified by a definite article, or by demonstrative or possessive pronouns} \autocite[148]{mit0}. This set of relevant modifiers is specified by the language specific list (k): \emph{the}, \emph{this}, \emph{that}, \emph{these}, \emph{those}, \emph{my}, \emph{your}, \emph{his}, \emph{her}, \emph{its}, \emph{our}, and \emph{their}. Consequently candidates not containing a modifier from that list are penalised. Noun phrases with an indefinite modifier from the language-specific list (l), containing \emph{a}, \emph{an}, \emph{other}, as well as \emph{another}, are also assigned a score of -1. The following examples illustrate both straightforward applications of this indicator as well as one case where the influence of the indicator is only marginal.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\emph{This vacuum cleaner$^{1}$} must be grounded. If \emph{it$^{1}$} should malfunction or breakdown, grounding provides a path of least resistance for electric current to reduce the risk of electric shock \autocite[5]{vacu}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
Empty iron immediately after using. Don't store \emph{the iron$^{1}$} with water in \emph{it$^{1}$} \autocite[9]{iron}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
When you are finished printing, remove  \emph{any unused paper$^{1:2:3}$} from the paper tray. Store \emph{it$^{1}$} in the waterproof wrapper in which \emph{it$^{2}$} was originally packaged to preserve \emph{its$^{3}$} quality \autocite[22]{print}.
\xe
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent In the first example (3.1) the candidate noun phrase \emph{This vacuum cleaner} is obviously premodified by a demonstrative article. Thus, it is considered definite, it is not penalised with a value of -1, and it is successfully identified as the antecedent of the pronoun \emph{it} occurring in the second sentence of (3.1). In (3.2), the candidate \emph{iron} is clearly indefinite, whereas the noun phrase identified as the antecedent of \emph{it}, \emph{the iron}, is not affected by a negative score due to the indefiniteness indicator. 

(3.3) illustrates that the penalties realised by this indicator may be marginal, and they do not necessarily have to be correct in pointing away from an indefinite candidate noun phrase. With the several instances of anaphora shown here, the indefinite noun phrase \emph{any unused paper} is preferred over the definite candidates \emph{the paper tray} and \emph{waterproof wrapper} by the algorithm on the basis of other rewarding indicators. One example would be \emph{Domain Concept Preference} because \emph{paper} belongs to the set of nouns specified as domain concepts by the domain-specific list for the printer manual. 

Mitkov also mentions an exception for the application of this indicator. Sometimes definite articles are omitted throughout a whole \hyphenquote{UKenglish}{discourse segment} of a manual \autocite[135]{mit1}. If this is the case, the indicator \emph{Indefiniteness} is ignored, i.e. no penalising scores are assigned to all the candidates of the current pronoun for this indicator \autocite[135]{mit1}. For the purpose of implementation, the paragraphs relevant to the current pronoun are considered the current discourse segment. However, manuals are not always consistent in the omission or use of definite articles, as (3.1) above illustrates with \emph{iron} and \emph{the iron}. Nevertheless, example (3.4) shows how this exception applies for a pronoun in one of the texts from the \emph{BNC} which are not manuals.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	
aboveexskip=2.75ex
]
\emph{Fault-response based systems$^{1:2}$} are usually advocated when lots of reliable diagnostic information is available. \emph{They$^{1}$} can usually be built relatively quickly and at low cost and are easy to understand and operate. However, \emph{they$^{2}$} are very problem specific and therefore difficult to adapt to new situations, also large rule bases become very difficult to maintain (500 rules) \autocite[BP2:577-579]{bncw}. \par\nobreak
\xe
%\end{small}
\endgroup

\noindent In the paragraph of the pronoun \emph{they} in the last sentence of (3.4) only indefinite noun phrases occur: \emph{Fault-response based systems}, \emph{lots of reliable diagnostic information}, \emph{low cost}, \emph{new situations}, and \emph{large rule bases}. Thus, the conditions for the exception hold and no negative scores are applied for any of these candidate noun phrases.

The implementation of the indefiniteness indicator and of the exception for article omission is not a complicated matter. In order to retrieve definite and indefinite noun phrases, merely string matching is involved. The language-specific lists (k) and (l) (see p. 20) function as the filters for this matching procedure. 


%#####################
%#####################
%#####################


%---
\subsubsection{Prepositional Noun Phrases}
%---
The indicator \emph{Prepositional Noun Phrases} penalises candidates which are embedded inside a prepositional phrase. The value assigned to these prepositional candidates is -1 \autocite[148]{mit0}. Concerning this indicator, Mitkov refers to the centering theory of Brennan et al. \autocite*[]{bren} who consider an indirect object, which is often constituted by a prepositional noun phrase, one of the syntactic functions at the lower levels of a salience hierarchy \autocite[148]{mit0}. The following examples partially show the search scope of anaphora resolution processes where candidates being penalised by this indicator occur.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
When \emph{the iron$^{1:2}$} is left unmoved with \emph{its$^{1}$ soleplate} facing down or on \emph{its$^{2}$} side for approx. 1. minute, [...] \autocite[7]{iron}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
Consequently, \emph{operations} during \emph{printing} may seem sluggish and \emph{the system clock} may appear to stop. However, do not cancel \emph{this mode$^{1}$}; \emph{it$^{1}$} is necessary for proper printing \autocite[21]{print}.
\xe
%\end{small}
\endgroup

\noindent In (3.5), \emph{its soleplate} occurs as an antecedent candidate of the attributively used possessive pronoun \emph{its} in \emph{its side}, along with the correct antecedent \emph{the iron}. In co-operation with other evaluation processes including the rest of the indicators, the selection of \emph{its soleplate} is discouraged due to the \emph{Prepositional Noun Phrases} indicator because \emph{its soleplate} is embedded in the prepositional phrase \emph{with its soleplate}. Similarly, among the candidates for \emph{it} in (3.6), namely \emph{operations}, \emph{printing}, \emph{the system clock}, and \emph{this mode}, the candidate noun phrase \emph{printing} is penalised with a score of -1 because it is premodified by the preposition \emph{during}. This penalising effect can be overridden by other indicators in the same way as with \emph{Indefiniteness}, i.e. by other indicators rewarding other features of an antecedent candidate.

The basic information for this indicator is acquired during pre-processing. The chunker also recognises prepositional phrases (\emph{<PC>}-elements, see Figure 2, p. 16), and during the creation of the XML file from the chunker output, for identified noun phrases (\emph{<NC>}-elements, see Figure 2, p. 16) the value of the attribute \emph{prepositional} is set to \emph{True} or \emph{False}, depending on whether the current noun phrase is embedded in a prepositional phrase or not. Consequently, the algorithm simply has to check the specified value of this attribute. 


%#####################
%#####################
%#####################


%---
\subsubsection{Givenness}
%---
This indicator is also called \hyphenquote{UKenglish}{\emph{First noun phrases}} \autocite[146]{mit0} by Mitkov because it essentially rewards an antecedent candidate for occurring as the first noun phrase of a sentence. \emph{Givenness} is described as being \hyphenquote{UKenglish}{a linear approximation of the subject preference} which is supported by two theories \autocite[146]{mit0}. Firstly, this subject preference is obviously related to centering \autocite[cf.][]{bren}. Secondly, the name \emph{Givenness} goes back to of Firbas' theory \autocite*[]{firbas} that the \hyphenquote{UKenglish}{theme} as \hyphenquote{UKenglish}{the given or known information [...] usually appears first [in a coherent text]} and is followed by the \hyphenquote{UKenglish}{new information, or rheme} \autocite[146]{mit0}.  The score assigned for a candidate which is the first noun phrase of a sentence is +1 \autocite[146]{mit0}, such as with the selected candidates in the three examples below.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.5ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
Find \emph{the System Folder$^{1}$} on your hard drive - but do not open \emph{it$^{1}$} \autocite[20]{print}.
%27:152:323:6306
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.5ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
Do not modify \emph{the plug$^{1}$} provided with the vacuum cleaner - if \emph{it$^{1}$} will not fit the outlet, have a proper outlet installed by a qualified electrician \autocite[5]{vacu}.
%11:29:52:1202
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.5ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\emph{Expert systems$^{1}$} cannot perform the full range of reasoning strategies handled by the human brain and therefore do not replace human experts rather \emph{they$^{1}$} are complementary to human expertise \autocite[BP2:750]{bncw}.
%7:16:51:1653	
\xe
%\end{small}
\endgroup

\noindent The candidate noun phrase \emph{the System Folder} in (3.7) is preferred over \emph{your hard drive} because \emph{the System Folder} is the first noun phrase of that sentence. The same is true for the candidate \emph{the plug} in (3.8) because \emph{the vacuum cleaner} constitutes secondary, new information. Similarly, in (3.9) the selected candidate is \emph{Expert systems} and not one of the other plural noun phrases \emph{reasoning strategies} and \emph{human experts}, although the latter is also rewarded for representing a domain concept. Subject preference can also be misleading, however, as (3.10) illustrates.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.5ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	
aboveexskip=2.75ex
]
\emph{This vacuum cleaner$^{1*}$} is for use on a nominal 120-volt circuit, and has a grounding plug \emph{that$^{1*}$} looks like the plug illustrated in sketch A in the following Figure \autocite[5]{vacu}.
\xe
%
%\end{small}
\endgroup

\noindent The asterisk in (3.10) indicates that the result of the algorithm for the pronoun \emph{that} is wrong. The correct antecedent would have been the noun phrase immediately preceding the pronoun, which is the candidate \emph{a grounding plug}. This wrong decision has been made by the algorithm only partly because of the \emph{Givenness} preference. Another preference which has been applied here for the candidate \emph{This vacuum cleaner} is the indicator \emph{Domain Concept Preference}.

The implementation of this indicator simply involves the retrieval of the identification numbers of the first noun phrases in each sentence within the search scope for the current pronoun. If the identification number of a candidate is listed among the identification numbers of those \hyphenquote{UKenglish}{given} noun phrases, the rewarding score of +1 is applied.   


%#####################
%#####################
%#####################


%---
\subsubsection{Domain Concept Preference}
%---
\emph{Domain Concept Preference} is a genre- or domain-specific indicator \autocite[148]{mit0}. Candidate noun phrases \hyphenquote{UKenglish}{identified as representing terms in the genre of the text} are rewarded with a score of +1 \autocite[148]{mit0}. 

\begingroup
%\sloppy
%\begin{small}
\setstretch{1}
\rightskip=2em
%\ex
%[
%exnoformat=(\thesection.X),
%labeltype=numeric,
%numoffset=2em,
%textoffset=.5em,
%% OBEN:	
%aboveexskip=2.75ex, belowexskip=0.6ex
%% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
%% UNTEN:	aboveexskip=0.6ex
%% EINZELN:	aboveexskip=2.75ex
%]
%For example, in time constrained situations \emph{a good decision support system$^{1}$} is required capable of synthesising a great deal of information and from this rapidly producing the best solution with as little input as necessary from the aircraft engineer. Conversely, \emph{it$^{1}$} should allow for ‘what if’ model based solutions [...] \autocite[BP2:605-606]{bncw}.
%\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
Before ironing, sort the garments according to the different heat settings required. Iron \emph{fabrics$^{1}$} \emph{that$^{1}$} need a lower temperature first \autocite[5]{iron}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
Always clean attachments before using on fabrics. \emph{Dusting Attach- ments$^{1:2}$} used in dirty areas, such as under a refrigerator, should not be used on other surfaces until \emph{they$^{1}$} are washed. \emph{They$^{2}$} could leave marks \autocite[11]{vacu}.
\xe
%1)	BP2:605-606	BNC2	5:24:83:3147
%2)	5	Iron 	8:27:55:1393	
%3)	11	Vacuum	30:66:125:3437	
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent In (3.11), for example, one reason why \emph{fabrics} is preferred over the noun phrases \emph{the garments} and \emph{the different heat settings} is that the string \emph{fabric} occurs on the domain-specific list for the steam iron manual. The same is true for the domain-specific list of the manual for the vacuum cleaner and the string \emph{attachment}, which is why the candidate \emph{Dusting Attachments} is preferred over \emph{dirty areas} and \emph{other surfaces} as far as the indicator \emph{Domain Concept Preference} is concerned.

This indicator clearly requires the inclusion of domain-specific knowledge, which is acquired on the basis of a domain-specific list for a given text. This list is created manually during pre-processing. List (p) above (see p. 20), for example, was created for the steam iron manual. A candidate noun phrase is identified via string matching:  It is regarded as representing a domain-specific referent if one of the noun strings included in the domain-specific list also occurs within the concatenated strings of the elements of the candidate noun phrase. 


%#####################
%#####################
%#####################


%---
\subsubsection{Verb Preference}
%---
According to Mitkov, \hyphenquote{UKenglish}{empirical evidence suggests that noun phrases [immediately] following a verb} of the language-specific set of \hyphenquote{UKenglish}{\emph{analyse}, \emph{assess}, \emph{check}, \emph{consider}, \emph{cover}, \emph{define}, \emph{describe}, \emph{develop}, \emph{discuss}, \emph{examine}, \emph{explore}, \emph{highlight}, \emph{identify}, \emph{illustrate}, \emph{investigate}, \emph{outline}, \emph{present}, \emph{report}, \emph{review}, \emph{show}, \emph{study}, \emph{summarise}, \emph{survey}, [and] \emph{synthesise}} are more salient than other candidate noun phrases and they are assigned a score of +1 \autocite[146]{mit0}. With a score of +1, \emph{Verb Preference} constitutes a rather moderate preference and it seems to be rather unreliable. One of the following examples shows how this preference can be misleading, and another one illustrates a resolution error that could have been resolved correctly based on the indicator \emph{Verb Preference} alone.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\emph{Hypermedia} is software that retrieves information from a variety of different sources, stored on different media, and presents \emph{all this information$^{1*}$} as a single coherent database. \emph{It$^{1*}$} aims to mimic the brain 's ability to access information quickly and intuitively by forming associative links between subjects \autocite[BP2:823-824]{bncw}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
For example, in time constrained situations \emph{a good decision support system$^{1*}$} is required capable of synthesising \emph{a great deal of information} and from \emph{this$^{1*}$} rapidly producing the best solution with as little input as necessary [...] \autocite[BP2:605]{bncw}.
\xe
%2)	BP2:823-824	BNC3
%%domain%%	3)	BP2:605	BNC2	5:24:82:3117
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent In (3.13), \emph{all this information} is falsely selected as the best antecedent candidate and it follows one of the verbs contained in the set above, namely \emph{present} in \emph{presents}. The correct antecedent would have been \emph{Hypermedia} at the beginning of the first sentence in the example. In (3.14), the correct antecedent \emph{a great deal of information} could have been identified with the help of this indicator alone due to the preceding verb \emph{synthesise} in \emph{synthesising}. The noun phrase \emph{a good decision support system} was erroneously selected by \emph{KPAR}, however.

Basically, the implementation of this indicator consists of two steps. Firstly, retrieve the identification numbers of all noun phrases in the search scope which are immediately preceded by a verb phrase with a lemma that is contained in the language-specific set. Secondly, if the identification number of a candidate occurs in this list, the candidate is assigned a value of +1 for this indicator. The list used here is the longest list that could be found in the descriptions by Mitkov. It is indicated, however, that this list is not complete \autocite[146]{mit0}.


%#####################
%#####################
%#####################


%---
\subsubsection{Noun Phrase Preference}
%---
With the aim to test the implementation of Mitkov's algorithm also with non-manual texts related to computer science, this indicator was also included in the implementation. This indicator is not included in the description of the approach which forms the basis for a fully automated version (MARS - \hyphenquote{UKenglish}{Mitkov's Anaphora Resolution System}) \autocite[146-149, 164, 174]{mit0}, but it is included in a previous description \autocite[136]{mit1}. The language-specific list containing the strings \emph{chapter}, \emph{section}, and \emph{table} forms the basis for this indicator. A candidate following the verb phrase which is preceded by a noun phrase containing one of these three nouns is assigned a value of +1 \autocite[136]{mit1}. Neither the two non-manual texts nor the three manuals in the test did contain any candidates where this indicator was applied.
The following examples illustrate how these nouns mainly occurred in references to subsequent tables or different sections or chapters.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\emph{This table} gives you a checklist of solutions to common printing problems \autocite[31]{print}.
%1)	31	Print	53:237:506:10559	
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
See illustration in the REPLACING THE AGITATOR \emph{section} \autocite[15]{vacu}.
%2)	15	vacuum	40:91:192:5107		
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
Check the paper in the paper tray and close the paper tray cover securely (See the \emph{chapter} \hyphenquote{UKenglish}{Loading the Paper}) \autocite[30]{print}.
%3)	30	print	51:233:497:10385
\xe
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent This indicator is also implemented with the help of two basic steps. The identification numbers of all noun phrases in the search scope which are preceded by a sequence of one of the three nouns and a verb phrase are listed. Then, each candidate noun phrase with an identification number from this list is rewarded with a score of +1. 


%#####################
%#####################
%#####################


%---
\subsubsection{Section Heading Preference}
%---
Candidate noun phrases which \hyphenquote{UKenglish}{also occur in the heading of the section in which the pronoun appears} are rewarded with a score of +1 \autocite[147]{mit0}. The following three examples illustrate cases in which the conditions for this indicator hold.  

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\textbf{REPLACING THE AGITATOR}

[...] After the lower plate is removed, carefully lift up \emph{the agitator$^{1}$} until \emph{it$^{1}$} clears both sides of the nozzle housing \autocite[16]{vacu}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\textbf{Avionics and Knowledge Based Systems}

[...] Avionics now forms a substantial part of \emph{modern aircraft systems$^{1}$} and the introduction of BITE to many of \emph{these$^{1}$} has helped to reduce the number and types of diagnostic check [...] \autocite[BP2:587,591]{bncw}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\textbf{CAUTION DURING USE OF RETRACTIVE POWER CORD [...]}

Unwind \emph{the power cord$^{1}$} in the amount of length you need before plugging \emph{it$^{1}$} to the wall outlet \autocite[4]{iron}.
\xe
%1)	16	Vacuum	42:96:207:5489	
%2)	BP2:587,591	BNC3	5:21:72:2626	
%3)	4	iron	6:17:34:816	
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent In (3.18) - (3.20) above, all candidates selected by the algorithm as the correct antecedent are rewarded based on the indicator \emph{Section Heading Preference}. The candidate \emph{the agitator} in (3.18) completely matches the noun phrase \emph{THE AGITATOR} in the relevant section heading because strings involved in string comparisons are either converted to lower case where possible or compared while ignoring letter case. In (3.19) the lemma \emph{system} of the head noun in \emph{modern aircraft systems} also occurs in the heading, \emph{Knowledge Based Systems}. By analogy, the conditions for the application of the \emph{} indicator also hold for \emph{the power cord} and \emph{RETRACTIVE POWER CORD}. 
 
Again, simple string comparison while ignoring letter case is the solution to the implementation of this indicator. As illustrated in the examples above, the lemma of the head noun also constitutes a possibility for a string match. In order to enable the algorithm to detect these matches, the lemma of the head noun also has to be retrieved. Because the \emph{TreeTagger} chunker only  recognises simple and no complex noun phrases, this is done heuristically by searching the \emph{<NC>}-element of the candidate for the last XML element whose part-of-speech attribute identifies it as a noun.


%#####################
%#####################
%#####################


%---
\subsubsection{Collocation Pattern Preference}
%---
This indicator involves the following patterns: \emph{(un-)verb + noun phrase/pronoun}, \emph{noun phrase/pronoun + (un-)verb}, and \emph{noun phrase/pronoun + be + adjective} \autocites[136]{mit1}[147]{mit0}. This indicator is inspired by the collocation matches discussed by Dagan and Itai \autocite*[]{dagan} and assigns a score of +2 if any of the patterns mentioned above holds \autocite[147]{mit0}.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\emph{The white and grey secondary electrostatic filter$^{1}$} must be replaced when dirty. \emph{It$^{1}$} should be replaced regularly depending on use conditions \autocite[12]{vacu}. 
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
If you open the paper tray while \emph{the printer$^{1}$} is printing \emph{it$^{1}$} may stop printing, produce an underdeveloped print or cause a paper jam \autocite[24]{print}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
If \emph{the Ready lamp$^{1}$} is flashing, wait until \emph{it$^{1}$} stops flashing \autocite[24]{print}.
\xe
%1)	12	vacuum	33:69:136:3672	
%2)	24	printer	333:184:385:7793	
%3)	30	printer	57:241:514:10735	
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent In (3.21) the pattern \emph{noun phrase/pronoun + (un-)verb} can be applied for \emph{The white and grey secondary electrostatic filter} and \emph{It} based on the lemma \emph{replace} of the verb phrases \emph{must be replaced} and \emph{should be replaced}. The same indicator also applies successfully for (3.22) and (3.23) based on the verbs \emph{print} and \emph{flash}.

For the indicator \emph{Collocation Pattern Preference} string matching and also lemma identification for verb and adjective phrases is crucial. For preceding verb phrases and for following verb phrases or adjective phrases following the verb \emph{be}, the heuristic identification of the lemma is similar to the identification of the lemma of the head noun for the indicator \emph{Section Heading Preference}: the lemma of the last element inside a \emph{<VC>}- or  an \emph{<ADJC>}-element identified as a verb or adjective by its part-of-speech attribute is selected. As a result of this heuristic way of solving this, the verb phrase \emph{stops flashing} in (3.23) also results in a match. The \emph{(un-)} contained in these patterns also allows for a verb lemma to match with the same lemma negated with the prefix \emph{un-} \autocite[147]{mit0}. The lemma \emph{be} could lead to an inflationary application of this indicator. Thus, the algorithm tries to find a subsequent adjective lemma first before merely \emph{be} is accepted as the lemma for the pattern which has to be matched \autocite[147]{mit0}. 


%#####################
%#####################
%#####################


%---
\subsubsection{Lexical Reiteration}
%---
According to Mitkov, noun phrases which are \hyphenquote{UKenglish}{repeated twice or more in the paragraph in which the pronoun appears} are rewarded with a score of +2, and \hyphenquote{UKenglish}{a score of +1 is assigned to those [noun phrases] repeated once in that paragraph} \autocite[146]{mit0}. However, \hyphenquote{UKenglish}{synonyms or superordinates [...] are not counted} due to the lack of appropriate sources of information such as an ontology \autocite[147]{mit0}. The following examples illustrate the selection of candidates repeated in the paragraph of the current pronoun. 

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
Do not operate \emph{iron} with a damaged cord or if \emph{the iron} has been dropped or damaged. To avoid a risk of electric shock, do not disassemble \emph{the iron$^{1}$}, take \emph{it$^{1}$} to a qualified serviceman for examination and repair. Incorrect reassembly can cause a risk of electric shock when \emph{the iron} is used \autocite[2]{iron}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
\emph{Techniques} include on-board wear detection, vibration analysis and oil debris analysis [...]. \emph{Such techniques$^{1}$} provide maintenance engineers with extremely valuable data/information under actual flight conditions/loads  [...]. \emph{They$^{1}$} also provide the means for continuous, usage based, condition monitoring of critical systems [...]. Increasingly, \emph{such techniques} are featuring in safety regulations and design specifications.  \autocite[BP2:792-795]{bncw}
\xe
%1)	2	iron	2:9:20:416
%2)	BP2:792-795	BNC3	10:26:93:2964
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent In (3.24) \emph{the iron} is repeated three times in this paragraph and rewarded with a score of +2. It is the correct antecedent of the subsequent pronoun \emph{it}. \emph{Such techniques} in (3.25) is repeated twice and also assigned a score of +2. In both examples the presence or absence of a determiner does not impede a successful match because the occurrence of the head noun in some form also counts as valid repetition.  

For the purpose of string matching for this indicator, both the string of the candidate noun phrase and the lemma of its head are retrieved. Both the string of the entire noun phrase and the head string can result in a match. Lemma retrieval is handled analogously to the indicators \emph{Section Heading Preference} and \emph{Collocation Pattern Preference} above. For each candidate, the frequency of the matches is counted and decreased by one to implement the concept of repetition. Paragraphs in manuals can be very short. Thus, candidates from a preceding paragraph may have to be evaluated. In order to accommodate this, more than one paragraph can be considered relevant for determining the frequency of repetition of a candidate. 

Mitkov also mentions the exclusion of matches where coreference is not involved, for example, due to strings of ordinal numbers such as \emph{first} and \emph{second}. However, the criteria for a string to be a member of a list of such strings are not explained \autocite[146-147]{mit0}. Thus, this exception is not a part of the implementation of the indicator \emph{Lexical Reiteration}. 


%#####################
%#####################
%#####################


%---
\subsubsection{Immediate Reference}
%---
The indicator \emph{Immediate Reference} \hyphenquote{UKenglish}{is highly genre-specific and occurs frequently in imperative constructions} commonly used in manual texts \autocite[148]{mit0}. Mitkov describes these constructions as occurring with the following form: \hyphenquote{UKenglish}{\enquote*{... (You) V$_{1}$ NP ... \emph{con} (you) V$_{2}$ it (\emph{con} (you) V$_{3}$ it)}, where \emph{con} $\in$ \{ and/or/before/after/until ...\}} \autocite[147]{mit0}. The candidate noun phrase occurring at the beginning of this construction is rewarded with a score of +2 \autocite[147]{mit0}. The acquisition of the required structural information is not explained. The examples below illustrate the application of this indicator.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
To avoid a risk of electric shock, do not disassemble \emph{the iron$^{1}$}, take \emph{it$^{1}$} to a qualified serviceman for examination and repair \autocite[2]{iron}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
However, you can generate \emph{images$^{1}$} in most image-editing applications and then print \emph{them$^{1}$} on the TruPhoto Digital Photo Printer \autocite[23]{print}.
\xe
%3)	2	iron	2:9:20:416	To avoid a risk of electric shock, do not disassemble (the iron):2, take (it):2 to a qualified serviceman for examination and repair
%1)	23	print	31:175:366:7416	However, (you):unknown:EXCL:DEIX can generate (images):36 in most image-editing applications and then print (them):36 on the TruPhoto Digital Photo Printer
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent The examples (3.26) and (3.27) are both rather short matches for the pattern above, but they also show that often no conjunction is present, such as with only a comma separating the clauses in (3.26). In addition, (3.27) shows how a prepositional phrase can precede the conjunction of the clause with the pronoun. There is also a premodifying adverb in the clause with the pronoun \emph{them}, and a prepositional phrase following the candidate noun phrase. Furthermore, a lax way of implementing the pattern described by Mitkov can also lead to the effect that the noun phrases occurring in introductory clauses similar to \emph{To avoid a risk of electric shock} are rewarded instead of the first noun phrase occurring in the first imperative clause. Consequently, the regular expression in Figure 3 below was used to implement this indicator.
%\newpage

\begingroup
\setstretch{1}
%\begin{small}
\begin{figure}[th]
\centering
\begin{tabular}{c}
\begin{small}
\begin{lstlisting}
@"^(	(it_)?(VC){1,1}_(you_)?
	(,|CC|ADVC|CC_,|ADVC_,|ADVC_CC_,)_
   )+?
   (PRT_|PC_)?
   NC:([0-9]+)_VC(_you)?
   (?!_TO)"
\end{lstlisting}
\end{small}
\end{tabular}
\caption
{\rightskip=2em\leftskip=2em Regular Expression used to implement the indicator \emph{Immediate Reference}}
\end{figure}
%\end{small}
\endgroup

This regular expression only makes sense if the order of the structural information about the part of the sentence preceding occurrences of the pronoun \emph{it} is reversed. Consequently, it requires the algorithm to acquire and format structural information from these elements preceding the pronoun, reverse their order and then concatenate these elements with an underscore. This includes the identification of conjunctions and commas, of verb phrases, adverb phrases, noun phrases, and preposition phrases. Occurrences of \emph{it} and \emph{to} also have to be identified. The current pronoun itself is not included, which is why the quantifier ? modifies (it$\_$). The second line allows for several alternative coordinating clusters to occur, not merely for coordinating conjunctions. Together, the first three lines constitute an embedded pattern matching the \hyphenquote{UKenglish}{pronominalised} instructions following the candidate. The entire pattern has to start with at least one occurrence of this embedded pattern because the entire pattern starts with {~~$\^$}, and the parentheses are followed by $+$, but the expression matches only the shortest sequence of this embedded pattern which is available ($+$?). Thus, this regular expression avoids greedy behaviour and accounts for the fact that this indicator aims at \emph{immediate} reference. The gap indicated by three dots in Mitkov's pattern above is replaced by (PRT$\_$|PC$\_$)? to allow only for two common elements following the candidate noun phrase, namely verb particles and  prepositional phrases, in order to avoid any random behaviour of the regular expression. ([0-9]$+$) enables the algorithm to retrieve the identification number of the noun phrase. Lastly, (?!$\_$TO) prevents the expression from matching introductory clauses starting with \emph{to}. Even without a chunker, the retrieval of all the necessary information makes it difficult to render this indicator knowledge-poor. Furthermore, the required formatting of structural information is language-dependent.


%#####################
%#####################
%#####################


%---
\subsubsection{Antecedent-Pointing Constructions}
%---
This indicator is taken from an earlier description of Mitkov's algorithm \autocite[137]{mit1} and also includes the renamed indicator \hyphenquote{UKenglish}{\emph{Sequential instructions}} \autocite[148]{mit0}. Candidate noun phrases occurring in it-clefts or prompts are assigned a score of +2. Similar to the description of \emph{Immediate Reference}, Mitkov does not describe any details about the inclusion of structural information \autocite[137]{mit1}. As far as the it-cleft is concerned, a simplified version of the description of the it-cleft by Biber et al. was employed \autocite*[959]{biber}. According to this description, an it-cleft \hyphenquote{UKenglish}{consists of [...] the pronoun \emph{it}[,] a form of the verb \emph{be},} and of \hyphenquote{UKenglish}{the specially focused [...] noun phrase [...]} \autocite[959]{biber}. The following examples show how this indicator can be applied.  

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
It is \emph{the latter development$^{1}$} \emph{that$^{1}$} has had great influence on the creation of the commercial AI industry \autocite[BP2:544]{bncw}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	
aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
If \emph{the secondary electrostatic filter$^{1}$}, located at the bottom of the bag cavity, is dirty, remove \emph{it$^{1}$} by pulling forward out from under the ribs (rib projections) \autocite[12]{vacu}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
To remove \emph{the cleaning tool$^{1}$} from the extension wand/wands or hose assembly, simply pull \emph{it$^{1}$} apart [...] \autocite[11]{vacu}.
\xe
%2)	BP2:544	BNC2	3:9:29:1094	(It):unknown:EXCL:PLIT is (the latter development):9 (that):9:CDR has had great influence on the creation of the commercial AI industry
%3)	12	vacu	33:69:138:3737	If \emph{the secondary electrostatic filter$^{1}$}, located at the bottom of the bag cavity, is dirty, remove \emph{it$^{1}$} by pulling forward out from under the ribs (rib projections)
%4)	11	vacu	30:63:114:3102	To remove \emph{the cleaning tool$^{1}$} from the extension wand/wands or hose assembly, simply pull \emph{it$^{1}$} apart [...]
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent In (3.28), the candidate noun phrase for \emph{that}, \emph{the latter development}, is in the focus of the it-cleft. (3.29) does not illustrate a typical prompt construction, but \emph{the secondary electrostatic filter}  can still be identified as being a focused element. Lastly, (3.30) shows how the first noun phrase in a prompt starting with \emph{to} and initiating a sequential instruction is identified as a focussed element. The regular expressions in Figure 4 constitute the basis for the implementation of this indicator.

\begingroup
\setstretch{1}
%\begin{small}
\begin{figure}[th]
\centering
\begin{tabular}{c}
\begin{small}
\begin{lstlisting}
@"^it_be_(PC:)?NC:([0-9]+)"
@"^([A-Z]*)_?(PC:)?NC:([0-9]+)_,"
@"^VC_(PRT_)?(PC:)?NC:([0-9]+)(_(PC:)?NC:([0-9]+))?_,"
\end{lstlisting}
\end{small}
\end{tabular}
\caption
{\rightskip=2em\leftskip=2em Regular Expressions used to implement the indicator \emph{Antecedent-Pointing Constructions}}
\end{figure}
%\end{small}
\endgroup

\noindent The first expression is used to identify it-clefts according to the description mentioned above. The second regular expression serves the purpose of identifying prompts not involving any verb phrase, and the last expression can identify clauses starting with \emph{to} which can usually be found at the beginning of sequential instructions \autocite[148]{mit0}. The identification number of the first noun phrase in these expressions identifies the focused candidate. Similar to the indicator \emph{Immediate Reference}, this indicator requires the algorithm to retrieve and format structural information from the sentences relevant to the resolution of the current anaphora in a language-specific way. Thus, \emph{Antecedent-Pointing Constructions} can be considered  
a knowledge-poor indicator only to a limited extent.

%#####################
%#####################
%#####################


%---
\subsubsection{Referential Distance}
%---
Since the algorithm considers noun phrases occurring in the sentence of the pronoun and in \hyphenquote{UKenglish}{the two preceding sentences}, this indicator takes into account the distance between the pronoun and these candidates \autocite[148-149]{mit0}. \emph{Referential Distance} penalises candidate noun phrases occurring with a distance of two sentences with a score of -1. Candidates with a distance of one sentence are assigned a score of 0. If the sentence of the pronoun is simple, all candidates occurring in it are assigned a score of +1. A score of +2 is assigned to those candidates which occur in the clause preceding the pronoun if the sentence in which the pronoun is found is complex \autocite[148]{mit0}. Mitkov does not specify how the boundaries of that preceding clause are identified or how a complex sentence is identified. The behaviour of this indicator is illustrated below.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	
aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
If \emph{an extension cord} is absolutely necessary, \emph{a 10 ampere cord} should be used. \emph{Cords} rated for \emph{less amperage} may overheat. \emph{Care} should be taken ( to arrange \emph{the cord$^{1}$} so ) that \emph{it$^{1}$} cannot be pulled or tripped over \autocite[2]{iron}.
\xe
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	
aboveexskip=0.6ex
% EINZELN:	aboveexskip=2.75ex
]
If \emph{the power cord$^{1}$} seems hard to unwind ( due to an \emph{improper winding}, pull firmly ) until \emph{it$^{1}$} is released \autocite[4]{iron}.
\xe
%1)	2	iron		CL:606:618	 If an extension cord is absolutely necessary, a 10 ampere cord should be used. Cords rated for less amperage may overheat. Care should be taken:606 to arrange (the cord):3 so that:618 (it):3 cannot be pulled or tripped over
%2)	%
%3)	%
%
%4)	4	iron		:CL:881:897	If (the power cord):8 seems hard to unwind:881 due to an improper winding, pull firmly until:897 (it):8 is released
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent The candidate noun phrases in (3.31) are assigned different scores according to their distance:  \emph{an extension cord} and \emph{a 10 ampere cord} are penalised with -1, \emph{Cords} and \emph{less amperage} are assigned a score of 0, \emph{Care} is assigned a score of 1, and the selected antecedent \emph{the cord} receives the highest possible score for \emph{Referential Distance} with +2 because it occurs in the clause preceding \emph{it}. The parentheses in both examples indicate the clause boundaries identified by the implemented algorithm. In (3.32) these boundaries seem odd, which is explained in the description of the implementation of the indicator below.

The implementation of the assignment of the scores -1, 0, and +1 is simple: It merely involves a comparison of the identification number of the sentence of the candidate and the identification number of the sentence of the pronoun. In order to identify complex sentences and the boundaries of the clause preceding the pronoun based on the chunker output, a heuristic solution to extract structural information is needed. Firstly, complex sentences are identified by counting the number of \emph{<VC>}-elements in the sentence of the pronoun. If there is more than one verb phrase, which are considered basic elements of clauses \autocite[120]{biber}, the sentence is considered complex. Then, a language-specific list of potential clause limiters is used to identify all clause limiters preceding the pronoun within the same sentence (see list (o), p. 20). This list comprises relative pronouns and also words such as \emph{how}, \emph{where}, or \emph{why}, for example \autocite[2-6]{santorini}. Furthermore, verb phrases are also added to this list. Verb phrases are only considered valid boundaries if they are preceded by another verb phrase because one verb phrase alone could constitute the basic element of the preceding clause. In (3.31), \emph{should be taken} is a valid boundary because it is preceded by \emph{to arrange}, and in (3.32) \emph{to unwind} is preceded by \emph{pull}. Mitkov's distinction between simple and complex sentences seems straightforward at first, but essential syntactic knowledge is needed for the indicator \emph{Referential Distance} to be applied properly by the algorithm.

% -----------------------------------------------------------------------------------------------------------------------------------------------------
\subsection{Basic Steps of Mitkov's Algorithm}
As already mentioned, the twelve indicators discussed in the previous section form the core of the algorithm. In order to show how they form a part of Mitkov's algorithm, an overview of the basic steps of the approach are described. In addition, the implementation of these steps in C\# is outlined with the help of a summary of the roles of the five basic classes of the application \emph{KPAR}. Finally, an example illustrates how the indicators cooperate in order to select the best candidate for an anaphor.

According to Mitkov, the search scope of the algorithm for a given anaphor comprises the current sentence of the pronoun as well as two preceding sentences, if preceding sentences are available \autocite[149]{mit0}. More specifically, the implemented version of this search scope only considers preceding sentences available if they occur within the current section of the pronoun. The sentences belonging to the search scope may belong to different paragraphs because paragraphs of the same section can be very short in manuals. Consequently, more than one paragraph can be considered relevant to the evaluation of the indicator \emph{Lexical Reiteration}. The first step of the algorithm is then to look \hyphenquote{UKenglish}{for noun phrases [...] only to the left of the anaphor} in the sentences complying with the search scope conditions. \autocite[149]{mit0}. 

Secondly, gender and number agreement is checked for the noun phrases retrieved within the search scope and the pronoun \autocite[149]{mit0}. The resolution of anaphors for which this second step returns only one potential candidate is considered trivial \autocite[180]{mit0}. As it has already been mentioned, the acquisition of gender and number information is conducted with the help of a preliminary solution. Thus, if this step does not return any candidates as passing the gender and number agreement filter, the set of candidate noun phrases existing before the filter is applied is passed on to the next step. This simple modification still gives the set of antecedent indicators the chance to find the correct antecedent in those cases where gender and number information may be incorrect. If the set of antecedent candidates returned after this step contains only one noun phrase, the resolution of the corresponding anaphor is considered non-critical \autocite[180]{mit0}.

Lastly, the set of candidates returned by the second step are evaluated based on the twelve indicators. The noun phrase with the highest accumulated score is selected as the best antecedent candidate \autocite[149]{mit0}. If there is more than one candidate with the highest score which has been achieved, the algorithm tries to find \hyphenquote{UKenglish}{the candidate with the higher score for immediate reference} \autocite[149]{mit0}. if this is not helpful, \emph{Collocation Pattern Preference} is checked analogously, and the last indicator similarly involved in this selection process is \emph{Verb Preference}. If this last indicator still does not single out a candidate, the most recent candidate among the remaining highscore candidates is selected \autocite[149]{mit0}. The distance between the candidates and the anaphor is calculated based on the difference between their identification numbers in the XML file. 
%---
\subsubsection{Classes of the Application \emph{KPAR}}
%---
The implementation of the basic steps outlined above is realised with the help of the five classes of the application \emph{KPAR}. These classes, namely \emph{AccessXML.cs}, \emph{Anaphora.cs}, \emph{Pronoun.cs}, \emph{Candidate.cs}, and \emph{Program.cs}, are responsible for semi-automatic pre-processing of the chunker output, data access and modelling, as well as for the handling of the sequence of steps for anaphora resolution which have already been described above. Each class is briefly described in the following. 

The class \emph{AccessXML.cs} provides methods for the creation of the XML file from the chunker output semi-automatically, as well as for providing reading or writing access to this XML file. It handles the retrieval of relevant pronouns and of context relevant to their resolution, filtering them with the help of language-specific lists (see lists (a) - (c), p. 18). These filters exclude subordinating \emph{that}, for example, and also deictic second person pronouns. The context relevant to a pronoun includes the sentences retrieved within the search scope (cf. the first step of the algorithm description above), the paragraphs to which these sentences belong, and the set of headings for the section of the pronoun. Furthermore, occurrences of pleonastic \emph{it} are excluded \autocite[141]{mit1} with the help of a regular expression (see Figure 4, p. 33) which identifies it-clefts, such as with the indicator \emph{Antecedent-Pointing Constructions}. The chunker output is also enhanced by gender and number information semi-automatically, requiring additional user input for the gender value of proper nouns (see description of \emph{Program.cs} below). In addition, it saves the results of the anaphora resolution process to the XML file with the help of the attribute \emph{corefInfo} of \emph{<NC>}-elements (see Figure 2, p. 16). These results also include information about excluded pronouns. Lastly, an easily readable text file marked up with coreference information can be created from the XML file.

The abstract class \emph{Anaphora.cs} serves as a base class for both the class \emph{Anaphora.cs} and the class \emph{Candidate.cs}. Its purpose is simple because it merely provides fields for storing all the language- and domain-specific lists mentioned above (see lists (a) - (p) in section 3.1. Pre-Processing, pp. 18 -20). Thus, the classes \emph{Pronoun.cs} and \emph{Candidate.cs}, which inherit from \emph{Anaphora.cs}, also have access to these lists of strings. As the descriptions of some indicators have already shown, not all language-dependent information could be allocated here.

With the help of \emph{Pronoun.cs} pronoun data is modelled. For each valid pronoun and its context data, an object of the type \emph{Pronoun} is created. During this creation process, the context data of the pronoun data is evaluated in order to memorise only those pieces of information which are relevant for the assignment of scores for the individual candidates and their indicators. This information includes strings of noun phrases occurring in the relevant set of headings, and strings of the lemma of their head noun, for example, which are relevant for the indicator \emph{Section Heading Preference}. Furthermore, data relevant to the context of the candidates is separated from the context data of the pronoun and passed on to the class \emph{Candidate.cs}, which in turn serves as a model of candidates and their features as far as they are relevant to the assignment of indicator scores. 

Similar to \emph{Pronoun.cs}, the class \emph{Candidate.cs} is used to model candidate data. A list of objects of the type \emph{Candidate} is created based on the candidate data passed over during the instantiation of a \emph{Pronoun}-object. This procedure also implements the gender and number agreement test specifications explained in the second step of Mitkov's algorithm above. Similar to the creation of \emph{Pronoun}-objects, for each \emph{Candidate}-object context data is evaluated and only the evaluated data relevant to the assignment of scores is stored in the fields of the \emph{Candidate}-object. Most importantly, this class provides a method for the assignment of scores for a \emph{Candidate}-object according to the twelve indicators, and for the selection of the best \emph{Candidate}-object as it is specified by the third step of Mitkov's approach above. 

Lastly, the class \emph{Program.cs} provides the \emph{Main}-method of the console application \emph{KPAR}. This method handles the user input for the path of the chunker-output file, as well as the additional input required for gender information. Furthermore it initiates the retrieval of valid pronouns with the help of the methods of the class \emph{AccessXML.cs}. Most importantly, it initiates the resolution process by performing the assignment of scores for each candidate of a pronoun with the \emph{DetermineIndicators}-method of the class \emph{Candidate.cs}. Then, the best candidate is selected by passing on the list of evaluated candidates of each pronoun on to the method \emph{SelectBestCandidate} of the class \emph{Candidate.cs}, which implements the selection procedure explained in the third step of Mitkov's algorithm above. Finally, it passes on pairs of identification numbers for each pronoun and the best candidate selected for it to the class \emph{AccessXML.cs} in order to save the results of the anaphora resolution.
%---
\subsubsection{An Example of Candidate Selection}

In (3.33), the KPAR application successfully identified the antecedent \emph{the printer} for the pronoun \emph{it} in the third sentence of the section \hyphenquote{UKenglish}{\textbf{Moving the Printer}} in the printer manual \autocite[27]{print}.

\begingroup
%\begin{small}
\setstretch{1}
\rightskip=2em
\ex
[
exnoformat=(\thesection.X),
labeltype=numeric,
numoffset=2em,
textoffset=.5em,
% OBEN:	aboveexskip=2.75ex, belowexskip=0.6ex
% MITTE:	aboveexskip=0.6ex, belowexskip=0.6ex
% UNTEN:	aboveexskip=0.6ex
% EINZELN:	
aboveexskip=2.75ex
]
\textbf{Moving the Printer}

§ You may need to move \emph{the printer$^{1}$}, from \emph{time$^{2}$} to \emph{time$^{3}$}. § When relocating \emph{your printer$^{4}$}: Always unplug \emph{the printer$^{5}$} from \emph{the power outlet$^{6}$}. Do not drop \emph{the printer$^{7}$} or knock \emph{it$^{7}$} against other objects [...] \autocite[27]{print}.
\xe
%\end{small}
%\emph{its soleplate$^{1}$}
\endgroup

\noindent Within the search scope for the pronoun \emph{it} in (3.33), seven candidate noun phrases precede this anaphor: \emph{the printer}, \emph{time}, \emph{time}, \emph{your printer}, \emph{the printer}, \emph{the power outlet}, and \emph{the printer}. Based on the distribution of scores for the twelve indicators shown in Table 1 below, the application has to select the best candidate. 

\begingroup
\begin{table}[htb]
\centering
\begin{tabular}{ l | r | r | r | r | r | r | r | r }
\textbf{Candidate Index Number} & \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} &  \textbf{7} & \\ \hline \hline
\emph{Distance (ID Difference)} 			& 44 & 38 & 34 & 28 & 20 & 15 & ~~6 & \\ 
\hline \hline		
\emph{Indefiniteness} 					&  0 & -1 & -1 &  0 &  0 &  0 &  0 & \\ \hline
\emph{Prepositional Noun Phrases} 		&  0 & -1 & -1 &  0 &  0 & -1 &  0 & \\ \hline
\emph{Givenness} 						&  0 &  0 &  0 &  1 &  0 &  0 &  1 & \\ \hline
\emph{Domain Concept Preference}		&  1 &  0 &  0 &  1 &  1 &  1 &  1 & \\ \hline
\emph{Verb Preference} 					&  0 &  0 &  0 &  0 &  0 &  0 &  0 & \\ \hline
\emph{Noun Phrase Preference} 			&  0 &  0 &  0 &  0 &  0 &  0 &  0 & \\ \hline
\emph{Section Heading Preference} 		&  1 &  0 &  0 &  1 &  1 &  0 &  1 & \\ \hline
\emph{Collocation Pattern Preference} 		&  0 &  0 &  0 &  0 &  0 &  0 &  0 & \\ \hline
\emph{Lexical Reiteration} 				&  2 &  1 &  1 &  2 &  2 &  0 &  2 & \\ \hline
\emph{Immediate Reference} 				&  0 &  0 &  0 &  0 &  0 &  0 &  2 & \\ \hline
\emph{Antecedent-Pointing Constructions} 	&  0 &  0 &  0 &  0 &  0 &  0 &  0 & \\ \hline
\emph{Referential Distance} 				& -1 & -1 & -1 &  0 &  0 &  0 &  2 & \\ 
\hline \hline
\textbf{Total} & \textbf{3} & \textbf{-2} & \textbf{-2} &  \textbf{5} &  \textbf{4} &  \textbf{0} &  \textbf{9} & \\ 
\end{tabular}
\caption{Indicator scores for the candidates in (3.33)}
\end{table}
\endgroup

\noindent Obviously, the candidate noun phrase with the index number $7$, \emph{the printer}, is selected because it is the only candidate with the highscore 9. Because of both \emph{Referential Distance} and \emph{Immediate Reference} the score of the most recent candidate preceding the anaphor \emph{it} is almost twice as high as the score of the second best candidate. Even if there was another candidate with a score of 9, \emph{the printer} with the index number $7$ would still be selected because it would be the only candidate with a score of +2 for \emph{Immediate Reference} (cf. the third step in the description of Mitkov's algorithm). In fact, the indicator \emph{Immediate Reference} is identified as the \hyphenquote{UKenglish}{most \enquote*{confident}} indicator, which is why it is considered the most relevant indicator when a selection based on equally high highscores has to be made \autocite[148, 149, 183]{mit0}.
%~Moving the Printer~
%
%§ (You):unknown:EXCL:DEIX may need to move the printer, from time to time. 
%
%§ When relocating (your printer):unknown:EXCL:DEIX : Always unplug the printer from the power outlet. Do not drop (the printer):47 or knock (it):47 against other objects.
%---
% ##########################################################################
\newpage
\setcounter{excnt}{1}
\section{Evaluation}
% -----------------------------------------------------------------------------------------------------------------------------------------------------
\subsection{Testing}
In the following, the results of testing the application \emph{KPAR} with three manuals and two scientific texts are presented. The manuals include a steam iron manual \autocite[]{iron}, a manual for a vacuum cleaner \autocite[]{vacu}, and a photo printer manual \autocite[]{print}. The two additional texts taken from the \emph{BNC} concern expert systems and aircraft maintenance \autocite[BP2]{bncw}. In order to evaluate the success of the application, the following definition of the \hyphenquote{UKenglish}{\textbf{success rate} for an anaphora resolution algorithm} by Mitkov was adopted \autocite[180]{mit0}:

\begingroup
\belowdisplayskip=30pt
\belowdisplayshortskip=30pt
\begin{align*}
      SR = \frac{c~(correctly~resolved~anaphors)}{n~(all~anaphors)}
\end{align*}
\endgroup

\noindent For the critical success rate \emph{$SR_{CRIT}$} in Table 2 below, the number of trivial anaphors $t$ (only one candidate occurs in the search scope) and the number of non-critical anaphors $s$ (only one candidate remains after number and gender agreement tests) is subtracted from both the variable $c$ and the variable $n$ above because this additional success rate reflects the success for anaphors which had to be resolved only based on the twelve indicators \autocite[180-181]{mit0}. The success rates preceded by \emph{Base} refer to the success rates of a baseline model, choosing the most recent candidate as the antecedent for an anaphor \autocite[152]{mit0}.

\begingroup
\begin{table}[htb]
\centering
\begin{tabular}{ l | c | c | c | c | c | c | c | c | c }
& $n$ & $c$ & $SR$ & Base $SR$ & $t$ & $s$ & $SR_{CRIT}$ & Base $SR_{CRIT}$ & \\ \hline \hline
\emph{Steam Iron} 			&  43 	& 30 	& 69.77\% 	&  58.14\%	&  1 	&  1 	& 68.29\% & 56.10\% & \\ \hline  
\emph{Vacuum Cleaner} 		&  64 	& 35 	& 54.69\% 	&  71.88\% 	&  2 	&  3 	& 50.85\% & 69.49\% & \\ \hline  
\emph{Printer} 				&  59 	& 39 	& 66.10\% 	&  64.41\% 	&  1 	&  0 	& 65.52\% & 63.79\% & \\ \hline \hline
\textbf{Total Manuals} 		&  \textbf{166} 	& \textbf{104} 	& \textbf{62.65\%} 	&  \textbf{65.66\%} 	&  \textbf{4} 	&  \textbf{4}	& \textbf{60.76\%} & \textbf{63.92\%} & \\ 
\hline \hline
\emph{BNC 1} 				&  64 	& 26 	& 40.63\% 	&  51.56\% 	&  0 	&  1 	& 39.68\% & 50.79\% & \\ \hline  
\emph{BNC 2} 				&  56 	& 30 	& 53.57\% 	&  48.21\% 	&  1 	&  6 	& 46.94\% & 40.82\% & \\ \hline \hline
\textbf{Total BNC} 			&  \textbf{120} 	& \textbf{56} 	& \textbf{46.67\%} 	&  \textbf{50.00\%} 	&  \textbf{1}	&  \textbf{7} 	& \textbf{42.86\%} & \textbf{46.43\%} & \\ 
\end{tabular}
\caption{Comparison of \emph{KPAR} success rates with a baseline model, Version 1}
\end{table}
\endgroup

\noindent The results presented in Table 2 above were significantly worse than expected at first because the anaphora resolution application seems to be even worse than the baseline model. Nevertheless, the first suspicion involved the question whether the set of anaphoric pronouns targeted by Mitkov's algorithm included demonstrative and relative pronouns or not. Although Mitkov lists these two types of pronouns among the anaphoric pronouns (cf.~the list of anaphoric pronoun types on p.~6) in a chapter of his book which precedes the description of the algorithm implemented with \emph{KPAR} \autocite[]{mit0}, he does not clearly specify which pronoun types are relevant for the implemented algorithm. Consequently, the total number of anaphoric demonstrative and relative pronouns was subtracted from $n$ for each text, and the number of correctly resolved anaphoric demonstrative and relative pronouns was subtracted from $c$ for each text. The occurrences of trivial and non-critical anaphors were also adapted. The total number of anaphoric demonstrative and relative pronouns amounted to a total of 50 occurrences for the manual texts and to a total of 58 occurrences for the texts taken from the \emph{BNC}. For the manuals, 42 of 50 demonstrative or relative pronouns resulted in resolution errors by \emph{KPAR}, and 40 of the 58 occurrences of these types of pronoun were resolved incorrectly for the \emph{BNC} texts. Table 3 shows the test results which were adapted according to this suspicion.

\begingroup
\begin{table}[htb]
\centering
\begin{tabular}{ l | c | c | c | c | c | c | c | c | c }
& $n$ & $c$ & $SR$ & Base $SR$ & $t$ & $s$ & $SR_{CRIT}$ & Base $SR_{CRIT}$ & \\ \hline \hline
\emph{Steam Iron} 			&  34 	& 28 	& 82.35\% 	&  64.71\%	&  1 	&  1 	& 81.25\% & 62.60\% & \\ \hline  
\emph{Vacuum Cleaner} 		&  43 	& 33 	& 76.74\% 	&  62.79\% 	&  2 	&  3 	& 73.68\% & 57.89\% & \\ \hline  
\emph{Printer} 				&  39 	& 35 	& 89.74\% 	&  66.67\% 	&  1 	&  0 	& 89.47\% & 65.79\% & \\ \hline \hline
\textbf{Total Manuals} 		&  \textbf{116} 	& \textbf{96} 	& \textbf{82.76\%} 	&  \textbf{64.66\%} 	&  \textbf{4} 	&  \textbf{4}	& \textbf{81.48\%} & \textbf{62.04\%} & \\ 
\hline \hline
\emph{BNC 1} 				&  24 	& 14 	& 58.33\% 	&  50.00\% 	&  0 	&  0 	& 58.33\% & 50.00\% & \\ \hline  
\emph{BNC 2} 				&  38 	& 24	& 63.16\% 	&  47.37\% 	&  1 	&  6 	& 54.84\% & 35.48\% & \\ \hline \hline
\textbf{Total BNC} 			&  \textbf{62} 	& \textbf{38} 	& \textbf{61.29\%} 	&  \textbf{48.39\%} 	&  \textbf{1}	&  \textbf{6} 	& \textbf{56.36\%} & \textbf{41.82\%} & \\ 
\end{tabular}
\caption{Comparison of \emph{KPAR} success rates with a baseline model, Version 2}
\end{table}
\endgroup

\noindent Whereas the findings presented in Table 2 render \emph{KPAR} even worse than the baseline model which selects the most recent noun phrase as the best antecedent candidate, Table 3 shows a different result. For the baseline model, the success rates do not change significantly but the success rates for \emph{KPAR} do. 

Concerning manual texts, with a base $SR$ values of 65.66\%, the findings presented here are almost identical with the 65.9\% Mitkov reports for the baseline model with the same behaviour in his comparison \autocite[152]{mit0}. The $SR$ value for \emph{KPAR} amounts to 82.76\%, with an only slightly lower critical success rate $SR_{CRIT}$ of  81.48\%. Thus, after the exclusion of demonstrative pronouns from the resolution process, the critical success rate of \emph{KPAR} exceeds the one of the baseline model by almost 20\%. The \enquote*{standard}  success rate reported by Mitkov for his knowledge-poor approach is 89.7\% \autocite[152]{mit0}, which is only approximately 7\% more than the success rate of \emph{KPAR}. Furthermore, in a comparison of his approach with similar approaches, Mitkov reports a critical success rate of 82\% \autocite[153]{mit0}, which exceeds the critical success rate of \emph{KPAR} presented above only to a very small extent. 

The test results for the scientific texts are worse than for the manual texts for both the baseline model and for \emph{KPAR}. Both success rates of \emph{KPAR} are even worse than those of the baseline model. With a success rate of only 46.67\%, \emph{KPAR} performs significantly less well than Mitkov's algorithm for \hyphenquote{UKenglish}{research papers}, for which Mitkov reports a success rate of 77.9\% \autocite[153]{mit0}. 
% -----------------------------------------------------------------------------------------------------------------------------------------------------
\subsection{Main Problems and Future Work}
In addition to the testing results presented in the previous section, the evaluation of the implementation of Mitkov's knowledge-poor approach also includes a brief summary of the main problems encountered. These problems mainly target Mitkov's description of the general features of the algorithm, which need to be reviewed and differentiated in the light of the complexity of the heuristics involved in the implementation of pre-processing tasks and also of some of the individual indicators.

Anaphora resolution often includes the acquisition of gender and number information and the identification of pleonastic it or collective nouns \autocite[195]{mit0}, and these elements also form a part of Mitkov's approach \autocites[138, 141]{mit1}[871]{mit2}. These elements are not described in detail, however, and their implementation resulted in a preliminary solution requiring user input in the case of gender acquisition, and in a language-specific acquisition and formatting of structural information from a sentence in the case of the filter for non-anaphoric pronouns. These problems pose certain limitations. On the one hand, even if the solution for the non-anaphoric filter, for example, can still be considered heuristic, it is not entirely inexpensive and unsophisticated. On the other hand, given that the \hyphenquote{UKenglish}{multilingual nature} of Mitkov's approach is emphasised \autocite[153]{mit0}, language-specific solutions are problematic because they constitute bottlenecks for the adaptability of the approach for other languages. Even though the limitations of so-called \hyphenquote{UKenglish}{anaphora resolution \emph{task-specific preprocessing tools}} do not only pertain to Mitkov's approach \autocite[194-195]{mit0}, these problems obviously conflict with some of the basic features attributed to this anaphora resolution strategy.

Similarly, although the implementation of most antecedent indicators is truly simple, the indicators \emph{Immediate Reference}, \emph{Antecedent-Pointing Constructions}, and \emph{Referential Distance} have incomplete descriptions and contradict the basic features listed for Mitkov's algorithm to some extent. Despite vague descriptions, the solutions applied for the implementation of these indicators can be considered heuristic. Nevertheless, they exceed the simplicity of string matching, for example, as it is applied for some of the other indicators such as \emph{Section Heading Preference}. Furthermore, the heuristics employed require the acquisition of syntactic information to some extent, such as for the identification clause boundaries for \emph{Referential Distance}. This constitutes a contradiction with the statement that \hyphenquote{UKenglish}{Mitkov's approach [...] has no information about the syntactic structure of the sentence} \autocite[152]{mit0}. Even though the pre-processing for Mitkov's approach does not involve extensive parsing \autocite[146]{mit0}, the heuristics involved in the implementation of the three indicators listed above involve some form of syntactic analysis, even if this analysis is only realised with the help of formatted strings which are validated with the help of regular expressions. Consequently, not all indicators are equally \hyphenquote{UKenglish}{knowledge-poor [and] inexpensive} \autocite[145]{mit0}. In addition, formatting these strings for the validation with the help of regular expressions constitutes another bottleneck for the multilingual implementation of the approach because the formatting could not be implemented with the help of the language specific-lists of strings stored in the fields of the class \emph{Anaphora.cs}.

Lastly, minor problems occurred during pre-processing because among other expected minimal errors, the \emph{TreeTagger} chunker did not always recognise lists contained in manual texts properly. Although these errors were not post-edited, they did not seem to influence the test results of the application significantly.
 
Based on these points of criticism, future work mainly concerns the improvement of pre-processing and the elimination of language-dependent elements for some of the indicators employed by \emph{KPAR}. A part-of-speech tagger which is also able to confidently identify gender and number information could eliminate language-specific information involved in pre-processing to a significant extent, for example. Furthermore, if a revision of the indicators \emph{Immediate Reference}, \emph{Antecedent-Pointing Constructions}, and \emph{Referential Distance} can successfully simplify or entirely eliminate the language-specific heuristics involved, the implementation of Mitkov's algorithm for another language could be accomplished with less necessary adaptations. Then, the fields provided with the class \emph{Anaphora.cs} for required, language-specific information could be sufficient to modify the application for that language. Whereas modifications for Polish, Arabic, and French already exist \autocite[153-164]{mit0}, German could be a suitable option for this task, for example.
% ##########################################################################
\newpage
\setcounter{excnt}{1}
\section{Conclusion}
Although the implementation of Mitkov's knowledge-poor approach to anaphora resolution revealed some inconsistencies, this approach can still be considered reliable and successful for the limited set of manual texts. The results of the test of the application \emph{KPAR}, implemented based on Mitkov's description of this knowledge-poor algorithm, revealed an overall critical success rate of approximately 81.5\% for three different manuals, which is only marginally less than the critical success rate of 82\% reported for Mitkov's approach \autocite[153]{mit0}. For some of the pre-processing tasks and some of the indicators employed by the algorithm, the implementation had to deal with incomplete descriptions and resulted in either preliminary solutions or heuristics which were language-specific and discernibly more sophisticated and thus less knowledge-poor than the heuristics involved in the implementation of other indicators. Nevertheless, most indicators work based on genuinely knowledge-poor heuristics only, involving little more than the comparison of identification numbers and string matching.

Despite the unsatisfying results of experimental tests with scientific texts, its high critical success rate renders Mitkov's algorithm a promising approach to anaphora resolution for syntactically less complex manual texts \autocite[152]{mit0}. By removing scientific texts entirely from the set of targeted text types, the indicator \emph{Antecedent-Pointing Constructions} could be simplified, for example, rewarding only sequential instructions as a \hyphenquote{UKenglish}{highly genre specific} construction \autocite[148]{mit0}. Furthermore, if the problems originating from incomplete explanations for some indicators and resulting language-dependent heuristics are solved, a version of the algorithm for another language such as German could be implemented and tested in a feasible way.
% ##########################################################################
\newpage
%****************************************************************************************************
% Literaturverzeichnis ************************************************************************************
%****************************************************************************************************
\chead{\footnotesize{References}}
\begingroup
%\raggedright
\sloppy
\printbibliography[heading=bibintoc]
\endgroup
%\end{flushleft}
% ##########################################################################
\newpage
\chead{\footnotesize{Attachment}}
\setcounter{excnt}{1}
\section*{Attachment}
\addcontentsline{toc}{section}{Attachment}

\end{document}

% Darauf verzichte ich nie:
%\usepackage[babel]{csquotes}
%\MakeAutoQuote{«}{»}