%This chapter is for the detail of the implementation
%and results of experiments

%\documentclass[oneside]{dit-phd-thesis}
%\usepackage{listings}
%\usepackage{textcomp}
%\usepackage{longtable}

%\begin{document}
%\addtocounter{chapter}{5}

\chapter{Evaluating the terminological shadow approach}
\label{ch:result}
Chapter \ref{sec:sno-ehrcontext} described the conceptualisation of terminological shadows and the architecture of
a framework for creating shadows from archetypes. This raises the need to validate
the approach by evaluating the proposed framework.
This chapter discusses the implementation details of the methods and evaluation plans
described in the previous chapter and presents the results of these quantitative studies.
The implementation includes the creation of frameworks
and software to set up the investigation environment. The evaluation of
the terminological shadow approach used in this work consists of three stages:
\begin{itemize} 
  \item Implementing the terminological shadow framework.
  \item Assessing a Lucene-based terminological shadow generation algorithm by comparing results
    to pre-mapped SNOMED-CT concepts.
  \item Quantitative investigation of the binding algorithm by measuring the output of
    parameterised binding processes.
\end{itemize}


\section{Implementation of the terminological shadow framework}
This section describes the necessary steps to set up the experimental environment. The steps include
the acquisition and set-up of the selected terminology resource, SNOMED-CT, and
the creation of a generic framework that can be used for testing by generating terminological
shadows. The implementation is consistent with the design strategy described in Chapter \ref{sec:sno-ehrcontext}.

\subsection{Acquisition of terminology resource}
With so many, sometimes overlapping, terminologies available for different purposes, a
decision needs to be made about which clinical terminology should be used. Frequently referred to as a
``reference terminology'', SNOMED-CT is a multi-purpose terminology with hundreds of thousands of
concepts that provide wide coverage across many disciplines. Table \ref{codingschemes} lists some standard coding
schemes and terminologies that are used in clinical information systems. The coding schemes
gathered here for comparison include: the International Statistical Classification of Diseases and
Related Health Problems (ICD) \parencite{icd10manual}, the International Classification for Nursing
Practice (ICNP) \parencite{nielsen1996architecture}, the Current Procedural Terminology (CPT)
\parencite{beebe2006cpt},
the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD9-CM),
the Drug Identification Number for the Drug Product Database (DPD) \parencite{dpdcode} and the Logical Observation Identifiers Names and Codes
(LOINC) \parencite{loincman}. They are referred to as coding schemes
because their usage is associated with specific coding requirements. From the table it is clear that SNOMED-CT
covers more clinical areas than most of the other terminologies. Additionally, as part of its core
functionality, SNOMED-CT can be used as a cross-referencing terminology to map between terminologies.
\begin{table}[!htbp]\footnotesize \begin{center} \begin{tabular}{ l l l l l l l l}

			 & ICD & ICNP & CPT & ICD9-CM & DPD & LOINC & SNOMED-CT\\
			 \hline
			 For nurses & no & yes & no & no & no & no & yes\\
			 Surgical Procedures only & no & no & yes & no & no & no & yes\\
			 Diseases & yes & no & no & no & no & no & yes\\
			 Laboratory & no & no & no & no & no & yes & some\\
			 Drugs & no & no & no & no & yes & no & some\\
			 Billing & no & no & yes & yes & no & no & no\\
			 Epidemiology and Statistics & yes & no & no & no & no & no & yes\\
			 All medicine & no & no & no & no & no & no & yes\\

			 \hline


		 \end{tabular} \end{center} \caption{Coverage of different standard terminologies
		 and coding schemes with respect to different medical purposes} \label{codingschemes}
	       \end{table}



SNOMED-CT is owned and administered by the \emph{International Health Terminology Standards Development
Organisation} (IHTSDO)\footnote{The SNOMED-CT release used in this thesis was obtained from U.S. National 
Library of Medicine, which is the U.S. distributor of SNOMED-CT.}. Some member countries of
IHTSDO have their own licensing policies governing the distribution of terminology content. Country-specific
versions of SNOMED-CT also exist and are maintained and extended in local contexts.

The 2008 standard release of SNOMED-CT is used for all experiments in this thesis. The reason for
using this specific version is the time frame of clinical archetype creation,
most of which took place between 2007 and 2009. Since clinical experts were using
SNOMED-CT 2008 as the reference terminology at that time, the same release is used in the
terminological shadow creation process to produce comparable results.


The core components of the SNOMED-CT release are three text files, each of which contains
SNOMED-CT concept related data. They are essentially three tables: \emph{Concepts},
\emph{Descriptions} and \emph{Relationships}. Figure \ref{coretbls} shows the content and the
relationships of the three tables.
The format of each file is suitable for importing
its content into a relational database. The \emph{Concepts} table stores information about each
SNOMED-CT concept, including a formally defined name, a numeric code representing the concept and
additional information concerning the characteristics of the concept. The \emph{Descriptions} table
stores more descriptive text for each concept, such as synonyms and preferred terms. The
\emph{Relationships} table stores information about how two concepts are related.
All three tables are imported directly and are used in the
experiments to look up a SNOMED-CT concept and display its fully specified name
(a full description of a medical concept).


\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=0.7]{../res/coretbls}
	\end{center}
	\caption{The three core tables of SNOMED-CT}
	\label{coretbls}
\end{figure}





\subsection{Building the index}
The method of creating a terminology index for later access was introduced in section
\ref{sec:term-comp}. Described as the terminology component of the framework, the implementation of the \emph{Terminology Index
Builder} imports all three core tables of the SNOMED-CT release into a relational database and
builds indices from the content of these tables.
As a result, a total of $378,111$ SNOMED-CT
concepts were imported, together with the concept descriptions and relationships, as shown in
Table \ref{sno_totals}.
\begin{table}[!htbp] \begin{center} \begin{tabular}{ |l|l| }
 			
 			 \hline
 			 Total concepts: & $378,111$\\
 			 Total descriptions: &    $1,068,278$\\
 			 Total relationships: &  $1,357,719$\\
 		 
 			 \hline
 			 SNOMED-CT version: & Jan 2008\\
 			 \hline
 		 \end{tabular} \end{center} \caption{The detail of imported SNOMED-CT
 		 data} \label{sno_totals}
 	       \end{table}
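The import of these tab-delimited release files can be sketched as follows. This is a minimal illustration, not the framework's actual importer: the class names are hypothetical, only two columns (identifier and fully specified name) are read, and the real release files define further columns.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of importing a tab-delimited SNOMED-CT release file.
// Hypothetical simplification: only a concept identifier and a fully
// specified name are read per row; the real files carry more columns.
public class ConceptImporter {

    public static final class Concept {
        public final long id;
        public final String fullySpecifiedName;

        public Concept(long id, String fullySpecifiedName) {
            this.id = id;
            this.fullySpecifiedName = fullySpecifiedName;
        }
    }

    // Parse the lines of a release file; row 0 is the header row.
    public static List<Concept> parse(List<String> lines) {
        List<Concept> concepts = new ArrayList<>();
        for (int i = 1; i < lines.size(); i++) {
            String[] fields = lines.get(i).split("\t");
            concepts.add(new Concept(Long.parseLong(fields[0]), fields[1]));
        }
        return concepts;
    }
}
```

In the actual implementation each parsed row would be inserted into the corresponding relational table before indexing.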
The \emph{Descriptions} table is used mainly for searching SNOMED-CT with queries,
because of the wealth of terms in the table, which includes not only the fully specified names but also
synonyms and variants of terms for the same concept. For example, in the Descriptions table of
SNOMED-CT, the concept \emph{Diabetes mellitus (disorder)} is associated with the
different terms listed in Listing \ref{lst-dm}.
{\small
\lstset{basicstyle=\ttfamily,
stringstyle=\ttfamily
}
\begin{lstlisting}[caption={Synonyms in SNOMED-CT}, label=lst-dm]
Fully Specified Name: 	Diabetes mellitus (disorder)
Preferred: 	Diabetes mellitus
Synonym: 	DM - Diabetes mellitus
Synonym: 	Diabetes mellitus, NOS
\end{lstlisting}
}
The listing indicates that the term ``Diabetes mellitus'' is the preferred term for displaying this
condition in a medical application. However, all descriptions are considered when searching for a
SNOMED-CT concept. In order to query and search SNOMED-CT concepts quickly and
efficiently, indices were built to aid the retrieval of
concepts. Two types of index were created and compared:
\begin{enumerate}
  \item Indices generated by the database management system (DBMS)
  \item Indices generated by the Lucene tool set
\end{enumerate}

Creating indices for a relational database is relatively easy.
For example, a database application such as MySQL can create indices
and store them in B-trees \parencite{widenius2002mysql}. The B-tree data structure is widely employed by
database applications and file systems because it keeps the data sorted,
allowing fast lookups and efficient large-volume read and write operations. In this thesis, the indices for
the \emph{FullySpecifiedName} column of the ``Concepts'' table and the \emph{Term} column of
the ``Descriptions'' table were created using MySQL's ``FULLTEXT'' index option.

In addition, the same two tables were indexed using the Lucene tool set.
This widely adopted Apache project\footnote{http://lucene.apache.org/} is used in many
applications and software suites as a general-purpose indexing and searching engine. In the work
reported in this thesis, all
rows in the ``Concepts'' and ``Descriptions'' tables were processed by Lucene to form
indices that can be used for searching concepts with search terms such as ``diabetes + mellitus''.

The search behaviour of both methods was compared. Results show that indices created by
the DBMS are good for string matching, particularly when the user knows part of the term of the
concept. For example, if a user wants to search for the SNOMED-CT concept related to type 1 diabetes, the
SQL string comparison operator ``LIKE'' with ``Diabetes mellitus\%'' or ``Diabetes\%type 1\%''
will find relevant concepts accurately. However, the performance of the search in terms of
query time varies. The MySQL server
evaluates the SQL statements and performs optimisation where necessary. For instance,
when the wildcard ``\%'' is used, the MySQL server will choose to use available indices if a large amount of data
can be skipped. If ``\%'' appears at the start of the pattern, indices cannot narrow the search and MySQL
falls back to pattern matching with the Turbo Boyer-Moore
algorithm \parencite{Boyer1977str} to improve the search time. Therefore, the elapsed time varies from query to
query; in the worst cases observed, query response time approached 300 milliseconds.

The indices produced by Lucene are a type of inverted index \parencite{gospodnetic2005lucene} stored in a binary format.
Searching the Lucene indices is considerably faster than searching the indices produced by MySQL: the
average time of a Lucene query is around 5 to 10 milliseconds. Searching with the Lucene index utilises a
weighting scheme called \emph{term frequency--inverse document frequency} (TF-IDF)
\parencite{BaezaModernIR}. Under this scheme, a query term is evaluated and the results are ranked according to their relevance to the
query term. For instance, when searching with the term ``blood pressure'', the tool calculates a
relevance score for every record that contains ``blood pressure'' and returns the results in order
of decreasing relevance. This search behaviour is favoured in this
thesis, and the Lucene tool set is used in the implementation. The effectiveness of the TF-IDF
weighting scheme is further explored and investigated in section \ref{sec:eva_method}.
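The core idea of TF-IDF ranking can be sketched as follows. This is an illustrative simplification, not Lucene's actual scoring implementation, which adds further factors such as field norms and query boosts.

```java
import java.util.List;

// Illustrative sketch of TF-IDF scoring: a document scores higher when it
// contains the query terms often (term frequency) and those terms are rare
// across the corpus (inverse document frequency).
public class TfIdfSketch {

    // Score one document against a whitespace-tokenised query.
    public static double score(String query, String doc, List<String> corpus) {
        double s = 0.0;
        for (String term : query.toLowerCase().split("\\s+")) {
            int tf = count(term, doc);
            if (tf == 0) continue;                    // term absent: no contribution
            int df = 0;                               // document frequency of the term
            for (String d : corpus) if (count(term, d) > 0) df++;
            double idf = Math.log((double) corpus.size() / df);
            s += tf * idf;
        }
        return s;
    }

    // Number of occurrences of a term in a whitespace-tokenised document.
    private static int count(String term, String doc) {
        int n = 0;
        for (String t : doc.toLowerCase().split("\\s+")) if (t.equals(term)) n++;
        return n;
    }
}
```

With a small corpus of concept descriptions, a query such as ``blood pressure'' scores descriptions containing those terms above unrelated ones, which is the ranking behaviour exploited in the experiments.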

\subsection{Implementation of the framework components}
\label{sec:impl-fw}
As described in the terminological shadow creation method, the terminology
component and the binding component, together with the archetype traverser package,
were implemented by the author in the Java programming language. These software artefacts constitute
a suite that is used to create terminological shadows and execute evaluation plans. The suite is
referred to in this thesis generally as the ``framework'',
and is used as the foundation of the experiments. Setting up an experiment
involves changing the configuration and instantiating objects from the framework in order
to process different data sets or to obtain results. A diagram that depicts the implementation
of the framework and the collaboration between the different packages is shown in Figure \ref{package}.
\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=0.6]{../res/package}
	\end{center}
	\caption{The implementation of the ``Terminological shadow framework'' that is used as the base of experiments}
	\label{package}
\end{figure}
The diagram shows the internal relations between the implemented software packages. The experiments that are carried out in
the following sections mainly use the binding component to process archetypes, which relies on both
the archetype traverser and the terminology component to access the SNOMED-CT resource.


\section{Initial evaluation of terminological shadows}
\label{sec:init-eva}
As introduced in section \ref{sec:initeva_plan}, the experiment in the initial evaluation stage is the first step in
validating the terminological shadow approach. Essentially, this step aims to determine whether
terminological shadows can support automatically suggesting SNOMED-CT concepts that are
closest to the SNOMED-CT concepts manually bound by experts. The rationale of the
initial evaluation is to verify whether terminological shadows can be created that correctly
represent the original clinical meanings.

\subsection{Method of the initial evaluation}
\label{sec:init_method}
This experiment uses the terminological shadow framework to process selected archetypes that contain manually created
terminology bindings. The archetype traverser goes through the data nodes and uses the text
description in each node to search for SNOMED-CT concepts. The resulting concepts, together with
information about the archetype nodes, are stored; at that point the terminological shadow creation is
complete. Once the terminological shadows of the archetypes have been obtained, the evaluation
begins: the SNOMED-CT
concepts generated by the framework are compared to the manually bound ones that are pre-defined in
the ADL. The accuracy of the terminological shadows, in terms
of the relevance of the automatically suggested concepts, is measured by calculating the \emph{recall} and
\emph{precision} of the process, with the manually bound SNOMED-CT concepts as the ground truth.
\emph{Recall} and \emph{precision} are measures that are common in the area of information
retrieval. They are used to evaluate and reflect the quality of retrieval performance in a
scenario where the results of an information retrieval system are evaluated against a gold standard.
Given the subset of all available documents that is relevant to a search query
(the gold standard), and the set of documents that have been retrieved by the system
\parencite{BaezaModernIR}, \emph{recall} and \emph{precision} are defined in Equations \ref{eq:recall}
and \ref{eq:precision}:
\begin{itemize}
  \item \textbf{Recall} is the fraction of the relevant documents that has been
    retrieved.
	\begin{equation}\label{eq:recall}
    Recall = \frac{|RelevantDocuments \cap RetrievedDocuments|}{|RelevantDocuments|}
  	\end{equation}
	\myequations{Equation number \ref{eq:recall}}
  \item \textbf{Precision} is the fraction of the retrieved documents that is relevant.
	\begin{equation}\label{eq:precision}
    Precision = \frac{|RelevantDocuments \cap RetrievedDocuments|}{|RetrievedDocuments|}
  	\end{equation} 
	\myequations{Equation number \ref{eq:precision}}
\end{itemize}
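The two measures can be sketched over sets of concept identifiers as follows; the class and method names are illustrative, not part of the framework. Note that when the relevant set contains exactly one bound concept, as in this evaluation, the recall of a single query is necessarily either 0 or 1.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of recall and precision over sets of concept identifiers,
// following the standard definitions: |relevant ∩ retrieved| divided by
// |relevant| (recall) or |retrieved| (precision).
public class RetrievalMeasures {

    public static double recall(Set<Long> relevant, Set<Long> retrieved) {
        return intersectionSize(relevant, retrieved) / (double) relevant.size();
    }

    public static double precision(Set<Long> relevant, Set<Long> retrieved) {
        return intersectionSize(relevant, retrieved) / (double) retrieved.size();
    }

    private static int intersectionSize(Set<Long> a, Set<Long> b) {
        Set<Long> c = new HashSet<>(a);
        c.retainAll(b);   // keep only elements present in both sets
        return c.size();
    }
}
```

For a query returning ten candidates of which the single bound concept is one, this gives a recall of $1$ and a precision of $0.1$, matching the ratio observed in the results tables.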
The objective of the evaluation is to investigate the quality of the SNOMED-CT
concepts automatically suggested by the terminological shadow creation framework.

\subsubsection{Archetypes for the experiment}
The archetypes used in this experiment were selected from openEHR's archetype repository:
a total of seven archetypes in the form of ADL files, which have been modelled and developed by
clinical experts. As mentioned in section \ref{sec:lifecyc}, reviews and revisions often happen intermittently
within the archetype development life cycle. Consideration has been given to selecting
archetypes that are mature and stable for the experiment. The selection criteria also include:

\begin{enumerate}[(a)]
  \item selecting only those archetypes where the ratio of bound data nodes to unbound nodes is relatively large compared to other archetypes, 
  \item selecting only those archetypes whose clinical content spans different, unrelated clinical areas.
  \end{enumerate}
Table \ref{arch_paper1} lists all the archetypes with their full names.
\begin{table}[!htbp]\small \begin{center} \begin{tabular}{ |r|l| }

  \hline
  Archetype full name & No.\\
  \hline
  openEHR-EHR-CLUSTER.symptom.v1.adl & 1\\
  openEHR-EHR-OBSERVATION.blood\_pressure.v2.adl &    2\\
  openEHR-EHR-EVALUATION.activities\_of\_daily\_living.v2.adl &  3\\
  openEHR-EHR-CLUSTER.checklist\_item-learning\_disability\_referral.v1.adl & 4\\
  openEHR-EHR-CLUSTER.body\_site.v2.adl & 5\\
  openEHR-EHR-EVALUATION.waterlow\_pressure\_ulcer\_prevention\_score.v1.adl & 6\\
  openEHR-EHR-OBSERVATION.hearing.v1.adl & 7\\
			 \hline
 		 \end{tabular} \end{center} 
		 \caption{Selected archetypes for the initial evaluation of terminological shadow
		 creation framework} \label{arch_paper1}
\end{table}
The framework takes ADL files as input and processes the archetypes one by one. Each ADL file
is parsed to obtain an archetype node tree, which is then traversed by the archetype traverser.
During traversal, text from the data nodes is extracted and used to
search for appropriate SNOMED-CT concepts by the binding component, which uses the Lucene index
to find the most relevant concepts.


\subsubsection{Shadow creation}
\label{sec:shadow_creation}
The process by which terminological shadows can be generated from ADL files has been mentioned in
the previous
section. As the main work flow of the framework, experiments including the initial evaluation
and the rest of the investigation are all based on the terminological shadow creation with various
extensions and modification. The technical detail of the process involves the following tasks:
\begin{itemize}
  \item Use the ADL parser to generate archetype node trees
  \item Traverse the archetype node tree
  \item Extract text from nodes that are of interest
  \item Perform searches to find SNOMED-CT concepts
  \item Store search results and node information (the reference model type, path and
    text of the node)
\end{itemize}
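The traversal and extraction tasks above can be sketched as follows. All class names here are hypothetical simplifications for illustration and do not reflect the API of the openEHR Java reference implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of traversing an archetype node tree and collecting
// the archetype term text of each node. The real traverser applies visitor
// objects and also records the reference-model type and path of each node.
public class TraversalSketch {

    public static final class Node {
        public final String code;       // e.g. "at0004"
        public final String termText;   // e.g. "Systolic"
        public final List<Node> children = new ArrayList<>();

        public Node(String code, String termText) {
            this.code = code;
            this.termText = termText;
        }

        public Node add(Node child) {
            children.add(child);
            return this;
        }
    }

    // Depth-first walk collecting the term text of every node that has one.
    public static List<String> collectTerms(Node root) {
        List<String> terms = new ArrayList<>();
        collect(root, terms);
        return terms;
    }

    private static void collect(Node node, List<String> out) {
        if (node.termText != null) out.add(node.termText);
        for (Node child : node.children) collect(child, out);
    }
}
```

Each collected term would then be sent as a query to the terminology index, and the results stored alongside the node information to form the shadow.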
The openEHR Java implementation of the ADL parser\footnote{http://www.openehr.org/projects/java.html}
is used in the archetype traverser package to parse the ADL files and obtain archetype node trees.
Once the ADL syntax has been parsed, an archetype node tree exists in
the form of a Java object tree. The archetype traverser then applies particular visitors to extract
information of interest to be processed. In this experiment, the text of each data node in the
archetype is extracted and used to search for SNOMED-CT concepts. An archetype node that has a specific meaning
often contains an ``archetype term'', which gives a short description of the clinical concept it
represents in the context of the archetype. Figure \ref{bp_tree} is a visualisation of the blood
pressure observation archetype node tree created with the LinkEHR
editor\footnote{http://www.linkehr.com/}. A node with a code of the form ``[at0000]'' contains an
archetype term.
\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=0.8]{../res/bp_tree1}
	\end{center}
	\caption{Extracting text from archetype terms to create terminological shadow}
	\label{bp_tree}
\end{figure}
The text that follows the code is a description of the archetype term. The \emph{binding component}
uses the textual description for searching SNOMED-CT concepts.
For example the phrase ``blood pressure'' is used as a query term to search for appropriate
SNOMED-CT concepts. The result of the query and the information about the node is stored as part of
the terminological shadow.

\subsubsection{Evaluation method}
\label{sec:eva_method}
The evaluation strategy is to use the pre-defined, bound SNOMED-CT concepts as the ``ground truth'' to
judge the accuracy of the resulting terminological shadows. The output of the
\emph{binding component}, namely the SNOMED-CT concepts returned by the framework,
is compared to the pre-defined SNOMED-CT bindings.

In this experiment, each archetype term text is sent as a query to the Lucene index built by the
\emph{terminology component}. The number of concepts that a query is allowed to return is limited to 10.
The concepts returned by the search query are the \emph{candidate concepts}. If the archetype
term has a SNOMED-CT binding, the bound SNOMED-CT concept that was selected by clinical experts
for inclusion in the archetype is compared to the candidate concepts to
see if it is among them. When a candidate concept set contains the bound concept, it is a
\emph{common match}. If the candidate concepts do not match the bound SNOMED-CT concept, but
their parents or children match the bound concept, it is a \emph{partial match}. If the candidate
concept set contains both the bound concept and a number of its child or parent concepts, it is
named an \emph{additional match}.
Recall and precision values have been calculated for the queries performed in the seven archetypes.
Instead of computing the recall and precision for each query, an averaged recall and precision value 
for each archetype has been calculated as defined in Equation \ref{eq:recallavg} and
\ref{eq:precisionavg}. 
\begin{equation}\label{eq:recallavg}
Recall_{avg}=\frac{1}{n}\sum_{i=1}^{n}Recall_i
\end{equation}
	\myequations{Equation number \ref{eq:recallavg}}
\begin{equation}\label{eq:precisionavg}
Precision_{avg}=\frac{1}{n}\sum_{i=1}^{n}Precision_i
\end{equation}
	\myequations{Equation number \ref{eq:precisionavg}}
In Equations \ref{eq:recallavg} and \ref{eq:precisionavg}, $n$ denotes the total 
number of bound concepts in one archetype; therefore the average
recall or precision is an arithmetic mean of all recall or precision values. The average recall
and precision of \emph{common matches} are the main results of the initial
evaluation. 
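The classification of a single query result into these match categories can be sketched as follows. The class is illustrative, concepts are represented by their identifiers, and the set of parent and child identifiers of the bound concept is assumed to have been looked up from the \emph{Relationships} table.

```java
import java.util.Set;

// Sketch of classifying one candidate concept set against the bound concept,
// following the match definitions in the text: common (bound concept present),
// partial (only a parent/child present), additional (bound concept plus a
// parent/child present), or no match.
public class MatchClassifier {

    public enum Match { COMMON, ADDITIONAL, PARTIAL, NONE }

    public static Match classify(long boundConcept,
                                 Set<Long> boundParentsAndChildren,
                                 Set<Long> candidates) {
        boolean hasBound = candidates.contains(boundConcept);
        boolean hasRelative =
            candidates.stream().anyMatch(boundParentsAndChildren::contains);
        if (hasBound && hasRelative) return Match.ADDITIONAL;
        if (hasBound) return Match.COMMON;
        if (hasRelative) return Match.PARTIAL;
        return Match.NONE;
    }
}
```

Counting these categories over all queries in an archetype yields the figures reported in the results tables.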

The mean values were chosen to represent the overall recall
and precision of all queries because this statistic is purely descriptive and is not used in
further calculations or analysis; it is intended only to show how well, in general,
the queries performed.


\subsection{Results of the initial evaluation}
\label{sec:init_result}
The experiment described in the previous section was performed on the
selected archetypes, and the results are presented in this section. Table \ref{tbl1_paper1}
lists the first part of the evaluation results, which reports the average recall and precision of
each archetype based on the common match situation. The numbers in the first column refer
to the archetypes listed in Table \ref{arch_paper1}. The second column, `Total nodes of archetype',
gives the total number of archetype nodes in that archetype. `Total bound SNOMED-CT'
is the number of existing bound SNOMED-CT concepts in the archetype. The fourth column, `Common
matches', denotes the occurrence of common matches in the archetype. The fifth and sixth columns give
the calculated average recall and precision of all queries in the archetype respectively.
\begin{table}[!htbp]\small \begin{center} \begin{tabular}{ |l|l|l|l|l|l| }

  \hline
	Archetype  & Total nodes  & Total bound  & Common   & Avg. recall   & Avg. precision\\
       No. & of archetype & SNOMED-CT    & matches  & of archetype & of archetype \\
  \hline
		 1 & 58 	  & 21		 & 13	    & 0.619 	   & 0.0619 \\
		 2 & 47		  & 28 		 & 21	    & 0.75 	   & 0.075 \\
		 3 & 68 	  & 28 		 & 6 	    & 0.214	   & 0.0214 \\
		 4 & 24 	  & 15 		 & 14	    & 0.93	   & 0.093 \\
		 5 & 13		  & 6		 & 6	    & 1		   & 0.1 \\
		 6 & 81		  & 32		 & 10 	    & 0.312	   & 0.0312 \\
		 7 & 33 	  & 17		 & 11	    & 0.647	   & 0.0647 \\
			 
  \hline 		 
\end{tabular} 
\end{center} 
\caption{Results of the initial evaluation Part I: average recall and precision of 
queries that use archetype terms to search SNOMED-CT concepts} \label{tbl1_paper1}
\end{table}
In archetype No.~4, which is \textsf{
CLUSTER.checklist\_item-learning\_disability\_referral}, there are
originally 17 bound SNOMED-CT concepts. Among them are two
SNOMED-CT extension concepts\footnote{Locally defined SNOMED-CT concepts that
supplement the standard release.} that do not belong to the standard 2008 release;
they are therefore excluded from the table. One of the selection criteria for archetypes in this
experiment is the ratio of bound nodes to total nodes. As seen in the
table, most archetypes have a ratio of bound nodes to total nodes of around $\frac{2}{5}$. The reason
is to present a balanced averaged recall and precision rather than that of a sparsely bound
archetype. The average precision is ten times smaller than the average recall because each archetype
term is bound to exactly one SNOMED-CT concept while ten candidates are returned per query. The
average recall values range from $0.214$ to $1$; the high recall of archetype No.~5 can be
attributed to its small number of bound concepts.

Besides common matches, the experiment also produced partial matches and additional matches. In these
situations the candidate concepts returned by a single query contain
parent or child concepts of the bound concept. Table \ref{tbl2_paper1} shows the number of
additional, partial and no-match occurrences found in the results of the queries.
\begin{table}[!htbp]\small \begin{center} \begin{tabular}{ |l|l|l|l|l| }

  \hline
	Archetype  & Total bound  & Additional  & Partial     & No \\
       No. & SNOMED-CT    & matches     & matches       & matches\\
  \hline                           
		 1 & 21		 & 6 	  & 1		    & 7 \\
		 2 & 28 	 & 2	  & 0 		    & 7 \\
		 3 & 28 	 & 2 	  & 0 		    & 3 \\
		 4 & 15 	 & 0 	  & 0 		    & 1 \\
		 5 & 6		 & 0    & 0		    & 0 \\
		 6 & 32		 & 5	  & 9		    & 13 \\
		 7 & 17		 & 3 	  & 0	    	    & 6 \\
			 
  \hline 		 
\end{tabular} 
\end{center}
\caption{Results of the initial evaluation Part II: additional, partial and no matches in the
queries} \label{tbl2_paper1}
\end{table}
As in Table \ref{tbl1_paper1}, the second column is the number of bound SNOMED-CT concepts in each
archetype; this number therefore also denotes the number of queries. The table shows that
additional matches in the seven archetypes are more frequent than partial matches. It also shows
a considerable number of no-match cases, where the candidate concepts do not contain the bound concept.
These situations are discussed and analysed in the following section. The additional and partial
matches could potentially boost the recall and precision, since they are semantically very
close to the bound SNOMED-CT concepts. With the inclusion of additional and partial
matches, the overall recall for all queries would appear higher than with common matches alone.


\subsection{Analysis of the result}
Not to confuse with the evaluation of the algorithm that finds appropriate SNOMED-CT concepts, 
the initial evaluation is to validate the shadow approach by assessing the
correctness of the representation of terminological shadows. The shadow framework is designed to
facilitate the integration between EHR information models and clinical terminologies. It is capable to
adopt and incorporate more advanced algorithms to aid terminological shadow creation. Hence the framework
should be separated from the development of efficient algorithms that bind clinical content to
clinical terminology.
This section gives an interpretation of the results and discusses problems of the technique
used through analysing the output. 

A key method of the experiment is therefore to compare the returned results with the
manually bound SNOMED-CT concepts. The evaluation results reflect the quality of the
automatic SNOMED-CT searching algorithm used in this experiment.
As introduced in section \ref{sec:eva_method}, the average recall or precision is an arithmetic mean of
the recall and precision of all queries that used archetype terms as search terms in one archetype.
The reason for using an averaged value to represent the quality of queries in one archetype is
that one manually bound SNOMED-CT concept is assigned to one archetype term / node. This means that when
calculating the recall or precision, the number of relevant documents is 1, which makes the recall
value of an individual query either 1 or 0.
The recall value for a single archetype term therefore only indicates whether the candidate
concept set contains the bound concept; an averaged recall value for all queries in
one archetype better reflects the quality of the algorithm.
The results presented in section \ref{sec:init_result} provide material for analysing the framework in order to
validate the terminological shadow approach.
The results presented in section \ref{sec:init_result} provide material for analysing the framework in order to
validate the terminological shadow approach.

Table \ref{tbl1_paper1} shows that when the candidate concept set is limited to 10 concepts per
query, the average recall varies from $0.214$ to $1$. The archetype achieving the highest average
recall is archetype No.~5, \textsf{CLUSTER.body\_site}; archetype No.~3,
\textsf{EVALUATION.activities\_of\_\-daily\_\-living},
has the lowest recall. Given the difficulty of having only one bound SNOMED-CT concept
for each archetype term, the resulting terminological shadows are moderately acceptable
representations of the clinical concepts in archetypes.
The varying recall values are primarily linked to the
variety of text descriptions of archetype terms used in the queries. As the input of the
searches and the source of the terminological shadow, the characteristics of archetype terms largely
define the resulting shadow. The following example discusses how archetype terms affect the resulting
terminological shadow and identifies potential improvements.


One problem with some archetype terms is that they tend to be ambiguous.
Listing \ref{hearing} illustrates a fragment of the ADL of the \textsf{OBSERVATION.hearing}
archetype, which contains three archetype terms:
\lstset{basicstyle=\small\ttfamily,
stringstyle=\ttfamily
}
\begin{lstlisting}[caption={Ambiguous archetype terms in the hearing archetype}, label=hearing]
      ELEMENT[at0018] occurrences matches {0..1} 
      	matches { -- Rinne Test
	      value matches {
		      DV_CODED_TEXT matches {
			      defining code matches {
				  [local::
				  at0019, -- Negative
				  at0020] -- Positive
				  }
\end{lstlisting}
These terms, \textsf{at0018 -- Rinne Test}, \textsf{at0019 -- Negative} and \textsf{at0020 --
Positive}, are bound to the SNOMED-CT concepts \emph{84636009 -- Rinne test (procedure)},
\emph{370377000 -- Rinne's test negative (finding)} and \emph{370378005 -- Rinne's test positive
(finding)} respectively. However the recall values of the queries for these terms are 1, 0 and 0,
which means that only the candidate concept set returned by the first query contains the bound concept.

Through manually inspecting the ADL fragment, the author found
that the archetype terms of leaf nodes are likely to be qualifying concepts of the
parent node.
This pattern has been observed in many archetypes. Further investigation into
identifying archetype modelling patterns will be conducted in Chapter \ref{ch:apply}, section
\ref{discu}. In this case, the text description of the qualifier is relatively brief and simple,
yet its bound SNOMED-CT concept is very specific.
The context in the example implies that they are negative and positive results of the Rinne Test. 

One possible explanation is that the archetype terms `Negative' and `Positive' are not
descriptive enough for a medical concept such as
`Rinne Test'. The information is insufficient for the algorithm to retrieve the desired SNOMED-CT
concepts; instead, the search returned generic SNOMED-CT concepts that express the negative and
positive meanings. From observation of the
results it seems clear that a possible improvement could be the inclusion of an inference
engine that intelligently modifies the query term according to the context. For instance, if an
archetype term does not meet certain criteria it would be expanded to be more distinctive.
Additionally, a filtering process may be added to remove candidate concepts that are not
suitable for the situation.
% mention in future chapter that filter is somewhat done by removing inactives

The precision found in the results is arguably quite low. This is partially due to the fact that,
according to Equation \ref{eq:precision}, \emph{RelevantDocuments} in this experiment is 1 (one archetype term is precisely
bound to exactly one SNOMED-CT concept).
The second part of the evaluation results shows that where the algorithm failed to return
the exact bound SNOMED-CT concepts, it sometimes retrieved their
parent or child concepts. Since exact terminology binding is under debate and sometimes cannot
be agreed upon, a terminological shadow may fill the gap, as it is an approximation of the clinical
meaning of a part of the EHR.
% should this be emphasised in previous chapter?
Although not calculated in the thesis, if certain parent or child concepts were approved and
considered as alternative binding concepts for an archetype node by clinical experts, higher
precision could be achieved (\emph{RelevantDocuments} would be greater than 1).



The framework uses Lucene's searching and ranking algorithm to associate archetype terms with
appropriate SNOMED-CT concepts.
At the heart of the Lucene toolkit is the \emph{tf-idf} weighting scheme, which provides ranked
results. The \emph{tf-idf} function is a generic approach that was originally designed to
measure similarity between query terms and documents. In this experiment the descriptions of
all SNOMED-CT concepts have been indexed by Lucene; each description of a SNOMED-CT
concept is therefore considered as a document, although it may not contain as many words as a
regular document. The weighting scheme itself, and the algorithm under the hood of Lucene, are the
subject of further investigation in section \ref{sec:perf-eva}.

\subsection{Conclusion of the initial evaluation}
The initial evaluation has demonstrated that a generic framework has been implemented to create
terminological shadows from archetypes.
The experiment has generated results that, to a moderate extent, prove the validity of
terminological shadows as approximations of the clinical concepts in archetypes.
The statistics showed that the average recall peaked in the \textsf{CLUSTER.body\_site} archetype
and exhibited the lowest value in the \textsf{EVALUATION.activities\_of\-\_daily\_\-living} archetype. If
additional and partial matches were included, the recall and precision could be higher than
the calculated values. The author believes that though the bound SNOMED-CT concepts are regarded as the gold standard, they
do not entirely reflect the true representation of terminological shadows. A more rigorous
evaluation method and metric needs to be developed, with further and deeper investigation into the underlying
mechanism. One such foundation is the \emph{tf-idf} weighting scheme used by Lucene. The experiment
that investigates the performance of the \emph{tf-idf} function in associating archetype terms with
SNOMED-CT concepts is presented in the following section.

%It is worth noting that the framework separates the search algorithm with the terminological shadow
%creation. The framework intends to provide a
%testing platform which facilitates the improvement and optimisation of shadow creation algorithms.


\section{\emph{tf-idf} performance evaluation}
\label{sec:perf-eva}
In section \ref{sec:init-eva} an initial evaluation was performed to verify the concept of a terminological
shadow. The terminological shadow creation process is based on an automatic archetype term-SNOMED-CT
association method. The framework that generates terminological shadows from archetypes uses the
Lucene toolkit, which uses the
\emph{tf-idf} function as its primary searching and ranking algorithm. 
However, the relation between the results and the algorithm is not yet clear; the author
therefore believes it is necessary to investigate the effectiveness of the underlying algorithm.
This section presents experiments to evaluate the performance of the \emph{tf-idf} function in
finding the most appropriate SNOMED-CT concepts that can be associated with archetypes.


In order to assess the performance of the \emph{tf-idf} based algorithm, all available archetypes
with bound SNOMED-CT concepts have been retrieved from the NHS repository, the largest of the four
archetype repositories under consideration. The repository, maintained by the openEHR community,
contains archetypes that have been
developed for the NHS project\footnote{http://openehr.org/wsvn/knowledge/archetypes/dev-uk-nhs/}.
%More details about the repository are discussed in section [ref] of chapter [ref].
A total of 1133 archetypes have been extracted from the archetype
repository to obtain the ``manually bound'' SNOMED-CT concepts that were defined by clinical
experts. A sensitivity analysis has been performed to assess the algorithm: terminological
shadows are created using the same method as described in section \ref{sec:shadow_creation} but with
different parameters. The candidate SNOMED-CT concepts generated by the algorithm
are compared to the bound concepts. A second type of comparison is also carried out: the analysis
compares the SNOMED-CT category of the candidate concepts with that of the bound concept.

\subsection{Rationale of the performance evaluation}
\label{sec:rationale}
In section \ref{sec:init-eva}, the terminological shadow approach was assessed by performing an initial
evaluation of the shadow creation process. The shadow creation framework
comprises components that parse ADL files, find
SNOMED-CT concepts based on archetype terms, and compare the results with pre-defined manually bound SNOMED-CT concepts.
These components make use of the standard SNOMED-CT release, the openEHR archetype parser, and
the Lucene toolkit for searching SNOMED-CT concepts with archetype terms.
The relevance of the candidate SNOMED-CT concepts in the resulting terminological
shadows is largely determined by the archetype terms and the algorithm that finds SNOMED-CT concepts
based on them. Since the archetype terms have already been created and are fixed during the
assessment, the performance evaluation focuses on the effectiveness of the algorithm. The
search algorithm used by the Lucene toolkit is a \emph{tf-idf} based weighting function. The process
is illustrated in Figure \ref{nece_eva2}, which shows the role of the algorithm in finding SNOMED-CT concepts.
\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[width=\textwidth]{../res/nece_eva2}
	\end{center}
	\caption{The role of \emph{tf-idf} in the terminological shadow creation process}
	\label{nece_eva2}
\end{figure}
However there is a lack of studies assessing the suitability of the algorithm, that is, whether the
algorithm is effective enough at obtaining the most relevant SNOMED-CT concepts with archetype terms as
queries. Therefore the ability of Lucene to retrieve relevant SNOMED-CT concepts in these
circumstances should be
evaluated and studied. Since the framework uses Lucene extensively and the \emph{tf-idf} weighting scheme is employed
as its searching and ranking algorithm, it is necessary to conduct a sensitivity analysis of
the function in the context of terminological shadow generation.



The performance evaluation in this section aims to explore the characteristics of the \emph{tf-idf} function
to further the validation of the terminological shadow creation method. The simplicity of the
framework, in contrast to more sophisticated systems such as the MoST system mentioned in
section \ref{sec:most}, which incorporates
multiple tools and services, makes it easier to analyse the effectiveness of the algorithm.
The terminological shadow framework is influenced only by the input parameters (archetype terms and a set of
threshold values) and the \emph{tf-idf} function. The motivation for the evaluation also comes from
an observation of the candidate SNOMED-CT concepts returned by the framework. When examining
the results that failed to match any bound concept, as listed in Table \ref{tbl2_paper1}, it was found that
in some cases a common match could be obtained outside the top 10 returned candidate concepts; for
example, a common match may occur at position $12$ of the ranked candidates.
It has also been observed that at times the candidate
concepts belong to the same SNOMED-CT category. The author acknowledges that a combination
of search algorithms could improve the results in terms of recall and precision; however,
this evaluation focuses on the effectiveness of methods based on \emph{tf-idf}.
These observations led to the following hypotheses:
\begin{itemize}
  \item \emph{Hypothesis 1}: The recall and precision of queries using a \emph{tf-idf} based 
   method are influenced by the number of the returned candidates. Optimal results may be achieved by
    tuning the threshold.
  \item \emph{Hypothesis 2}: The category of the dominating candidate concepts returned by the
    \emph{tf-idf} based ranking algorithm is likely to match
    the bound concept's category in SNOMED-CT.
\end{itemize}

The author believes that such an evaluation can
provide a more relevant finding for further application of the terminological shadow approach.
This experiment will not only benefit the current shadow creation process but also facilitate future improvements
on the algorithm. The findings of the evaluation have been used in the application of the terminological
shadow approach in Chapter \ref{ch:apply}.  

\subsection{The \emph{tf-idf} weighting scheme}
It was noted earlier in this chapter that the
terminological shadow framework incorporates a free text indexing tool called
Lucene \parencite{gospodnetic2005lucene}, featuring the \emph{term frequency--inverse document frequency}
(\emph{tf-idf}) weighting scheme, which allows the ranking of search results. As a classic
algorithm widely adopted by many text processing tools, the \emph{tf-idf} function is a term-weighting
strategy which calculates a weighted value from two factors: the \emph{term frequency} and the
\emph{inverse document frequency}. Both factors are prominently used in classic models for
information retrieval such as the vector space model \parencite{BaezaModernIR}. Let $k_{i}$
represent a word in a document, commonly referred to as a term, and let $d_{j}$ be one of the
documents in the collection. The term frequency of term $k_{i}$ in document $d_{j}$, denoted
$\textit{tf}_{i,j}$, is the number of times the term appears in the document. The \emph{idf} factor
is the inverse of the frequency of a term among all documents in the collection; the inverse
document frequency of a term $k_{i}$, denoted $\textit{idf}_{i}$, is given by Equation \ref{eq:idf}:
\begin{equation}\label{eq:idf}
  	\text{idf}_{i} = \log(\frac{N}{n_{i}})
      \end{equation}
	\myequations{Equation number \ref{eq:idf}}
where $N$ is the total number of documents and $n_{i}$ is the number of documents in which the term
$k_{i}$ appears. The \emph{tf} factor typically reflects how well the term represents the content of
the document. That is, how important the term is inside the document (intra-document characterisation).
The \emph{idf} factor reflects the term's importance across all documents (inter-document
characterisation), which is derived from the principle that frequent terms across many documents are
not distinctive enough to distinguish relevant documents from non-relevant ones.  
The most basic form of calculating a weight for a term in a document is given by Equation
\ref{eq:tfidf}:
\begin{equation}\label{eq:tfidf}
  \text{tf-idf}_{i,j} = \text{tf}_{i,j}\times \text{idf}_{i}
\end{equation}
	\myequations{Equation number \ref{eq:tfidf}}
The $\text{tf-idf}_{i,j}$ value is obtained by multiplying  the $\text{tf}_{i,j}$ and
$\text{idf}_{i}$ factors. It represents a simplified composite weight for a term $k_{i}$ in a document
$d_{j}$. The ranking algorithm implemented by the Lucene tool is based on Equation \ref{eq:tfidf}.
There are many variations of this weighting function, and virtually all algorithms
based on this strategy try to balance the two factors to achieve better retrieval performance.
The \emph{idf} factor was heuristically derived and has proven a robust measure for information
retrieval purposes \parencite{church1999inverse}. The \emph{tf-idf} weighted values are therefore rather
universal and not domain specific. The adoption of the Lucene toolkit is also motivated by its use
of this commonly applied weighting function. The following section discusses the
effect of the \emph{tf-idf} function in the terminological shadow creation process in more detail.
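A minimal sketch of Equations \ref{eq:idf} and \ref{eq:tfidf} follows. Note that Lucene's actual scoring adds further factors (such as length normalisation) on top of this basic form, so this illustrates the principle rather than the toolkit's exact implementation; the sample descriptions are hypothetical:

```python
import math

def tf(term, document):
    """tf_{i,j}: number of times term k_i appears in document d_j."""
    return document.split().count(term)

def idf(term, collection):
    """idf_i = log(N / n_i): N documents in total, n_i of which contain
    the term (Equation eq:idf)."""
    n_i = sum(1 for doc in collection if term in doc.split())
    return math.log(len(collection) / n_i) if n_i else 0.0

def tf_idf(term, document, collection):
    """tf-idf_{i,j} = tf_{i,j} * idf_i (Equation eq:tfidf)."""
    return tf(term, document) * idf(term, collection)

# Hypothetical SNOMED-CT descriptions, each treated as one document.
docs = [
    "rinne test procedure",
    "rinne test negative finding",
    "hearing test",
    "body site",
]
# Rank the descriptions for the query term "rinne"; documents containing
# the term receive a positive weight and rank first.
ranked = sorted(docs, key=lambda d: tf_idf("rinne", d, docs), reverse=True)
```

For a multi-word archetype term, the per-term weights would be summed over the query terms, which is the essence of the vector-space scoring that Lucene builds upon.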

\subsection{Method of the performance evaluation}
The \emph{binding component} is treated as an information retrieval system with archetype terms and
a set of parameters as input and candidate SNOMED-CT concepts as output. The performance
of the system is therefore considered to be influenced solely by the \emph{tf-idf} weighting function, which
is the core of the Lucene toolkit that powers the \emph{binding component}. This experiment is an
extension of the initial evaluation described in section \ref{sec:init-eva}. The two factors \emph{recall} and
\emph{precision} are used as the main measures of the performance of the system.

The evaluation of an information retrieval system typically features a
collection of documents and a set of queries with answers, that is, the relevant documents that have
been prepared by experts. This set of queries with known answers is regarded as the ground truth when evaluating a
new IR method. Examples are the Text Retrieval Conference (TREC) collection for general free text
document retrieval and the Cystic Fibrosis collection for medical
text retrieval \parencite{BaezaModernIR}. Therefore, in an analogy to evaluating a new information
retrieval system, and similar to the initial evaluation in section \ref{sec:init-eva}, every SNOMED-CT
concept is regarded as a document. All the concepts that have been indexed are considered as the whole
collection of documents. All archetype terms with bound SNOMED-CT concepts are regarded as the
``gold standard'', that is, the collection of queries with answers manually created by clinical
experts.


\subsubsection{Bound SNOMED-CT concepts}
In order to maximise the benefit of the evaluation, all available archetypes with
SNOMED-CT bindings in openEHR's open repository have been gathered to contribute to the
``gold standard'' collection. In addition to the archetypes used in the initial evaluation,
21 further archetypes have been added\footnote{Different versions of the same archetype
have been excluded.}, although the SNOMED-CT bindings among them are quite
sparse.
%Their full names are listed in appendix [ref]. 
It is worth pointing out that many archetypes inherit the same bindings from their ancestor archetypes. This
partly explains why, with the additional archetypes, the number of unique bound SNOMED-CT concepts
has not increased a great deal. The total number of bound SNOMED-CT concepts is 162. These bindings are, in
the author's opinion, the best evaluation resource available, and
the author accepts the correctness of the bound SNOMED-CT
concepts ``as is''.


\subsubsection{Experiment setting}
The procedure of the experiment uses similar settings to the initial evaluation.
All the archetypes that have been collected are passed to the framework
to re-generate the terminological shadows. In the performance evaluation experiment, however, a set
of carefully selected parameters is introduced as additional input in order to generate different
results. This means parameters are provided to allow the shadow creation process to generate varied
candidate SNOMED-CT concepts for the same archetype term. The performance of the underlying
weighting algorithm, \emph{tf-idf}, is manifested by analysing the accuracy of the results. In the
implementation of the experiment, the framework takes the following parameters, which purposely cause
changes in the resulting candidate concepts:
\begin{itemize}
  \item $N_{top}$: a threshold to control the maximum number of candidate SNOMED-CT concepts that the
    algorithm can return.
  \item $M_{maj}$: a threshold to set the minimum number of candidate SNOMED-CT concepts belonging
    to the same SNOMED-CT category that can be recorded as a ``majority'' in a query result.
\end{itemize}
For instance, if the value of $N_{top}$ is set to 5, the resulting terminological shadow can contain
at most five candidate SNOMED-CT concepts for an archetype term. By changing this parameter, the
number of resulting candidate concepts changes. The purpose of the $N_{top}$ parameter is to generate
different numbers of candidate concepts for a query and to observe the resulting differences in
recall and precision; it is used to test Hypothesis $1$ from section \ref{sec:rationale}. $M_{maj}$ is a
parameter to record the number of candidate concepts whose category dominates the candidate
set. For example, the archetype term ``Oral hygiene'' yields the candidates listed in
Listing \ref{lst-oral}.
{\small
\lstset{basicstyle=\ttfamily,
stringstyle=\ttfamily
}
\begin{lstlisting}[caption={Candidate set SNOMED-CT concepts for query term ``oral hygiene''},
  label=lst-oral]
	Oral hygiene status (observable entity)
	Oral hygiene finding (finding)
	Poor oral hygiene (finding)
	Oral hygiene education (procedure)
	Ability to perform mouthcare activities (observable entity)
	Oral hygiene status (observable entity)
	Good oral hygiene (finding)
	Oral hygiene status (observable entity)
	Complication of personal oral hygiene, function (observable entity)
	Oral hygiene status (observable entity)
\end{lstlisting}
}
The information in the parentheses refers to the category of a SNOMED-CT concept.
This result indicates that the candidate set contains \emph{observable entity} as the dominating
category and $M_{maj} = 6$. This parameter is primarily used to test Hypothesis $2$ in section
\ref{sec:rationale}. 
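The counting of a dominating category can be sketched as follows, using the candidate set from Listing \ref{lst-oral}; the function name is illustrative, not part of the framework:

```python
import re
from collections import Counter

def majority_category(candidates):
    """Determine the dominating SNOMED-CT category in a candidate set.

    Each candidate description ends with its category in parentheses,
    e.g. "Oral hygiene status (observable entity)". Returns the dominating
    category and its count, i.e. the observed M_maj for this set."""
    categories = [re.search(r"\(([^)]+)\)\s*$", c).group(1) for c in candidates]
    category, count = Counter(categories).most_common(1)[0]
    return category, count

candidates = [
    "Oral hygiene status (observable entity)",
    "Oral hygiene finding (finding)",
    "Poor oral hygiene (finding)",
    "Oral hygiene education (procedure)",
    "Ability to perform mouthcare activities (observable entity)",
    "Oral hygiene status (observable entity)",
    "Good oral hygiene (finding)",
    "Oral hygiene status (observable entity)",
    "Complication of personal oral hygiene, function (observable entity)",
    "Oral hygiene status (observable entity)",
]
print(majority_category(candidates))  # -> ('observable entity', 6)
```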


\subsubsection{Evaluation method}
The evaluation method is similar to that of the initial evaluation. Recall and precision values are
calculated by comparing the candidate concepts with the bound concept for each archetype term query.
In the initial evaluation, the average recall and precision were calculated
for each archetype as a measure of the quality of its terminological shadow.
In the performance evaluation process, however, the recall and precision of each query
are not associated with archetypes but are aggregated across all queries. As a result, the recall or
precision value in the presence of a parameter is the averaged recall or precision across all
queries, and is not associated with any specific archetype.

The evaluation consists of two parts: 
\begin{enumerate}
  \item Calculate recall and precision of all the queries with different values of $N_{top}$
  \item Calculate percentage of matches between category of the majority in a candidate set and
    the category to which the bound concept belongs with different values of $M_{maj}$
\end{enumerate}
The second evaluation task is referred to as ``category matching'', which generates 
results under two conditions:
\begin{enumerate}
  \item The comparison between the dominating candidate concepts' category with the bound concept's.
  \item The comparison between the category of the number one ranked concept of the candidate set
    and the bound concept's.
\end{enumerate} 
A total of 162 archetype terms and their bound SNOMED-CT concepts were extracted
from the archetype collection to perform these two evaluation tasks.
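The two category-matching conditions can be sketched as follows; the helper names and the sample data are hypothetical, not taken from the experiment:

```python
import re
from collections import Counter

def category(description):
    """Extract the SNOMED-CT category from a description's trailing parentheses."""
    return re.search(r"\(([^)]+)\)\s*$", description).group(1)

def category_matches(queries):
    """For each (ranked_candidates, bound_concept) pair, test the two
    conditions of the category-matching evaluation:
      a) the dominating candidate category matches the bound concept's category;
      b) the #1 ranked candidate's category matches the bound concept's category.
    Returns the two match counts."""
    majority_hits = top1_hits = 0
    for candidates, bound in queries:
        bound_cat = category(bound)
        dominating, _ = Counter(category(c) for c in candidates).most_common(1)[0]
        majority_hits += dominating == bound_cat
        top1_hits += category(candidates[0]) == bound_cat
    return majority_hits, top1_hits

# Hypothetical queries: ranked candidate descriptions plus one bound concept each.
queries = [
    (["A (finding)", "B (finding)", "C (procedure)"], "X (finding)"),
    (["D (procedure)", "E (finding)", "F (finding)"], "Y (finding)"),
]
# Both sets are dominated by 'finding', but only the first ranks it at #1.
print(category_matches(queries))  # -> (2, 1)
```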



\subsection{Results of the performance evaluation}
\label{sec:perf_result}
Table \ref{tbl1_paper1.5} shows the performance measurement with different values of the
parameter $N_{top}$, which limits the size of the candidate concept set returned by the framework.
\begin{table}[!htbp] \begin{center} \begin{tabular}{ |l|l|l|l| }
  		 
 			 \hline
			 Value of     & Number of 	  & Avg. recall 	& Avg. precision\\
			 $N_{top}$    & common matches    & of all queries	& of all queries\\	
 			 \hline
 			 1&   62 	& 0.3827 	& 0.3827  \\
 			 2&   75	& 0.46296	& 0.23148  \\
 			 5&   86	& 0.53086	& 0.10617  \\
 			 10&  92	& 0.5679	& 0.05679  \\
			 15&  100	& 0.61728	& 0.04115  \\
			 20&  101	& 0.62346	& 0.03117  \\
			 30&  103	& 0.6358	& 0.02119  \\
			 40&  105	& 0.64815	& 0.0162  \\
			 50&  105	& 0.64815	& 0.01296  \\
			 100& 120	& 0.74074	& 0.00741  \\
			 \hline
		       \end{tabular} \end{center} \caption{The average recall and precision of all
		       queries with different values of $N_{top}$ } \label{tbl1_paper1.5}
 	       \end{table}
For each row, with the size of the candidate set limited to $N_{top}$, the number of common matches
denotes how many of the 162 candidate sets contain the bound SNOMED-CT concept. The
average recall and precision are calculated as per the description in section \ref{sec:eva_method}. Figure \ref{plot1}
and Figure \ref{plot2} show the trends of recall and precision with respect to the varying size
limit of the candidate set. As shown in these figures, the average recall value of all queries
starts low when only one candidate is allowed to be returned. The average recall improves when more
candidate concepts are allowed but reaches a plateau after the candidate set contains more
than 40 concepts. The average precision follows the opposite trend: the precision values decline
as the number of candidate concepts increases.
\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=0.6]{../res/plot1}
	\end{center}
	\caption{The average recall trend of all queries based on different number of the
	candidate concepts that are returned by the \emph{tf-idf} weighting function.}
	\label{plot1}
\end{figure}
\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=0.6]{../res/plot2}
	\end{center}
	\caption{The average precision trend of all queries based on different number of the
	candidate concepts that are returned by the \emph{tf-idf} weighting function.}
	\label{plot2}
\end{figure}
Figure \ref{plot1} illustrates that when more candidate concepts are suggested by the framework,
a match with the bound SNOMED-CT concept becomes more likely. However, the rate of recall
improvement drops after a certain point, as increasing the number of candidates yields diminishing
returns. As can be seen in Figure \ref{plot2}, with
more candidate concepts the precision tends to fall, since there is but one bound concept
for each archetype term.
Additionally, another measure can be produced by calculating the harmonic mean of the recall and precision
from the above results, given by:
\begin{equation}\label{fmeasure}
F=2\cdot\frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\end{equation}
	\myequations{Equation number \ref{fmeasure}}
This is traditionally known as the \emph{F-measure} \parencite{BaezaModernIR}; it provides a single
measure that combines the recall and precision of the information retrieval method and assists in
balancing recall and precision in order to obtain optimal retrieval results. Table \ref{tbl_f} lists
the calculated F-measure values for the average recall and precision.
\begin{table}[!htbp] \begin{center} \begin{tabular}{ll|l|l|l|l|}
  \hline
  \multicolumn{1}{|l|}{$N_{top}$} & 1 & 2 & 5 & 10 & 15 \\
  \hline
  \multicolumn{1}{|l|}{F-measure \%}  & 38.27  & 30.864 &  17.6951 &  10.3255 &   7.7156\\   
		\hline
 		& \multicolumn{1}{|l|}{20}     & 30     & 40 & 50 & 100\\
		\cline{2-6}
		& \multicolumn{1}{|l|}{5.9372} & 4.1013 &   3.161 & 2.5412  &  1.4673\\
		\cline{2-6}
\end{tabular} \end{center} \caption{The F-measure of all
		       queries with different values of $N_{top}$ } \label{tbl_f}
 	       \end{table}
While not strictly F-measure values since the recall and precision values are
averaged across all queries, the values in Table \ref{tbl_f} could provide information that can be used for
deducing an optimal parameter value to maximise the retrieval accuracy. The importance and
interpretation of the F-measure will be discussed in section \ref{sec:fmeasure}.
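As a worked example, Equation \ref{fmeasure} can be applied to the averaged values in Table \ref{tbl1_paper1.5} to reproduce entries of Table \ref{tbl_f}:

```python
def f_measure(precision, recall):
    """F = 2 * Precision * Recall / (Precision + Recall): the harmonic
    mean of precision and recall (Equation fmeasure)."""
    return 2 * precision * recall / (precision + recall)

# (recall, precision) pairs for selected N_top values from Table tbl1_paper1.5.
results = {1: (0.3827, 0.3827), 2: (0.46296, 0.23148), 5: (0.53086, 0.10617)}
for n_top, (recall, precision) in results.items():
    print(n_top, round(f_measure(precision, recall) * 100, 4))
# N_top = 1 gives 38.27%, matching the first entry of Table tbl_f.
```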

The second part of the evaluation examines the categories of the dominating candidate concepts and
compares them to those of the bound concepts. The number of dominating candidates belonging to the
same SNOMED-CT category has been recorded for each candidate concept set (represented
by the parameter $M_{maj}$). Figure \ref{plot3} shows the distribution of candidate sets with the number of
concepts belonging to the same category ranging from 1 to 10, that is, $1\leq M_{maj} \leq 10$.
\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=0.6]{../res/plot3}
	\end{center}
	\caption{The distribution of candidate sets / queries with respect to the number of concepts
	that belong to the same SNOMED-CT category}
	\label{plot3}
\end{figure}
The bars in Figure \ref{plot3} refer to the number
of candidate concepts belonging to the same category in a result set.
For instance, 28 queries produced candidate concept sets each of which contains 6
concepts belonging to the same SNOMED-CT category. Concept sets without a dominating group are rare:
very few sets have $M_{maj}=1$, which means that all candidate concepts in the set belong to
different categories. These candidate sets that lack a dominating group are considered less
important because of their low occurrence and the high diversity of categories. The information
about each query and the category of its dominating candidate concepts is available on the download
page of the \emph{EHRland}
project website\footnote{\url{http://www.ehrland.ie/down_demo.html}}. The results of category matching are presented in Table \ref{tbl_cate}.
The fractions show the number of matches from the category comparison; for example, of the candidate
sets that have 4 concepts belonging to a common category, 12 out of 32 have a category match.
\begin{table}[!htbp] \begin{center} \begin{tabular}{|l|l|l|l|l|}
\hline 
$M_{maj}$ & Ratio of candidate   & Decimal & Ratio of \#1 concept's & Decimal\\
  	    & set category matches &         & category matches &\\
\hline
1 & 1/2 & 0.5 & 1/2 & 0.5 \\
2 & 7/9 & 0.778 & 8/9 & 0.889 \\
3 & 5/20 & 0.25 & 9/20 & 0.45 \\
4 & 12/32 & 0.375 & 14/32 & 0.438 \\
5 & 10/22 & 0.455 & 15/22 & 0.682 \\
6 & 16/28 & 0.571 & 17/28 & 0.607 \\
7 & 4/12 & 0.333 & 8/12 & 0.667 \\
8 & 18/23 & 0.783 & 15/23 & 0.652 \\
9 & 4/5 & 0.8 & 3/5 & 0.6 \\
10 & 7/9 & 0.778 & 7/9 & 0.778 \\
\hline
\end{tabular} \end{center} \caption{The result of category matches in two cases: a) 
category of dominating concepts against category of the bound concepts b) category of number one ranked concepts
against category of the bound concepts} \label{tbl_cate}
 	       
\end{table}
The trends of the two different types of category matching are shown together in Figure
\ref{plot4}. The blue line is the result of comparisons between the category of the dominating
candidate concepts and that of the bound concepts; the green line shows the result of comparisons
between the category of the number one ranked candidate concepts and that of the bound concepts.
The blue line appears to be influenced by the increasing number of dominating concepts in a
candidate set: as the number of dominating concepts grows, the ratio generally increases.
\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=0.6]{../res/plot4}
	\end{center}
	\caption{The trend of two different types of category matching. The blue line is the  matching
	between the category of the dominating concepts and bound concept. The green line is the
	matching between the category of the \#1 concepts and the 
	bound concept.}
	\label{plot4}
\end{figure}
The green line, however, shows that the category matching success rate does not change much with
different numbers of dominating candidate concepts. The results where $M_{maj}=1$ and $2$ are not
shown in the figure because the proportion of such candidate sets and the number of dominating
concepts may be too small to account for the matches.




\subsection{Discussion of the evaluation results}
\label{sec:diss-perfeva}
The results presented in section \ref{sec:perf_result} primarily exhibit the behaviour of the Lucene-based terminological
shadow creation algorithm in response to changes in the two parameters modified by the author.
These sensitivity results may lead to many interpretations and discussions; however, to a great
extent the underlying \emph{tf-idf} weighting scheme plays the central role in suggesting and
ranking candidate concepts.

\subsubsection{The recall, precision and F-measure}
\label{sec:fmeasure}
The resulting average recall and precision values, as shown in Table \ref{tbl1_paper1}, represent the behaviour of
the \emph{tf-idf} function with different permitted sizes of the candidate SNOMED-CT concept set.
The trend of the recall value in Figure \ref{plot1} shows that, within the range from 1 to 20
candidate concepts, the average recall improves at a higher rate compared to larger candidate sets.
This result may suggest that for a generic ranking algorithm such as the \emph{tf-idf} weighting
scheme, used without modification for a specific purpose, the best retrieval strategy to match a
single bound concept may require at least 10 candidate concepts. Therefore, for an automatic binding
process to associate archetype terms with SNOMED-CT concepts without human intervention, a candidate
concept set may need to include at least 10 concepts. However, in order to obtain a more accurate
estimate, more quantitative studies are needed to establish cause and effect.

It is arguable that under a one-to-one mapping condition (one archetype term is mapped to one SNOMED-CT
concept) the recall is relatively difficult to improve, since the highest number of relevant
documents is 1; see the recall definition in Equation \ref{eq:recall}. The same cause affects the precision
values, as can be seen in Figure \ref{plot2}. The figure shows that the trend of the precision value
resembles an inverted logarithmic growth, with the highest average precision value achieved at
$N_{top} = 1$. It indicates that increasing the size of the candidate set can
greatly diminish the average precision. The average recall and precision resulting from the
evaluation show the effectiveness of the \emph{tf-idf} function in two different aspects. As common
measures for evaluating information retrieval systems, achieving better recall and precision has been
the general goal of improvement in information retrieval systems. However, a good balance between
recall and precision is desired for a system to perform well in information retrieval tasks. The
\emph{F-measure}, as a combined score of the recall and precision values, is obtained as shown in
Table \ref{tbl_f}. The F-measure score in this experiment attempts to provide material for deducing
and identifying an $N_{top}$ value that is best suited for terminological shadow creation. The
data in the table show that the F-measure values decrease monotonically with
increasing $N_{top}$ values. Many factors in the experiment may be responsible for such a
result. Equation \ref{fmeasure}, which calculates the harmonic mean of recall and
precision, may by its nature cause the decrease in F-measure values. As shown in Figures \ref{plot1} and
\ref{plot2}, precision declines at a greater rate than recall improves. Given that in general the average recall
values are proportionally larger than the average precision, it is foreseeable that the
F-measure values decrease monotonically. It is arguable whether the F-measure should
be used directly to determine the best $N_{top}$, i.e. the size of the candidate set that an
algorithm should return. Nevertheless, it suggests that there could exist a potential
$N_{top}$ value that optimises the terminological shadow creation process. The results, including
average recall, precision and F-measure, support Hypothesis $1$ in section \ref{sec:rationale}. The F-measure in
this chapter has to some extent influenced the design of the experiment in Chapter \ref{ch:apply}.
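Under the one-to-one mapping condition, the per-term recall and precision reduce to simple hit-based quantities, which makes the observed trade-off between recall and precision easy to reproduce. The following sketch illustrates this with purely hypothetical concept identifiers and candidate sets, not data from the experiment:

```python
# Sketch of the average recall, precision and F-measure computation under
# the one-to-one mapping condition (each archetype term is bound to exactly
# one SNOMED-CT concept). Concept ids below are illustrative only.

def evaluate(candidate_sets, bound_concepts, n_top):
    """Average recall, precision and F-measure over all archetype terms.

    candidate_sets : list of ranked candidate concept-id lists
    bound_concepts : list of the single bound concept id per term
    n_top          : permitted size of each candidate set (N_top)
    """
    recalls, precisions = [], []
    for ranked, bound in zip(candidate_sets, bound_concepts):
        top = ranked[:n_top]
        hit = 1.0 if bound in top else 0.0
        recalls.append(hit / 1.0)          # exactly one relevant concept
        precisions.append(hit / len(top))  # retrieved set has |top| concepts
    r = sum(recalls) / len(recalls)
    p = sum(precisions) / len(precisions)
    f = 2 * p * r / (p + r) if (p + r) else 0.0  # harmonic mean (F-measure)
    return r, p, f

# Two hypothetical archetype terms with ranked candidate concepts:
sets = [["C1", "C2", "C3", "C4"], ["C9", "C7", "C8", "C6"]]
bound = ["C2", "C6"]
print(evaluate(sets, bound, n_top=2))  # only the first bound concept is hit
print(evaluate(sets, bound, n_top=4))  # both bound concepts are retrieved
```

Raising $N_{top}$ in the sketch increases the average recall while lowering the average precision, mirroring the opposing trends discussed above.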

\subsubsection{Category matching}
The result of the comparison between the categories of the dominating candidate concepts and the
bound concepts shows an interesting relation between terminological shadows and SNOMED-CT
categories. Figure \ref{plot3} shows that candidate sets with 4, 6 and 8 dominating concepts have the
highest frequencies among the results. The diagram also suggests that a candidate set generated by the
\emph{tf-idf} function will generally contain a group of more than three concepts sharing the same
category. This finding makes Hypothesis 2 of section \ref{sec:rationale} feasible because
most candidate sets are likely to have a dominating group, in contrast to the low frequency of
candidate sets without one, as indicated by Figure \ref{plot3}.
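The notion of a dominating group, i.e. the largest subset of candidate concepts that share one SNOMED-CT category, can be sketched as follows. The category labels used here are illustrative only and do not come from the experimental dataset:

```python
# Sketch of the dominating-category determination for a candidate set.
# The dominating group is the largest set of candidates sharing a
# SNOMED-CT category; M_maj denotes its size.
from collections import Counter

def dominating_category(candidate_categories):
    """Return (category, M_maj) of the largest same-category group."""
    counts = Counter(candidate_categories)
    category, size = counts.most_common(1)[0]
    return category, size

# Categories of a hypothetical 10-concept candidate set:
cats = ["disorder"] * 6 + ["finding"] * 3 + ["procedure"]
dom_cat, m_maj = dominating_category(cats)
print(dom_cat, m_maj)  # the dominating group and its size M_maj
# Category matching then asks whether dom_cat equals the category
# of the concept that was manually bound to the archetype term.
```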

Figure \ref{plot4} illustrates the accuracy of the two types of category matching that have been described in
section \ref{sec:perf_result}. The blue line in Figure \ref{plot4} generally suggests that
the more dominating concepts a candidate set has, the more likely their category is to match
the category of the bound concept in SNOMED-CT.
The detailed dataset of the candidate concept sets for all 162 archetype terms has
been uploaded to the \emph{EHRland} project website for interested readers. The dataset includes the SNOMED-CT category
of the dominating concepts for each set and records whether the categories of the dominating
concepts and of the number one ranked concept match the category of the bound concept.
The ratio, or accuracy, of each category matching in Figure \ref{plot4} should be viewed with
regard to the frequencies provided by Figure \ref{plot3}. For instance, the point where the number of
dominating concepts equals $9$ in Figure \ref{plot4} exhibits a high accuracy of category matches;
however, the corresponding frequency in Figure \ref{plot3} is low. The stable growth of the category
matching accuracy supports Hypothesis 2. Furthermore, this finding has the potential to bring
the \emph{tf-idf} based method to other applications. Chapter \ref{ch:apply} 
takes advantage of these findings and uses the terminological shadow approach to classify unbound archetype terms with
SNOMED-CT categories.


\subsubsection{\emph{tf-idf} score interpretation}
Although a thorough review of the \emph{tf-idf} weighting function is not the intent of this evaluation,
an understanding of how results are ranked can help in analysing the performance of the method. Equation
\ref{eqtfidf} is the simplified formula used in the Lucene implementation to calculate
a score for ranking candidate concepts.
\begin{equation}\label{eqtfidf}
	score=\sum_{t}(tf \cdot idf \cdot FieldFactor)
\end{equation}
	\myequations{Equation number \ref{eqtfidf}}
The score is computed for each query, i.e. the archetype term, by summing the
\emph{tf-idf} value of each word in the query. The equations used to calculate the \emph{tf} and \emph{idf} factors are
variations of the equations mentioned in section \ref{sec:init_method}. The \emph{tf} factor, given by Equation \ref{eq2}, is the square root of the
term frequency, which refers to the number of times a word appears in a document (concept
description). The \emph{idf} factor is calculated by Equation \ref{eq3}.
The \emph{FieldFactor} is a normalisation value which is returned by Lucene to give weight to
shorter descriptions. The normalisation is a method to generate a value based on the number of words
in a document (concept description). The longer the description is, the smaller the returned value
would be.
\begin{equation}\label{eq2}
	tf=termFrequency^\frac{1}{2} 
\end{equation}
	\myequations{Equation number \ref{eq2}}
\begin{equation}\label{eq3}
	idf=1+\log(\frac{numDocs}{docFrequency+1})
\end{equation}
	\myequations{Equation number \ref{eq3}}
For example, the archetype term ``blood'' retrieves a number of candidates, with ``Blood'' as the
number one ranked concept in the candidate set, scoring 5.5406284.
The following snippet shows part of the candidate concepts ranked by Lucene, with the
explanation of their final scores:
\begin{enumerate}
  \item \emph{Blood}
5.5406284 = score, product of:\\
  1.0 = tf(termFreq(Term:blood)=1)\\
    5.5406284 = idf(docFreq=11394, numDocs=1068278)\\
      1.0 = fieldNorm(field=Term, doc=576717)

    \item \emph{Blood group}
3.4628928 = score, product of:\\
  1.0 = tf(termFreq(Term:blood)=1)\\
    5.5406284 = idf(docFreq=11394, numDocs=1068278)\\
      0.625 = fieldNorm(field=Term, doc=60283)

    \item \emph{Blood compatibility}
3.4628928 = score, product of:\\
  1.0 = tf(termFreq(Term:blood)=1)\\
    5.5406284 = idf(docFreq=11394, numDocs=1068278)\\
      0.625 = fieldNorm(field=Term, doc=63393)
    \end{enumerate}
    The major factors that are used for calculating the score for the top concept are:
    \emph{tf}=1.0; \emph{idf}=5.5406284;
\emph{FieldFactor}\footnote{\emph{FieldFactor} is computed by the \emph{fieldNorm} function.}=1.0
with \emph{docFrequency}=11394 and \emph{numDocs}=1068278.
It is easy to deduce that in many cases \emph{tf} will be 1, because a word in a query usually
appears only once in the description of a SNOMED-CT concept. Thus the \emph{tf} factor generally has
little impact on the resulting score.
The inverse document frequency is an important factor in the ranking score.
The \emph{docFrequency} and \emph{numDocs} values denote the number of
documents (concept descriptions) in which the query word occurs and the total number of documents
(concept descriptions) respectively. This
indicates that if a word is `common' across all SNOMED-CT concept descriptions, it
will contribute less to the final score. In other words, if a word appears frequently among
SNOMED-CT descriptions, its \emph{idf} value will be smaller and it will therefore result in a lower
rank in the candidate set. The number of words in a SNOMED-CT concept description
determines the \emph{FieldFactor}, which gives shorter descriptions a larger value. In the example,
the \emph{FieldFactor} of ``Blood'' is 1.0 because the description of the concept contains only one word.
Therefore, as seen in the above example, the final score for ``Blood'' is larger than that for ``Blood group''
even though both achieve the same \emph{idf} value.
In addition, the number of words in the query affects the ranking score, since the score
is the sum of the \emph{tf-idf} values of all words in the query.
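The worked example can be reproduced directly from Equations \ref{eqtfidf} to \ref{eq3}. In the sketch below, the \emph{FieldFactor} values 1.0 and 0.625 are taken from the Lucene explanation output above rather than computed, since Lucene stores its length normalisation in a quantised one-byte encoding:

```python
# Sketch reproducing the simplified Lucene score for the ``blood'' query
# example, using the tf, idf and FieldFactor definitions given above.
import math

def tf(term_frequency):
    # tf = termFrequency^(1/2), the square root of the term frequency
    return math.sqrt(term_frequency)

def idf(doc_frequency, num_docs):
    # idf = 1 + log(numDocs / (docFrequency + 1))
    return 1 + math.log(num_docs / (doc_frequency + 1))

def score(term_freq, doc_freq, num_docs, field_norm):
    # One-word query, so the sum over query terms has a single summand.
    return tf(term_freq) * idf(doc_freq, num_docs) * field_norm

# Values from the Lucene explanation of the ``blood'' query:
print(round(score(1, 11394, 1068278, 1.0), 4))    # "Blood": 5.5406
print(round(score(1, 11394, 1068278, 0.625), 4))  # "Blood group": 3.4629
```

The sketch confirms that ``Blood'' and ``Blood group'' differ only through the \emph{FieldFactor}, as both share the same \emph{tf} and \emph{idf} values.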

As a result of these observations, it is suggested that, since the \emph{idf} factor is
primarily responsible for the ranking, a future study of the distribution of
words in SNOMED-CT descriptions may improve the effectiveness of the \emph{tf-idf} function.



\subsection{Issues and conclusion}
There are some limitations in the evaluation of the algorithm.
Firstly, the assessment may require a better gold standard that contains many more bound
SNOMED-CT concepts than the current version. Although the existing bound concepts were not intended
to be used as a gold standard for evaluation purposes, their number has a great impact on the
evaluation of terminological shadows. As described in section \ref{sec:perf-eva}, the difficulty of obtaining
a substantial number of bound concepts is due to many reasons, such as the architecture of the
archetype-based model and the development cycle of archetypes. Nevertheless, it is believed that the
quantity and quality of bound concepts will grow as archetype development gains popularity.

Secondly, \emph{tf-idf} alone may be insufficient to process archetype terms, which are used as
queries. Many techniques are available in the area of information retrieval, such as query
expansion and word sense disambiguation. However, associating archetype terms with candidate
SNOMED-CT concepts is a highly specialised task that differs from traditional text
engineering, which mainly deals with text documents. The aptness of applying these techniques is, in
the author's opinion, worth investigating. The lack of appropriate query processing techniques could
undermine the effectiveness of the \emph{tf-idf} based method.

Thirdly, the candidate concepts that are returned in the experiment may need filtering. Information
that is useful for filtering the results may include the context of the archetype, for example the
topic of the archetype. However, such a proposed feature may require complex rule-based processing to
make decisions when filtering candidate SNOMED-CT concepts. Other context information, such as the
reference model types within the archetype could also be useful. More details about the relationship
between reference model and terminological shadows are discussed in Chapter \ref{ch:apply}.

Despite these issues that need to be addressed in future development, the evaluation completes a
quantitative study that explores the characteristics of the \emph{tf-idf} function inside the
terminological shadow framework. The results of the evaluation present evidence to support two
hypotheses that have been made: 1) The average recall and precision are influenced by the number of
candidate concepts and a potentially optimal parameter could be obtained; 2) The category of the
dominating concepts in a candidate set is related to the category of the bound concept. These
findings will be used to make decisions in the work described in Chapter \ref{ch:apply}.

%However the value of calculating these figures can be seen in the future when proper data 
%sets are complete in which case an archetype node may be associated with many SNOMED-CT candidates 1:* (based on experts' judgement). 
%Despite the small size of existing bound codes, this study can still reveal the capabilities of whether it retrieves the only relevant code or not.


\section{Summary}
The initial evaluation of the terminological shadow approach provides an overview of the usefulness
of the terminological shadow framework.
The evaluation described in section \ref{sec:perf-eva} demonstrates a sensitivity assessment of the effectiveness
of the \emph{tf-idf} based terminological shadow creation method. The evaluation successfully
identifies the general performance and some useful patterns of the \emph{tf-idf} function 
that can benefit the development of terminological shadows. There are, however, some issues
and weaknesses that should be pointed out in the method and the evaluation process.


%\end{document}
