%This chapter introduces the shadow methodology



%\documentclass[oneside]{dit-phd-thesis}

%\usepackage{listings}
%\usepackage{textcomp}
%\usepackage{longtable}



%\begin{document}
%
% manually assign it chapter 5
%\addtocounter{chapter}{4}




\chapter{The ``Terminological Shadow''}

\label{sec:sno-ehrcontext}
% copied form contribution text, put 'impedance mis-match'
In the area of health informatics, the integration between EHR information models and medical terminology
systems has been a focus of research interest throughout the evolution of electronic health records.
However, as noted in the previous chapter, a number
of semantic interoperability issues are explicitly or implicitly related to differences between
EHR information models and the models and structures that are associated with clinical terminology. 
The mismatches between these ontologies are referred to as \emph{semantic
gaps} \parencite{ehrig2007ontology}. 
The differences often manifest themselves as incompatibilities that arise when incorporating
terminology into EHRs. One issue, for example, is that despite profound clinical
knowledge, a clinical expert may find it difficult to associate appropriate concepts from a clinical terminology with
an existing EHR information model. This is because similar concepts exist in both
systems, yet each system uses a different syntax and structure. One simple solution is to include certain
concepts that are frequently used in EHR information models in a master terminology. In fact,
SNOMED-CT, as a large terminology, has already adopted concepts from EHR models in order to
encode clinical data properly. The SNOMED-CT category \emph{Situation with explicit context} 
has been adopted and used in the SemanticHealthNet project \parencite{schulz2013ontologies}. For example,
an EHR may include the family history of a patient, where data entries capture
information about the patient's family. The same meaning can also be recorded using the \emph{Family history with explicit context
(situation)} concept when such a structure is not available in certain
EHR information models \parencite{cornet2009relationship, schulz2009snomed, spackman2008snomed}. 
While it is generally considered that this incompatibility results from the independent development of
EHR information models and clinical terminologies,
the author believes a generic study of their relationship is required.





Various projects have attempted to address some of these semantic interoperability
issues. For example, the TermInfo project produced guidelines for integrating HL7 and SNOMED-CT.
However, the outcome was HL7-specific. One element of the thesis proposition is to conduct a generic study to investigate
the relationship between EHR models and terminologies, especially in the recently emerged two-level EHR models such
as EN13606 and openEHR. There is general evidence that the development of EHRs and clinical
terminology does not always go hand in hand as expected. The focus of integration is between clinical
archetypes, the EHR modelling artefacts, and SNOMED-CT, a large clinical vocabulary whose adoption
has accelerated worldwide.
The following observations illustrate the disharmony between these two essential parts of the modern e-health
environment, where the number of archetypes grows significantly in contrast to
the slow adoption of terminology bindings \parencite{snomedukadoption, Lopez19012012, wright2011comparative}.





\begin{enumerate}[(a)]
  \item There are several publicly available archetype repositories and the pace of archetype development is
increasing. Archetypes are designed for
general use in EHR information communication, and archetype modelling tends to cover as many clinical
areas as possible.

\item Archetypes do not always contain standard terminological references. Existing terminological references in archetypes are not widely utilised.

\item With the increasing number of archetypes, there is a general lack of detailed guidelines to help
archetype modellers design coherently and meaningfully.
\end{enumerate}

These issues manifest themselves when users wish to find archetypes, navigate them, decide whether to make
new archetypes, or design the internal structure of archetypes.
%


As mentioned in section \ref{sec:2lvlehr} and section \ref{sec:clinicalterm}, building terminology 
	systems is a separate process and therefore not an integral part of EHR standards
	development. As a result of this separation, semantic gaps arise when it is necessary to
	integrate EHR systems with terminology systems. A commonly encountered problem is how to manage the
	similarity between EHR information models that are defined by standards such as openEHR and
	the model behind a terminology system such as SNOMED-CT.
	%In order to address this problem a number of
	%research projects attempted to provide better ways of integrating an
	%Electronic Health Record with terminology systems [ref]. %ref new snomed journal papers published in 2011
	The approach presented in this thesis
	focuses on a methodology that can support a better integration of clinical terminologies 
	with an EHR information model. 
	The concept of the approach advocated in this thesis can be
	summarised as a process that projects data points in the EHR into a terminological space
	composed of clinical concepts. This type of association is hypothesised to facilitate
	better integration between the
	clinical information in the EHR and clinical concepts in
	a clinical terminology. The openEHR and EN13606 EHR standards both allow clinical experts to
	model health information content by authoring clinical archetypes. The clinical terminology used in the thesis is a widely adopted
	clinical terminology called SNOMED-CT. In summary, the approach presented in this thesis aims to establish
	connections between data points in
	archetypes and the most relevant corresponding SNOMED-CT concepts, to form a data
	representation that preserves information from both archetypes and SNOMED-CT. As the effect
	resembles the shadow cast by an object, the representation is termed the ``Terminological
	Shadow''.
	
	This chapter describes the details of the ``Terminological Shadow'' idea and the design of a
	framework that forms the basis of a system to produce shadows from archetypes.
	It also discusses the methodology that evaluates the effectiveness of the shadow approach.
	The design of two
	terminological shadow applications is described towards the end of the chapter to show how the
	proposed approach can be used to facilitate improvements in EHR modelling and clinical
	terminology integration.

	The chapter is organised as follows: section \ref{sec:shadow-overview} 
	presents an overview of the terminological 
	shadow approach and describes the conceptualisation of the terminological shadow approach.
	Section \ref{sec:constr_sh} provides details of the construction method of terminological
	shadows and describes their internal functional components. Next, the author presents a set
	of planned studies that aim to implement and evaluate terminological shadows and to demonstrate
	the applicability of the shadow approach. The construction and evaluation of
	terminological shadows will be described in section \ref{val_sh}. Two example applications of
	the shadow approach will be discussed in section \ref{app_sh}.
	
\section{Overview of the approach}
\label{sec:shadow-overview}
The terminological shadow idea was inspired by various studies that highlighted the issues
of using clinical terminologies and EHR information models together \parencite{markwell2008representing}. In particular, the author
drew inspiration from a paper that aimed to align archetypes with ontological representations
\parencite{bisbal2009arch-align}.
% copied from paper1
As previously described in section \ref{sec:term-binding}, the term
binding syntax in ADL provides archetype modellers with the ability to create a reference from an
arbitrary data point in an archetype to a clinical definition in a terminology.
This can help overcome the difficulty of comprehending clinical information that is
recorded in different ways but carries the same clinical meaning. By using a clinical vocabulary or
terminology system such as SNOMED-CT, standardised codes can be used to 
semantically represent the health information which may later be exchanged with different systems and
even translated to a different language. 

Figure \ref{shadow_abstr} gives an overview of how a terminological shadow can be derived from an
archetype. The archetype node tree represents the structure of data points of a parsed
archetype. Different object shapes indicate that data nodes in an archetype are of different
reference model types. For example, the spheres symbolically represent the data nodes at the top of the archetype
node tree, which belong to a different reference model class compared to the cubes at the bottom.
The arrows denote the process of creating the terminological shadow from
the archetype. Specifically, the shadow is obtained by binding data nodes of the archetype tree to appropriate concepts in
the terminology concept space. The shaded
areas contain the semantically relevant concepts that are associated with the data nodes in archetypes.
The black dots represent the concepts that are considered semantically relevant and should be included in
the shadow, while the white dots represent concepts that are not sufficiently relevant to the intended clinical meaning. In this conceptual diagram
the data node with the clinical meaning of blood pressure is bound to SNOMED-CT code 75367002, which
contains a fully specified name in SNOMED-CT as: ``Blood pressure (observable entity)''. 

\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[width=0.7\textwidth]{../res/shadow_abstr}
	\end{center}
	\caption{Overview of the Terminological Shadow methodology}
	\label{shadow_abstr}
\end{figure}

% check `A'rchetype
A terminological shadow attempts to represent the terminological image of an archetype. It is a structure that represents the
terminological meanings while ignoring the parts of an archetype that record meta-data
and uncodable information. This means that, ideally, a terminological shadow should avoid content in
archetypes that is not clinical in nature, for example the numeric value of a time stamp or parts of
the record structure. However, determining which parts of an archetype should be encoded
by a clinical terminology system such as SNOMED-CT is a difficult and complex problem. Different healthcare organisations and
hospital departments may encode clinical information differently for varied purposes and interests.
The issues of overlapping content and gaps between EHR information models and SNOMED-CT have been
described in section \ref{sec:term-equal}. Due to this overlap and difference between EHR information
models and
clinical terminologies, it is often not clear what information in an EHR should or can be encoded with
SNOMED-CT concepts. 
Therefore guidance is often
needed, even for human experts, to reference the correct SNOMED-CT concepts from appropriate data
points when designing archetypes. 

The terminological shadow approach abstracts the process of mapping data nodes in archetypes to
SNOMED-CT (which could be expanded to other terminologies) to provide a generic framework for
integrating EHR information with terminology systems.
The final format of a shadow contains the terminological
representation and context information of the original archetype nodes. At the centre of the
framework is the binding algorithm that traverses the archetype node tree and associates
appropriate SNOMED-CT concepts to data nodes that are considered worth coding. The association
utilises a searching mechanism that searches for SNOMED-CT concepts by using attributes from 
the nodes in archetypes. The shadow framework aims to allow different search algorithms to be used
to produce terminological binding results. Section \ref{sec:constr_sh} details how a terminological
shadow can be derived from Archetype Definition Language (ADL) files.
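To make the target format concrete, the entry recorded for a single archetype node can be sketched as a small Java structure holding the node's context and its candidate concepts. The class and field names below are illustrative assumptions only, not the actual implementation of the framework.

```java
import java.util.List;

// Illustrative sketch of one terminological shadow entry; names and fields
// are hypothetical, chosen only to mirror the description in the text.
public class ShadowEntry {
    public final String archetypePath;    // path of the data node within the archetype
    public final String rmClass;          // reference model class, e.g. "ELEMENT"
    public final List<String> conceptIds; // candidate SNOMED-CT concept identifiers

    public ShadowEntry(String archetypePath, String rmClass, List<String> conceptIds) {
        this.archetypePath = archetypePath;
        this.rmClass = rmClass;
        this.conceptIds = conceptIds;
    }
}
```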


%The description table in the release is
%used for indexing because it contains the human readable description of a SNOMED concept. Over
%700,000 entries of terms have been indexed and used for the later search. Archetypes from the NHS
%Connecting for Health project are selected and an ADL parser from openEHR java reference
%implementation is used. 

\section{The construction of terminological shadows}
\label{sec:constr_sh}
To make the best use of terminology in combination with archetypes, the shadow framework extracts
the terminological resources from archetypes and constructs a shadow that holds only the medical
concepts that form the archetype and the shape in which they are held in the archetype. The
framework starts with ADL files. It traverses the nodes in the archetype looking for terminologically
relevant information, such as descriptions, node names and term bindings, and stores this
information in the shadow. 

\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[width=\textwidth]{../res/shadow_constr}
	\end{center}
	\caption{The construction process of Terminological Shadows that starts from parsing ADL
	files}
	\label{shadow_constr}
\end{figure}

Figure \ref{shadow_constr} shows the general workflow of the shadow construction process. 
The ADL files are first retrieved and parsed to obtain the archetype node/object trees. 
The framework processes each archetype node and stores context information, such as the reference
model class type and path, before passing it on to
the ``Terminological shadow creator''. The shadow creator then generates the candidate SNOMED-CT
concepts using the searching mechanism and stores the result with the
context information about the archetype node. Inside the shadow creator there are two main components:
the \emph{terminology component} and the \emph{binding component}, 
which will be discussed in the next subsections.



\subsection{The terminology component}
\label{sec:term-comp}
The terminology component is a set of functions that load terminological resources and
build indices to support associating archetypes with terminology. The purpose of this component is to
transform and process the terminology resource, which usually comes in a raw format when released,
into a data structure suitable for later access. For example, SNOMED-CT and LOINC releases are sets of text files
with delimiter-separated values. This component does more than import delimiter-separated files 
into a relational database: its objective is to wrap up terminology
resources and present a unified interface for convenient access. 
As discussed earlier in section \ref{sec:term-service}, a common interface for various terminologies is not a new idea.  
The Common Terminology Service
specification and other terminology service models such as LexGrid are examples of service-level
interfaces, if not full application programming interfaces (APIs), for providing access to terminological
resources.
Although the terminology component in this work is not intended to be a generic terminology service, it
forms part of the framework that makes accessing SNOMED-CT concepts easier. The purpose of the component
is to demonstrate the applicability of the shadow approach by separating out the terminological
resources.
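As a minimal illustration of this transformation step, the sketch below splits one tab-delimited line of a description-style release file into an identifier and a term. The column positions are an assumption for illustration only; the actual SNOMED-CT release formats define their own column layouts.

```java
// Hypothetical parser for one tab-delimited terminology release line;
// the column positions are illustrative, not the real release layout.
public class ReleaseLineParser {
    public static String[] parse(String line) {
        String[] cols = line.split("\t");
        // assume column 0 holds the concept identifier and column 1 the term
        return new String[] { cols[0], cols[1] };
    }
}
```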


\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[width=\textwidth]{../res/term_mod}
	\end{center}
	\caption{A conceptual overview of how terminological resources are loaded by the terminology component
	and prepared for access via creating indices}
	\label{term_mod}
\end{figure}

Figure \ref{term_mod} is a conceptual diagram that illustrates the process of loading terminological resources and building
indices in the terminology component. The terminology release files are first stored
in a relational database. The component then
connects to the database and provides a simple access layer for other components to read the data. 
The ``Terminology Index Builder'' represents the process that produces indices of concepts and codes by
reading the terminology database.
In the diagram the arrows from ``SNOMED-CT''
and ``Other coding systems'' imply the ability to load different coding schemes other than
SNOMED-CT.  
Indices are needed because access to the terminological resource is mainly via queries that search for
relevant concepts in the database. Indices can be built in many ways; for example, the standard
release of one version of SNOMED-CT includes files that are required to build indices for searching.
Many database systems support different ways of indexing the data stored in their tables. In
this thesis, however, a customised index is built with a \emph{tf-idf}-based toolkit called
\emph{Lucene} \parencite{gospodnetic2005lucene}. Further implementation details and an evaluation of the method
will be discussed in Chapter \ref{ch:result}.
% todo -- insert text from hisi paper, expand why use lucene
The indices built by the component can be queried by other components of the
framework to search and 
find concepts in a large terminology such as SNOMED-CT. In the diagram, the indices are being
queried by the ``Terminology query interface'', which is an abstract interface responsible for
querying terminologies.
The component is illustrated as incorporating different coding schemes in order to show 
that the terminological shadow approach is not limited to
SNOMED-CT. LOINC and ICD, for example, which do not have
a sophisticated internal concept model like that of SNOMED-CT, should also be supported by the shadow approach. 
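The ranking principle behind the Lucene indices can be illustrated with a toy tf-idf score: terms that are frequent within one description but rare across the terminology weigh more. The following is a simplified sketch of that principle, not Lucene's actual scoring formula.

```java
import java.util.Collections;
import java.util.List;

// Toy tf-idf score: term frequency within one document multiplied by a
// smoothed inverse document frequency over the corpus. Illustrative only.
public class TfIdfToy {
    public static double score(String term, List<String> doc, List<List<String>> corpus) {
        double tf = Collections.frequency(doc, term) / (double) doc.size();
        long df = corpus.stream().filter(d -> d.contains(term)).count();
        double idf = Math.log(corpus.size() / (1.0 + df));
        return tf * idf;
    }
}
```

A term such as ``pressure'', occurring in a single description, thus outranks a ubiquitous term such as ``blood'' for the same document.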

\subsection{The binding component}
\label{sec:binding-comp}
The design of the binding component emphasises the functionality that uses the description of an
archetype data point to find SNOMED-CT candidate concepts.
The binding component consists of a set of functions that map data nodes in archetypes to appropriate
SNOMED-CT concepts. Figure \ref{binding_mod} describes how the binding component processes archetype
data nodes and creates terminological shadows.

% low quality text from paper1
Once the indices have been built by the terminology component,  the Archetype Definition Language (ADL) 
files are ready to be parsed. As a result, archetype node trees are generated and fed to the binding
component as shown in Figure \ref{binding_mod}. The descriptions of data points or nodes in archetypes are
extracted and used as the search queries. The component queries the ``Terminology query interface'' to perform the 
search that looks up concepts in the terminology indices. 
After obtaining the results from the search, the candidate concepts and their relevant information are
stored, ready for analysis and further processing.
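This query step can be sketched as follows, with a stand-in searcher interface in place of the Lucene-backed implementation; all names here are illustrative assumptions rather than the framework's actual classes.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Stand-in for the framework's Lucene-backed searcher.
interface TermSearcher {
    List<String> search(String query, int maxHits);
}

// Sketch of the binding step: each node description becomes a query and
// the top candidate concepts are kept against the node's path.
public class Binder {
    public static Map<String, List<String>> bind(Map<String, String> descriptionsByPath,
                                                 TermSearcher searcher, int maxHits) {
        Map<String, List<String>> shadow = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : descriptionsByPath.entrySet()) {
            shadow.put(e.getKey(), searcher.search(e.getValue(), maxHits));
        }
        return shadow;
    }
}
```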

There are other optional functions inside the binding component, including text parsing utilities
and rule-based filters. Depicted in the diagram as ``Parse text'', the text parsing utilities can be used to process the descriptions of
archetype nodes, which will be used in the queries. For instance some text utilities might be needed
to handle medical acronyms and symbols. The rule-based
filters are shown in the diagram as a ``Rule database''. They can be executed and applied during the creation of terminological
shadows. For example, rules about filtering the search result may be implemented to customise
the output of the shadow creation. There is another type of filtering that can be applied when
parsing the archetypes, which will be discussed in section \ref{sec:trav-comp}.
The details of the filtering technique are left to the implementation of the searching and
filtering algorithm. The framework provides a set of abstract classes that need to be implemented to
allow customisation. These optional functions are important but not essential to the
terminological shadow approach; each may be substituted by a state-of-the-art technique for
improvement. They are included to show examples of supporting tools for the
shadow methodology. 
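As one example of such a text utility, acronym handling could be as simple as a lookup table applied to a node description before querying; the mapping below is a hypothetical example, not the thesis rule set.

```java
import java.util.Map;

// Hypothetical acronym expander applied to node descriptions before
// querying; the two entries are examples only.
public class AcronymExpander {
    private static final Map<String, String> ACRONYMS =
            Map.of("BP", "blood pressure", "HR", "heart rate");

    public static String expand(String text) {
        String result = text;
        for (Map.Entry<String, String> e : ACRONYMS.entrySet()) {
            result = result.replaceAll("\\b" + e.getKey() + "\\b", e.getValue());
        }
        return result;
    }
}
```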

%Another important procedural step of the Shadow construction process is to reformat the results that
%are returned by the algorithm into a finalised Shadow structure. For the purposes of this work this
%procedure tends to be inclusive to make sure that the generated concepts from terminology systems
%such as SNOMED can cover the exact semantic meaning of this Archetype. However, this is easily
%altered. Steps may involve separating and removing information or meta-data that are clearly
%non-clinical in nature, for example aggregation links and organisation of control components. These
%non-clinical concepts are likely to be removed from the final Shadow although some part of them will
%be covered by coding systems such as SNOMED. The identification of these non-clinical concepts is
%reference model dependent. And the filtering and reformatting layer can work recursively to generate
%the final Shadow. That means that each set of search results from filtering can be inputted for
%further reformatting and after reformatting it can further constrain the next round of filtering and
%so on. The goal here is to self – extract the clinical meanings from Archetypes to form a data set
%of SNOMED concepts that correspond to local terms in Archetypes.
% low quality text

\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[width=\textwidth]{../res/binding_mod}
	\end{center}
	\caption{The process of binding archetype data nodes to
	appropriate SNOMED-CT concepts inside the binding component}
	\label{binding_mod}
\end{figure}

The framework also offers the flexibility to use different search algorithms. As indexing tools are
usually coupled with querying methods, choosing a different search algorithm means the appropriate
indexing process should be completed prior to querying. 
%Other indexing strategies such as creating indices by the database management software or string matching algorithms are compared in section [ref]. 
The default search method in the framework is based on Lucene. The SNOMED-CT database has been indexed by Lucene and queries
are processed by the Lucene toolkit to search for concepts in the indices. 



\subsection{The archetype traverser}
\label{sec:trav-comp}
The archetype traverser is a package in the framework that is designed to parse ADL files and traverse archetype
node trees. The purpose of the package is to create a generic access layer to the object-tree
obtained by parsing an ADL file. This layer also provides the flexibility of
processing archetype nodes by filters based on certain rules.
Parsing the ADL syntax is done using the openEHR ADL Java parser \parencite{chen2007openehr}. The
traverser's main purpose is to traverse all legitimately
defined archetypes and execute tasks based on rules. The archetype
traverser is used in this thesis as a basis for experiments and investigations that
need to generate datasets or process information in archetypes.


\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=0.3]{../res/traverser}
	\end{center}
	\caption{The UML class diagram of the architecture of the archetype traverser}
	\label{traverser}
\end{figure}

Figure \ref{traverser} illustrates the architecture of the archetype traverser package. It follows
the ``Visitor'' design pattern which separates the algorithm from the object structure that it
operates on \parencite{palsberg1998essence}. The ``Visitor'' design pattern is advantageous when an object that requires some
arbitrary processing needs to accept different implementations of that processing. The different
implementations are the ``Visitor'' objects, in contrast to the receiving objects to which
they provide various operations. In the diagram there are four main abstract classes that represent the fundamental components
of the archetype traversing layer: \emph{IFilter}, \emph{IVisitable}, \emph{IVisitor} and
\emph{ITraverser}\footnote{Letter \emph{I} indicates a Java \emph{interface}.}.

An \emph{IFilter} is a class that, once implemented, filters the archetype data nodes. Any class 
that inherits \emph{IFilter} is a specialised filtering method, which finds the archetype nodes
of interest and carries out a certain operation on each node. For instance, in the class diagram a
\emph{CComplexObjectFilter} class only operates on archetype nodes that represent constraint information
on complex objects in the reference model. Similarly, an \emph{ElementFilter} only locates nodes concerning
Element objects and carries out Element-specific operations accordingly. The idea behind this
type of class is to provide a means of reducing the volume of an archetype object tree to produce a
smaller node tree of interest. 

\emph{IVisitable} represents a generic class whose content and structure are
``visitable''. It can accept any visitor object that implements an algorithm to perform some processing on the
visitable object. As a more concrete construct, a \emph{Node} object inherits the \emph{IVisitable}
class to represent an archetype node. It also provides functions for accessing and changing its parent node.

An \emph{IVisitor} represents a generic visitor class that can be accepted by a visitable object such
as a \emph{Node}. This design ensures that different visitors can be created for different tasks.
For a specific task, a visitor class can inherit \emph{IVisitor} and implement its method of
how to process the node. The \emph{NodeVisitor} class in the diagram is a general representation of
many sub-classes of an \emph{IVisitor}. A typical and primary usage of a visitor object, though not its only
usage, is to process information in an archetype node by the terminological
shadow creator and associate it with the most appropriate SNOMED-CT concept(s). Other use cases
include converting archetypes to XML format and comparing archetype nodes to measure semantic
similarity.

An \emph{ITraverser} is an archetype node tree iterator that goes through each node in
the object tree and applies visitor objects to nodes. Its sub-classes inherit and implement a
\emph{traverse} 
function, in the case of \emph{NodeTraverser}, a recursive function that iterates through the tree. The
traverser object is a useful tool that can retrieve all the nodes from an archetype, apply filter
objects to obtain a focused list, and apply visitor objects to the selected nodes. It can also
traverse the sub-tree under any node of the tree by setting the root node to begin
with. The combination of filter objects and visitor objects is the core part of the code used in this thesis to accomplish tasks
of generating data sets and processing archetypes on the fly.
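The interaction of these interfaces can be sketched in a few lines. The sketch below is a simplified, hypothetical rendering of the pattern; the names mirror the class diagram but the bodies are illustrative, not the framework's actual code.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified visitor-pattern sketch of the archetype traverser: a Node
// accepts visitors, and a recursive traverser applies a visitor to every
// node in the tree.
interface IVisitor { void visit(Node node); }

class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node add(Node child) { children.add(child); return this; }
    void accept(IVisitor v) { v.visit(this); }
}

public class NodeTraverser {
    // recursively applies the visitor to the node and its whole sub-tree
    public static void traverse(Node root, IVisitor visitor) {
        root.accept(visitor);
        for (Node child : root.children) {
            traverse(child, visitor);
        }
    }
}
```

A visitor collecting node names, for example, sees the nodes in depth-first order starting from the chosen root.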

The \emph{TermSearcher} is a blueprint class for an object to query and find terms and concepts in
the terminology. It acts as the role of ``Terminology query interface'' in Figure \ref{term_mod}. 
As a concrete implementation, sub-class \emph{SCTSearcher} has the ability to query SNOMED-CT to
find appropriate concepts. The \emph{select(algorithm:String)} function denotes the ability to swap
searching algorithms to achieve better results.

\section{The evaluation of the shadow approach}
\label{val_sh}
The previous section described the system design of the terminological shadow framework that was
developed for creating shadows. However, the same system is also intended to be used for evaluating the
terminological shadow approach.
The validation of the terminological shadow approach consists of an investigation that
evaluates the effectiveness and correctness of the terminological shadow creation process. 
The investigation is a two-stage evaluation. The first stage
compares the output of the terminological shadow creator with a ``gold standard'' that is created
manually by clinical archetype modellers. The second stage evaluates the binding
algorithm used in the terminological shadow creator and quantitatively assesses its
performance to reveal the capability of the binding methods.
This section discusses the strategies and methods used in these investigations and
evaluations. The results of these investigations will be described in Chapter \ref{ch:result}.

\subsection{Initial evaluation plan}
\label{sec:initeva_plan}
It was mentioned in section~\ref{sec:term-comp} that a \emph{tf-idf} based search toolkit named \emph{Lucene}
is adopted in the design of the framework to
automatically generate archetype terminology shadows. The
evaluation uses a ``gold standard'' that consists of SNOMED-CT concepts manually bound
in a set of selected archetypes. 

Figure \ref{paper1} summarises the evaluation strategy. The diagram shows that terminological shadows are
generated by the framework, represented by the block labelled ``Terminological Shadow creator'',
and then compared to the SNOMED-CT concepts that have been manually associated with the archetypes. From
the diagram it can be seen that the initial evaluation extends the shadow construction process
depicted in Figure~\ref{shadow_constr}: it incorporates a comparison of the manually annotated
SNOMED-CT concepts in the archetypes with the automatically suggested concepts from terminological
shadows.

The author will select a number of archetypes with SNOMED-CT manual bindings and create the
terminological shadows. Then the SNOMED-CT concepts from the shadows will be compared against the
manually annotated ones.
The details of the preparation and the results of the initial evaluation of the shadow approach will be
reported in section~\ref{sec:init-eva} of Chapter \ref{ch:result}.

\begin{figure}[!htbp]
	\begin{center}
	  \includegraphics[width=\textwidth]{../res/paper1}
		
	\end{center}
	\caption{Terminological shadow initial evaluation: comparing resulting shadows with manually bound
	SNOMED-CT concepts}
	\label{paper1}
\end{figure}



\subsection{Algorithm performance evaluation plan}
Following the initial evaluation, the shadow framework will be used with a large collection of archetypes to
further evaluate the performance of the binding algorithm in the shadow creation process.
The motivation for the performance evaluation is to use all available manual annotations from existing
archetypes to assess the effectiveness of the tf-idf based algorithm and to identify the general
characteristics of this automatic binding method. 

The strategy of this evaluation is to use a set of commonly used measures in the field of
information retrieval to identify general patterns in the results of the evaluation. For instance
different input parameters of the binding algorithm will be used and the
changes in the resulting terminological shadow will be stored and analysed. 
The detailed experiment design and the results will
be reported in section~\ref{sec:perf-eva} of Chapter \ref{ch:result}.
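For concreteness, the two most common such measures, precision and recall, can be computed per archetype over the suggested and manually bound concept sets. The sketch below assumes set semantics for both inputs.

```java
import java.util.HashSet;
import java.util.Set;

// Standard IR measures over one archetype's concept sets: precision is the
// fraction of suggested concepts that are correct, recall is the fraction
// of gold-standard concepts that were suggested.
public class IrMeasures {
    public static double precision(Set<String> suggested, Set<String> gold) {
        if (suggested.isEmpty()) return 0.0;
        Set<String> hits = new HashSet<>(suggested);
        hits.retainAll(gold);
        return hits.size() / (double) suggested.size();
    }

    public static double recall(Set<String> suggested, Set<String> gold) {
        if (gold.isEmpty()) return 0.0;
        Set<String> hits = new HashSet<>(gold);
        hits.retainAll(suggested);
        return hits.size() / (double) gold.size();
    }
}
```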




\section{The application of terminological shadows}
\label{app_sh}
After the evaluation of the terminological shadows, the author seeks to provide two
applications that utilise the shadow approach to help modellers from both the archetype and terminology
communities. The following subsections outline the applications to be implemented.


\subsection{Coverage discovery by shadows}

 The terminological shadow approach will be used to discover the 
 clinical concept coverage of an archetype repository. In order to achieve this goal, the archetypes
 of the whole NHS repository will be downloaded and loaded into the shadow framework. 

 The strategy for calculating the clinical coverage of an archetype repository is first to generate
 terminological shadows for every unique archetype in the repository. All the SNOMED-CT
 concepts will then be obtained from the terminological shadows. The author will use the SNOMED-CT
 network to determine the base categories of each clinical concept. Statistics of these categories
 will be presented and discussed. The clinical coverage will be calculated as the percentage of clinical
 concepts from archetypes relative to all the concepts in a particular category. 
 The coverage result will also be analysed to identify unique patterns with
 respect to the relationships between the EHR information model and SNOMED-CT. The details of this
 application will be reported in section~\ref{sec:paper2} of Chapter \ref{ch:apply}.
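The coverage figure itself reduces to a simple set calculation per category; a sketch, assuming both the shadow concepts and the category's concept extent are available as identifier sets.

```java
import java.util.Set;

// Coverage of a SNOMED-CT category by an archetype repository: the
// fraction of the category's concepts that appear in any shadow.
public class Coverage {
    public static double of(Set<String> shadowConcepts, Set<String> categoryConcepts) {
        if (categoryConcepts.isEmpty()) return 0.0;
        long hits = categoryConcepts.stream().filter(shadowConcepts::contains).count();
        return hits / (double) categoryConcepts.size();
    }
}
```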




\subsection{Archetype comparison by shadows}

The second example application of the shadow approach is an archetype comparison function that takes
two archetypes and calculates their semantic relatedness by comparing their terminological shadows.
It is believed that this example can bring benefits to the development of semantic tools that improve
the management of archetypes.

The strategy for producing a semantic relatedness score for two archetypes involves calculating the semantic
similarity of their terminological shadows. Graph-based semantic similarity measures
will be introduced in this application to enable the comparison of archetypes.
First, two semantically related archetypes will be selected by the author and their terminological
shadows obtained. The author will then calculate a commonly used measure of semantic similarity
between the SNOMED-CT concepts in the two shadows. A semantic relatedness score will be produced for
every pair of clinical concepts and for the two archetypes. The details of this application are described
in section~\ref{sec:paper3} of Chapter \ref{ch:apply}.
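One common graph-based measure counts edges along is-a links: the shorter the path between two concepts, the more similar they are. The sketch below computes a simple path-based similarity, $1/(1+\text{shortest path length})$, over a toy undirected is-a graph. This particular measure is only an illustrative assumption; the thesis is not committed to it.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

// Toy path-based semantic similarity over an undirected is-a graph:
// sim(a, b) = 1 / (1 + shortest path length). Illustrative only.
public class PathSimilarity {
    private final Map<String, Set<String>> edges = new HashMap<>();

    public void isA(String child, String parent) {
        edges.computeIfAbsent(child, k -> new HashSet<>()).add(parent);
        edges.computeIfAbsent(parent, k -> new HashSet<>()).add(child);
    }

    public double similarity(String a, String b) {
        // breadth-first search for the shortest path from a to b
        Map<String, Integer> dist = new HashMap<>();
        Queue<String> queue = new ArrayDeque<>();
        dist.put(a, 0);
        queue.add(a);
        while (!queue.isEmpty()) {
            String current = queue.poll();
            if (current.equals(b)) return 1.0 / (1 + dist.get(current));
            for (String next : edges.getOrDefault(current, Set.of())) {
                if (!dist.containsKey(next)) {
                    dist.put(next, dist.get(current) + 1);
                    queue.add(next);
                }
            }
        }
        return 0.0; // unreachable concepts
    }
}
```

Concepts sharing a close ancestor thus score higher than concepts whose only connection is a distant common parent.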

\section{Summary}
This chapter has conceptualised a new mediating resource called ``Terminological Shadow'' that links the clinical
content in archetypes to a large external terminology called SNOMED-CT. The mediating resource is
designed to enhance the integration between EHR information models and clinical terminologies. This
chapter also outlines the design of a framework that constructs, evaluates and utilises the
terminological shadows. In the following chapters the author will describe the implementation of
the framework and the details of the investigations that evaluate the terminological shadow approach. 
%\end{document}
