%This chapter talks about the importance of integrating
%EHR with terminology.




%\documentclass[oneside]{dit-phd-thesis}

%\usepackage{listings}


%\begin{document}
%
% manually assign it chapter 3
%\addtocounter{chapter}{2}


\chapter{Literature review: State of the art of integrating  EHRs with terminologies}
%The work of this thesis is motivated by the movement of integrating EHR with terminology, 
%which brings semantic interoperability to e-Health.
%The following sections will expand and analyse the integration problem.



\section{Integration of EHR and clinical terminology} 

Health information models and clinical
terminologies represent two distinct approaches for the modelling and recording of health information.
Although the origin of the two approaches is different and the usage and purposes may differ,
similarity exists in terms of design patterns and the clinical concept coverage of the health
domain. The need for integrating terminology with medical records has existed since the time 
of paper-based medical
records. Figure \ref{paperform2} shows an example of a completed paper form that records
certain patient information. In a modern EHR system, an electronic form presents the same
information for selection, with standard codes assigned in the application. This hard-coded
integration benefits later communication between healthcare providers: unambiguous meaning can be
conveyed without displaying the embedded codes. A significant motivation for utilising health
information models such as Archetypes with terminological references in EHR applications is to
reduce the ambiguity of clinical meaning across EHR systems. For this reason, many EHR standards
functionally support embedded terminological references such as ICD or SNOMED-CT codes. The integration
of EHR and clinical terminology not only ensures high-quality clinical data in e-health but also
provides the foundation that many other clinical applications, such as clinical decision support
systems, rely on.


\begin{figure}[!htbp] 
  \begin{center} 
    \includegraphics[scale=0.5]{../res/paperf2} 
  \end{center}
  \caption{Patient information recorded on paper-based medical record} 
  \label{paperform2}
\end{figure}


\subsection{Archetype terms and terminology binding} 
\label{sec:term-binding}
During the development of an EHR system, it is usually necessary to
build sets of fixed terms or codes into the system. This is due to the
need for sets of terms to be associated with the data points in a record or information model.
For example, when a choice needs to be made, a drop-down list of terms is presented to the clinician
for selection so that the clinical condition can be coded and captured. The use of such term sets is
widely adopted in many clinical systems and EHR systems. The term codes that are built into a system
may come from an external source, such as standard ICD codes, or from a locally defined terminology.

The openEHR specification describes a syntax in ADL called \emph{terminology binding}, which lets
archetype designers define a set of terms to be used in archetypes \parencite{openehr2007adl}.
The openEHR standard also specifies that terminology binding could allow dynamic queries
to retrieve term sets ``on the fly'' from a terminology server. These dynamic queries should be
expressed in a query language that the server can parse before returning the desired terminological
resource. The purpose of such a terminological query language is to adapt to the fast pace of
standard terminology development. The
idea of selecting a group of terms or codes for systematic use has inspired several
approaches. SNOMED-CT, for example, allows smaller ``subsets'' of its large vocabulary to be used
independently. Software from Ocean Informatics\footnote{a company that produces software that
supports openEHR specifications. http://www.oceaninformatics.com}
already allows users to extract a ``subset''
of the whole SNOMED-CT terminology for use as a local terminology resource \parencite{tql2012man}.

%Section [ref] discusses the historical aspect of the terminology binding mechanism and its impact on EHR.

The openEHR definition of a terminology binding in its simplest form is a
set of locally defined terms that can be used in archetypes.
In the ADL language, they appear as ``archetype terms'' within
archetypes. The purpose of archetype terms is to allow users to create concept placeholders for
coded values. Archetypes, if not referring to clinical terminologies, may contain tens if
not hundreds of freely designed data nodes that express clinical meanings. As part of the archetype
specification, the syntax of the ADL language includes a mechanism to allow annotation of clinical
concepts in archetypes by defining local terms. These local terms are specified as ``AT codes'',
where the `AT' stands for `Archetype Term'. A dedicated section is provided in each archetype to
expand the explicit meaning of these terms and occasionally a ``presentation name'' for display on
screen. Archetype terms can also be linked to terms in external terminologies such as SNOMED-CT,
a mechanism known as term binding in the ADL syntax. These bindings to local `AT' coded terms in archetypes can
be used to retrieve a commonly understood medical definition. The example in Listing
\ref{bindingsyntax} shows a snippet
of such ADL syntax, where the locally defined code ``at0021'' is linked to SNOMED-CT code 162465004.

% todo--Figure, Table, Listing, UPPERCASE in text!!!!
\lstset{basicstyle=\scriptsize
}
\begin{lstlisting}[caption={Terminology binding in ADL}, label=bindingsyntax]
	ELEMENT[at0021] occurrences matches {0..1} --Severity
		matches { 
			value matches {
			1|[local::at0044], 	-- trivial 
			2|[local::at0023], 	-- mild 
			5|[local::at0024], 	-- moderate
			8|[local::at0025], 	-- severe 
			9|[local::at0045]  	-- very severe } 
			...	
	term_bindings = <
		["SNOMED-CT"] = <	
			items = <		
			["at0021"] = <[SNOMED-CT::162465004]> 
			...
\end{lstlisting}
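The binding mechanism above can be mimicked outside ADL with a small lookup table. The following Python sketch resolves local AT codes from the listing to their bound SNOMED-CT codes; the dictionary layout and the \texttt{resolve\_binding} helper are illustrative assumptions, not part of any openEHR API.

```python
# A minimal sketch of archetype term bindings, mirroring the ADL snippet
# above. The data layout and helper name are illustrative, not an
# openEHR interface.

# Local archetype terms ("AT codes") with their display names.
archetype_terms = {
    "at0021": "Severity",
    "at0044": "trivial",
    "at0023": "mild",
    "at0024": "moderate",
    "at0025": "severe",
    "at0045": "very severe",
}

# term_bindings: per-terminology map from local AT codes to external codes.
term_bindings = {
    "SNOMED-CT": {
        "at0021": "162465004",
    },
}

def resolve_binding(at_code, terminology="SNOMED-CT"):
    """Return the external code bound to a local AT code, or None."""
    return term_bindings.get(terminology, {}).get(at_code)

print(resolve_binding("at0021"))   # the bound SNOMED-CT code
print(resolve_binding("at0044"))   # no binding defined for this term
```

As in ADL, a binding is optional: local terms without an entry in \texttt{term\_bindings} simply remain local.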




% take out for the moment
\subsection{The Entity-Attribute-Value model and clinical data} 
\label{sec:eav}
The Entity-Attribute-Value (EAV)
model, otherwise called ``object-attribute-value'' \parencite{nadkarni1999eavcr} 
is a design pattern to build flexible and extensible
information systems such as database systems. Compared to traditional relational database design,
the advantage of the EAV modelling approach is that its flexible data structure allows newly added
information to be easily integrated with the existing system. This feature has been explicitly
exploited in building clinical databases. For example, a database that records the summary care of a
patient would have a table storing each symptom that has been described. A traditional relational
design would require  many columns of conditions and each field containing a description. Such
design will result in many \emph{null} in the fields because of the diversity of the patients'
problems. It is also not future-proof since new symptoms may appear in the long run and the table
needs to extend accordingly and periodically. A database system with EAV design typically represents
clinical data by a set of \emph{Entities} that describe subjects of care or events, a set of
\emph{Attributes} that describe the characteristics of symptoms, and \emph{Values} that indicate a
positive/negative value or other allowed content. The following example in Listing \ref{eav-pseudo} 
shows an extract of a database structure that adhere to EAV design principles:
%the following example cite nadkarni 

\lstset{basicstyle=\ttfamily,
stringstyle=\ttfamily
}
\begin{lstlisting}[caption={Pseudo code of information structure in EAV style}, label=eav-pseudo]
(<Patient ID #1>, <Presence of cough>, 'True') 
(<Patient ID #1>, <Type of cough>, 'Code4321') 
(<Patient ID #2>, <Temperature in degrees Fahrenheit>, '102') 
...  
\end{lstlisting}
Each line resembles a row in a table of this database, describing one entry of a medical condition.
The first field, ``Patient ID \#1'' in angle brackets, represents an \emph{Entity} that is stored as
a column of the table. The second field, ``Presence of cough'', is an \emph{Attribute} that can be
specified by a \emph{Value}, which is the third field, ``True''. This example demonstrates the basic
idea of an EAV-flavoured database that stores clinical data. It is worth noting that the content of
each field in such a table will be linked to meta-data tables where the information is encoded. For
example, the value of the attribute ``Type of cough'' in the second entry is an encoded value ``Code4321'',
whose definition can be found in a separate meta-data table, with a description such as ``with
phlegm, yellowish, streaks of blood''.
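The sparse-storage behaviour described above can be sketched in a few lines of Python. The rows follow the pseudo code in Listing \ref{eav-pseudo}; the \texttt{attributes\_of} helper and the meta-data dictionary are illustrative only.

```python
# A minimal EAV store: each fact is an (entity, attribute, value) row,
# so adding a new attribute needs no schema change. Names are
# illustrative, following the pseudo code in the listing above.

rows = [
    ("Patient #1", "Presence of cough", "True"),
    ("Patient #1", "Type of cough", "Code4321"),
    ("Patient #2", "Temperature in degrees Fahrenheit", "102"),
]

# Meta-data table decoding local value codes, as described in the text.
metadata = {"Code4321": "with phlegm, yellowish, streaks of blood"}

def attributes_of(entity):
    """All recorded (attribute, decoded value) pairs for one entity."""
    return [(a, metadata.get(v, v)) for (e, a, v) in rows if e == entity]

# A new attribute is just another row -- no table alteration needed.
rows.append(("Patient #2", "Presence of cough", "False"))

print(attributes_of("Patient #1"))
```

In a relational design the two patients would force a wide, mostly empty table; here each patient simply contributes the rows it needs.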

EAV as a generic modelling approach can be found in many applications. Its advantages for data
models with sparse attributes make it suitable for building clinical applications. 
The similarity between the EHR information modelling and terminology modelling approaches and their
relationships with respect to the EAV design principles will be discussed in section
\ref{sec:term-equal}.



% todo -- change EHR in the title
% 

\subsection{Similarity between EHR and terminology modelling}
%check EN13606 make sure name consistent & consistency of Archetypes
Information models of major EHR standards such as openEHR and HL7 represent the data structures of
clinical information. As discussed in section \ref{sec:archetypes}, the higher-level knowledge representation in openEHR is the
clinical \emph{Archetype}. HL7 version 3 uses the \emph{Clinical Document Architecture} for the same
purpose, and the relationship between HL7 instances and terminology will be discussed in section
\ref{sec:terminfo}. The purpose of a two-level model EHR is to allow clinical experts to design or ``model'' clinical
information. This means health experts are exposed to high-level clinical concepts that can be used
to compose part of the EHR system. For example, a clinician can be involved in building an Apgar score
\footnote{The Apgar score is used to record assessments of the state of a newborn baby} archetype using
the ADL modelling language. The archetype modelling tool allows the user to put together
information to form such a component for a future EHR system.


%todo -- check mindmap definition 

\begin{figure}[!htbp] \begin{center} \includegraphics[width=.7\textwidth]{../res/apgar} \end{center}
  \caption{Archetype of Apgar score} \label{apgar} \end{figure} 

Figure \ref{apgar} is a \emph{mind map}
\parencite{willis2006mm} of an archetype representing the concept of the Apgar score
\parencite{casey2001apgar}, which has been built by the openEHR community
\footnote{Released on the Clinical Knowledge Manager by the openEHR community http://www.openehr.org/knowledge/}. The figure shows the clinical concepts that are encapsulated in this
archetype. It can be seen that this archetype contains information about what should be recorded in an
EHR. In the mind map, as a representation of domain knowledge, this archetype resembles a tree
structure with nodes that hold clinical data. Each node is of a specific data type in the EHR
information model as introduced in section \ref{sec:archetypes}. The construction of such a structure, referred to as the
``modelling'' of an archetype, is the result of collaboration between clinical experts and clinical
information modellers in a community, as mentioned in section \ref{sec:repo}.
The purpose of focusing on this archetype is to exemplify the typical structure of an archetype
that is to be used in an EHR system. The four main sections of the archetype in the example are \emph{Data},
\emph{Events}, \emph{Protocol} and \emph{Description}. The \emph{Data} section is further broken
into several data points such as ``Respiratory effort'', ``Muscle tone'' and ``Skin color''.
Each of these data points will record data that are usually entered by clinicians. A more specific
layer of implementation will deal with the actual user interface, such as a data entry form.
%maybe mention templates? Or move up to ch1?
Each data point will be able to contain coded items; ``coded item'' is used here as a general term, but it
is also a specific data type with regard to the reference
model of a particular EHR standard.
%For example ``Hypotonia'' will be an eligible state of ``Muscle tone'' with the appropriate code
%from a terminology system. Similarly, ``Yellow skin'' can be an encoded item for ``Skin color''.
The coded item will typically be either a local code that can be translated to standard terminologies or a
standard code such as a SNOMED-CT or ICD code. In this example, ``Normal tone'' is an eligible state
of ``Muscle tone'' that is specified by the archetype, and similarly ``Completely blue'' for ``Skin
color''. As they are locally defined terms, as described in section \ref{sec:term-binding}, they are open to bindings to
external standard terminologies.

This structure roughly resembles an object-attribute-value system that records information about an
entity: the ``Apgar score''. Nodes like \emph{Data} and \emph{Protocol} are reference model-specific
concepts that are imposed to form a container for clinical data capture. As mentioned in section
\ref{sec:entrycls},
these concepts are part of the object model of the reference model. These structures are versatile: they
occur
frequently in archetypes and act particularly as containers of more specific clinical information.
Despite these being container concepts, an Entity-Attribute-Value pattern can be observed:
the Entity here is the Apgar score of an infant; the Attributes are items such as ``Muscle tone'' and
``Skin color''; the Values of these attributes are local codes or standard terms that are specified in
the archetype.


When clinicians need to encode clinical data by using an ontological terminology system such as
SNOMED-CT, the structure of the information follows a similar style. For example, if a clinician wishes
to encode the medical condition of \emph{a fracture of the radius and the ulna}, then according to the
SNOMED-CT user guide, the SNOMED-CT expressions in Listing \ref{sno-coding} are all valid and correct:

\lstset{basicstyle=\ttfamily,
stringstyle=\ttfamily
}
\begin{lstlisting}[caption={Different ways of SNOMED-CT coding of the same concept}, label=sno-coding]
 125605004|fracture of bone|: 
  {363698007|finding site| = 23416004 |bone structure
  of ulna|} {363698007|finding site| = 62413002 |bone 
  structure of radius|} 

 64572001|disease|: 
  {116676008|associated morphology| = 72704001|fracture|, 
  363698007|finding site| = 23416004|bone structure of ulna|} 
  {116676008|associated morphology| = 72704001|fracture|,
  363698007|finding site| = 62413002|bone structure of radius|}

 125605004|fracture of bone|: 
  363698007|finding site| = 110535000|radius AND ulna, CS|

 64572001|disease|: 
  363698007|finding site| = 110535000|radius AND ulna, CS| 
  {116676008|associated morphology| = 72704001|fracture|, 
  363698007|finding site| = 272673000|bone structure|}

 12676007|fracture of radius| + 54556006|fracture of ulna|
\end{lstlisting}

Expression 1 consists of one core concept, ``fracture of bone'', which is the main
topic of the medical condition. It is further specified by more detailed information, namely the
locations of the fracture: ``bone structure of radius'' and ``bone structure of ulna''. They are
connected by the attribute ``finding site''. The other expressions are formed with different concepts
but are semantically equivalent to expression 1. Their structure roughly resembles the EAV style:
the entity, the attribute and the value (or modifier in the language of SNOMED-CT). This is partly
because of the design of SNOMED-CT itself; SNOMED-CT has been modelled carefully to allow clinicians
to relate foundational medical concepts with surface-level clinical descriptions in a uniform
manner \parencite{campbell1994logical}. Its ontology is built by a group of clinical experts and there is a
data model associated with it. The concepts that make up a SNOMED-CT expression can be generally
categorised into three types:

\begin{itemize} 
  \item \textbf{Core concepts:} The key concept in an
    expression of SNOMED-CT.  
  \item \textbf{Attributes:} The concepts that link and connect other
    concepts.  
  \item \textbf{Modifiers:}  The refinement of the key concepts.  
\end{itemize}
Core concepts are the main subject of the expression, such as ``fracture of bone'' and ``disease'' in the
above examples. They are then further specified by attributes such as ``finding site'' and
``associated morphology''. After each attribute there is a SNOMED-CT concept that refines
the core concept, such as ``bone structure of ulna''. The flexibility of SNOMED-CT makes it
suitable for coding the same medical conditions under different systems. Although SNOMED-CT
benefits from being flexible, maintaining a coherent coding style can be difficult. This
can cause problems when SNOMED-CT is used with the internal coding schemes of EHR standards such as HL7
\parencite{gunther2006hl7,hammond1997call}.
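As a rough illustration of this three-part structure, the following Python sketch models a post-coordinated expression as a core concept plus refinement groups and serialises it in a form loosely resembling the first expression in Listing \ref{sno-coding}. The \texttt{render} helper and its formatting are simplifications, not official SNOMED-CT compositional grammar tooling.

```python
# A sketch of a post-coordinated expression as (core concept, refinement
# groups), loosely following SNOMED-CT compositional grammar. The
# rendering details are simplified and illustrative.

def render(core, groups):
    """Serialise a core concept plus refinement groups into one string."""
    code, term = core
    parts = [f"{code}|{term}|:"]
    for group in groups:
        attrs = ", ".join(f"{ac}|{at}| = {vc}|{vt}|"
                          for (ac, at), (vc, vt) in group)
        parts.append("{" + attrs + "}")
    return " ".join(parts)

# Concepts and codes taken from the listing above.
fracture = ("125605004", "fracture of bone")
finding_site = ("363698007", "finding site")

expr = render(fracture, [
    [(finding_site, ("23416004", "bone structure of ulna"))],
    [(finding_site, ("62413002", "bone structure of radius"))],
])
print(expr)
```

The core concept plays the role of the entity, each attribute concept the attribute, and each refining concept the value, which is the EAV resemblance noted in the text.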





%maybe the whole lot should move to discussion
However, the archetype object model (AOM) is not meant to be implemented only in
database applications. The openness of the AOM allows clinical system developers to freely design
templates that represent clinical knowledge and to
implement them as clinical requirements dictate. Nor is SNOMED-CT coding strictly EAV-influenced. The point of
mentioning the EAV style here is to highlight the similarity of the information
structure when clinical domain knowledge is represented by archetypes and by SNOMED-CT coding.
In archetypes, generic sections such as \emph{Protocol}, shown in Figure \ref{apgar}, contain
information that constrains the openEHR Reference Information Model.
The purpose of the \emph{Protocol} section
is to specify information about the protocol of a clinical observation. Being
generic means that \emph{Protocol} is used in all \emph{Entry} archetypes to cover a different clinical focus.
The data points that can be included in the \emph{Protocol} section
vary according to the archetype definition.
The pattern of the information structure in the \emph{Protocol} section of the example archetype in Figure
\ref{apgar} appears to be compliant with the EAV style. Listing \ref{eav-apgar} shows pseudo code
of an EAV-style Apgar score record.

\lstset{basicstyle=\ttfamily,
stringstyle=\ttfamily
}
\begin{lstlisting}[caption={Apgar score information structure in EAV style}, label=eav-apgar]
(<Apgar score of baby #3>, <Muscle tone>, 'Normal tone') 
(<Apgar score of baby #3>, <Skin color>, 'Completely blue') 
...  
\end{lstlisting}
Note that values in an EAV model can be objects rather than only
strings as in the example; a value can be of any supported data type. Archetype data points exhibit
a similar feature: instead of a string in the place of a value (in openEHR's case this
could be DV\_TEXT), the value can be of any other data type. In the above simulated EAV record, the
attributes that link the entity with the values are ``Muscle tone'' and ``Skin color''. In EHR information models,
however, attributes are specially created and assigned to fit their own purpose in an EHR. This
creates the issue of consistently using terminology with the EHR. The issue of inconsistency in
representing clinical information by using a clinical terminology and EHR models has been discussed in a number of
papers, such as \textcite{bakken01072000}. Further discussion of clinical information coding in HL7
will be carried out in section \ref{sec:terminfo}.
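The point that a value slot may hold a typed object rather than a plain string can be sketched as follows; \texttt{DvText} and \texttt{DvQuantity} are simplified stand-ins for openEHR data types, not the actual reference-model classes.

```python
# EAV tuples whose value slots hold typed objects instead of plain
# strings. DvText / DvQuantity are simplified stand-ins for openEHR
# data types, not the real reference-model classes.

class DvText:
    def __init__(self, value):
        self.value = value

class DvQuantity:
    def __init__(self, magnitude, units):
        self.magnitude, self.units = magnitude, units

record = [
    ("Apgar score of baby #3", "Muscle tone", DvText("Normal tone")),
    ("Apgar score of baby #3", "Skin color", DvText("Completely blue")),
    ("Apgar score of baby #3", "Total score", DvQuantity(7, "1")),
]

# Any supported data type can sit in the value slot; a reader simply
# dispatches on the type of the value object.
kinds = [type(v).__name__ for (_, _, v) in record]
print(kinds)
```
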

Both methods, archetypes and SNOMED-CT expressions, can be used to record health information and
clinical phenomena. However, they solve different problems and are designed for different usage. The
AOM, as a next-generation EHR information model, caters for the design of an EHR by clinical
experts. SNOMED-CT is an ontology-based medical vocabulary that covers a large number of clinical
concepts. As a medical ontology it is used by communicating clinicians to convey unambiguous
clinical meanings. Despite their different roles in the e-health environment, both span and cover more and
more of the clinical space. The integration of clinical terminology and EHR information models to better
serve the community is becoming an increasingly important topic for the research community.




\subsection{Benefits of integrating clinical terminologies with EHRs} 
% describe vocabulary differ from terminology
The primary purpose of clinical terminology usage in an EHR is to encode clinical information
for comprehension and translation. Clinical terminologies are often used this way to
interface human users and computers \parencite{cimino01032001}. The various EHR standards
specify the structure of the health information that is exchanged.
In many ways they play the role of natural languages in human communication.
The following ingredients are commonly found in any natural language and make for successful and
meaningful communication:
\begin{enumerate}
  \item Syntax
  \item Vocabulary
  \item Knowledge
\end{enumerate}
The syntax defines the structure of the communication, such as rules of grammar about how to
construct a sentence. A vocabulary is the dictionary of commonly agreed words that both
communicating parties will understand. And lastly, knowledge is needed to express meaningful content. With
these elements, different parties should be able to communicate. In an e-Health
environment, the communication protocols are made up of similarly functioning parts. In the case of
communication between EHRs, the EHR standards are the syntax that specifies the rules for composing
messages or the equivalent part of the information being exchanged. Clinical terminologies
are the various vocabularies that can be used by domain experts to express and comprehend the
concepts of different medical domains such as cardiology, drugs, microorganisms and so on. And most
importantly, artefacts such as clinical archetypes and openEHR templates
% add comments why no binding in template
and HL7 CDA templates represent the domain knowledge that contains the semantics of the health information
being exchanged. The importance of knowledge is undeniable. The following sentence
makes no sense in the absence of knowledge, even with correct grammar: ``Sky drinks purple
smoke''. Similarly, domain knowledge needs to be represented correctly in the communication so that
the messages can be comprehended meaningfully by the receiving party. The responsibility falls on the
artefacts that formally represent the definition of medical knowledge, such as clinical
archetypes. For example, a formally defined medical observation named ``Body mass index
(BMI)''
must include the definition of BMI, the method to calculate the BMI score and the data.
In this way, it is ensured that the information being exchanged can be interpreted in a semantically
meaningful way. Unlike natural languages, the bond between the syntax, vocabulary and
knowledge of EHR communication is arguably not very strong \parencite{mead2006data}.
Section \ref{sec:term-equal} will discuss some issues
and difficulties of EHR and terminology integration. The integration is not without challenges. Since
EHR information models and terminology models have different aims and purposes, they naturally
have differences and overlaps in structure.


% todo -- Check if this text should be moved somewhere else

%Archetypes that are developed by healthcare professionals
%define the content of information that are being exchanged in Electronic Health Records. They are
%used to express domain specific constraints on the structure of EHR communications in the two-level
%model approach and can be developed across many medical fields. This diversity of authors means that
%the authoring pattern and modelling style of Archetypes can vary among different repositories,
%hospitals, organisations, regions, and countries. For this reason in the author's view, it is
%unlikely that one archetype set will be dominant in the short term. The sharing of health
%information using a standardised approach also requires harmonisation of clinical information models
%and the reference models of openEHR and EHRcom are the result of harmonisation of generic health
%information concepts. Archetypes provide a means of using this shared foundation to support the
%generation of reusable and sharable clinical information. However no specific guidance is available
%for developing Archetypes. For this reason -- at least in the short term, even for the same model
%and perhaps within the same repository different styles of archetype will develop. And this will
%tend to increase heterogeneity. In order to gain the best practice to share and reuse health
%information, further experience of Archetype modelling needs to be gained. In any case, it is likely
%to be some time before concepts and practices required to develop archetypes mature or stabilise.
%In order to make Archetypes meaningful and easy to search and use, standard terminology can be used
%to facilitate semantic interoperability in
%archetype-enabled EHR systems. However the binding of terminological codes to Archetype nodes is not
%mandatory and the cost of labour to discover the most appropriate code or codes to represent a piece
%of clinical information being recorded is significant.

%%todo -- check 'xxxx-enabled' as one word

The principal benefit of using codes from terminology systems such as SNOMED-CT is that they
are a standard lexical representation which can be interpreted in a
consistent way. A clinical terminology system such as ICD or SNOMED-CT
contains a standardised vocabulary of commonly agreed medical terms. Each term belongs
to a medical concept that represents the domain knowledge. A terminology of this type may
include hundreds of thousands of inter-related concepts that are organised as a network. The most
common relationship between concepts is \emph{is-a}: \emph{A is-a B} means the former is a subclass
of the latter. In a complex
terminology such as SNOMED-CT, relationships and concepts with their properties form a ``semantic
network'' of a domain. For example, the concept \emph{viral pneumonia} is linked with the
parent concept \emph{pneumonia} by \emph{is-a}, and its property \emph{causative agent} has been specified as
\emph{virus}. These concepts form an
ontology \parencite{smith2005rel} in the clinical domain. Once obtained, the concepts from these terminologies
facilitate the execution of semantic tasks such as reasoning and translating, as well as complex
querying and cross-referencing. Depending on the complexity and purpose of the terminology, the size
and type of the ontology may vary.
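A toy version of such a semantic network, with transitive \emph{is-a} subsumption and the \emph{causative agent} property from the example, could look like the following Python sketch; the graph content and the \texttt{subsumed\_by} helper are illustrative, not drawn from any real terminology release.

```python
# A toy semantic network: is-a edges plus one concept property, with a
# transitive subsumption check. Concept names follow the viral
# pneumonia example in the text; the graph itself is illustrative.

is_a = {
    "viral pneumonia": ["pneumonia"],
    "pneumonia": ["lung disease"],
    "lung disease": ["disease"],
}
properties = {"viral pneumonia": {"causative agent": "virus"}}

def subsumed_by(concept, ancestor):
    """True if `concept` is-a `ancestor`, directly or transitively."""
    parents = is_a.get(concept, [])
    return ancestor in parents or any(
        subsumed_by(p, ancestor) for p in parents)

print(subsumed_by("viral pneumonia", "disease"))   # transitive is-a
print(properties["viral pneumonia"]["causative agent"])
```

This kind of transitive closure is what enables the querying and reasoning tasks mentioned above, for instance retrieving every recorded condition that is-a \emph{lung disease}.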


The primary focus of medical terminology is modelling reality, so an existing health phenomenon can
be modelled using terminology, while an archetype focuses on the structure in which a description of
the phenomenon or other data is conveyed as part of an EHR communication. Consequently, Archetypes
and terminology both cover the health information `space', but in different ways. In late 2009, a
cooperative venture was announced between two representative organisations: IHTSDO, which develops
SNOMED-CT, and the openEHR community. The harmonisation of terminology and EHR information models can
benefit both fields to yield high-quality health standards and produce meaningful and shareable
health information.



\subsection{The challenges of integrating terminologies with EHRs}
\label{sec:term-equal}
In his book titled \emph{Clinical Decision Support: the road ahead} \parencite{greenes2011clinical},
Robert A. Greenes stated that
encoded clinical data is the foundation for clinical decision support applications, because
unstructured free text is too ambiguous and difficult for computers to process. However, the
representation of the clinical information may become troublesome when a clinical terminology is used together
with an EHR information model. In some cases it will complicate the clinical data needed to represent a
meaningful and correct clinical statement \parencite{markwell2008representing}.

A clinical terminology like SNOMED-CT is a large medical ontology. A feature like post-coordination
\parencite{richesson2006use, schulz2009snomed}
allows a medical coder to combine SNOMED-CT concepts to produce an expression that describes a
complex medical condition. This is useful in clinical noting, as ambiguous free text is reduced and
medical statements are instead coded as post-coordination expressions \parencite{lee2010coding}.
However, the ability to create compounds of concepts, or composition, has issues associated with
EHR information models. Many EHR information models have concepts that overlap with concepts in
SNOMED-CT. The following example shows some potential conflicts:


\newcommand*{\MyIndent}{\hspace*{0.5cm}}
\begin{table}\small
  \begin{center}
    \begin{tabular}{l  l}
      \textbf{EHR structure} 		& 		\textbf{Terminology-coded structure} \\
      \hline
Subject of care problem = lobar pneumonia  & 	Subject of care finding = lobar pneumonia \{ \\
date\_clinically\_recognised = 2008-07-22  &	\MyIndent finding\_site = structure of upper lobe of lung,\\
body\_site = lung, upper lobe  & \MyIndent laterality = left, \\
laterality = left & \MyIndent causative\_agent = Streptococcus pneumoniae, \\
severity = severe & \MyIndent severity = severe \\
causative\_agent = Streptococcus pneumoniae &  \MyIndent \} \\
date\_of\_onset = 2008-07-15 &  \\
      \hline
	\end{tabular}
	 \end{center}
	 \caption{Clinical statement expressed by EHR and clinical terminology separately}
	 \label{tbl:term-eq}
\end{table}

	

Table \ref{tbl:term-eq}, an example provided by openEHR \parencite{openehr2010termeq},
shows how a similar clinical statement can be expressed by
different structures; one is based on an EHR information model and the other on terminology. One obvious
problem here is the overlap of ``body\_site'' and ``laterality'' in the EHR structure with ``finding\_site'' and
``laterality'' in the terminological structure. When terminological concepts are to be used in the
EHR structure, ``body\_site'' can contain ``Structure of lobe of left lung'' from SNOMED-CT as a
value. However, what if ``laterality'' is set to ``right''? This may cause confusion for people who try
to understand the clinical
information. On the other hand, even if one insists on coding everything with SNOMED-CT expressions,
what about data items like ``date\_of\_onset''? Section \ref{sec:paper2disc} elaborates on this topic further.
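The laterality overlap described above can be illustrated with a toy consistency check: if both the EHR structure and a bound terminology code carry a laterality, they must agree. The field names follow Table \ref{tbl:term-eq}, while the lookup table and the \texttt{laterality\_conflict} helper are entirely hypothetical.

```python
# A toy consistency check for the laterality overlap problem: the EHR
# structure states one laterality while the bound terminology code
# implies another. Field names follow the table above; the lookup and
# helper are hypothetical, not part of any EHR standard.

ehr_entry = {
    "body_site": "lung, upper lobe",
    "laterality": "right",   # entered separately in the EHR structure
}

# Laterality implied by the bound terminology concept (toy lookup).
code_laterality = {"Structure of lobe of left lung": "left"}

def laterality_conflict(entry, coded_site):
    """True when the EHR field and the coded concept disagree."""
    implied = code_laterality.get(coded_site)
    stated = entry.get("laterality")
    return implied is not None and stated is not None and implied != stated

print(laterality_conflict(ehr_entry, "Structure of lobe of left lung"))
```

Such checks can only flag the inconsistency; deciding which representation is authoritative remains a modelling question, as the discussion above shows.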


In the conclusion of chapter 14 of \parencite{greenes2011clinical}, Greenes suggested that for the future of clinical
decision support systems, and particularly in order to share decision support logic, it is recommended
to adopt a common EHR modelling language that has a mechanism to link to terminologies. The ADL
language has the traits to be used as such a modelling language, as do the HL7 RIM and the Web
Ontology Language (OWL) \parencite{antoniou2004web}. It is
recognised that the integration between a clinical terminology and any such modelling language is
essential and requires dedicated study.







\section{Terminology integration mechanism}
As terminology systems grow more complicated, each often includes an
ever-growing vocabulary and complex rules for using codes appropriately.
Different approaches have been proposed and implemented to integrate terminologies with EHRs. In a
broad sense there are two main types of integration: 
\begin{enumerate}
	\item hard-coded approach
	\item post-binding approach
\end{enumerate}

The first approach includes methods that hard-code terminological resources into the healthcare
application. This approach is highly efficient in certain scenarios where the coded concept does not
change often, but it lacks flexibility in more complicated situations. Clinical information encoding is
inherited from paper-based medical records, where a form is filled out by healthcare
professionals to describe particular patient information such as medication, assessment,
demographics and so on. It is widely used in specialised domains such as radiology and laboratory
tests. Figure \ref{paperform1} shows an example of a paper-based form that records patient
information on various assessments. The tick boxes on the form provide physicians with choices for
selection. These choices can then be encoded with standard terminologies for communication.

\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=.5, angle=90]{../res/paperf1}
	\end{center}
	\caption{Paper-based patient assessment form}
	\label{paperform1}
\end{figure}

A graphical user interface (GUI) of a healthcare application usually comprises visual controls
that allow users to perform actions. On certain screens where the healthcare professional is
required to input patient information, the clinical application will present choices for the selection
of medical conditions. Figure \ref{recov_chklist} shows an example of an electronic form in an EHR
application that records information after the patient's surgery. Items on the form can be ticked
for selection, and behind the scenes standard codes are hard-coded and stored in the medical record.
For example, the checklist sections ``Anesthetic'' and ``Airway'' may contain the codes shown in
table \ref{form_codes}. The integration typically happens during the software development
phase. In a modern EHR system, the technical team that builds the software for healthcare
professionals will consider carefully how to integrate the terminological codes with the application
in a flexible way to sustain future changes. This type of integration typically exists in healthcare
applications that require data input and terminology display. The codes are bound before the
generation of clinical data; the approach is therefore regarded as hard-coded. Two-level model EHR
standards such as EHRcom and HL7 provide certain flexibility for the integration, with a separate
layer that manages domain knowledge, such as archetypes and CDA. However, the hard-coded approach still
presents challenges that ongoing research attempts to tackle. Section \ref{sec:terminfo} discusses approaches that
provide guidance for HL7 and SNOMED-CT integration.


\begin{table}\small
	\begin{center}
		\begin{tabular}{l l l}

			\textbf{Item name} & \textbf{Code} & \textbf{Description}\\

			 \hline
			 General Anaesthetic & SNOMED-CT::50697003 & General anesthesia (procedure)\\
			 Epidural & SNOMED-CT::18946005 & Local anaesthetic epidural block (procedure)\\
			 Spinal & SNOMED-CT::231249005 & Local anesthetic intrathecal block (procedure)\\
			 Sedation & SNOMED-CT::72641008 & Administration of sedative (procedure)\\
			 ET & SNOMED-CT::26412008 & Endotracheal tube, device (physical object)\\
			 LMA & SNOMED-CT::257268009 & Laryngeal mask (physical object)\\
			 Oropharyngeal & SNOMED-CT::32667006 & Oropharyngeal airway device (physical object)\\
			 \hline
		 \end{tabular}
	 \end{center}
	 \caption{Codes bound to the form items in the ``Anesthetic'' and ``Airway'' sections}
	 \label{form_codes}
 \end{table}
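The hard-coded binding can be pictured as a simple compile-time lookup table. The sketch below uses the codes listed in table \ref{form_codes}; the function name and the idea of returning a list of stored code strings are illustrative only, since a real EHR system would store far richer coded entries.

```python
# A minimal sketch of the hard-coded approach: checklist item labels are
# bound to SNOMED-CT codes at development time (codes as in the table
# above); ticking an item simply stores the pre-assigned code.

FORM_BINDINGS = {
    "General Anaesthetic": "SNOMED-CT::50697003",
    "Epidural": "SNOMED-CT::18946005",
    "Spinal": "SNOMED-CT::231249005",
    "Sedation": "SNOMED-CT::72641008",
    "ET": "SNOMED-CT::26412008",
    "LMA": "SNOMED-CT::257268009",
    "Oropharyngeal": "SNOMED-CT::32667006",
}

def record_ticked_items(ticked):
    """Translate ticked checklist items into stored coded entries."""
    return [FORM_BINDINGS[item] for item in ticked if item in FORM_BINDINGS]

print(record_ticked_items(["Sedation", "LMA"]))
# ['SNOMED-CT::72641008', 'SNOMED-CT::257268009']
```

The efficiency and the inflexibility of the approach are both visible here: the lookup is trivial at runtime, but any change to the codes requires changing the application itself.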


\begin{figure}[!htbp]
	\begin{center}
		\includegraphics[scale=.4]{../res/recov_chklst}
	\end{center}
	\caption{Electronic form of a recovery checklist after surgery}
	\label{recov_chklist}
\end{figure}
% move to ch2 before integration issues, desc general real world scenario
The post-binding approach includes methods that add terminology after the data entry process,
meaning codes are extracted from existing records, usually in a database or from electronic
documents such as PDFs. Certain healthcare organisations employ specialised
professionals, known as coders, who are responsible for encoding clinical data with appropriate clinical
terminology either manually or semi-automatically.
The manual process usually involves a coder who has the knowledge to encode
clinical information with correct codes from a controlled medical vocabulary. One common scenario for using
coders is to capture the codes necessary for reimbursement. Technology has advanced in
this area to allow semi-automated code extraction to reduce the workload. In particular, natural
language processing (NLP) techniques are used to aid the process \parencite{jagannathan2009assessment}.
The workflow of such a code extraction system
usually takes clinical documents or records as input, obtained from a database or EHR
system, and processes them using various lexical resources. The text in the records is lexically analysed and
parsed to generate small linguistic segments, and complex functions are executed to identify
potential information that can be coded. Section \ref{sec:mapping-nlp} discusses NLP-related technology in detail with
examples.
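The extraction step of such a workflow can be caricatured as a dictionary lookup over free text. The following is a deliberately naive sketch standing in for the lexical analysis and coding stages described above; the two-entry lexicon and its codes are invented for illustration, and real systems use far larger lexical resources and linguistic processing.

```python
# Hedged sketch of a post-binding extraction step: a naive dictionary
# matcher standing in for the lexical analysis / parsing / coding stages
# of a code extraction system. The tiny lexicon is illustrative only.
import re

LEXICON = {
    "blood pressure": "C0005823",   # illustrative code assignments
    "asthma": "C0004096",
}

def extract_codes(text: str):
    """Return (phrase, code) pairs found in free text, longest phrase first."""
    found = []
    lowered = text.lower()
    for phrase in sorted(LEXICON, key=len, reverse=True):
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            found.append((phrase, LEXICON[phrase]))
    return found

print(extract_codes("Patient has a history of asthma; blood pressure normal."))
# [('blood pressure', 'C0005823'), ('asthma', 'C0004096')]
```

Matching longer phrases first is a crude stand-in for the disambiguation work that real NLP pipelines perform.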



\section{Related work of mapping clinical data to standard terminologies}
\label{sec:related-work}
%check terminological codes

A number of research areas are relevant to the work that this thesis
describes. This section discusses the various technologies and related projects relevant to
this work, to give an overview of
state-of-the-art approaches to integrating clinical terminology with clinical data.

The research question of this thesis lies at the intersection of a number of related fields,
such as medical ontologies, EHR information modelling, text mining, information extraction and
information retrieval \parencite{krallinger2008linking}.  Different approaches have been developed to solve a variety of
problems relating to integrating clinical data with standard terminology and mapping unstructured
data to terminological codes.  The diversity of the research domains produces solutions with
different focuses.  This chapter discusses the related work, which can be summarised under three
main categories:
\begin{itemize}
  \item \textbf{Mapping data to codes:} 
    \begin{itemize}
      \item Technologies that transform unstructured clinical data to annotated (coded) data
    \end{itemize}
  \item \textbf{Terminology services:}
    \begin{itemize}	
      \item Interface that provides access to terminological resources
    \end{itemize}
  \item \textbf{Other integration projects:}
    \begin{itemize}
      \item Research projects that aim to improve the integration of terminologies with clinical
	data
    \end{itemize}
\end{itemize}


\subsection{Mapping data to codes}
It has been acknowledged in the literature \parencite{stanfill2010systematic} that a reduction of
unstructured information and free text will result in improved data quality of health
information.  The use of free text, and the ambiguity associated with it, is common in the health industry. A
simple strategy to minimise this ambiguity involves annotating or encoding the unstructured
information with
standard clinical terminologies. The encoding process can happen before or after the data entry
completed by clinicians: where possible, a clinician or coder inputs the standard
terminological codes at data-entry time; otherwise, existing clinical data are
gathered according to requirements and encoded after data entry. A large part of
the work must be completed by human experts; hence mapping technologies have been developed to
reduce the tedious, repetitive work for domain experts.  A number of existing tools can be used
to perform the mapping from source clinical data to the target terminology.



\subsubsection{Biomedical concept recognition}
The need for medical concept recognition technology arises from problems in dealing with medical
text. The primary use of medical concept recognition was to efficiently transform the vast
body of biomedical literature into data that can be used for research and other purposes \parencite{meystre2008extracting}.
Medical Subject Headings (MeSH),
for example, is a controlled vocabulary designed for searching research articles and
books. Once the medical literature has been annotated with medical concepts from a clinical
terminology such as MeSH, users can query and find relevant information more efficiently.  Many
biology databases require curators to keep the database up to date with new
publications by manually annotating the literature with concepts from an ontology such as the \emph{Gene
Ontology}\footnote{http://www.geneontology.org/} (GO) so that
researchers can find the information of interest.

In a medical or biological ontology, synonymous terms of a particular concept such as a gene,
a disease, or a medical condition, are linked to a unique code that is used as a reference term. 
Identifying and annotating these medical concepts or gene names requires experts to read and understand the text in
the literature and manually associate the free text document with a set of appropriate codes from an
ontology such as GO. This manual process is laborious and inefficient given the exponential
growth of biological literature in recent years. Bioinformaticians attempt to automate
the manual annotation process by developing text mining systems to recognise gene names (concepts
from GO) in the free text of the literature, in an effort to reduce the laborious workload. This work
has been partly inspired by these text mining systems and attempts to apply similar approaches to
the integration of EHR information models with clinical terminologies.

This requirement has quickly grown and spread from biology to other fields of the medical
industry. Similar technology is needed for reimbursement in hospitals, to calculate
costs based on diagnoses and procedures. Many types of medical documents are subject to encoding,
the process of mapping the free text content of various medical documents to standardised codes. A large number of
tools from industry and research have emerged to solve the encoding problem.


Also commonly known as ``Automatic Coding'' or ``Computer-Assisted Coding'', \emph{Biomedical Concept
Recognition} applications are software capable of mapping free text in medical documents to
concepts from a clinical terminology. Text mining techniques and Natural Language
Processing (NLP) technology are commonly used to process medical text and attempt to find phrases
that can be coded with clinical concepts from a designated terminology. Typical example applications
include \parencite{chapman2001simple} and \parencite{savova2010ctake}.
The following subsections will discuss a number of text mining techniques and examples of software
tools to map clinical data, typically free text, to clinical terminologies. 





\subsubsection{Natural Language Processing (NLP) in the medical domain}
\label{sec:mapping-nlp}
NLP techniques such as medical concept recognition for free text medical documents are relevant to
the context of this work. With a long history of dedicated research, the methods of
processing text and mining semantic meanings from text have evolved and matured. Included in the
research area and as a subset of the text processing problems, medical text processing inherits and
utilises many similar techniques that have been developed for NLP.  WordNet \parencite{miller1995wordnet} 
has been developed as an ontology for natural language processing. 
Similarly, medical ontologies such as the Gene
Ontology, UMLS \parencite{mccray1995representation} have been 
developed to solve semantic problems in the medical domain.  
UMLS is an
upper-level terminology which contains many member terminologies, such as SNOMED-CT, ICD and other coding
schemes. A single UMLS concept can be seen as a placeholder for several concepts from different
terminologies that express the same clinical meaning. Another
characteristic of UMLS is its unified thesaurus, which includes multiple medical vocabularies and
terminologies.
Most text processing techniques that have been developed for the purpose of NLP are also applicable
to medical text processing \parencite{cohen2005survey}. However, as specialised information,
text written in the medical domain has different properties from text appearing in other domains.
Documents in the medical
domain include research papers and other literature, documents that are used in the healthcare
system, patient health records and so on. Biomedical text mining in particular has emerged as a
growing research field. The research interest is associated with the growth of biomedical and
molecular biology publications in databases such as PubMed \parencite{krallinger2005text,feldman2007text}.
Researchers with
different research focuses often have trouble finding publications that are closely related to
their area of interest. Text mining helps to solve this problem by parsing the text and
extracting high-quality structured data to benefit information retrieval of biomedical documents.
For instance, a main application of text mining is to identify biological entities such as protein
and gene names, for ease of access to publications by researchers.

A list of common NLP tasks is shown in table \ref{nlptasks}. These tasks are typically carried out
in text analysis, which parses the text and attempts to extract information that can be made use of.
For example, topics can be extracted from research literature for information retrieval purposes by a
combination of text mining techniques. Each of the tasks specified in table \ref{nlptasks} covers a
technique that is commonly performed in text mining software. \emph{Tokenisation} deals with free
text and breaks it into smaller language units such as words, phrases and symbols. A formal
language model is usually found in text mining software, for example a model that represents
sentences, words, and relationships between entities in a natural language. \emph{Part-of-speech
tagging} is the process of giving words labels that describe grammatical properties such as ``finite
verb''. \emph{Sentence segmentation} identifies where sentences begin and end. This is necessary because a
punctuation mark such as a period does not always denote the end of a sentence; it can also mark an
ellipsis, a decimal point and so on.  \emph{Named entity extraction} is an information extraction
technique that identifies pre-defined categories such as geographical locations, names of persons,
time and so on. \emph{Coreference resolution} is used to identify an entity that has multiple
expressions in different places in a sentence or paragraph. \emph{Sentiment analysis} is used in
review systems to determine a positive or negative point of view. \emph{Word sense disambiguation}
determines the intended meaning of an ambiguous word or expression in a context.
\emph{Spelling suggestion} is the technique for dealing with misspelled words.


Many NLP tools and frameworks have become available to solve these
tasks. Software suites such as the General
Architecture for Text Engineering\footnote{http://gate.ac.uk/} (GATE), the Unstructured Information Management
Architecture\footnote{http://uima.apache.org/} (UIMA) and Apache
OpenNLP\footnote{http://opennlp.apache.org/} are widely adopted frameworks
for solving a number of problems in text mining.
\begin{table}\small
	\begin{center}
		\begin{tabular}{ l  l }
			 \textbf{Task} & \textbf{Description}\\
			 \hline
				Tokenisation & Break text into words, phrases and other lexical elements\\
				Part-of-speech (POS) tagging & Label words with syntactic property \\
				Sentence segmentation & Determine sentence boundary\\
				Named entity extraction & Identify mentioned entities such as person, 
				location in the text\\
				Coreference resolution & Determine different expressions that refer to 
				the same entity \\
				 & in a sentence\\
				Sentiment analysis & Classify opinions as ``positive'' or ``negative''\\
				Word sense disambiguation & Determine the intended meaning in a given context\\
				Spelling suggestion & Suggest corrections to spelling mistakes\\
			 \hline
		 \end{tabular}
	 \end{center}
	 \caption{Common tasks of NLP}
	 \label{nlptasks}
 \end{table}
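Two of the tasks in table \ref{nlptasks} can be sketched with minimal code. The fragment below is an illustrative toy, not a production tokeniser: it shows tokenisation and a sentence segmenter that avoids splitting on periods acting as decimal points, which is exactly the ambiguity noted above.

```python
# Minimal sketches of two NLP tasks from the table: tokenisation and
# sentence segmentation. The segmenter refuses to split on periods that
# sit between digits (decimal points), illustrating why segmentation is
# non-trivial. Toy code for illustration only.
import re

def tokenise(text: str):
    """Break text into words, decimal numbers and punctuation symbols."""
    return re.findall(r"\d+\.\d+|\w+|[^\w\s]", text)

def split_sentences(text: str):
    """Split on sentence-final periods, not on decimal points."""
    parts = re.split(r"(?<!\d)\.(?!\d)\s*", text)
    return [p for p in parts if p]

text = "Temperature was 37.5 today. Patient stable."
print(tokenise("Temperature was 37.5"))   # ['Temperature', 'was', '37.5']
print(split_sentences(text))              # two sentences, 37.5 kept intact
```

Real segmenters handle abbreviations, ellipses and quotation marks as well, which is why dedicated frameworks such as GATE and OpenNLP are preferred in practice.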

\subsubsection{MetaMap}
\emph{MetaMap} is a highly configurable, versatile medical concept recognition program developed
by the National Library of Medicine (NLM). It is able to map arbitrary free text in the
biomedical field to concepts that are defined in a large medical thesaurus named the \emph{Unified
Medical Language System} (UMLS)
Metathesaurus \parencite{aronson2001metamap}.  The UMLS Metathesaurus is a superset of controlled vocabularies which
contains the vast majority of biomedical terminologies.
Like many other medical concept recognition tools, the technology
at the centre of MetaMap is largely natural language processing. Although it can be used as
a standalone application, developers who want to build text mining platforms often use a local
version of MetaMap as a sub-system to perform NLP tasks\footnote{http://mmtx.nlm.nih.gov/MMTx/}.


Over years of development, many new functions have been added to the program; for instance, one of the
new features is an NLP technique named \emph{Word
Sense Disambiguation} (WSD) \parencite{aronson2010metamaprev}, which aims to map text to concepts with the correct
intention.  By design, MetaMap is tightly coupled with the UMLS
\parencite{Bhatia2009} when it is used to map free text to biomedical concepts.
The program itself is bundled with a pre-computed database of UMLS concepts. Many major
coding systems are included in the UMLS Metathesaurus; therefore MetaMap is able to encode a wide
range of text and produce mappings to standard terminologies such as ICD-10 and LOINC.

Its
algorithm is intended to work with UMLS concepts, so MetaMap makes it easy to map free text to
UMLS concepts.  Other terminologies that have not been incorporated into the UMLS (such as locally
defined ones) are not well supported by the tool.  The focus of such tools is to provide a generic
means of mapping unstructured clinical data, often free text, to standard terminologies such as LOINC
and SNOMED-CT.

\begin{figure}[!htbp] 
  \begin{center} 
    \includegraphics[scale=0.4]{../res/metamap} 
  \end{center}
  \caption{The output of MetaMap in a command-line environment}
  \label{mmout}
\end{figure}


Figure \ref{mmout} is a screenshot of the output of the MetaMap program in a command-line environment in the normal processing mode.
MetaMap can process text in a number of different modes with many options
and flags. The example in the screenshot demonstrates the mapping process for the input phrase ``blood pressure''. The MetaMap program
attempts to associate appropriate medical concepts with the input text. The tagged concepts come
from the UMLS Metathesaurus. MetaMap produces
a result set that contains suggested candidate concepts such as \emph{Blood
Pressure [Organism Function]}. The number in front of each suggested concept is a score that denotes the relevance of the candidate.
When processing a sentence or a
paragraph, the program will parse and process every word to generate candidate concepts.
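The idea of a relevance score for each candidate can be illustrated with a toy. The following is emphatically not MetaMap's real evaluation function, which combines several weighted linguistic factors \parencite{aronson2001metamap}; it is only a word-overlap caricature, with an invented candidate list, showing how a score on a 0--1000 scale can rank candidates.

```python
# Toy illustration of candidate scoring, NOT MetaMap's real algorithm
# (which combines centrality, variation, coverage and cohesiveness):
# here a candidate concept is scored purely by how much of the input
# phrase its name covers. Candidate names are invented for illustration.

def score(candidate: str, phrase: str) -> int:
    """Score 0-1000 by word overlap between candidate name and input."""
    cand = set(candidate.lower().split())
    words = set(phrase.lower().split())
    return round(1000 * len(cand & words) / len(words)) if words else 0

candidates = ["Blood Pressure", "Arterial pressure", "Blood"]
ranked = sorted(candidates, key=lambda c: score(c, "blood pressure"),
                reverse=True)
print([(c, score(c, "blood pressure")) for c in ranked])
# [('Blood Pressure', 1000), ('Arterial pressure', 500), ('Blood', 500)]
```

Even this toy shows why an exact cover of the input phrase ranks highest, mirroring the behaviour visible in figure \ref{mmout}.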

%However it is worth mentioning that there is subtle difference between MetaMap and some other commercial automatic coding software. The 

The MetaMap algorithm consists of several stages that include parsing input text, generating variants,
finding concepts and evaluating the candidate concepts \parencite{aronson2001metamap}. The program
uses NLM's SPECIALIST lexicon and NLP tool set to process the medical text and create the mapping. 
The details of the computational steps of the mapping process have been described in
\textcite{aronson2010metamaprev}.

The performance of MetaMap has been evaluated in numerous publications. A study comparing MetaMap
with human coders was conducted in \textcite{pratt2003study}, and the performance of
MetaMap on extracting medical problems from electronic documents was evaluated in \parencite{meystre2006natural}.
The MetaMap program has also been compared to another popular concept recogniser called
Mgrep, created by the University of Michigan \parencite{aronson2010metamaprev}.

MetaMap, as a reputable biomedical text mapping program, represents a class of biomedical text
processing tools that map clinical information to standard terminologies. Built upon natural
language processing technologies, these tools focus on the biomedical field. They are therefore
more domain-specific compared to generic text mining and NLP frameworks such as the previously
mentioned GATE and UIMA. Developers who want to utilise MetaMap to build medical text mining
applications could potentially combine standalone NLP tools with MetaMap to achieve more
complicated tasks.

However, the research problem that this thesis attempts to answer cannot be directly and completely
addressed by these tools. The main target of biomedical concept recognition is free text in the
medical and biology fields. The objective of this work is to improve the linkage between EHR
information models and the models of sophisticated clinical terminologies.
New approaches need to be developed to facilitate better integration of clinical information in
the EHR with clinical terminologies.


\subsection{Terminology service}
\label{sec:term-service}
Modern controlled vocabularies and clinical terminologies are often updated over time. For
instance, releases of standardised codes may change when certain terms are no longer
appropriate or accurate. Access to up-to-date terminological resources is
often managed by dedicated terminology services.
In order to manage a large number of codes and support different versions of many terminologies, a
few projects have been created to implement services that provide access to terminological
resources.

LexGrid is a project, later implemented as a service, created by the Mayo Clinic to
provide access to many biomedical terminologies \parencite{pathak2009lexgrid}. A specification of the information model of
the terminology service was also published. The LexGrid model acts as a unified
representation of all included terminologies and enables users to
query and navigate different terminological resources of incompatible representational formats.
The large number of biomedical ontologies in BioPortal (\url{http://bioportal.bioontology.org/}),
covering anatomy, disease, imaging,
chemistry and other medical areas, are stored using the LexGrid model \parencite{noy2009bioportal}.


There are other projects, such as OpenGALEN and the Lexicon Query Services, each of which created a query
language for accessing multiple terminologies: the GRAIL language and the Interface Description
Language respectively \parencite{rogers2000galen}. The UMLS Knowledge
Source Server (\url{https://uts.nlm.nih.gov/home.html}) is a free terminology service for accessing
the UMLS Metathesaurus \parencite{mccray1996umls}.
Commercial terminology servers such as the Apelon Distributed
Terminology System are also available for distributing terminological resources \parencite{apelon}.

A common trait of terminology services is a shared information model for representing multiple
terminologies, so that many biomedical ontologies can be integrated to provide a unified
representation.  Typical features include the hierarchical structure and the ``IS-A''
relationship, which are present in many terminologies including ICD and SNOMED-CT. As biomedical
ontologies continue to be developed, the need to map ontologies and use them
together with clinical data, such as electronic healthcare records, is likely to increase.
It is worth investigating how these features can be used to improve the
integration between EHR information models and terminology models.
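The ``IS-A'' relationship mentioned above can be sketched concretely. The tiny hierarchy below is invented for illustration (real terminologies use coded concepts and allow multiple parents); the point is only that a subsumption test reduces to walking parent links.

```python
# Sketch of the "IS-A" relationship common to terminology service
# models: a tiny invented single-parent hierarchy and a subsumption
# test that walks the parent links. Names are illustrative, not codes.

IS_A = {  # child -> parent
    "Bacterial pneumonia": "Pneumonia",
    "Pneumonia": "Lung disease",
    "Lung disease": "Disease",
}

def is_subsumed_by(concept: str, ancestor: str) -> bool:
    """True if `ancestor` is reachable from `concept` via IS-A links."""
    while concept in IS_A:
        concept = IS_A[concept]
        if concept == ancestor:
            return True
    return False

print(is_subsumed_by("Bacterial pneumonia", "Disease"))  # True
print(is_subsumed_by("Disease", "Pneumonia"))            # False
```

A unified model exposing this operation across terminologies is precisely what makes queries such as ``find all records coded with any kind of pneumonia'' possible.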


\subsection{Other tools and projects}

\subsubsection{RELMA} 
\emph{RELMA} is a helper application released by
the Regenstrief Institute, Inc.\ for mapping local tests (and other observation codes) to the Logical
Observation Identifiers Names and Codes (LOINC) database on a one-at-a-time basis \parencite{relma}. This
manual process is aided by an additional component called the \emph{Intelligent Mapper}, which
performs the mapping automatically and has been algorithmically improved \parencite{relmaauto}.


\subsubsection{Automatic terminological concept recognition from free text} 
\textcite{autoSno} focused on processing chunks of medical text to identify SNOMED-CT concepts by
combining two search techniques: regular expression matching and a search engine called easyIR. The
main source for concept recognition in Ruch's work is the free text produced in medical
institutions, such as clinical notes or discharge letters.  Friedman et al.\ adapted an existing NLP
system called \emph{MedLEE} to automate the process of encoding clinical documents with the UMLS
thesaurus \parencite{medlee}. Randomly selected sentences from clinical documents were used as input to
the MedLEE system to produce codes.  The evaluation of the method involved a comparison between the
results of the automated approach and the mappings created by seven experts who performed the mapping
manually.  The recall and precision of both the automatic method and the manual mapping by experts were
reported to show the differences.

\subsubsection{MoST}
\label{sec:most}

A semi-automatic system called MoST, which searches for SNOMED-CT codes to bind to
archetypes, was developed at the University of Manchester \parencite{qamar2006}.
The aim of the work was to utilise various
technologies, such as filtering of the mapping results, to assist the manual mapping process.
The MoST system gathers the text from archetypes, performs a search on a number
of medical text databases, uses several filters to shortlist the results based on rules, and finally
presents the matches in a graphical user interface for manual selection.
A prototype of the system was integrated into an archetype editor to enable SNOMED-CT code binding
\parencite{sundvall2008integration}.
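The gather--search--filter--present pipeline of a MoST-like system can be outlined in a few lines. The candidate store, the filtering rule and all names below are hypothetical stand-ins invented for this sketch; they are not taken from the MoST implementation.

```python
# Hypothetical sketch of a MoST-like pipeline: gather terms from an
# archetype, search a candidate store, filter by a rule, and shortlist
# for manual selection. The candidate "database" and the filtering rule
# (keep only observable entities) are invented for illustration.

CANDIDATES = {
    "blood pressure": ["Blood pressure (observable entity)",
                       "Blood pressure taking (procedure)"],
}

def shortlist(archetype_terms, want_kind="(observable entity)"):
    results = {}
    for term in archetype_terms:                          # 1. gather
        hits = CANDIDATES.get(term.lower(), [])           # 2. search
        kept = [h for h in hits if h.endswith(want_kind)]  # 3. filter
        if kept:
            results[term] = kept                          # 4. present
    return results

print(shortlist(["Blood pressure"]))
# {'Blood pressure': ['Blood pressure (observable entity)']}
```

The value of such a pipeline lies in the filters: without them, the user is shown every lexical match, which is exactly the ranking problem criticised below.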

However, criticism of existing terminology mapping or binding tools focuses on poor search options and
inadequate ranking of relevant results \parencite{chiang2006reliability}. Although the MoST system addressed some
of these problems, the demand for an optimal way to find the most appropriate code for the intended
medical meaning remains high. The problem of finding a perfect SNOMED-CT match for a
search query is not straightforward \parencite{rector1999why}. Automation of this process is even
more difficult and carries little guarantee of accuracy.  In order to develop better search algorithms,
a test suite for creating and evaluating terminological bindings is needed to facilitate
research in the area of integrating terminologies with clinical information. The author attempts to
approach this problem with a generic solution, which will be discussed in Chapter 4.

\subsubsection{Mapping observation archetypes to SNOMED-CT}
\label{sec:mapping-spain}

A research group at the University of Santiago de Compostela, Spain has
conducted studies using lexical techniques to map a selected number of
observation archetypes to SNOMED-CT concepts \parencite{meizoso2012semantic}. The primary focus of the project is to use external
terminological tools combined with context-based methods to improve the performance of mapping
archetypes to SNOMED-CT. It has objectives similar to this work in that its research interest is
also the mapping of archetypes to terminologies. However, its methods have focused on using
specific lexical techniques to improve a particular mapping problem. In a way, this group has
produced a platform similar to the MoST system mentioned above. The author of this thesis believes
that a more general study and framework should be created to facilitate the integration between
EHR information models and clinical terminologies.


\subsubsection{TermInfo}
\label{sec:terminfo}
The HL7 TermInfo project aims to provide
guidance on how to use terminologies properly with
the HL7 information model \parencite{terminfowebsite}.  The project specifically focused on the use of SNOMED-CT with the HL7 version
3 Clinical Statement model \parencite{terminfo2012book}. One of the issues that the TermInfo project tries to tackle is that the
same clinical meaning can be represented in different ways with HL7 and SNOMED-CT. For example,
``family history of asthma'' could be expressed by using the HL7 ``family history'' checklist with the
SNOMED-CT code for ``asthma'', or by an HL7 record entry with the single SNOMED-CT concept ``family history of
asthma''. The project found many overlaps between SNOMED-CT and the HL7 information model; both can express
clinical statements such as procedures, findings and observations.
The goal of the project was to produce recommendations for using SNOMED-CT's coding capability
with the HL7 information model properly.

The TermInfo project is relevant to this work in that both studies try to improve the integration of
EHR information models with clinical terminology. The TermInfo project created a set of useful
guidelines for encoding clinical information in a meaningful way \parencite{terminfowiki}. However, it cannot easily be extended to other
terminologies, since it consists of rules that human experts must follow. The
recommendations are empirical and useful in practice. A more general and systematic framework is needed to
study the overlaps and differences between EHR information models and terminology models.

Other, more specific literature reviews will be presented later in Chapter \ref{ch:apply}.
This arrangement enhances the flow of the document, because those reviews are specific to the
particular applications of the approach presented in this thesis.


\section{Summary}
%copy from paper1 todo -- treat text
It has been mentioned that EHR information models and terminology models have been developed in parallel.
The two communities have not yet converged on their modelling approaches.
The overlaps and differences between EHR information models and terminology models are a barrier to the
integration of terminology and the EHR \parencite{markwell2008representing}.

Both sides try to cover the dense space of medical concepts and find ways
to link and aggregate health-related concepts. Although the focus of the two approaches differs,
overlaps exist in many areas.  Researchers in
medical informatics have made many attempts to link medical concepts from terminologies to reduce the
ambiguity in EHRs \parencite{MasarieJr1991379}.

Archetypes are expressed in a dedicated Archetype Definition Language (ADL), which includes support for binding terms in an archetype to external
terminologies. The bound codes will be referred to as term bindings from this point on. The current design of
archetype term binding only allows a few manually inserted external terminology codes, and its use is left to the developers of a particular system.
Recent research has emerged on how to search for and find a standard code from a terminology to represent the particular meaning of a node in an
archetype.


This chapter has described various related work that could potentially contribute to solving the
problem of integrating clinical information models with terminologies. However, there is by and large
no general solution for harmonising clinical information models and terminologies. The research
mentioned in this chapter is in many cases specifically designed for a particular purpose
or scenario. For example, NLP-based medical text processing techniques are largely designed to
capture medical concepts in free text reports and notes. The data model specifications of
terminology service applications are intended to provide better, unified access to a wide range of
terminological resources.

This poses the question: can we utilise existing technologies to create a general framework to facilitate
the integration of clinical information models and terminologies? The next chapter marks the
start of the author's investigation of this topic.





%\end{document}
