\documentclass{report}
\usepackage{svcon2e}

\usepackage{amsmath,amssymb,graphicx,epsfig}
\usepackage{makeidx}  % allows for indexgeneration
\usepackage{url,amsfonts,amssymb}
%

\graphicspath{{pics/}{figs/}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}
\pagenumbering{arabic}
%\tableofcontents
\chapter{Automatic Segmentation of Cardiac Tagged MRI}
\chapterauthors{Zhen Qian, Xiaolei Huang, Dimitris N. Metaxas and Leon Axel}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{abstract}

In this chapter we present a fully automatic and accurate
segmentation framework for 2D cardiac tagged MR images. This system
consists of a semi-automatic segmentation framework to obtain the
training contours, and a learning based framework that is trained by
the semi-automatic results, and achieves fully automatic and
accurate segmentation. The semi-automatic method segments several
key frames in a 4D tMRI image set using the Metamorphs segmentation
algorithm on tag-removed images produced by Gabor filtering, and
spatio-temporally propagates the contours to neighboring images,
thereby achieving high efficiency. The learning based fully
automatic method consists of three learning methods: a) an active
shape model is implemented to model the heart shape variations, b)
an Adaboost learning method is applied to learn confidence-rated
boundary criteria from the local appearance features at each
landmark point on the shape model, and c) an Adaboost detection
technique is used to initialize the segmentation. The set of
boundary statistics learned by Adaboost is the weighted combination
of all the useful appearance features, and results in more reliable
and accurate image forces compared to using only edge or region
information. Our experimental results show that given similar
imaging techniques, this learning based method can achieve a highly
accurate performance without any human interaction.

\end{abstract}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\label{sec:intro}
%


Tagged cardiac magnetic resonance imaging (MRI) is a well-known
technique for non-invasively visualizing the detailed motion of the
myocardium throughout the heart cycle. It has the potential to
support early diagnosis and quantitative analysis of various kinds
of heart disease and malfunction. The technique generates a
set of equally spaced parallel tagging planes within the myocardium
as temporary markers at end-diastole by spatial modulation of
magnetization. Imaging planes are perpendicular to the tagging
planes, so that the tags appear as parallel dark stripes in MR
images and deform with the underlying myocardium during the cardiac
cycle {\it in vivo}, which gives motion information of the
myocardium normal to the tagging stripes. See
Fig.~\ref{fig:heart}(a-c) for some examples. However, before it can
be used in routine clinical evaluations, an imperative but
challenging task is to automatically find the boundaries of the
epicardium and the endocardium.

Segmentation in tagged MRI is difficult for several reasons. First,
the boundaries are often obscured or corrupted by the nearby tagging
lines, which makes the conventional edge-based segmentation method
infeasible. Second, tagged MRI tends to increase the intensity
contrast between the tagged and un-tagged tissues at the price of
lowering the contrast between the myocardium and the blood. At the
same time, the intensities of the myocardium and the blood vary
during the cardiac cycle, as the tags fade in the myocardium and are
flushed away in the blood. Third, due to the short acquisition time,
the tagged MR images have a relatively high level of noise. These
factors make conventional edge or region-based segmentation
techniques impractical. The last important reason is that, from the
clinicians' point of view, or for the purpose of 3D modeling, {\it
accurate} segmentation based solely on the MR image is usually not
possible. For instance, for conventional clinical practice, the
endocardial boundary should exclude the papillary muscles for the
purpose of easier analysis. However, in the MR images, the papillary
muscles are often apparently connected with the endocardium and
cannot be separated if only the image information is used. Thus,
manual correction, or prior statistical shape knowledge is usually
needed to improve the segmentation results.

\begin{figure}[ht]
  \begin{center}
    \begin{tabular}{c@{ }c@{ }c@{ }c}
  \includegraphics[height=2cm,width=2cm] {3057.eps}
  & \includegraphics[height=1.95cm,width=1.95cm] {sa2z5t9.eps}
  &  \includegraphics[height=2cm,width=4.5cm] {heart.eps}
  &  \includegraphics[height=2cm,width=2cm] {diag.eps}\\
    (a) & (b) & (c) & (d)
    \end{tabular}
        \caption{
(a-c) Some examples of tagged cardiac MRI images. The task of segmentation is to find the boundaries of the epicardium and endocardium (including the LV and RV, and excluding the papillary muscles). (d) The framework of our segmentation method. }
    \label{fig:heart}
  \end{center}
\end{figure}

There have been some efforts to achieve tagged MRI segmentation.
In~\cite{AlbertMiccai:02}, grayscale morphological operations were
used to find non-tagged blood-filled regions. Then they used
thresholding and active contour methods to find the boundaries.
In~\cite{huang:04}, a learning method with a coupled shape and
intensity statistical model was proposed. However, the morphological
operations of~\cite{AlbertMiccai:02} are sensitive to the complex
image appearance and the high level of image noise, and their active
contour method tends to produce irregular shapes without a strong
prior shape model. In~\cite{huang:04}, the intensity statistical
model cannot capture complex local texture features, which leads to
inaccurate image forces.


In this chapter, in order to address the difficulties stated above, we
propose a novel and fully automatic segmentation method based on
three learning frameworks: 1. An active shape model (ASM) is used as
the prior heart shape model. 2. A set of confidence-rated local
boundary criteria are learned by Adaboost, a popular learning
scheme, at landmark points of the shape model, using the appearance
features in the nearby local regions. These criteria give the
probability of the local region's center point being on the
boundary, and force their corresponding landmark points  to move
toward the direction of the highest probability regions. 3. An
Adaboost detection method is used to initialize the segmentation's
location, orientation and scale. The second component is the most
essential contribution of our method. We abandon the usual edge or
region-based methods because of the complicated boundary and region
appearance in the tagged MRI. It is not feasible to designate one or
a few edge or region rules to solve the complicated segmentation
task. Instead, we try to use all possible information, such as the
edges, the ridges, and the breaking points of tagging lines, to form
a {\it complex rule}. It is apparent that at different locations on
the heart boundary, this {\it complex rule} must be different, and
our confidence in the {\it complex rule} varies too. It is
impractical to manually set up each of these {\it complex rules} and
weight their confidence ratings. Therefore, we implement Adaboost to
learn a set of rules and confidence ratings at each landmark point
on the shape model. The first and the second frameworks are tightly
coupled. The shape model deforms under the forces from Framework 2
while controlled and smoothed by Framework 1. To achieve fully
automatic segmentation, in Framework 3 the detection method
automatically provides an approximate position and size of the heart
to initialize the segmentation step. See Fig.~\ref{fig:heart}(d) for
a complete illustration of the frameworks.

Before we implement the learning based framework, we need to
generate a large number of accurately segmented contours to use as
training data. A full set of conventional spatio-temporal (4D)
tagged MRI consists of more than one thousand images. Segmenting
every image manually and individually is very time-consuming and
inefficient. Therefore, we developed a semi-automatic segmentation
system that propagates contours spatio-temporally and requires only
minimal manual interaction.


The remainder of this chapter is organized as follows: in Section 2,
we present the semi-automatic system that efficiently generates the
training data with minimal user interaction. In Section 3, we
introduce the learning based segmentation methodology, including
Frameworks 1 and 2. In Section 4, we briefly introduce the heart
detection technique of Framework 3. In Section 5 we give some
details of our experiments and show some encouraging initial
experimental results.




\section{Training Data Generated From A Semi-Automatic Segmentation System}


Training data are usually obtained by manual delineation with a
proper user interface. A semi-automatic method can vastly reduce the
manual workload while improving accuracy and robustness. To address the
difficulty added by tagging lines, before the segmentation process,
a tunable Gabor filter bank technique is first applied to remove the
tagging lines and enhance the tag-patterned region
\cite{manglik:04}. Because the tag patterns in the blood are flushed
out very soon after the initial tagging modulation, this tag removal
technique actually enhances the blood-myocardium contrast and
facilitates the following myocardium segmentation.

%\subsection{The Gabor filter bank technique for tagged MRI analysis}

The 2D Gabor filter is basically a 2D Gaussian multiplied by a
complex 2D sinusoid \cite{Dunn:94}. At the first time frame of the
tagged MR imaging process, when the tagging lines are still straight
and equally spaced, we set the initial parameters of the Gabor filter
to match the frequencies of the image's first harmonic peaks in the
spectral domain. During a heartbeat cycle, the tagging lines move
along with the underlying myocardium, and their spacings and
orientations change accordingly. We modify the parameters of the
Gabor filter to fit these deformed tag patterns. The original
un-tuned Gabor filter and the modified Gabor filters make up a
tunable Gabor filter bank. The input tagged MR images are convolved
with the tunable Gabor filters in the Fourier domain. The
tag-patterned regions then produce a high filtering response, which
enhances the blood-myocardium contrast and facilitates myocardium
segmentation. As shown in Fig.~\ref{fig:de-tagged}, the de-tagged
images in mid-systolic phase make the boundary segmentation tasks
easier.
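The Fourier-domain filtering described above can be sketched as follows. This is a minimal NumPy sketch, not the chapter's implementation: the Gaussian-shaped frequency response centered on a tuned harmonic peak and all parameter names are illustrative assumptions.

```python
import numpy as np

def gabor_kernel_freq(shape, u0, v0, sigma):
    """Frequency-domain filter: a Gaussian bump centered on a tuned
    first-harmonic peak (u0, v0) of the tag pattern (illustrative form)."""
    h, w = shape
    u = np.fft.fftfreq(w)[None, :]
    v = np.fft.fftfreq(h)[:, None]
    return np.exp(-2 * (np.pi * sigma) ** 2 * ((u - u0) ** 2 + (v - v0) ** 2))

def detag(image, peaks, sigma=3.0):
    """Apply a small tunable bank (one filter per tuned peak) by
    multiplication in the Fourier domain, and keep the maximum
    magnitude response at each pixel as the de-tagged image."""
    spectrum = np.fft.fft2(image)
    responses = [np.abs(np.fft.ifft2(spectrum *
                                     gabor_kernel_freq(image.shape, u0, v0, sigma)))
                 for (u0, v0) in peaks]
    return np.max(responses, axis=0)
```

Regions matching a tuned tag spacing and orientation respond strongly, which is what enhances the blood-myocardium contrast once the blood's tags have been flushed out.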


At each pixel in the input image, we apply the tunable Gabor filter
bank and find a set of optimal filter parameters that maximize the
Gabor filter response. From the optimal parameters, we can infer the
current pixel's relative distance to the nearby tagging lines and,
approximately, the displacement of the underlying tissue over time.
For conventional short axis (SA) tagged MRI sequences, we have two
sets of data whose tagging lines are initially perpendicular to each
other. When we combine them, we get the 2D deformation of the
myocardium. Therefore, we only need to perform myocardium
segmentation at one time frame; the contours can then be temporally
propagated to the neighboring time frames. Spatial propagation of
the heart wall boundaries is more difficult, due to the complex
heart geometry and the topological changes of the boundaries at
different positions along the heart. Our solution is to segment a
few key slices first, chosen to represent the topologies of the
remaining slices, and then propagate the key-frame contours to the
remaining slices.
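The per-pixel parameter search described above can be sketched as below; the `(spacing, angle)` parameterization of the bank and the response-stack layout are illustrative assumptions, not the chapter's exact data structures.

```python
import numpy as np

def optimal_params(responses, params):
    """Per-pixel argmax over the bank: `responses` is a (K, H, W)
    stack of Gabor magnitude responses and `params` a length-K list
    of (spacing, angle) pairs. Returns, per pixel, the index of the
    best-responding filter and its spacing and angle, from which the
    relative distance to nearby tag lines and the tissue displacement
    over time can be estimated."""
    idx = np.argmax(responses, axis=0)
    spacing = np.asarray([p[0] for p in params])[idx]
    angle = np.asarray([p[1] for p in params])[idx]
    return idx, spacing, angle
```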


%
%\subsection{The Metamorph Deformable Model for Tagged MR Image
%Segmentation}

The semi-automatic segmentation step is based on a newly proposed
deformable model, which we call ``Metamorphs''
\cite{Huang-Metaxas-Chen:04}. The key advantage of the Metamorph
model is that it integrates both shape and interior texture and its
dynamics are derived coherently from both boundary and region
information in a common variational framework. These properties of
Metamorphs make it more robust to image noise and artifacts than
traditional shape-only deformable models.


The model deformations are efficiently parameterized using a space
warping technique, the cubic B-spline based Free Form Deformations
(FFD)\cite{Sederberg-Parry:86,Amini-Chen-etal:01}. The essence of
FFD is to deform an object by manipulating a regular control lattice
$F$ overlaid on its volumetric embedding space. In this chapter, we
consider an Incremental Free Form Deformations (IFFD) formulation
using the cubic B-spline basis \cite{Huang-Paragios-Metaxas:03}. The
interior intensity statistics of the models are captured using
nonparametric kernel-based approximations, which can represent
complex multi-modal distributions. Using this nonparametric
approximation, the intensity distribution of the model interior gets
updated automatically while the model deforms. When finding object
boundaries in images, the dynamics of the Metamorph models are
derived from an energy functional consisting of both edge/boundary
energy terms and intensity/region energy terms.


\begin{figure}[t]
\centering
\begin{tabular}{c@{}c@{}c@{}c}
\includegraphics[height=2.1cm,width=2.1cm]{figures/time7-pos7_original.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures/time7-pos7_detagged.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures/time7-pos7_contourOnDetagged.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures/time7-pos7_contourOnOriginal.eps}\\
(1a) & (1b) &(1c) & (1d)\\
%\end{tabular}\\
\includegraphics[height=2.1cm,width=2.1cm]{figures/time7-pos10_original.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures/time7-pos10_detagged.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures/time7-pos10_contourOnDetagged.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures/time7-pos10_contourOnOriginal.eps}\\
%\begin{tabular}{cccc}
(2a) & (2b) &(2c) & (2d)
\end{tabular}
\caption{\rm\small Metamorphs segmentation on de-tagged images. (1)
segmentation at time 7, slice position 7. (2) segmentation at time
7, slice position 10. (a) original image. (b) image with tags
removed by Gabor filtering. (c) cardiac contours segmented by
Metamorphs on detagged image. (d) contours projected on the original
image.} \label{fig:de-tagged}
\end{figure}

We used Metamorph models to segment heart boundaries in tagged MR
images. In Fig.~\ref{fig:de-tagged}, we show the Left Ventricle,
Right Ventricle, and Epicardium segmentation using Metamorphs on
de-tagged MR images. By having the tagging lines removed using Gabor
filtering, a Metamorph model can get close to the heart wall
boundary more rapidly. Then the model can be further refined on the
original tagged image. The Metamorph model evolution is
computationally efficient, due to our use of the nonparametric
texture representation and FFD parameterization of the model
deformations. For all the examples shown, the segmentation process
takes less than 200\,ms to converge on a 2\,GHz PC workstation.





\subsection{Integration And The Semi-Automatic System}

We integrate the above two major techniques, the tunable Gabor
filter bank and the Metamorphs segmentation, to construct our
spatio-temporal integrated MR analysis system. By using the two
techniques in a complementary manner, exploiting specific domain
knowledge about the heart anatomy and temporal characteristics of
the tagged MR images, we can achieve efficient, robust segmentation
with minimal user interaction. The algorithm consists of the
following main steps.


\begin{figure}[h]
\centering
\includegraphics[height=6.5cm,width=8.8cm]{figurebeta.eps}
\caption{The framework of our automated segmentation in 4D
spatio-temporal MRI-tagged images. We start at a center time when
the tag lines are flushed away in the blood area while they remain
clear in the myocardium. Boundary segmentation is done in several
key frames on the de-tagged images before the boundary contours are
spatially propagated to the other positions. Then at each position,
the boundaries are temporally propagated to other times.}
\end{figure}



1. Tag removal for images at the mid-systolic phase. Given a 4D
spatio-temporal tagged MR image dataset of the heart, we start by
filtering using a tunable Gabor filter bank on images of a 3D volume
that corresponds to a particular time in the middle of the systolic
phase, which we term the {\it center time}. For a typical dataset in
which the systolic phase is divided into 13 time intervals, we apply
the Gabor filtering on images at time 7, when tag patterns in the
endocardium are flushed out by blood but tag lines in the myocardium
are clearly visible.

2. Metamorphs segmentation using the de-tagged images. Given the
de-tagged Gabor response images at time 7, we use Metamorphs to
segment the cardiac contours including the epicardium, the LV and RV
endocardium. The Metamorph models can be initialized far away from
the object boundary and efficiently converge to an optimal solution.
For each image, we first segment the LV and RV endocardium. To do
this, the user initializes a circular model by clicking one point
(the seed point) inside the object of interest, then the surrounding
region intensity statistics and the gradient information
automatically drive the model to converge to the endocardium
boundaries. We then automatically initialize a Metamorphs model for
the epicardial contour by merging the endocardial contours and
expanding the interior volume according to myocardium thickness
statistics. The model is then allowed to evolve and converge to the
epicardium boundary.

3. Spatial propagation at the mid-systolic center time. At the
mid-systolic phase, we do the segmentation at several key frames
which represent the topologies of the rest of the frames, then let
the segmented contours propagate to their nearby frames. In short
axis cardiac MR images, from the apex to the base, the topology of
the boundaries goes through the following variations: 1. one
epicardium; 2. one epicardium and one LV endocardium (in some RV
hypertrophy patients, one epicardium and one RV endocardium are also
possible); 3. one epicardium, one LV endocardium and one RV
endocardium; 4. one epicardium, one LV endocardium and two RV
endocardial contours. The key frames consist of one
center frame of the third topology and three transition frames. This
spatial propagation actually provides a quick initialization method
(rather than manually clicking the seed points as mentioned in step
2) for the rest of the non-key frames from the key frames.

4. Boundary tracking using tunable Gabor filters over time. Once we
have segmented the cardiac contours at time 7, we keep tracking the
motion of the myocardium and the segmented contours over time. This
temporal propagation of the cardiac contours significantly reduces
segmentation manual workload, since it enables us to do supervised
segmentation at only one time, then fully automated segmentation of
the complete 4D dataset can be achieved. It also improves
segmentation accuracy because we capture the overall trend in heart
deformation more accurately by taking into account the temporal
connection between segmented boundaries.


5. Manual boundary refinement. In practice, we provide a manual
correction option to doctors throughout the whole segmentation
process to ensure satisfactory results.


This 4D segmentation system was developed in a Matlab 6.5 GUI
environment. The user first needs to load the raw MRI data of the
short axis and long axis volumes (Fig.~\ref{fig:sys}-1a). Then the
user can examine the whole data sets, which consist of two
short-axis sets and one long-axis set, and determine the slice index
of the center time (Fig.~\ref{fig:sys}-1b,1c). The tag removal step
is done on the 3D volume at the center time
(Fig.~\ref{fig:sys}-2a). Then the user has the option to determine
the indices of the key frames and run Metamorphs segmentation on
them (Fig.~\ref{fig:sys}-2b). The segmented contours are propagated
spatially (optionally) and then temporally. In practice, the spatial
propagation step is optional because, for most clinical analyses,
one typical slice is enough unless a full 4D model is required.
Manual interaction is available throughout the segmentation and
propagation process to make timely corrections
(Fig.~\ref{fig:sys}-2c).

\begin{figure}[ht]
\centering
\begin{tabular}{c@{}c@{}c@{}c}
(1)&
\includegraphics[height=2.7cm,width=3.5cm]{pic1c.eps} &
\includegraphics[height=2.7cm,width=3.5cm]{pic2c.eps} &
\includegraphics[height=2.7cm,width=3.5cm]{pic6c.eps} \\
(2)&
\includegraphics[height=2.7cm,width=3.5cm]{pic7c.eps} &
\includegraphics[height=2.7cm,width=3.5cm]{pic11c.eps} &
\includegraphics[height=2.7cm,width=3.5cm]{pic13c.eps} \\
%(3)&
%\includegraphics[height=3cm,width=3.5cm]{pic18c.eps} &
%\includegraphics[height=3cm,width=3.5cm]{pic15c.eps} &
%\includegraphics[height=3cm,width=3.5cm]{pic17c.eps} \\
& (a) & (b) & (c)
\end{tabular}
\caption{Screen snapshots of our segmentation and tracking system.
(1a) read in the SA and LA volumes. (1b,1c) examine the data sets.
(2a) de-tagged image at the center time. (2b) Metamorphs
segmentation based on de-tagged images. (2c) segmentation results.
The papillary muscle is excluded from the myocardium by manual
interaction. }\label{fig:sys}
\end{figure}







\section{Segmentation Based on ASM and Local Appearance Features Learning Using Adaboost}
\label{sec:method1}


After collecting a sufficient amount of training data from the
system above, we can apply learning methods to extract prior
knowledge, such as a statistical shape prior and local image
features, to guide subsequent fully automatic segmentation. There
has been some previous research on ASM segmentation methods based on
local features modeling. In~\cite{Ginneken:03}, a statistical
analysis was performed, which used sequential feature forward and
backward selection to find the set of optimal local features.
In~\cite{jiao:03}, an EM algorithm was used to select Gabor
wavelet-based local features. Both methods select only a small
number of features, which is insufficient to represent complicated
local textures such as those in tagged MRI.
In~\cite{shuyuli:04}, a simple Adaboost learning method was proposed
to find the optimal edge features. This method did not make full use
of the local textures, and did not differentiate the confidence
levels of individual landmark points. In our method, also using Adaboost,
our main contributions are: the ASM deforms based on a more {\it
complex} and robust rule, which is learned from the local
appearance, not only of the edges, but also ridges and tagging line
breakpoints. In this way we get a better representation of the local
appearance of the tagged MRI. At the same time, we derive the
confidence rating of each landmark point from their Adaboost testing
error rates, and use these confidence ratings to weight the image
forces on each landmark point. In this way the global shape is
affected more by the {\it more confident} points and we eliminate
the possible error forces generated from the {\it less confident}
points.





\subsection{ASM, The Shape Model}



Since the shape of the mid portion of the heart in short axis (SA)
images is consistent and topologically fixed (one left ventricle
(LV) and one right ventricle (RV)), it is reasonable to implement
an active shape model~\cite{Cootes-ASM:95} to represent the desired
boundary contours.

We choose training data from SA images with different tagging line
orientations (such as $0^{\circ}$ and $90^{\circ}$, or
$-45^{\circ}$ and $45^{\circ}$), and slightly different tag
spacings. Each data set included images acquired at phases through
systole into early diastole, and at positions along the axis of the
LV, from near the apex to near the base, but without topological
changes. Segmented contours were centered and scaled to a uniform
size. Landmark points were placed automatically by finding key
points with specific geometric characteristics. As shown in
Fig.~\ref{fig:filters}(a), the black points are the key points,
which were determined by the curvatures and positions along the
contours. For instance, $P1$ and $P2$ are the highest curvature
points of the RV; $P7$ and $P8$ are on opposite sides of the center
axis of the LV. Then, fixed numbers of other points are equally
spaced in between. In this way, the landmark points were registered
to the corresponding locations on the contours. Here, we used 50
points to represent the shape.


For each set of contours, the 50 landmark points $(x_i, y_i)$ were
reshaped to form a shape vector
$X=(x_1,x_2,...,x_{50},y_1,y_2,...,y_{50})^T$. Then Principal
Component Analysis was applied and the modes of shape variation were
found. Any heart shape can be approximately modeled by $X =
\bar{X}+Pb$, where $\bar{X}$ is the mean shape vector, $P$ is the
matrix of shape variations, and $b$ is the vector of shape
parameters weighting the shape variations.
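The shape model above can be sketched with NumPy; this is a minimal sketch, and the number of retained modes is an illustrative choice rather than the chapter's setting.

```python
import numpy as np

def build_shape_model(shapes, n_modes=10):
    """shapes: (N, 100) array of aligned shape vectors
    (x_1..x_50, y_1..y_50). Returns the mean shape X_bar, the matrix
    P whose columns are the leading variation modes, and the
    per-mode variances."""
    X_bar = shapes.mean(axis=0)
    # PCA via SVD of the centered data matrix.
    U, S, Vt = np.linalg.svd(shapes - X_bar, full_matrices=False)
    P = Vt[:n_modes].T                          # (100, n_modes)
    var = (S[:n_modes] ** 2) / (len(shapes) - 1)
    return X_bar, P, var

def reconstruct(X_bar, P, b):
    """Model a heart shape as X = X_bar + P b."""
    return X_bar + P @ b
```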

After we find the image forces at each landmark point, as in
Section~\ref{sec:ada}, the active shape model evolves iteratively.
In each iteration, the model deforms under the influence of the
image forces to a new location; the image forces are then calculated
at the new locations before the next iteration.
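One iteration of this loop can be sketched as follows; the explicit step size and the simple subspace projection used for regularization are illustrative assumptions, not necessarily the chapter's exact update rule.

```python
import numpy as np

def asm_iteration(X, X_bar, P, forces, step=1.0):
    """One ASM iteration: move the landmark vector X under the image
    forces, then project back onto the learned shape subspace
    X = X_bar + P b, which keeps the deformed shape plausible."""
    X_moved = X + step * forces
    b = P.T @ (X_moved - X_bar)   # shape parameters of the moved landmarks
    return X_bar + P @ b          # nearest shape within the model subspace
```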



\subsection{Segmentation Via Learning Boundary Criteria Using Adaboost}
\label{sec:ada}

%\subsection{Feature Design}

To capture the local appearance characteristics, we designed three
different kinds of steerable filters: the first derivatives of a 2D
Gaussian to capture edges, the second derivatives of a 2D Gaussian
to capture ridges, and half-reversed 2D Gabor
filters~\cite{Daugman:85} to capture tagging line breakpoints.

\begin{figure}[ht]
  \begin{center}
    \begin{tabular}{c@{ }c@{ }c@{ }c}
    \includegraphics[height=2.5cm,width=2.7cm] {controlpoints.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {dg2.eps} &
      \includegraphics[height=2.5cm,width=3cm] {ddg3.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {rg2.eps} \\
      (a) & (b) & (c) & (d)
    \end{tabular}
        \caption{
      (a) shows the automatic method used to place the landmark
    points. (b-d) are the sample sets of feature filters: (b) are the
      derivatives of Gaussian used for edge detection, (c) are the second
      derivatives of Gaussian used for ridge detection, and (d) are the
      half-reversed Gabor filters used for tag line breakpoint
      detection.
    }
    \label{fig:filters}
  \end{center}
\end{figure}


Assume $G=G\big((x-x_0)\cos\theta+(y-y_0)\sin\theta,\;
-(x-x_0)\sin\theta+(y-y_0)\cos\theta,\;\sigma_x,\sigma_y\big)$ is an
asymmetric 2D Gaussian with effective widths $\sigma_x$ and
$\sigma_y$, a translation of $(x_0,y_0)$, and a rotation of
$\theta$. We set the derivative of $G$ to have the same orientation
as $G$: $G'=G_x \cos\theta+G_y \sin\theta$.
%\begin{equation}
%G'=G_x \cos(\theta)+G_y \sin(\theta)
%\end{equation}

The second derivative of a Gaussian can be approximated as the
difference of two Gaussians with different $\sigma$. We fix
$\sigma_x$ as the long axis of the 2D Gaussians, and set
$\sigma_{y2}>\sigma_{y1}$. Thus,
$G''=G(\sigma_{y1})-G(\sigma_{y2})$.
%
%\begin{equation}
%G''=G(\sigma_{y1})-G(\sigma_{y2})
%\end{equation}

In the previous two equations, we set $x_0=0$, and tune $y_0$,
$\theta$, $\sigma_x$, $\sigma_y$, $\sigma_{y1}$ and $\sigma_{y2}$ to
generate the desired filters.

The half-reversed 2D Gabor filters are defined as a 2D sine wave
multiplied with the 2D derivative of a Gaussian:

\begin{equation}
F=G'(x,y)\cdot \mathrm{Re} \{ e^{-j[\phi+2\pi(Ux+Vy)]} \}
\end{equation}

where $G'$ is the derivative of a 2D Gaussian, $U$ and $V$ are the
frequencies of the 2D sine wave, $\psi=\arctan(V/U)$ is the
orientation angle of the sine wave, and $\phi$ is the phase shift.
We set $x_0=0$, $\sigma_x=\sigma_y=\sigma$,
$-45^{\circ}\le\psi-\theta\le45^{\circ}$, and tune $y_0$, $\theta$,
$\sigma$, $\phi$, $U$ and $V$ to generate the desired filters.

For a $15\times15$ window, we designed 1840 filters in total. See
Fig.~\ref{fig:filters}(b-d) for some sample filters.
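The three filter families can be sketched as sampled kernels. In this NumPy sketch, finite differences stand in for the analytic Gaussian derivatives and all parameter values are illustrative, not the chapter's tuned settings.

```python
import numpy as np

def gauss2d(size, sigma_x, sigma_y, theta=0.0, y0=0.0):
    """Asymmetric 2D Gaussian, rotated by theta and shifted by y0
    along its own y axis (x0 = 0, as in the text)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta) - y0
    return np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))

def edge_filter(size, sigma_x, sigma_y, theta=0.0, y0=0.0):
    """Oriented first derivative G' = G_x cos(theta) + G_y sin(theta),
    with finite differences approximating the analytic derivatives."""
    g = gauss2d(size, sigma_x, sigma_y, theta, y0)
    gy, gx = np.gradient(g)
    return gx * np.cos(theta) + gy * np.sin(theta)

def ridge_filter(size, sigma_x, sigma_y1, sigma_y2, theta=0.0, y0=0.0):
    """Second derivative approximated as a difference of Gaussians,
    G'' = G(sigma_y1) - G(sigma_y2), with sigma_y2 > sigma_y1."""
    return (gauss2d(size, sigma_x, sigma_y1, theta, y0)
            - gauss2d(size, sigma_x, sigma_y2, theta, y0))

def breakpoint_filter(size, sigma, U, V, phi=0.0, theta=0.0, y0=0.0):
    """Half-reversed Gabor: G' multiplied by the real part of the 2D
    complex sinusoid, Re{e^{-j[phi + 2*pi*(Ux + Vy)]}} = cos(...)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    carrier = np.cos(phi + 2 * np.pi * (U * x + V * y))
    return edge_filter(size, sigma, sigma, theta, y0) * carrier
```

Sweeping the tunable parameters over a grid of values is what yields a filter pool on the order of the 1840 filters mentioned above.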




%\subsection{Adaboost Learning} \label{sec:adalearn}


In the learning stage, each training image is scaled proportionally
to the scaling of its contours. At each landmark point of the
contours, a small ($15\times15$) window around it was cut out as a
positive appearance training sample for that particular landmark
point. Then, along the normal of the contour, on each side of the
point, we cut out two $15\times15$ windows as negative appearance
training samples for the same landmark point. Thus, for each
training image, at a particular landmark point, we obtained one
positive sample and four negative samples (shown in
Fig.~\ref{fig:train}(a)). We also randomly selected a few common
negative samples outside the heart or inside the blood area, which
are suitable for every landmark point. For image contrast
consistency, every sample was histogram equalized.
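The sampling scheme can be sketched as follows; the spacing of the negative windows along the normal is an illustrative assumption (the chapter does not state it), and histogram equalization is omitted.

```python
import numpy as np

def extract_window(image, cx, cy, half=7):
    """Cut a (2*half+1)^2 patch centered at (cx, cy); 15x15 for half=7."""
    return image[cy - half:cy + half + 1, cx - half:cx + half + 1]

def training_samples(image, landmark, normal, step=7, half=7):
    """One positive patch at the landmark, plus two negative patches
    along the contour normal on each side of it. `normal` is a unit
    vector; `step` (an assumed value) spaces the negative windows."""
    cx, cy = landmark
    pos = extract_window(image, cx, cy, half)
    negs = []
    for k in (-2, -1, 1, 2):              # two offsets per side
        nx = int(round(cx + k * step * normal[0]))
        ny = int(round(cy + k * step * normal[1]))
        negs.append(extract_window(image, nx, ny, half))
    return pos, negs
```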

The function of the Adaboost algorithm~\cite{Freund:95,Schapire:02}
is to separate the positive training samples from the negative ones
by selecting a small number of important features from a huge
potential feature set and creating a weighted combination of them to
use as an accurate strong classifier. During the boosting process,
each iteration selects one feature from the potential features pool,
and combines it with the existing classifier that was obtained in
the previous iterations. After many iterations, the combination of
the selected important features can become a strong classifier with
high accuracy. The output of the strong classifier is the weighted
sum of the outputs of its selected features, i.e., the weak
classifiers: $F=\sum_t{\alpha_t h_t(x)}$,
where $\alpha_t$ are the weights of the weak classifiers, and $h_t$
are the outputs of the weak classifiers.
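The boosting loop can be sketched with threshold-on-one-feature decision stumps as weak classifiers. This is a minimal, self-contained sketch of discrete Adaboost; the chapter's actual weak learners are built on the filter responses described above, and the exhaustive stump search here is only for illustration.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """Discrete AdaBoost with decision stumps. X: (N, d) feature
    responses; y in {-1, +1}. Returns the selected weak classifiers
    as (feature, threshold, sign, alpha) tuples."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)               # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                # exhaustive stump search
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.sign(X[:, j] - t + 1e-12)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        stumps.append((j, t, s, alpha))
        pred = s * np.sign(X[:, j] - t + 1e-12)
        w *= np.exp(-alpha * y * pred)    # re-weight: focus on mistakes
        w /= w.sum()
    return stumps

def strong_classifier(stumps, x):
    """Boundary criterion F = sum_t alpha_t h_t(x)."""
    return sum(alpha * s * np.sign(x[j] - t + 1e-12)
               for j, t, s, alpha in stumps)
```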

We call $F$ the boundary criterion. When $F>0$, Adaboost classifies
the point as being on the boundary. When $F<0$, the point is
classified as off the boundary. Even when the strong classifier
consists of a large number of individual features, Adaboost
encounters relatively few overfitting problems~\cite{schapire:98}.
We divided the whole sample set into one training set and one
testing set. The function of the testing set is critical. It gives a
performance measure and a confidence level that tells us how much we
should trust its classification result. Fig.~\ref{fig:train}(b, c)
shows the learning error curve versus the boosting iteration numbers
at two selected landmark points. Note that every landmark point $i$
has its own $\alpha$, $h$ and $F_i$.

\begin{figure}[ht]
  \begin{center}
    \begin{tabular}{c@{}c@{}c}
      \includegraphics[height=3cm,width=3cm] {traingood.eps} &
      \includegraphics[height=3.3cm,width=4.1cm] {lp1.eps} &
      \includegraphics[height=3.3cm,width=4.1cm] {lp45.eps} \\
      (a) & (b) & (c)
    \end{tabular}
        \caption{
      (a) shows the method of setting the training data. The solid box
      is the positive sample around the landmark points. The four
      dashed line boxes along the normal are the negative samples.
      This way of setting the negative samples is chosen to make the classifier
      more adaptive to the particular landmark position. (b) and (c)
      show the training error (solid lines) and testing error (dashed
      lines) of two landmark points versus the number of Adaboost iterations. (b) is a point on the LV, (c)
      is a point on the Epi. Note how the training and testing error
      decrease as Adaboost iterates. Also note the testing error of (b) is
      higher than (c): we are more confident of landmark point (c)'s
      classification result.
    }
    \label{fig:train}
  \end{center}
\end{figure}



%\subsection{Segmentation Based On Confidence Ratings}


 In the segmentation stage, we first select an initial
location and scale, and then overlay the mean shape $\bar{X}$, which
is obtained from ASM, onto the task image. In
section~\ref{sec:method2} we describe an automatic initialization
method.

At a selected landmark point $i$ on the shape model, we select
several equally spaced points along the normal of the contour on
both sides of $i$, and use their $F$ values to examine the
corresponding windows centered on these points.
In~\cite{schapire:98}, a logistic function was suggested to estimate
the relative boundary probabilities:
\begin{equation}
Pr(y=+1|x)=\frac{e^{F(x)}}{e^{F(x)}+e^{-F(x)}}
\end{equation}
We find a point $j$ whose test window has the highest probability of
being on the heart boundary. Thus an image force $\vec{f}$ should
push the current landmark point $i$ toward $j$. Recall that, as
discussed in the previous subsection, Adaboost gives the errors of
the testing data $e_i$. We define the confidence rating as:
\begin{equation}
c_i = \ln{\frac{1}{e_i}}
\end{equation}
Intuitively, when $c_i$ is large, we trust the classification and
increase the image force $\vec{f}$, and vice versa. Thus, we define
the image force at landmark point $i$ as:
\begin{equation}
\vec{f} = \mu \cdot \frac {[\vec{x}(j)-\vec{x}(i)]\cdot
c(i)}{||\vec{x}(j)-\vec{x}(i)||_2}
\end{equation}
where $\mu$ is a scaling factor that serves as a small step size.
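The logistic probability, the confidence rating $c_i = \ln(1/e_i)$, and the image force $\vec{f}$ can be combined as in the following Python sketch. The candidate sampling along the normal and the value of the step size $\mu$ are assumptions for illustration:

```python
import numpy as np

def boundary_prob(F):
    """Logistic map Pr(y=+1|x) = e^F / (e^F + e^-F) of the Adaboost output F."""
    return np.exp(F) / (np.exp(F) + np.exp(-F))

def image_force(points, F_values, i, test_error, mu=0.5):
    """Force on landmark i toward the candidate j with the highest boundary
    probability, scaled by the confidence rating c_i = ln(1/e_i).
    points: (k, 2) candidate positions sampled along the contour normal;
    F_values: their boundary criterion scores. (Illustrative sketch;
    mu and the candidate layout are assumptions, not the chapter's values.)"""
    probs = boundary_prob(np.asarray(F_values, dtype=float))
    j = int(np.argmax(probs))              # most boundary-like candidate
    c_i = np.log(1.0 / test_error)         # confidence rating
    d = points[j] - points[i]
    norm = np.linalg.norm(d)
    if norm == 0:
        return np.zeros(2)
    return mu * c_i * d / norm             # unit direction scaled by mu * c_i
```

A small testing error $e_i$ yields a large $c_i$ and hence a stronger pull toward the most probable boundary point, matching the intuition above.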

The detailed algorithm to update the parameters of the ASM model
with the image force $\vec{f}$ can be found in~\cite{Cootes-ASM:95}.



\section{Heart Detection Based on Adaboost Learning}
\label{sec:method2}

The heart detection algorithm used is influenced by the Adaboost
face detection algorithm developed in~\cite{viola:01}. The reason we
adapt a face detection method is that these two problems are closely
related. Often, there are marked variations between different face
images, which arise from differences in facial appearance, lighting,
expression, etc. In heart detection, we face similar challenges:
the heart images differ in tag pattern, shape, position, and phase.

We use the same Haar wavelet features as in~\cite{viola:01}. The
training data contained 297 manually cropped heart images and 459
randomly selected non-heart images. The testing data consisted of 41
heart images and 321 non-heart images. These data were resized to
$24\times24$ pixels and contrast equalized. Adaboost training gave a strong
classifier by combining 50 weak features. For an input task image,
the detection method searched every square window over the image,
and found a window with the highest probability as the final
detection. If we rotate the task image by a set of discrete angles
before the detection procedure, and compare the probabilities across
the discrete angles, we are also able to detect hearts in rotated
images (see Fig.~\ref{fig:detect}).
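The Haar wavelet features of~\cite{viola:01} are typically evaluated in constant time using an integral image (summed-area table). The following Python sketch shows one two-rectangle feature of that kind; the specific feature layout is illustrative, not the exact feature set selected by our training:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum becomes O(1) per evaluation."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect_vertical(ii, r, c, h, w):
    """Two-rectangle Haar feature: left half minus right half of an h x w window."""
    half = w // 2
    left = rect_sum(ii, r, c, r + h, c + half)
    right = rect_sum(ii, r, c + half, r + h, c + w)
    return left - right
```

A sliding-window detector evaluates such features at every window position, which is why the brute-force search over the whole image remains tractable.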

\begin{figure}[ht]
  \begin{center}
    \begin{tabular}{c@{}c@{ }c@{ }c}
      \includegraphics[height=2.5cm,width=3cm] {trainset2.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {1977.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {14017.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {3058.eps} \\
      (a) & (b) & (c) & (d)
    \end{tabular}
    \caption{ (a) shows a few samples of the training data. (b), (c)
    and (d) are three detection results. For image (d), the image was
    rotated by a set of discrete angles before the detection, and the final
    detection is of the highest probability among all the discrete
    angles tested.
    }
    \label{fig:detect}
  \end{center}
\end{figure}


\section{Representative Experimental Results and Validation}
\label{sec:results}

We applied our segmentation method to three data sets, one from the
same subject and with the same imaging settings as the training data
(but excluding the training data), and the other two novel data sets
from two different subjects and with slightly different imaging
settings. The three data sets each contained tagged MR images with
different phases, positions and tagging orientations. Each task
image was rotated and scaled to contain an $80\times80$-pixel
chest-on-top heart, using the detection method before the
segmentation. Each segmentation took 30 iterations to converge. Our
experiment was coded in Matlab 6.5 and run on a PC with dual Xeon
3.0~GHz CPUs and 2~GB of memory. The whole learning process took about 20
hours. The segmentation process of one heart took 120 seconds on
average. See Fig.~\ref{fig:seg} for representative results.


\begin{figure}[!h]
  \begin{center}
    \begin{tabular}{c@{}c@{}c@{}c@{}c}
    1) &
      \includegraphics[height=2.7cm,width=2.7cm] {1947nc.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {1967nc.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {1997nc.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {19117nc.eps}\\
      2) &
      \includegraphics[height=2.7cm,width=2.7cm] {SA2v3_z5_t5.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {SA2v3_z5_t7.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {SA2v3_z5_t8.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {SA2v3_z5_t9.eps}
    \end{tabular}
      \begin{tabular}{c@{}c@{}c@{}c@{}c@{}c@{}c}
      3) &
      \includegraphics[height=2.1cm,width=2.1cm] {m44ncn07.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {m54ncn055.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {m64ncn04.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {m74ncn03.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {m84ncn03.eps} \\
      4) &
      \includegraphics[height=2.1cm,width=2.1cm] {m144ncn07.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {m154ncn055.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {m164ncn04.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {m174ncn03.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {m184ncn03.eps} \\
        & (a) & (b) & (c) & (d) & (e)
    \end{tabular}
    \caption{Representative segmentation results. For clarity, the
images in the first row vary in position and remain at the same
phase, while the images in the second row vary in phase but remain
at the same position. The
    solid contours are from our automatic segmentation method; the
    dashed contours are semi-automatic. Notice that the papillary muscles in
    LV are excluded from the endocardium. Semi-automatic results are not
    available for the third and fourth rows, so we compare our segmentation
    results between the horizontal and vertical tagged images that are at the same position and
    phase. Qualitatively, the contours are quite
    consistent, allowing for possible misregistration between the nominally corresponding image sets.
    In (3a), (3c) and (3e) the dashed contours are testing examples of
    poor initializations, while the final contours are solid. Although the initialization
    is far away from the target, the shape model moves and converges well to the
    target.
        }
    \label{fig:seg}
  \end{center}
\end{figure}


For validation, we used the semi-automatically segmented contours as
the ground truth for the data sets as shown in the first and second
rows. For the other data set, because we do not have segmented ground
truth, we used cross validation, since we know that at the same
position and phase, the heart shapes in the vertical-tagged and
horizontal-tagged images should be similar. We denote the ground
truth contours as $T$ and our automatic segmentation contours as
$S$. We defined the average error distance as $\bar D_{error}=
\mathrm{mean}_{s_i\in S}(\min\|T-s_i\|_2)$. Similarly, the cross distance is
defined as $\bar D_{cross}= \mathrm{mean}_{s^{vertical}_i\in
S^{vertical}}(\min\|S^{horizontal}-s^{vertical}_i\|_2)$. For an $80\times80$
pixel-sized heart, the average error distances between the
automatically segmented contours and the ground truth for the first
data set were: $\bar D_{error}(LV) = 1.12$ pixels, $\bar
D_{error}(RV) = 1.11$ pixels, $\bar D_{error}(Epi) = 0.98$ pixels.
For the second data set, $\bar D_{error}(LV) = 1.74$ pixels, $\bar
D_{error}(RV) = 2.05$ pixels, $\bar D_{error}(Epi) = 1.33$ pixels.
In the third data set, the cross distances were: $\bar D_{cross}(LV) =
2.39$ pixels, $\bar D_{cross}(RV) = 1.40$ pixels, $\bar
D_{cross}(Epi) = 1.94$ pixels. The larger distance in the cross
validation arises in part from underlying mis-registration between
the (separately acquired) horizontal and vertical images. Thus, the
true discrepancy due to the segmentation should be smaller. From the
above quantitative results, we find that for a normal-sized adult
human heart, the accuracy of our segmentation method achieves an
average error distance of less than 2mm. The cross validation
results of the third data set suggest that our method is very robust
as well.
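The distance measures above can be computed directly from the contour point sets. The following is a minimal Python sketch of $\bar D_{error}$ (and, by symmetry, $\bar D_{cross}$), assuming each contour is given as an array of 2D points:

```python
import numpy as np

def mean_error_distance(T, S):
    """D_error = mean over points s_i in S of min_j ||t_j - s_i||_2,
    where T (ground truth) and S (segmentation) are (n, 2) and (m, 2)
    arrays of contour points."""
    diffs = T[None, :, :] - S[:, None, :]      # (m, n, 2) pairwise offsets
    dists = np.linalg.norm(diffs, axis=2)      # (m, n) Euclidean distances
    return dists.min(axis=1).mean()            # nearest-T distance, averaged over S
```

Note the measure is asymmetric: it averages, over the segmented points, each point's distance to the nearest ground-truth point.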






\section{Discussion}
\label{sec:discussion}



In this chapter, we have proposed a learning scheme for fully
automatic and accurate segmentation of cardiac tagged MRI data.
First we developed a semi-automatic system to achieve efficient
segmentation with minimal user interaction. Then the learning based
framework has three steps. In the first step we learn an ASM shape
model as the prior shape constraint. Second, we learn a
confidence-rated complex boundary criterion from the local
appearance features to use to direct the detected contour to move
under the influence of image forces. Third, we also learn a
classifier to detect the heart. This learning approach achieves
higher accuracy and robustness than other previously available
methods. Since our method is entirely based on learning, the choice
of training data is critical. We find that if the
segmentation method is applied to images at phases or positions that
are not represented in the training data, the segmentation process
tends to get stuck in local minima. Thus the training data need to
be of sufficient size and range to cover all possible variations
that may be encountered in practice.

An interesting property of our method is that it is not very
sensitive to the initialization conditions. As shown in
Fig.~\ref{fig:seg}, even if the initial contours are far away from
the target position, they can still converge to the correct
position after a few iterations. This property makes automatic
initialization feasible. The detection method gives only a rough
approximation of the heart's location and size, but it is good
enough for our segmentation purposes.


\bibliographystyle{abbrv}
\bibliography{zhen}

\end{document}
