%%%%%%%%%%%%%%%%%%%% author.tex %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% sample root file for your "contribution" to a contributed volume
%
% Use this file as a template for your own input.
%
%%%%%%%%%%%%%%%% Springer %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


% RECOMMENDED %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[graybox]{svmult}

% choose options for [] as required from the list
% in the Reference Guide
\usepackage{amssymb}
\usepackage{mathptmx}       % selects Times Roman as basic font
\usepackage{helvet}         % selects Helvetica as sans-serif font
\usepackage{courier}        % selects Courier as typewriter font
\usepackage{type1cm}        % activate if the above 3 fonts are
                            % not available on your system
%
\usepackage{makeidx}         % allows index generation
\usepackage{graphicx}        % standard LaTeX graphics tool
                             % when including figure files
\usepackage{multicol}        % used for the two-column index
\usepackage[bottom]{footmisc}% places footnotes at page bottom
\usepackage{epstopdf}

% see the list of further useful packages
% in the Reference Guide


\makeindex             % used for the subject index
                       % please use the style svind.ist with
                       % your makeindex program

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{document}

\title*{Segmentation and Blood Flow Simulations of Patient-Specific Heart Data}
% Use \titlerunning{Short Title} for an abbreviated version of
% your contribution title if the original one is too long
\author{Dimitris Metaxas, Scott Kulp, Mingchen Gao, Shaoting Zhang, Zhen Qian, Leon Axel}
% Use \authorrunning{Short Title} for an abbreviated version of
% your contribution title if the original one is too long
\institute{Dimitris Metaxas \at CBIM, Rutgers University, \email{dnm@cs.rutgers.edu}
\and Scott Kulp \at CBIM, Rutgers University, \email{sckulp@cs.rutgers.edu}
\and Mingchen Gao \at CBIM, Rutgers University, \email{minggao@cs.rutgers.edu}
\and Shaoting Zhang \at CBIM, Rutgers University, \email{shaoting@cs.rutgers.edu}
\and Zhen Qian \at Piedmont Heart Institute, \email{Zhen.Qian@piedmont.org}
\and Leon Axel \at NYU School of Medicine, \email{Leon.Axel@nyumc.org}}
%
% Use the package "url.sty" to avoid
% problems with special characters
% used in your e-mail or web address
%
\maketitle


\abstract{}

In this chapter, we present a fully automatic and accurate segmentation framework for 2D cardiac tagged MR images, a semi-automatic method for 3D segmentation from CT data, and the results of blood flow simulations using these highly detailed models. The 2D segmentation system consists of a semi-automatic segmentation framework used to obtain the training contours, and a learning-based framework that is trained on the semi-automatic results and achieves fully automatic and accurate segmentation.

We then present a method to simulate and visualize blood flow through the human heart, using the reconstructed 4D motion of the endocardial surface of the left ventricle as the boundary conditions. The reconstruction captures the motion of the full 3D surfaces of complex features such as the papillary muscles and the ventricular trabeculae. We use visualizations of the flow field to view the interactions between the blood and the trabeculae in far more detail than has been achieved previously, which promises to give a better understanding of cardiac flow. Finally, we use our simulation results to compare the blood flow within one healthy heart and two diseased hearts.

\section{Cardiac Tagged MRI Segmentation}

Tagged cardiac magnetic resonance imaging (MRI) is a well-known
technique for non-invasively visualizing the detailed motion of the
myocardium throughout the cardiac cycle. It has the potential to
support early diagnosis and quantitative analysis of many kinds of
heart disease and malfunction. The technique generates a set of
equally spaced parallel tagging planes within the myocardium as
temporary markers at end-diastole by spatial modulation of
magnetization. The imaging planes are perpendicular to the tagging
planes, so the tags appear as parallel dark stripes in the MR images
and deform with the underlying myocardium during the cardiac cycle
{\it in vivo}, providing motion information for the myocardium
normal to the tagging stripes. See Fig.~\ref{fig:heart}(a-c) for
some examples. However, before tagged MRI can be used in routine
clinical evaluation, an imperative but challenging task is to
automatically find the boundaries of the epicardium and the
endocardium.

Segmentation in tagged MRI is difficult for several reasons. First,
the boundaries are often obscured or corrupted by the nearby tagging
lines, which makes conventional edge-based segmentation methods
infeasible. Second, tagged MRI tends to increase the intensity
contrast between tagged and un-tagged tissues at the price of
lowering the contrast between the myocardium and the blood. At the
same time, the intensities of the myocardium and the blood vary
during the cardiac cycle, as the tags fade in the myocardium and are
flushed away in the blood. Third, because of the short acquisition
time, tagged MR images have a relatively high level of noise. These
factors make conventional edge- or region-based segmentation
techniques impractical. A final important reason is that, from the
clinicians' point of view, or for the purpose of 3D modeling, {\it
accurate} segmentation based solely on the MR image is usually not
possible. For instance, in conventional clinical practice the
endocardial boundary should exclude the papillary muscles for ease
of analysis. However, in the MR images the papillary muscles often
appear connected to the endocardium and cannot be separated using
image information alone. Thus, manual correction or prior
statistical shape knowledge is usually needed to improve the
segmentation results.

\begin{figure}[ht]
  \begin{center}
    \begin{tabular}{c@{ }c@{ }c@{ }c}
  \includegraphics[height=2cm,width=2cm] {figs/3057.eps}
  & \includegraphics[height=1.95cm,width=1.95cm] {figs/sa2z5t9.eps}
  &  \includegraphics[height=2cm,width=4.5cm] {figs/heart.eps}
  &  \includegraphics[height=2cm,width=2cm] {figs/diag.eps}\\
    (a) & (b) & (c) & (d)
    \end{tabular}
        \caption{
      (a-c) Examples of tagged cardiac MR images. The task of segmentation is to find the boundaries of the epicardium and endocardium (including the LV and RV, and excluding the papillary muscles). (d) The framework of our segmentation method. }
    \label{fig:heart}
  \end{center}
\end{figure}

There have been several previous efforts at tagged MRI segmentation.
In~\cite{AlbertMiccai:02}, grayscale morphological operations were
used to find non-tagged blood-filled regions, followed by
thresholding and active contour methods to find the boundaries.
In~\cite{huang:04}, a learning method with a coupled shape and
intensity statistical model was proposed. However, the morphological
operations of~\cite{AlbertMiccai:02} are sensitive to the complex
image appearance and high image noise, and without a strong prior
shape model the active contour method tends to produce irregular
shapes. In~\cite{huang:04}, the intensity statistical model cannot
capture the complex local texture features, which leads to
inaccurate image forces.


In this chapter, in order to address the difficulties stated above, we
propose a novel and fully automatic segmentation method based on
three learning frameworks: 1. An active shape model (ASM) is used as
the prior heart shape model. 2. A set of confidence-rated local
boundary criteria is learned by Adaboost, a popular learning scheme,
at the landmark points of the shape model, using the appearance
features of the nearby local regions. These criteria give the
probability that a local region's center point lies on the boundary,
and force the corresponding landmark points to move toward the
direction of the highest-probability regions. 3. An Adaboost
detection method is used to initialize the segmentation's location,
orientation and scale. The second component is the most essential
contribution of our method. We abandon the usual edge- or
region-based methods because of the complicated boundary and region
appearance in tagged MRI. It is not feasible to designate one or a
few edge or region rules to solve such a complicated segmentation
task. Instead, we try to use all possible information, such as the
edges, the ridges, and the breakpoints of the tagging lines, to form
a {\it complex rule}. At different locations on the heart boundary
this {\it complex rule} must clearly differ, and our confidence in
it varies as well. It is impractical to manually set up each of
these {\it complex rules} and weight their confidence ratings.
Therefore, we use Adaboost to learn a set of rules and confidence
ratings at each landmark point on the shape model. The first and
second frameworks are tightly coupled: the shape model deforms under
the forces from Framework 2 while being controlled and smoothed by
Framework 1. To achieve fully automatic segmentation, in Framework 3
the detection method automatically provides an approximate position
and size of the heart to initialize the segmentation step. See
Fig.~\ref{fig:heart}(d) for a complete illustration of the
frameworks.

Before we can apply the learning-based framework, we need to
generate a large number of accurately segmented contours to use as
training data. A full set of conventional spatio-temporal (4D)
tagged MRI consists of more than one thousand images, and segmenting
every image manually and individually is a very time-consuming and
inefficient process. Therefore, we developed a semi-automatic
segmentation system that propagates contours spatio-temporally and
requires minimal manual interaction.

\subsection{Training Data Generated From A Semi-Automatic Segmentation System}


Training data are usually obtained by manual delineation with a
proper user interface. A semi-automatic method can vastly lower the
manual workload and improve accuracy and robustness. To address the
difficulty added by the tagging lines, a tunable Gabor filter bank
technique is first applied before segmentation to remove the tagging
lines and enhance the tag-patterned region \cite{manglik:04}.
Because the tag patterns in the blood are flushed out very soon
after the initial tagging modulation, this tag removal technique
actually enhances the blood-myocardium contrast and facilitates the
subsequent myocardium segmentation.

%\subsubsection{The Gabor filter bank technique for tagged MRI analysis}

The 2D Gabor filter is essentially a 2D Gaussian multiplied by a
complex 2D sinusoid \cite{Dunn:94}. At the first time frame of the
tagged MR imaging process, when the tagging lines are still straight
and equally spaced, we set the initial parameters of the Gabor
filter to equal the frequencies of the image's first harmonic peaks
in the spectral domain. During a heart beat cycle, the tagging lines
move along with the underlying myocardium, and their spacings and
orientations change accordingly. We modify the parameters of the
Gabor filter to fit these deformed tag patterns. The original
un-tuned Gabor filter and the modified Gabor filters make up a
tunable Gabor filter bank. The input tagged MR images are convolved
with the tunable Gabor filters in the Fourier domain. The
tag-patterned regions then produce a high filter response, which
enhances the blood-myocardium contrast and facilitates myocardium
segmentation. As shown in Fig.~\ref{fig:de-tagged}, the de-tagged
images in the mid-systolic phase make the boundary segmentation
tasks easier.
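To make the filter-bank idea concrete, the following is a minimal sketch (our own NumPy simplification, not the system's implementation; the synthetic image, parameter names and values are illustrative assumptions). It builds one complex Gabor kernel as a Gaussian envelope times a 2D sinusoid and applies it in the Fourier domain; a kernel tuned to the tag frequency responds much more strongly than a mistuned one.

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """2D Gabor: Gaussian envelope times a complex 2D sinusoid.

    freq is a spatial frequency (cycles/pixel) and theta its orientation;
    in the tunable bank these would be re-tuned per frame to follow the
    deforming tag pattern.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
    return envelope * carrier

def filter_response(image, kernel):
    """Convolve image and kernel via the Fourier domain."""
    f_img = np.fft.fft2(image)
    f_ker = np.fft.fft2(kernel, s=image.shape)  # zero-pad kernel to image size
    return np.abs(np.fft.ifft2(f_img * f_ker))

# Synthetic "tagged" image: horizontal stripes with an 8-pixel spacing.
yy = np.arange(64)[:, None] * np.ones((1, 64))
tagged = 0.5 + 0.5 * np.cos(2 * np.pi * yy / 8.0)

# A filter tuned to the tag frequency responds strongly...
resp_tuned = filter_response(tagged, gabor_kernel(15, 3.0, 1 / 8.0, np.pi / 2))
# ...while a mistuned one responds weakly.
resp_off = filter_response(tagged, gabor_kernel(15, 3.0, 1 / 3.0, np.pi / 2))
```

Thresholding such response magnitudes is one simple way to separate tag-patterned myocardium from the tag-free blood pool.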


At each pixel of the input image, we apply the tunable Gabor filter
bank and find the set of optimal filter parameters that maximizes
the Gabor filter response. From the optimal parameters, we can infer
the current pixel's relative distance to the nearby tagging lines,
and approximately the displacement of the underlying tissue over
time. For conventional short axis (SA) tagged MRI sequences, we have
two sets of data whose tagging lines are initially perpendicular to
each other. When we combine them, we obtain the 2D deformation of
the myocardium. Therefore, we only need to segment the myocardium at
one time frame; the result can then be temporally propagated to the
neighboring time frames. Spatial propagation of the heart wall
boundaries is more difficult, due to the complex heart geometry and
the topological changes of the boundaries at different positions in
the heart. Our solution is to first segment a few key slices, which
represent the topologies of the remaining slices, and then propagate
these key slices to the remaining ones.


%
%\subsubsection{The Metamorph Deformable Model for Tagged MR Image
%Segmentation}

The semi-automatic segmentation step is based on a recently proposed
deformable model, which we call ``Metamorphs''
\cite{Huang-Metaxas-Chen:04}. The key advantage of the Metamorph
model is that it integrates both shape and interior texture, and its
dynamics are derived coherently from both boundary and region
information in a common variational framework. These properties make
Metamorphs more robust to image noise and artifacts than traditional
shape-only deformable models.


The model deformations are efficiently parameterized using a space
warping technique, the cubic B-spline based Free Form Deformations
(FFD) \cite{Sederberg-Parry:86,Amini-Chen-etal:01}. The essence of
FFD is to deform an object by manipulating a regular control lattice
$F$ overlaid on its volumetric embedding space. In this chapter, we
consider an Incremental Free Form Deformations (IFFD) formulation
using the cubic B-spline basis \cite{Huang-Paragios-Metaxas:03}. The
interior intensity statistics of the models are captured using
nonparametric kernel-based approximations, which can represent
complex multi-modal distributions. Using this nonparametric
approximation, the intensity distribution of the model interior is
updated automatically while the model deforms. When finding object
boundaries in images, the dynamics of the Metamorph models are
derived from an energy functional consisting of both edge/boundary
energy terms and intensity/region energy terms.
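The nonparametric intensity representation can be illustrated with a one-dimensional Gaussian kernel density estimate (a sketch of the general technique only, not the Metamorphs code; the sample intensities and bandwidth below are made-up values):

```python
import numpy as np

def kde(samples, bandwidth, grid):
    """Gaussian kernel density estimate of an intensity distribution.

    Places one Gaussian bump per sample and averages them, so multi-modal
    distributions are represented without a parametric form; re-running it
    on the current model interior updates the statistics as the model deforms.
    """
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return k.mean(axis=1) / bandwidth

# Bimodal "interior" intensities: darker myocardium plus brighter blood pool.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(60, 5, 500), rng.normal(180, 10, 500)])
grid = np.linspace(0, 255, 256)
density = kde(samples, bandwidth=4.0, grid=grid)
```

The resulting `density` has two clear modes, something a single-Gaussian intensity model could not capture.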


\begin{figure}[t]
\centering
\begin{tabular}{c@{}c@{}c@{}c}
\includegraphics[height=2.1cm,width=2.1cm]{figures//time7-pos7_original.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures//time7-pos7_detagged.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures//time7-pos7_contourOnDetagged.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures//time7-pos7_contourOnOriginal.eps}\\
(1a) & (1b) &(1c) & (1d)\\
%\end{tabular}\\
\includegraphics[height=2.1cm,width=2.1cm]{figures//time7-pos10_original.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures//time7-pos10_detagged.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures//time7-pos10_contourOnDetagged.eps} &
\includegraphics[height=2.1cm,width=2.1cm]{figures//time7-pos10_contourOnOriginal.eps}\\
%\begin{tabular}{cccc}
(2a) & (2b) &(2c) & (2d)
\end{tabular}
\caption{\rm\small Metamorphs segmentation on de-tagged images. (1)
Segmentation at time 7, slice position 7. (2) Segmentation at time
7, slice position 10. (a) Original image. (b) Image with tags
removed by Gabor filtering. (c) Cardiac contours segmented by
Metamorphs on the de-tagged image. (d) Contours projected onto the
original image.} \label{fig:de-tagged}
\end{figure}

We used Metamorph models to segment heart boundaries in tagged MR
images. Fig.~\ref{fig:de-tagged} shows the left ventricle, right
ventricle, and epicardium segmentation using Metamorphs on
de-tagged MR images. With the tagging lines removed by Gabor
filtering, a Metamorph model can approach the heart wall boundary
more rapidly. The model can then be further refined on the original
tagged image. The Metamorph model evolution is computationally
efficient, thanks to our use of the nonparametric texture
representation and the FFD parameterization of the model
deformations. For all the examples shown, the segmentation process
takes less than $200\,$ms to converge on a 2\,GHz PC.





\subsubsection{Integration And The Semi-Automatic System}

We integrate the above two major techniques, the tunable Gabor
filter bank and the Metamorphs segmentation, to construct our
spatio-temporal integrated MR analysis system. By using the two
techniques in a complementary manner, exploiting specific domain
knowledge about the heart anatomy and temporal characteristics of
the tagged MR images, we can achieve efficient, robust segmentation
with minimal user interaction. The algorithm consists of the
following main steps.


\begin{figure}[h]
\centering
\includegraphics[height=6.5cm,width=8.8cm]{figs/figurebeta.eps}
\caption{The framework of our automated segmentation in 4D
spatio-temporal MRI-tagged images. We start at a center time when
the tag lines are flushed away in the blood area while they remain
clear in the myocardium. Boundary segmentation is done in several
key frames on the de-tagged images before the boundary contours are
spatially propagated to the other positions. Then at each position,
the boundaries are temporally propagated to other times.}
\end{figure}



1. Tag removal for images at the mid-systolic phase. Given a 4D
spatio-temporal tagged MR image dataset of the heart, we start by
applying a tunable Gabor filter bank to the images of the 3D volume
that corresponds to a particular time in the middle of the systolic
phase, which we term the {\it center time}. For a typical dataset in
which the systolic phase is divided into 13 time intervals, we apply
the Gabor filtering to the images at time 7, when the tag patterns
in the endocardium have been flushed out by the blood but the tag
lines in the myocardium are still clearly visible.

2. Metamorphs segmentation using the de-tagged images. Given the
de-tagged Gabor response images at time 7, we use Metamorphs to
segment the cardiac contours, including the epicardium and the LV
and RV endocardium. The Metamorph models can be initialized far away
from the object boundary and efficiently converge to an optimal
solution. For each image, we first segment the LV and RV
endocardium. To do this, the user initializes a circular model by
clicking one point (the seed point) inside the object of interest;
the surrounding region intensity statistics and the gradient
information then automatically drive the model to converge to the
endocardial boundaries. We then automatically initialize a
Metamorphs model for the epicardial contour by merging the
endocardial contours and expanding the interior volume according to
myocardium thickness statistics. The model is then allowed to evolve
and converge to the epicardial boundary.

3. Spatial propagation at the mid-systolic center time. At the
mid-systolic phase, we perform the segmentation on several key
frames that represent the topologies of the remaining frames, then
let the segmented contours propagate to the nearby frames. In short
axis cardiac MR images, from the apex to the base, the topology of
the boundaries goes through the following variations: 1. one
epicardium; 2. one epicardium and one LV endocardium (in some RV
hypertrophy patients, one epicardium and one RV endocardium are also
possible); 3. one epicardium, one LV endocardium and one RV
endocardium; 4. one epicardium, one LV endocardium and two RV
endocardial contours. The key frames consist of one center frame
with the third topology and three transition frames. This spatial
propagation provides a quick initialization (instead of manually
clicking the seed points, as in Step 2) for the remaining non-key
frames.

4. Boundary tracking using tunable Gabor filters over time. Once we
have segmented the cardiac contours at time 7, we track the motion
of the myocardium and the segmented contours over time. This
temporal propagation of the cardiac contours significantly reduces
the manual segmentation workload, since it lets us perform
supervised segmentation at only one time frame, after which fully
automated segmentation of the complete 4D dataset can be achieved.
It also improves segmentation accuracy, because taking the temporal
connection between segmented boundaries into account captures the
overall trend of the heart deformation more accurately.


5. Manual boundary refinement. In practice, we provide a manual
correction option to physicians throughout the segmentation process
to ensure satisfactory results.


This 4D segmentation system was developed in a Matlab 6.5 GUI
environment. The user first needs to load the raw MRI data of the
short axis and long axis volumes (Fig.~\ref{fig:sys}-1a). The user
can then examine the whole data set, which consists of two short
axis volumes and one long axis volume, and determine the slice index
of the center time (Fig.~\ref{fig:sys}-1b,1c). The tag removal step
is performed on the 3D volume at the center time
(Fig.~\ref{fig:sys}-2a). The user then has the option to determine
the indices of the key frames and perform Metamorphs segmentation on
them (Fig.~\ref{fig:sys}-2b). The segmented contours are propagated
spatially (optionally) and then temporally. In practice the spatial
propagation step is optional because, for most clinical analyses,
one typical slice is enough unless a fully 4D model is required.
Manual interaction is available throughout the segmentation and
propagation process so that corrections can be made promptly
(Fig.~\ref{fig:sys}-2c).

\begin{figure}[ht]
\centering
\begin{tabular}{c@{}c@{}c@{}c}
(1)&
\includegraphics[height=2.7cm,width=3.5cm]{figs/pic1c.eps} &
\includegraphics[height=2.7cm,width=3.5cm]{figs/pic2c.eps} &
\includegraphics[height=2.7cm,width=3.5cm]{figs/pic6c.eps} \\
(2)&
\includegraphics[height=2.7cm,width=3.5cm]{figs/pic7c.eps} &
\includegraphics[height=2.7cm,width=3.5cm]{figs/pic11c.eps} &
\includegraphics[height=2.7cm,width=3.5cm]{figs/pic13c.eps} \\
%(3)&
%\includegraphics[height=3cm,width=3.5cm]{pic18c.eps} &
%\includegraphics[height=3cm,width=3.5cm]{pic15c.eps} &
%\includegraphics[height=3cm,width=3.5cm]{pic17c.eps} \\
& (a) & (b) & (c)
\end{tabular}
\caption{Screen snapshots of our segmentation and tracking system.
(1a) Reading in the SA and LA volumes. (1b,1c) Examining the data
sets. (2a) De-tagged image at the center time. (2b) Metamorphs
segmentation based on de-tagged images. (2c) Segmentation results.
The papillary muscle is excluded from the myocardium by manual
interaction. }\label{fig:sys}
\end{figure}







\subsection{Segmentation Based on ASM and Local Appearance Features Learning Using Adaboost}
\label{sec:method1}


After collecting a sufficient amount of training data from the
previous system, we can apply learning methods to extract prior
knowledge, such as a statistical shape prior and local image
features, to guide subsequent fully automatic segmentation. There
has been some previous research on ASM segmentation methods based on
local feature modeling. In~\cite{Ginneken:03}, a statistical
analysis using sequential forward and backward feature selection was
performed to find an optimal set of local features.
In~\cite{jiao:03}, an EM algorithm was used to select Gabor
wavelet-based local features. Both methods tried to select a small
number of features, which is insufficient to represent local
textures as complicated as those of tagged MRI.
In~\cite{shuyuli:04}, a simple Adaboost learning method was proposed
to find optimal edge features; this method did not make full use of
the local textures, and did not differentiate the confidence levels
of the individual landmark points. Our method also uses Adaboost,
but our main contributions are twofold. First, the ASM deforms based
on a more {\it complex} and robust rule, which is learned from the
local appearance not only of the edges, but also of the ridges and
the tagging line breakpoints; in this way we obtain a better
representation of the local appearance of tagged MRI. Second, we
derive a confidence rating for each landmark point from its Adaboost
testing error rate, and use these confidence ratings to weight the
image forces on each landmark point. The global shape is thereby
affected more by the {\it more confident} points, and we suppress
the possibly erroneous forces generated at the {\it less confident}
points.

\subsubsection{ASM, The Shape Model}

Since the shape of the mid portion of the heart in short axis (SA)
images is consistent and topologically fixed (one left ventricle
(LV) and one right ventricle (RV)), it is reasonable to use an
active shape model~\cite{Cootes-ASM:95} to represent the desired
boundary contours.

We chose training data from SA images with different tagging line
orientations (such as $0^{\circ}$ and $90^{\circ}$, or
$-45^{\circ}$ and $45^{\circ}$) and slightly different tag
spacings. Each data set included images acquired at phases from
systole into early diastole, and at positions along the axis of the
LV from near the apex to near the base, but without topological
changes. The segmented contours were centered and scaled to a
uniform size. Landmark points were placed automatically by finding
key points with specific geometric characteristics. As shown in
Fig.~\ref{fig:filters}(a), the black points are the key points,
which were determined by the curvatures and positions along the
contours. For instance, $P1$ and $P2$ are the highest-curvature
points of the RV, and $P7$ and $P8$ are on opposite sides of the
center axis of the LV. Fixed numbers of other points are then evenly
spaced in between. In this way, the landmark points were registered
to corresponding locations on the contours. Here, we used 50
points to represent the shape.


For each set of contours, the 50 landmark points $(x_i, y_i)$ were
reshaped to form a shape vector
$X=(x_1,x_2,...,x_{50},y_1,y_2,...,y_{50})^T$. Then Principal
Component Analysis was applied and the modes of shape variation were
found. Any heart shape can be approximately modeled by $X =
\bar{X}+Pb$, where $\bar{X}$ is the mean shape vector, $P$ is the
matrix of shape variations, and $b$ is the vector of shape
parameters weighting the shape variations.
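The construction of $\bar{X}$, $P$ and $b$ can be sketched as follows (a toy illustration with synthetic shape vectors; the actual model is built from the 50-landmark training contours, and the function names are our own):

```python
import numpy as np

def build_shape_model(shapes, n_modes):
    """PCA shape model: any shape ~ mean + P @ b.

    shapes: (n_samples, 2 * n_landmarks) matrix of stacked shape vectors
    X = (x_1, ..., x_50, y_1, ..., y_50).
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # Principal modes of variation via SVD of the centered data matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    P = vt[:n_modes].T  # (2 * n_landmarks, n_modes) matrix of shape variations
    return mean, P

def project(shape, mean, P):
    """Shape parameters b for a given shape (least-squares projection)."""
    return P.T @ (shape - mean)

def reconstruct(mean, P, b):
    return mean + P @ b

# Toy training set: 40 noisy "shapes" with 50 landmarks (100 coordinates) each.
rng = np.random.default_rng(1)
base = rng.normal(size=100)
shapes = base + 0.1 * rng.normal(size=(40, 100))
mean, P = build_shape_model(shapes, n_modes=5)
b = project(shapes[0], mean, P)
approx = reconstruct(mean, P, b)
```

Because the reconstruction is an orthogonal projection onto the leading modes, it is never farther from the original shape than the mean shape is.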

After we find the image forces at each landmark point, as in
subsection~\ref{sec:ada}, the active shape model evolves iteratively.
In each iteration, the model deforms under the influence of the
image forces to a new location; the image forces are then calculated
at the new locations before the next iteration.
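One such iteration can be sketched as follows: move each landmark by its image force, then regularize the result through the shape model. The names are our own, and the $\pm 3$ standard deviation clamp on $b$ is the usual ASM plausibility constraint, assumed here rather than taken from the text:

```python
import numpy as np

def asm_iteration(shape, forces, mean, P, stds):
    """One ASM update: apply image forces, then constrain to the shape space.

    forces: per-coordinate displacements suggested by the boundary criteria.
    stds:   per-mode standard deviations sqrt(lambda_k); b is clamped to
            +/- 3 std so the deformed shape stays plausible.
    """
    proposed = shape + forces
    b = P.T @ (proposed - mean)        # project onto the learned shape modes
    b = np.clip(b, -3 * stds, 3 * stds)
    return mean + P @ b                # off-model force components are discarded

# Toy example in a 2-mode shape space (6 coordinates).
mean = np.zeros(6)
P = np.eye(6)[:, :2]  # first two coordinates act as the shape modes
stds = np.array([1.0, 0.5])
shape = mean.copy()
forces = np.array([0.5, 10.0, 0.3, 0.0, 0.0, 0.0])
new_shape = asm_iteration(shape, forces, mean, P, stds)
```

In this toy case the large second-mode force is clamped to three standard deviations, and the component outside the shape space (the 0.3 displacement) is smoothed away, which is exactly the controlling role Framework 1 plays against Framework 2.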



\subsubsection{Segmentation Via Learning Boundary Criteria Using Adaboost}
\label{sec:ada}

%\subsubsection{Feature Design}

To capture the local appearance characteristics, we designed three
different kinds of steerable filters. We use the derivatives of a 2D
Gaussian to capture the edges, we use the second order derivatives
of a 2D Gaussian to capture the ridges, and we use half-reversed 2D
Gabor filters~\cite{Daugman:85} to capture the tagging line
breakpoints.

\begin{figure}[ht]
  \begin{center}
    \begin{tabular}{c@{ }c@{ }c@{ }c}
    \includegraphics[height=2.5cm,width=2.7cm] {figs/controlpoints.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {figs/dg2.eps} &
      \includegraphics[height=2.5cm,width=3cm] {figs/ddg3.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {figs/rg2.eps} \\
      (a) & (b) & (c) & (d)
    \end{tabular}
        \caption{
      (a) The automatic method used to place the landmark points.
      (b-d) Sample sets of feature filters: (b) derivatives of
      Gaussian used for edge detection; (c) second derivatives of
      Gaussian used for ridge detection; (d) half-reversed Gabor
      filters used for tag line breakpoint detection.
    }
    \label{fig:filters}
  \end{center}
\end{figure}


Assume $G=G(x',y',\sigma _x, \sigma _y)$ is an asymmetric 2D
Gaussian with effective widths $\sigma_x$ and $\sigma_y$, evaluated
in the coordinates $x'=(x-x_0)\cos\theta+(y-y_0)\sin\theta$ and
$y'=-(x-x_0)\sin\theta+(y-y_0)\cos\theta$, i.e., with a translation
of $(x_0,y_0)$ and a rotation of $\theta$. We steer the derivative
of $G$ to the same orientation as $G$: $G'=G_x \cos(\theta)+G_y
\sin(\theta)$.
%\begin{equation}
%G'=G_x \cos(\theta)+G_y \sin(\theta)
%\end{equation}

The second derivative of a Gaussian can be approximated as the
difference of two Gaussians with different $\sigma$. We fix
$\sigma_x$ as the long axis of the 2D Gaussians, and set
$\sigma_{y2}>\sigma_{y1}$. Thus,
$G''=G(\sigma_{y1})-G(\sigma_{y2})$.
%
%\begin{equation}
%G''=G(\sigma_{y1})-G(\sigma_{y2})
%\end{equation}

In the previous two equations, we set $x_0=0$, and tune $y_0$,
$\theta$, $\sigma_x$, $\sigma_y$, $\sigma_{y1}$ and $\sigma_{y2}$ to
generate the desired filters.

The half-reversed 2D Gabor filters are defined as a 2D sine wave
multiplied by the 2D derivative of a Gaussian:

\begin{equation}
F=G'(x,y)\cdot \mathrm{Re} \{ e^{-j[\phi+2\pi(Ux+Vy)]} \}
\end{equation}

where $G'$ is the derivative of a 2D Gaussian, $U$ and $V$ are the
frequencies of the 2D sine wave, $\psi=\arctan(V/U)$ is the
orientation angle of the sine wave, and $\phi$ is the phase shift.
We set $x_0=0$, $\sigma_x=\sigma_y=\sigma$,
$-45^{\circ}\le\psi-\theta\le45^{\circ}$, and tune $y_0$, $\theta$,
$\sigma$, $\phi$, $U$ and $V$ to generate the desired filters.

For a $15\times15$ window, we designed 1840 filters in total. See
Fig.~\ref{fig:filters}(b-d) for some sample filters.
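The first two filter families can be sketched directly from the definitions above (a simplified NumPy version; the sizes, widths and function names are illustrative choices of ours):

```python
import numpy as np

def gaussian2d(x, y, sx, sy):
    """Unnormalized asymmetric 2D Gaussian."""
    return np.exp(-(x**2 / (2 * sx**2) + y**2 / (2 * sy**2)))

def edge_filter(size, sx, sy, theta, y0=0.0):
    """Steered first derivative of a Gaussian: G' = Gx cos(theta) + Gy sin(theta)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    y = y - y0
    g = gaussian2d(x, y, sx, sy)
    gx = -x / sx**2 * g
    gy = -y / sy**2 * g
    return gx * np.cos(theta) + gy * np.sin(theta)

def ridge_filter(size, sx, sy1, sy2):
    """Ridge detector approximated as a difference of two Gaussians with
    sy2 > sy1, as in G'' = G(sigma_y1) - G(sigma_y2)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    return gaussian2d(x, y, sx, sy1) - gaussian2d(x, y, sx, sy2)

edge = edge_filter(15, 3.0, 3.0, 0.0)   # responds to vertical edges
ridge = ridge_filter(15, 5.0, 1.5, 3.0)  # responds to horizontal ridges
```

Sweeping $\theta$, $y_0$ and the widths over a grid of values is what produces the large filter pool from which Adaboost later selects.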




%\subsubsection{Adaboost Learning} \label{sec:adalearn}


In the learning stage, each training image is scaled proportionally
to the scaling of its contours. At each landmark point of the
contours, a small ($15\times15$) window around the point is cut out
as a positive appearance training sample for that particular
landmark point. Then, along the normal of the contour, on each side
of the point, we cut out two $15\times15$ windows as negative
appearance training samples for that landmark point. Thus for each
training image, at a particular landmark point, we get one positive
sample and four negative samples (shown in
Fig.~\ref{fig:train}(a)). We also randomly select a few common
negative samples outside the heart or inside the blood area, which
are suitable for every landmark point. For image contrast
consistency, every sample is histogram equalized.

The function of the Adaboost algorithm~\cite{Freund:95,Schapire:02}
is to separate the positive training samples from the negative ones
by selecting a small number of important features from a huge
potential feature set and combining them, with weights, into an
accurate strong classifier. During the boosting process, each
iteration selects one feature from the pool of potential features
and combines it with the classifier obtained in the previous
iterations. After many iterations, the combination of the selected
features becomes a strong classifier with high accuracy. The output
of the strong classifier is the weighted sum of the outputs of its
selected features, i.e., of the weak classifiers:
$F=\sum_t{\alpha_t h_t(x)}$,
%\begin{equation}
%%\vspace{.3cm} F=\Sigma_t{\alpha_th_t(x)}%\vspace{-.3cm}
%\end{equation}
where the $\alpha$ are the weights of the weak classifiers, and the
$h$ are their outputs.
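The boosting loop described above can be sketched in miniature with threshold stumps as the weak classifiers. This is a generic, hedged illustration of the standard AdaBoost recipe; the paper's actual weak learners operate on the filter responses, not on raw feature columns as here:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=5):
    """Tiny AdaBoost with threshold stumps; X: (n, d) features, y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                   # sample weights
    classifiers = []                          # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for j in range(d):                    # exhaustively pick the best stump
            for thr in np.unique(X[:, j]):
                for pol in (+1, -1):
                    pred = pol * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err) # weak classifier weight
        w = w * np.exp(-alpha * y * pred)     # re-weight misclassified samples up
        w /= w.sum()
        classifiers.append((j, thr, pol, alpha))
    return classifiers

def strong_classifier(classifiers, X):
    """F(x) = sum_t alpha_t h_t(x); sign(F) gives the final decision."""
    F = np.zeros(len(X))
    for j, thr, pol, alpha in classifiers:
        F += alpha * pol * np.where(X[:, j] > thr, 1, -1)
    return F
```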

We call $F$ the boundary criterion. When $F>0$, Adaboost classifies
the point as being on the boundary. When $F<0$, the point is
classified as off the boundary. Even when the strong classifier
consists of a large number of individual features, Adaboost
encounters relatively few overfitting problems~\cite{schapire:98}.
We divided the whole sample set into a training set and a testing
set. The testing set is critical: it gives a performance measure and
a confidence level that tell us how much we should trust each
classification result. Fig.~\ref{fig:train}(b, c) shows the learning
error curves versus the number of boosting iterations at two
selected landmark points. Note that every landmark point $i$ has its
own $\alpha$, $h$ and $F_i$.

\begin{figure}[ht]
  \begin{center}
    \begin{tabular}{c@{}c@{}c}
      \includegraphics[height=3cm,width=3cm] {figs/traingood.eps} &
      \includegraphics[height=3.3cm,width=4.1cm] {figs/lp1.eps} &
      \includegraphics[height=3.3cm,width=4.1cm] {figs/lp45.eps} \\
      (a) & (b) & (c)
    \end{tabular}
        \caption{
      (a) shows the method of setting the training data. The solid box
      is the positive sample around the landmark points. The four
      dashed line boxes along the normal are the negative samples.
      This way of setting the negative samples is chosen to make the classifier
      more adaptive to the particular landmark position. (b) and (c)
      show the training error (solid lines) and testing error (dashed
      lines) of two landmark points versus the number of Adaboost iterations. (b) is a point on the LV, (c)
      is a point on the Epi. Note how the training and testing error
      decrease as Adaboost iterates. Also note the testing error of (b) is
      higher than (c): we are more confident of landmark point (c)'s
      classification result.
    }
    \label{fig:train}
  \end{center}
\end{figure}



%\subsubsection{Segmentation Based On Confidence Ratings}


 In the segmentation stage, we first select an initial
location and scale, and then overlay the mean shape $\bar{X}$, which
is obtained from ASM, onto the task image. In
subsection~\ref{sec:method2} we describe an automatic initialization
method.

At a selected landmark point $i$ on the shape model, we select
several equally spaced points along the normal of the contour on
both sides of $i$, and use their $F$ values to examine the
corresponding windows centered on these points.
In~\cite{schapire:98}, a logistic function was suggested to estimate
the relative boundary probabilities:
\begin{equation} %\vspace{.5cm}
Pr(y=+1|x)=\frac{e^{F(x)}}{e^{F(x)}+e^{-F(x)}} %\vspace{-.5cm}
\end{equation}
We find a point $j$ whose test window has the highest probability of
being on the heart boundary. Thus an image force $\vec{f}$ should
push the current landmark point $i$ toward $j$. Recall that, as
discussed in the previous subsubsection, Adaboost gives the errors of
the testing data $e_i$. We define the confidence rating as: %$c_i =
%\ln{\frac{1}{e_i}}$.
\begin{equation}
c_i = \ln{\frac{1}{e_i}}
\end{equation}
Intuitively, when $c_i$ is large, we trust the classification and
increase the image force $\vec{f}$, and conversely. Thus, we define
the image force at landmark point $i$ as:
\begin{equation}
\vec{f} = \mu \cdot \frac {[\vec{x}(j)-\vec{x}(i)]\cdot
c(i)}{||\vec{x}(j)-\vec{x}(i)||_2}
\end{equation}
where $\mu$ is a small scaling factor that serves as the step size.

The detailed algorithm to update the parameters of the ASM model
with the image force $\vec{f}$ can be found in~\cite{Cootes-ASM:95}.
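The boundary probability and confidence-weighted image force can be written compactly as follows (a minimal NumPy sketch; the candidate search along the contour normal is omitted, and `mu` is an illustrative step size):

```python
import numpy as np

def boundary_probability(F):
    """Logistic estimate Pr(y = +1 | x) = e^F / (e^F + e^{-F})."""
    return np.exp(F) / (np.exp(F) + np.exp(-F))

def image_force(x_i, x_j, test_error, mu=0.1):
    """Force pushing landmark i toward the best boundary candidate j,
    scaled by the confidence rating c = ln(1/e) from the testing error e."""
    c = np.log(1.0 / test_error)
    d = np.asarray(x_j, float) - np.asarray(x_i, float)
    return mu * c * d / np.linalg.norm(d)   # unit direction times mu * c

f = image_force((0.0, 0.0), (3.0, 4.0), test_error=0.1)
```

A smaller testing error yields a larger confidence $c_i$ and hence a stronger pull toward the detected boundary point.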



\subsection{Heart Detection Based on Adaboost Learning}
\label{sec:method2}

The heart detection algorithm used is influenced by the Adaboost
face detection algorithm developed in~\cite{viola:01}. We adapt a
face detection method because the two problems are closely related.
There are often marked variations between different face images,
arising from differences in facial appearance, lighting, expression,
etc. In heart detection, we face similar challenges: the heart
images have different tag patterns, shapes, positions, phases, etc.

We use the same Haar wavelet features as in~\cite{viola:01}. The
training data contained 297 manually cropped heart images and 459
randomly selected non-heart images. The testing data consisted of 41
heart images and 321 non-heart images. These data were resized to
$24\times24$ pixels and contrast equalized. Adaboost training gave a
strong classifier combining 50 weak features. For an input task
image, the detection method searches every square window over the
image and selects the window with the highest probability as the
final detection. If we rotate the task image by a set of discrete
angles before the detection procedure and compare the probabilities
across the discrete angles, we are also able to detect hearts in
rotated images (see Fig.~\ref{fig:detect}).
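The exhaustive window search can be sketched as below. The rotation sweep is omitted, and `score_fn` is a stand-in for the trained Adaboost classifier's probability output; the window size and stride are illustrative:

```python
import numpy as np

def detect(image, score_fn, win=24, step=4):
    """Scan every win x win window over the image and return the
    top-scoring window's (row, col, score)."""
    best_score, best_rc = -np.inf, (0, 0)
    H, W = image.shape
    for r in range(0, H - win + 1, step):
        for c in range(0, W - win + 1, step):
            s = score_fn(image[r:r + win, c:c + win])
            if s > best_score:
                best_score, best_rc = s, (r, c)
    return best_rc[0], best_rc[1], best_score

# toy image: a bright 24x24 block stands in for the heart region
img = np.zeros((64, 64))
img[20:44, 28:52] = 1.0
r, c, s = detect(img, lambda w: w.mean())
```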

\begin{figure}[ht]
  \begin{center}
    \begin{tabular}{c@{}c@{ }c@{ }c}
      \includegraphics[height=2.5cm,width=3cm] {figs/trainset2.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {figs/1977.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {figs/14017.eps} &
      \includegraphics[height=2.5cm,width=2.5cm] {figs/3058.eps} \\
      (a) & (b) & (c) & (d)
    \end{tabular}
    \caption{ (a) shows a few samples of the training data. (b), (c)
    and (d) are three detection results. For image (d), the image was
    rotated by a set of discrete angles before the detection, and the final
    detection is of the highest probability among all the discrete
    angles tested.
    }
    \label{fig:detect}
  \end{center}
\end{figure}


\subsection{Representative Experimental Results and Validation}
\label{sec:results}

We applied our segmentation method to three data sets: one from the
same subject and with the same imaging settings as the training data
(but excluding the training data), and two novel data sets from two
different subjects with slightly different imaging settings. The
three data sets each contained tagged MR images with different
phases, positions and tagging orientations. Using the detection
method, each task image was rotated and scaled before segmentation
to contain an $80\times80$-pixel chest-on-top heart. Each
segmentation took 30 iterations to converge. Our experiment was
coded in Matlab 6.5 and run on a PC with dual Xeon 3.0~GHz CPUs and
2~GB of memory. The whole learning process took about 20 hours. The
segmentation of one heart took 120 seconds on average. See
Fig.~\ref{fig:seg} for representative results.


\begin{figure}[!h]
  \begin{center}
    \begin{tabular}{c@{}c@{}c@{}c@{}c}
    1) &
      \includegraphics[height=2.7cm,width=2.7cm] {figs/1947nc.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {figs/1967nc.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {figs/1997nc.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {figs/19117nc.eps}\\
      2) &
      \includegraphics[height=2.7cm,width=2.7cm] {figs/SA2v3_z5_t5.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {figs/SA2v3_z5_t7.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {figs/SA2v3_z5_t8.eps} &
      \includegraphics[height=2.7cm,width=2.7cm] {figs/SA2v3_z5_t9.eps}
    \end{tabular}
      \begin{tabular}{c@{}c@{}c@{}c@{}c@{}c@{}c}
      3) &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m44ncn07.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m54ncn055.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m64ncn04.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m74ncn03.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m84ncn03.eps} \\
      4) &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m144ncn07.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m154ncn055.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m164ncn04.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m174ncn03.eps} &
      \includegraphics[height=2.1cm,width=2.1cm] {figs/m184ncn03.eps} \\
        & (a) & (b) & (c) & (d) & (e)
    \end{tabular}
    \caption{Representative segmentation results. For better representation, the
images in the first row vary in position and remain at the same
phase, while the images in the second row vary in phase but remain
at the same position. The
    solid contours are from our automatic segmentation method; the
    dashed contours are semi-automatic. Notice that the papillary muscles in
    LV are excluded from the endocardium. Semi-automatic results are not
    available for the third and fourth rows, so we compare our segmentation
    results between the horizontal and vertical tagged images at the same position and
    phase. Qualitatively, the contours are quite
    consistent, allowing for possible misregistration between the nominally corresponding image sets.
    In (3a), (3c) and (3e) the dashed contours are testing examples of
    poor initializations, while the final contours are solid. Although the initialization
    is far away from the target, the shape model moves and converges well to the
    target.
        }
    \label{fig:seg}
  \end{center}
\end{figure}


For validation, we used the semi-automatically segmented contours as
the ground truth for the data sets shown in the first and second
rows. For the other data set, because we do not have segmented
ground truth, we used cross validation, since we know that at the
same position and phase, the heart shapes in the vertical-tagged and
horizontal-tagged images should be similar. We denote the ground
truth contours as $T$ and our automatic segmentation contours as
$S$. We define the average error distance as $\bar D_{error}=
\mathrm{mean}_{s_i\in S}(\min||T-s_i||_2)$. Similarly, the cross
distance is defined as $\bar D_{cross}=
\mathrm{mean}_{s^{vertical}_i\in
S^{vertical}}(\min||S^{horizontal}-s^{vertical}_i||_2)$. In an
$80\times80$-pixel heart, the average error distances between the
automatically segmented contours and the ground truth for the first
data set were: $\bar D_{error}(LV) = 1.12$ pixels, $\bar
D_{error}(RV) = 1.11$ pixels, $\bar D_{error}(Epi) = 0.98$ pixels.
For the second data set, $\bar D_{error}(LV) = 1.74$ pixels, $\bar
D_{error}(RV) = 2.05$ pixels, $\bar D_{error}(Epi) = 1.33$ pixels.
For the third data set, the cross distances were: $\bar D_{cross}(LV) =
2.39$ pixels, $\bar D_{cross}(RV) = 1.40$ pixels, $\bar
D_{cross}(Epi) = 1.94$ pixels. The larger distance in the cross
validation arises in part from underlying misregistration between
the (separately acquired) horizontal and vertical images; thus, the
true discrepancy due to the segmentation should be smaller. From
these quantitative results, we find that for a normal-sized adult
human heart, our segmentation method achieves an average error
distance of less than 2~mm. The cross validation results on the
third data set suggest that our method is very robust as well.
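The average error distance defined above can be computed directly from two point sets (a minimal NumPy sketch; contour sampling and correspondence details are omitted):

```python
import numpy as np

def average_error_distance(T, S):
    """D_error = mean over points s_i in S of the distance to the closest
    point of T. T and S are (n, 2) arrays of contour points."""
    T = np.asarray(T, float)
    S = np.asarray(S, float)
    # pairwise distances: d[i, j] = ||S_i - T_j||_2
    d = np.linalg.norm(S[:, None, :] - T[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

The cross distance $\bar D_{cross}$ is the same computation with $T$ replaced by the horizontal-tagged contour and $S$ by the vertical-tagged one.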

\section{3D Segmentation and Blood Flow Simulation}

Following a heart attack or the development of some cardiovascular diseases, the movement of the heart walls during the cardiac cycle may change. This affects the motion of blood through the heart, potentially leading to an increased risk of thrombus. While Doppler ultrasound and MRI can be used to monitor valvular blood flow, the image resolutions are low and they cannot capture the interactions between the highly complex heart wall and the blood flow. For this reason, with the rapid development of high-resolution cardiac CT, patient-specific blood flow simulation can provide a useful tool for the study of cardiac blood flow.

Recently, Mihalef et al.~\cite{mihalef10} used smoothed 4D CT data to simulate left ventricular blood flow, and compared the flow through the aortic valve in a healthy heart and two diseased hearts. However, the models derived from CT data in~\cite{mihalef10} were too highly smoothed to capture the local structural details and were not useful for understanding the true interactions between the blood flow and the walls.

Later, in~\cite{kulp11}, more accurate heart models were achieved by generating a triangular mesh using initial median filtering and isosurfacing of the CT data at mid-diastole. Motion was then transferred to this model from the smooth mesh motion obtained from the same CT data to create the animation. This allowed more realistic features to be present on the heart walls in the simulation, including the papillary muscles and some parts of the trabeculae. However, while this approach was an improvement over the smooth-wall assumption, the trabeculae lacked details and did not move accurately.

Earlier work in blood flow simulation used less refined models. For example,~\cite{jones} was the first to extract boundaries from MRI data to perform patient-specific blood flow simulations. Later,~\cite{long} and~\cite{saber} used simple models of the left side of the heart, with smooth ventricular walls, and imposed boundary conditions in the valve regions.

In this paper, we use a further improved method of generating and moving the mesh to capture these smaller details and produce a more accurate simulation. Our approach assigns a predefined motion to the valves, whose asynchronous opening and closing provides a simple geometric mechanism for handling those boundary conditions. To the best of our knowledge, and in contrast to all previous methods, we are able to visualize blood flow in unprecedented detail.

\subsection{Heart Model Reconstruction}

\begin{figure}[t]
  \begin{center}
    \begin{tabular}{c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c}
   \includegraphics[height=1.3in] {images/normal1.jpg} &
   \includegraphics[height=1.3in] {images/abnormal.jpg} \\
   \footnotesize (a) &
   \footnotesize (b) \\
    \end{tabular}
\caption{Meshes reconstructed from CT data (valves removed). (a) Healthy heart (b) Diseased heart. }\label{fig:recon}
  \end{center}
\end{figure}

The heart models are reconstructed using a deformable model method. A semi-automatic segmentation is used to obtain the initial segmentation from high-resolution CT data for an initial (3D) frame. This semi-automatic segmentation is time consuming and tedious, so it is not efficient to use it for all the frames. The initial high-resolution mesh model is generated as an isosurface of the segmentation. Geometric processing is then applied to the initial model to obtain a smooth and regular mesh with an appropriate number of vertices. Starting from this initial model from one time frame, our method deforms it towards the boundaries in the other frames. During the deformation, the topology of the model is kept unchanged. We thereby also obtain a one-to-one correspondence between frames, which the fluid simulator requires in later processing. These methods can extract the full 3D surfaces of these complex anatomical structures, and the results have been validated against ground truth segmented by multiple clinical experts. The valves are hard to capture in CT images, so valve models are added separately to the heart meshes in the sequence. In the following subsections we describe the details of our work.

\subsubsection{CT Data Acquisition}
The CT images were acquired on a $320$-MSCT scanner (Toshiba Aquilion ONE, Toshiba Medical Systems Corporation) using contrast agent. This advanced diagnostic imaging system is a dynamic volume CT scanner that captures a whole-heart scan in a single rotation, and achieves an isotropic $0.3mm$ volumetric resolution with less motion artifact than conventional $64$-MSCT scanners. A conventional contrast-enhanced CT angiography protocol was adapted to acquire the CT data in this work. After the intravenous injection of the contrast agent, the 3D+time CT data were acquired in a single heart beat cycle when the contrast agent was circulated to the left ventricle and aorta. After acquisition, 3D images were reconstructed at 10 time phases in between the R-to-R waves using ECG gating. The acquired isotropic data had an in-plane dimension of $512$ by $512$ pixels, with an effective atrio-ventricular region measuring about $300^3$ pixels.

\subsubsection{Reconstruction Framework}

We propose a framework to reconstruct the cardiac model. This framework includes: initial model construction, deformable model based segmentation, and interpolation between time frames. The initial model is generated using snake segmentation on one time frame of the CT image. The initial model needs geometry processing, such as decimating, detail-preserving smoothing and isotropic remeshing to get high-quality meshes. Based on the initial model, segmentation of the rest of the CT images is automatically performed using the deformable model. The segmentation of a sequence of CT images is interpolated in time to get a higher effective temporal resolution.



\begin{figure*}[t]
\begin{center}
\includegraphics[width=4.5in]{images/valvecycle.jpg}
\end{center}
\caption{Outside and inside views of the valves at various stages of the cardiac cycle. The mitral valve is open at first; gradually the mitral valve closes and the aortic valve opens.}
\label{fig:valvecycle}
\end{figure*}

\subsubsection{Initial Model Construction}

\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\scalebox{0.55}{\input{flowchart.pdf_t}}
\end{tabular}
\caption{Initial model construction. }\label{fig:framework}
\end{center}
\end{figure}

The model initialization framework is illustrated in Fig.~\ref{fig:framework}. While generating the initial model, a flexible method is preferred, to give users more freedom; different thresholds can be set for different parts of the heart. We use a semi-automatic segmentation method to obtain the initial model~\cite{zhu95region}. This segmentation process is very time consuming and involves a lot of tedious manual work, so it cannot be used to segment all frames. However, once this model has been generated, it is used to segment the remaining frames automatically.

Isosurface detection is applied to generate the model mesh from the first segmented result. However, the resulting mesh is usually bulky, noisy and irregular. To get a better initialization model, some geometric processing should be done on that mesh, such as decimating, detail-preserving smoothing and isotropic remeshing. Such geometric processing, which leads to high-quality meshes, is essential to later model deformation.

The initial model is too large to readily modify, so we decimate the mesh to an appropriate size, with the desired number of vertices given as a constraint. Edge collapses, each of which simply collapses one vertex into one of its neighbors, are performed during decimation, with error metrics used to decide the priority of the collapses. The result is a mesh with far fewer vertices that still retains most of the shape details. Our meshes are decimated to about $20,000$ vertices, which is complex enough to capture the fine details of the heart.

Detail-preserving smoothing is performed after decimation. The smoothing is restricted to the tangential direction. Instead of moving each vertex towards the centroid of its neighbors, which would smooth out the shape details and sharp features, detail-preserving smoothing ensures higher quality meshes without losing details.

Isotropic remeshing is important for mesh quality. In irregular meshes, vertices with high valences exert strong internal forces that drag other vertices, which can cause unrealistic results in deformable models~\cite{shen09active}. An incremental isotropic remeshing technique is used to remesh the given triangular mesh so that all edges have approximately the same target edge length and the triangles are as regular as possible. The target edge length is set to the mean of the current edge lengths, and edge length thresholds are set around it. During the incremental remeshing process, edges longer than the upper bound are split until all edges are shorter than the threshold; short edges are collapsed if collapsing does not create new edges longer than the upper bound; edges are flipped to equalize vertex valences; vertices are moved to new positions to obtain regular triangles; and finally vertices are projected back onto the original surface to keep the shape unchanged. This process is generally iterated several times to obtain the final result.

After all these geometric processing steps, we finally get a high-quality triangular mesh with an appropriate number of vertices. This mesh is used as an initialization for other frames.

\subsubsection{Deformable Model Based Segmentation}
To segment the remaining frames and obtain a one-to-one correspondence between frames, we deform our initial model to the boundaries during tracking. To do so, we define an energy function comprising an \emph{external energy}, derived from the image so that it is smaller at the boundaries, and a \emph{model energy}, which reflects the differences between the original model and the deformed model. Minimizing this energy function drags the model towards the boundaries while keeping the shape of the model unchanged during deformation.

Given a gray level image $I(x,y)$, viewed as a function of continuous position variables $(x,y)$, the model $M_{t-1}$ derived from the previous frame is used to fit the current frame $M_{t}$. The energy function we want to minimize is defined as follows:
\begin{equation}
  E(M_t, I_t, M_{t-1}) = E_{ext}(M_t,I_t) + E_{model}(M_t, M_{t-1}) .
\end{equation}

The external energy $E_{ext}$ is designed to move the deformable model towards object boundaries.
\begin{equation}
  E_{ext}(M_t, I_t) = -\left| \nabla I \right|^2 ,
\end{equation}
where $\nabla$ is the gradient operator.

The model energy is defined in terms of the differences of \textbf{vertex normals} and \textbf{attribute vectors}. An attribute vector is attached to each vertex of the model~\cite{shen00adaptive}; it reflects the geometric structure of the model from a local to a global level. In 3D, for a particular vertex $V_i$, each attribute is defined as the volume of a tetrahedron on that vertex. The other three vertices forming the tetrahedron are randomly chosen from the $l$th-level neighborhood of $V_i$. Smaller tetrahedra reflect the local structure near a vertex, while larger tetrahedra capture more global information around it. The attribute vector, if rich enough, uniquely characterizes different parts of a boundary surface.

The volume of the tetrahedron at neighborhood layer $l$ is denoted $f_l(V_i)$. The attribute vector at a vertex is then defined as:

\begin{equation}
F(V_i)=[f_1(V_i), f_2(V_i),...,f_{R(V_i)}(V_i)] ,
\end{equation}
where $R(V_i)$ is the number of neighborhood layers used around $V_i$.
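The tetrahedron-volume attributes can be sketched as follows (a minimal NumPy illustration; for simplicity we take the first three layer vertices rather than a random triple, and the neighborhood bookkeeping is assumed, not shown):

```python
import numpy as np

def tet_volume(a, b, c, d):
    """Volume of the tetrahedron with vertices a, b, c, d."""
    a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
    return abs(np.linalg.det(np.array([b - a, c - a, d - a]))) / 6.0

def attribute_vector(v, layers):
    """F(V_i) = [f_1, ..., f_R]: one tetrahedron volume per neighborhood
    layer, built from the vertex and three points of that layer."""
    return np.array([tet_volume(v, *layer[:3]) for layer in layers])
```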

As we elaborated earlier in this subsection, the model energy term reflects the differences of vertex normals and attribute vectors between the original model and the deformed model.
\begin{equation}
  E_{model}(M_t, M_{t-1}) = \sum_{i=1}^N\Big(  \alpha(n_{t,i}-n_{t-1,i})^2+
                                                \sum_{l=1}^{R(V_i)}\delta_l\big(f_{t,l}(V_i)-f_{t-1,l}(V_i)\big)^2\Big),
\end{equation}
where $f_{t,l}(V_i)$ and $f_{t-1,l}(V_i)$ are components of the attribute vectors of the current (deformed) model and the previous model at vertex $V_i$, respectively. $\alpha$ determines the importance of the vertex normals relative to the attribute vectors, $\delta_l$ denotes the importance of the $l$th neighborhood layer, and $R(V_i)$ is the number of neighborhood layers around vertex $V_i$.

A greedy, iterative algorithm is used to minimize the energy function. During each iteration, the first step minimizes the external energy, moving vertices towards locations of strong image gradient; the second step minimizes the model energy: a neighborhood of each vertex is examined and the point in the neighborhood with the minimum model energy is chosen as the new location of the vertex. The iterations continue until the energy converges. While this greedy algorithm may fall into a local minimum, the experiments show satisfactory results.

During the deformation, we move a surface segment as a whole, rather than a single vertex. This avoids the risk of getting trapped in a local minimum and also speeds up convergence. Let $V_i$ be the vertex to be deformed during a particular iteration; its first through $R(V_i)$th neighborhood layers move together as a surface segment. Suppose $V_i$ tentatively moves to $V_i + \Delta$. Then the new position of each vertex $nbr_{l,m}(V_i)$, the $m$th vertex on the $l$th neighborhood layer, is set to

\begin{equation}
  nbr_{l,m}(V_i) + \Delta \cdot \exp(-\frac{l^2}{2\delta^2}) ,
\end{equation}
where $\delta$ is a parameter determining the locality of the transformation. We leave the boundary of the surface segment fixed, so that continuity is maintained.
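The Gaussian falloff of the segment displacement can be sketched as follows (a minimal NumPy illustration; the layer lists would come from the mesh's neighborhood structure, which is assumed here):

```python
import numpy as np

def deform_segment(positions, layers, delta_vec, delta=2.0):
    """Move a vertex and its neighborhood layers by delta_vec, attenuated
    with a Gaussian in the layer index l: delta_vec * exp(-l^2 / (2 delta^2)).
    layers[l] holds the vertex indices on layer l (layer 0 is the vertex itself)."""
    positions = np.array(positions, float)
    delta_vec = np.asarray(delta_vec, float)
    for l, idx in enumerate(layers):
        positions[idx] += delta_vec * np.exp(-l**2 / (2 * delta**2))
    return positions
```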


The parameter $R(V_i)$, which determines the locality of the deformation, is chosen to be large in the initial iterations and is then gradually reduced to 1. Therefore, more vertices are initially involved in the deformation and more global features are used; in later stages, more local deformations are performed.


\begin{figure}[t]
\begin{center}
\includegraphics[width=2.5in]{images/full.png}\vspace{0mm}
\end{center}
\caption{Visualization of blood flow from outside heart during diastole. }
\label{fig:fullvel}
\end{figure}

\subsubsection{Valves Deformation And Interpolation}
The aortic and mitral valves are thin and move fast, and the CT data cannot currently capture these details at all frames. So, we need a way to add previously-created 3D models of the valves to each mesh in the sequence and have them open and close at the correct times. We start by fitting both valve models to the first mesh, in both their open and closed states, as seen in the CT data. Upon completion, we have four new meshes (open mitral, closed mitral, open aortic, closed aortic), each of which lines up exactly with its correct position in the first mesh of the sequence.

We seek similar collections of four properly-fitted valve meshes, in their open and closed states, for each frame in the sequence. Since the heart moves considerably over the course of the cardiac cycle, we need a way to automatically and realistically move the valves along with the rest of the heart, so that there are no improper holes or overlaps. The valves are deformed according to the following strategy: first, the parts of the valves connected to the heart are deformed together with the heart movements; then the already-deformed parts drag the rest of the valves to the appropriate positions.

Now, for each frame in our sequence, we have correctly fitted open and closed mitral and aortic valves. We next must determine the open/closed state of each valve at each frame. We know that in the normal cardiac cycle, the mitral valve is open during diastole, the aortic valve is open during systole, and both valves are closed for a very short time between these stages. Therefore, it is simple to decide for each frame whether each valve is open or closed.

We now have ten meshes that share one-to-one correspondence, with fitted valves that open and close at the correct frames. To perform an accurate simulation, we need more intermediate frames. While we could simply use linear interpolation to determine how each vertex moves from one frame to the next, we found that the resulting movement appears unnatural. So, we instead use periodic cubic spline interpolation, achieving far better results. We chose to generate a total of fifty meshes for the full animation. With this, we are ready to perform the fluid simulation.
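Because the one-to-one correspondence gives each vertex a trajectory over the cycle, the temporal upsampling reduces to a per-vertex periodic spline. A hedged sketch using SciPy's `CubicSpline` (the function name and frame layout are our own; the paper does not specify its spline implementation):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def upsample_vertices(frames, n_out=50):
    """frames: (n_frames, n_vertices, 3) vertex positions over one cardiac
    cycle. Returns n_out frames via periodic cubic spline interpolation."""
    frames = np.asarray(frames, float)
    t = np.linspace(0.0, 1.0, len(frames) + 1)
    closed = np.concatenate([frames, frames[:1]])  # repeat frame 0 to close the cycle
    spline = CubicSpline(t, closed, axis=0, bc_type='periodic')
    return spline(np.linspace(0.0, 1.0, n_out, endpoint=False))
```

The periodic boundary condition matches the first and last derivatives across the cycle boundary, which is what removes the kinks that plain linear interpolation produces.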

\subsection{Fluid Simulation}

The motion of an incompressible fluid is governed by the laws of conservation of momentum and mass. These two laws are modeled by the Navier-Stokes equations

\begin{equation}
\rho\left(\frac{\partial u}{\partial t} + u\cdot\nabla u\right)=-\nabla P + \mu\nabla^2 u, \qquad
\nabla \cdot u=0.
\end{equation}

\noindent Here, $\rho$ is the fluid density, $u$ is the 3D velocity vector field, $P$ is the scalar pressure field, and $\mu$ is the coefficient of viscosity. The first equation enforces conservation of momentum. The second equation states that the divergence of the velocity is zero everywhere, i.e., that there are no sources or sinks anywhere in the flow, thereby conserving mass.

Foster and Metaxas~\cite{foster96} were the first to develop a very fast method of solving the Navier-Stokes equations for graphics applications. They did so by applying a staggered grid across the domain and explicitly solving for the three components of velocity at the cell faces. They then used successive over-relaxation to solve for pressure and correct the velocities to maintain incompressibility.
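The staggered-grid pressure projection described above can be sketched in 2D as follows. This is a hedged simplification in the spirit of that scheme, not the authors' solver: the grid is tiny, walls are treated as closed, and the SOR parameters are illustrative:

```python
import numpy as np

def project(u, v, n_iters=2000, omega=1.8, h=1.0):
    """One pressure-projection step on a 2D staggered (MAC) grid.
    u: (N, M+1) x-velocities on vertical cell faces, v: (N+1, M) y-velocities
    on horizontal faces; solid walls at the domain boundary.
    Returns a divergence-free copy of (u, v)."""
    N, M = u.shape[0], v.shape[1]
    div = (u[:, 1:] - u[:, :-1] + v[1:, :] - v[:-1, :]) / h
    p = np.zeros((N, M))
    for _ in range(n_iters):                      # successive over-relaxation
        for i in range(N):
            for j in range(M):
                nb, cnt = 0.0, 0
                if i > 0:     nb += p[i - 1, j]; cnt += 1
                if i < N - 1: nb += p[i + 1, j]; cnt += 1
                if j > 0:     nb += p[i, j - 1]; cnt += 1
                if j < M - 1: nb += p[i, j + 1]; cnt += 1
                p[i, j] += omega * ((nb - h * h * div[i, j]) / cnt - p[i, j])
    u, v = u.copy(), v.copy()
    u[:, 1:-1] -= (p[:, 1:] - p[:, :-1]) / h      # correct interior faces only
    v[1:-1, :] -= (p[1:, :] - p[:-1, :]) / h
    return u, v
```

After the correction, the per-cell divergence vanishes (to solver tolerance), which is the incompressibility constraint $\nabla\cdot u = 0$.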

Our fluid-solid interaction system uses a ``boundary immersed in a Cartesian grid'' formulation, allowing for an easy treatment of complex geometries embedded in the computational domain, which can be especially advantageous when dealing with moving boundaries. Recent work that employs such a formulation includes~\cite{yokoi}, which applies the formulation of~\cite{sussman} to both graphics and medical simulations. Very recently,~\cite{zelicourt} implemented the approach of~\cite{gilmanov} to obtain a system that can efficiently deal with rather complex geometric data, such as a system of blood vessels.
% and~\cite{kadioglu}

The 3D mesh we generate from CT data is represented by a Marker Level Set (MLS), introduced and validated in~\cite{mihalef07}. Here, markers are placed on the boundary and are used to correct the level set at every time step. Since markers are placed only on the surface, the MLS has proven to be more efficient and significantly more accurate for complex boundaries. Additionally, our specific solver achieves efficiency by implementing an adaptive mesh refinement approach.

%, similar to the one proposed in~\cite{sussman}.

%The marker level set approach is similar to the particle level set method~\cite{enright}, but s

The heart models used here are embedded in a computational mesh of $100^3$ cells, on which the full Navier-Stokes equations with a viscous component are solved using a finite difference method. The blood is modeled as a Newtonian fluid, with viscosity set at 4~mPa$\cdot$s and density at 1060~kg/m$^3$, which are physiologically accepted values for a normal human heart. The heart geometric model is fed to the solver as a discrete set of meshes with point correspondences, which allows for easy temporal interpolation and for obtaining the velocity of the heart mesh at every point in time. The heart mesh and its velocity are rasterized onto the Eulerian grid as a marker level set and an Eulerian velocity, respectively. The MLS and the velocity are used to impose the appropriate boundary conditions in the fluid solver. A simulation of two complete cardiac cycles takes about four days on a machine with a Core 2 Quad processor and 8GB of RAM.


\subsection{Visualizations}

With the fluid velocity fields and level sets generated for each time step, we use ParaView~\cite{paraview} to visualize the simulations. We analyzed a healthy heart and two diseased hearts; below we describe our visualization methods and results.
\subsubsection{Blood Flow Velocity}

We visualized the velocity field within the entire heart, as seen in Figure~\ref{fig:velcompare}, left and middle columns. The velocity of the blood at a given point is represented by a cone pointing in the direction of the flow. The size of the cone increases linearly with the magnitude of the velocity. Additionally, we adjust the color of each cone by first setting its hue to 160 (blue), and then linearly lowering this value toward a minimum of 0 (red) as the velocity increases. We also visualized cross-sections of the heart to give a clearer picture of how the structures and trabeculae affect the blood flow; in these views we plot only the flow field very close to the heart surface, so that the visualization farther away does not obstruct the view of the trabeculae. Screenshots of this visualization can be seen in Figure~\ref{fig:velcompare}, right column.
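The size and hue mapping described above can be sketched as follows (the `base_size` scale and the clamping at a reference speed `v_max` are our assumptions; the text specifies only linear growth with speed and a 160-to-0 hue ramp):

```python
import numpy as np

def cone_glyph_params(velocity, v_max, base_size=1.0):
    """Map a velocity vector to cone size, hue, and direction.

    Size grows linearly with speed; hue starts at 160 (blue) and
    decreases linearly to 0 (red) as speed approaches v_max.
    """
    speed = np.linalg.norm(velocity)
    frac = min(speed / v_max, 1.0)         # clamp at reference speed
    size = base_size * (1.0 + frac)        # linear growth with speed
    hue = 160.0 * (1.0 - frac)             # 160 (blue) -> 0 (red)
    direction = velocity / speed if speed > 0 else np.zeros(3)
    return size, hue, direction
```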


Streamline visualizations are shown in Figure~\ref{fig:streamlines}. The color at a point on a streamline is chosen in the same way as for the cones described above: red streamlines signify fast-moving blood, while blue streamlines represent lower speeds. To disambiguate direction, we add a small number of arrows along each streamline indicating the direction of flow.

\begin{figure} [t]
  \begin{center}
    \begin{tabular}{c@{\hspace{1mm}}c}
   \includegraphics[width=2.25in] {images/diastole_stream_full.png} &
   \includegraphics[width=2.25in] {images/diastole_stream_apex.png} \\
   \footnotesize (a) &
   \footnotesize (b) \\
   \multicolumn{2}{c}{\includegraphics[width=2.25in] {images/systole_stream_apex.png}} \\

   \multicolumn{2}{c}{\footnotesize (c)} \\
    \end{tabular}
\caption{Visualization of streamlines within the healthy heart. (a) Streamlines of cardiac blood flow during diastole. (b) Blood flow streamlines near apex during diastole. (c) Blood flow streamlines during systole at the apex, against the trabeculae. }\label{fig:streamlines}
  \end{center}
\end{figure}

\subsubsection{Blood Residence Time}

In addition to the blood flow velocities, we wish to visualize the residence time of blood within the heart. This lets us quantitatively identify regions of the heart at greater risk of thrombus formation, as slow flow is known to be a significant predisposing factor.

In order to compute the residence time of blood, we must first determine which regions of the computational domain are interior to the heart. This region changes at every time step, due to the deformation of the heart. We find the interior by determining which cells lie within concave regions of the heart mesh. For each empty (non-solid) cell at index $(i,j,k)$, we check whether there exists a pair $(l_1,l_2)$ with $l_1,l_2>0$ such that both cells $(i+l_1,j,k)$ and $(i-l_2,j,k)$ are solid, both cells $(i,j+l_1,k)$ and $(i,j-l_2,k)$ are solid, or both cells $(i,j,k+l_1)$ and $(i,j,k-l_2)$ are solid. While this method does not guarantee that every cell within a concave region is found, in practice it accurately identifies the cells interior to the heart.
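A direct implementation of this axis-aligned concavity test might look like the following (the function name and the use of boolean NumPy masks are illustrative):

```python
import numpy as np

def is_interior(solid, i, j, k):
    """Check whether empty cell (i, j, k) lies in a concave region:
    along at least one axis there must be a solid cell on both sides.

    solid: 3D boolean array, True where a cell is occupied by the wall.
    """
    if solid[i, j, k]:
        return False  # solid cells are not interior fluid cells
    axes = [
        (solid[:i, j, k], solid[i + 1:, j, k]),   # x direction
        (solid[i, :j, k], solid[i, j + 1:, k]),   # y direction
        (solid[i, j, :k], solid[i, j, k + 1:]),   # z direction
    ]
    # Interior if, on any axis, solid cells exist on both sides.
    return any(neg.any() and pos.any() for neg, pos in axes)
```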


At the initial time step, ten thousand particles are generated randomly within the heart. At the beginning of each subsequent time step, new particles are generated within interior cells that are adjacent to exterior cells. Since nearly all such cells are just outside the valves, this allows fresh blood particles to enter the heart during diastole. While some particles are also generated outside the aortic valve, these never enter the heart and are completely removed during systole, so they do not meaningfully affect the results. Each new particle has an initial age of zero, and this age is incremented at every time step.
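The per-step seeding of fresh particles at the inlet cells could be sketched as below (one jittered particle per inlet cell is our assumption; the original seeding density per step is not specified):

```python
import numpy as np

def seed_particles(interior, near_exterior, h, rng):
    """Create age-zero particles in every interior cell that borders
    an exterior cell (in practice, the cells near the valve openings).

    interior, near_exterior: 3D boolean masks over the grid
    h: grid spacing; rng: NumPy random generator for in-cell jitter
    """
    cells = np.argwhere(interior & near_exterior)
    # One particle per inlet cell, jittered uniformly within the cell.
    positions = (cells + rng.random(cells.shape)) * h
    ages = np.zeros(len(positions), dtype=int)  # age in time steps
    return positions, ages
```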

At each subsequent time step, we determine a particle's velocity by trilinear interpolation of the computed fluid velocities at the cell centers. Each particle's new position is calculated using simple forward Euler time integration. Any particle that then occupies an exterior cell is removed from the system, and the average particle residence time within each cell can then be easily determined. We run this for four cardiac cycles and create a volumetric visualization using ParaView, as demonstrated in Figure~\ref{fig:residence}. Here, blue represents regions in which the average residence time is less than one cardiac cycle, green-yellow between one and three cycles, and red between three and four cycles.
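The advection step, trilinear interpolation of cell-centered velocities followed by a forward Euler update, can be sketched as follows (placing cell centers at $(i+0.5)h$ is an assumption about the grid layout):

```python
import numpy as np

def trilinear(vel_grid, p, h):
    """Trilinearly interpolate a cell-centered velocity field at point p.

    vel_grid: (nx, ny, nz, 3) velocities stored at cell centers
    p: query position; h: grid spacing (centers at (i + 0.5) * h)
    """
    g = p / h - 0.5                                   # index space
    i0 = np.floor(g).astype(int)
    i0 = np.clip(i0, 0, np.array(vel_grid.shape[:3]) - 2)
    f = g - i0                                        # fractions in [0, 1]
    v = np.zeros(3)
    for dx in (0, 1):                                 # 8 corner cells
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                v += w * vel_grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return v

def euler_step(p, vel_grid, h, dt):
    """Advance one particle with forward Euler, as in the text."""
    return p + dt * trilinear(vel_grid, p, h)
```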



\subsection{Discussion}

\subsubsection{Comparison with Diseased Hearts}
\begin{figure}[t]
  \begin{center}
    \begin{tabular}{c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c}
   \includegraphics[width=1.45in] {images/full_normal_diastole.png} &
   \includegraphics[width=1.45in] {images/full_normal_systole.png} &
   \includegraphics[width=1.45in] {images/normal_bottom_systole_nearwall.png} \\
   \footnotesize (a) &
   \footnotesize (b) &
   \footnotesize (c) \\
   \includegraphics[width=1.45in] {images/full_slowed_diastole.png} &
   \includegraphics[width=1.45in] {images/full_slowed_systole.png} &
   \includegraphics[width=1.45in] {images/slowed_bottom_systole_nearwall.png} \\
   \footnotesize (d) &
   \footnotesize (e) &
   \footnotesize (f) \\
   \includegraphics[width=1.45in] {images/full_abnormal_diastole.png} &
   \includegraphics[width=1.45in] {images/full_abnormal_systole.png} &
   \includegraphics[width=1.45in] {images/abnormal_apex_nearwall.png} \\
   \footnotesize (g) &
   \footnotesize (h) &
   \footnotesize (i) \\
    \end{tabular}
\caption{Velocity fields at various time steps for three different hearts. Top row: healthy heart; middle row: hypokinetic heart; bottom row: dyssynchronous heart. Left column: diastole; middle column: systole; right column: velocity field at the trabeculae during systole.}\label{fig:velcompare}
  \end{center}
\end{figure}


The simulation and visualization methods described above are applied to three different hearts. The first is a healthy heart with no visible medical problems. The second is a heart with simulated hypokinesis, where the motion of the heart walls is decreased at the apex by up to 50\%. The third comes from a patient who has undergone tetralogy of Fallot repair. This heart is known to suffer from right ventricle hypertrophy, significant dyssynchrony in the basal-midseptum of the left ventricle, and a decreased left ventricle ejection fraction of about 30\%.

The streamline visualizations provide detailed information on the trabeculae-blood interaction. Figure~\ref{fig:streamlines}(b), taken during diastole, demonstrates how the complex surface causes the flow to fill the empty spaces between the trabeculae. We can clearly see the development of many small vortices around the trabeculae, which previous methods of cardiac blood flow simulation have not even attempted to capture. Then, in Figure~\ref{fig:streamlines}(c), during systole, we see another example of how the blood is forcefully expelled out of the spaces between the trabeculae, rather than simply flowing directly towards the aortic valve as older methods with simpler meshes have suggested.

The estimated ejection fraction can be calculated from the particles to validate our simulation. During systole, we know exactly how many particles originally existed in the system and how many are expelled and deleted at each time step. To estimate the ejection fraction, we simply divide the total number of deleted particles by the original number of particles.
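As a sketch, the estimate is just a ratio of particle counts (the function name and the per-step deletion counts are illustrative):

```python
def estimated_ejection_fraction(deleted_per_step, n_initial):
    """Estimate the ejection fraction as the fraction of the initial
    particle population expelled through the valves during systole."""
    return sum(deleted_per_step) / n_initial
```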

We performed a partial validation by comparing the estimated ejection fraction to the true ejection fraction. The computed ejection fraction is approximately 45\% for the healthy heart, 40\% for the hypokinesis heart, and 30\% for the dyssynchronous heart. These values for the healthy and dyssynchronous heart are in agreement with the true values, so we have confidence in the rest of our results.

Velocity field visualizations are illustrated in Figure~\ref{fig:velcompare}. We can see that in the healthy heart, the inflow during diastole is strong and fairly uniformly distributed, circulating blood throughout the heart. During systole, the velocity field throughout the heart remains high, and fluid in the apex moves toward the valves. In Figure~\ref{fig:velcompare}(c), we see more detail of the interactions between blood flow and the trabeculae, as the blood is visibly expelled from these regions. However, in the heart suffering from hypokinesis, we find that the velocity field is much weaker toward the apex during both diastole and systole. In Figure~\ref{fig:velcompare}(f), we also see that the trabeculae no longer adequately expel blood as they do in the healthy heart. Finally, we see in Figure~\ref{fig:velcompare}(g)-(i) that the flow patterns in the heart with dyssynchronous wall movement appear abnormal, with overall lower velocities and even less fluid being pushed out from the trabeculae.

\begin{figure}[t]
  \begin{center}
    \begin{tabular}{c@{\hspace{1mm}}c}
   \includegraphics[width=2.25in] {images/normal_full_residence.png} &
   \includegraphics[width=2.25in] {images/slowed_full_residence.png} \\
   \footnotesize (a) &
   \footnotesize (b) \\
   \multicolumn{2}{c}{\includegraphics[width=2.25in] {images/abnormal_full_residence.png}} \\
   \multicolumn{2}{c}{\footnotesize (c)} \\
    \end{tabular}
\caption{Visualization of average particle residence time. Colors closer to red represent longer average residence time. (a) Healthy heart. (b) Heart with hypokinesis. (c) Heart with dyssynchronous wall movement.}\label{fig:residence}
  \end{center}
\end{figure}

We then compare the visualizations of the average particle residence times for each of the three simulations, as seen in Figure~\ref{fig:residence}. Each of these images was made at the same time step, at the start of systole, after four cardiac cycles. We find in Figure~\ref{fig:residence}(a) that, in the healthy heart, nearly the entire domain contains blood with an average residence time of less than three cycles, suggesting that the blood is not remaining stagnant and is turning over well between cardiac cycles. In contrast, Figure~\ref{fig:residence}(b) shows that in the heart suffering from hypokinesis, the average residence time is significantly higher near the walls, particularly near the hypokinetic apex. Finally, in Figure~\ref{fig:residence}(c), we find that a very significant region of the blood has a long residence time, suggesting that, due to the low ejection fraction and relatively low fluid velocities, blood is not being adequately circulated and remains stagnant near the walls, again particularly toward the apex of the heart.

All our results have been validated based on ejection fraction and on visual observation by experts. Note that there is currently no MRI-based method to validate our detailed results at this resolution.

\section{Conclusions}

In this chapter, we have proposed a learning scheme for fully automatic and accurate segmentation of cardiac tagged MRI data. First, we developed a semi-automatic system to achieve efficient segmentation with minimal user interaction. The learning-based framework then has three steps. In the first step, we learn an ASM shape model as the prior shape constraint. Second, we learn a confidence-rated complex boundary criterion from local appearance features, which is used to direct the movement of the detected contour under the influence of image forces. Third, we learn a classifier to detect the heart. This learning approach achieves higher accuracy and robustness than previously available methods. Since our method is entirely based on learning, the choice of training data is critical: if the segmentation method is applied to images at phases or positions that are not represented in the training data, the segmentation process tends to get stuck in local minima. Thus the training data need to be of sufficient size and range to cover all variations that may be encountered in practice.

We then described our new framework for generating detailed mesh sequences from CT data, and used them to run patient-specific blood flow simulations. We created several visualizations to reveal the interactions between the blood and the complex trabeculae of the heart wall, which has not been possible before, and used them to compare the flow fields of a healthy heart and two diseased hearts; such comparisons could be extremely useful to physicians in diagnosis and treatment planning. This is the first time blood flow fields have been compared at this level of resolution.

\bibliographystyle{abbrv}
\bibliography{refs}

%\input{referenc}
\end{document}
