\chapter{Theory}
\label{chapter:Theory}

This chapter presents the theoretical framework for medical ultrasound and is divided into four sections. The basic physics of ultrasound is introduced in Section 2.1. Section 2.2 then focuses on the image processing techniques used in this thesis. The tracking mechanism of the particle filter is explained in Section 2.3, and the issue of computational efficiency through GPUs is addressed in Section 2.4.

%===================== %
\section{Ultrasound Physics}

The audible range for humans extends from 16 Hz to 20 kHz; any frequency above this range is referred to as \textbf{ultrasound}. In this thesis, the term \emph{ultrasound} is used as shorthand for a \textbf{medical ultrasound machine}.

The propagation of sound in a medium is described by the \textbf{acoustic wave equation} \cite{acoustic_wave}. The velocity at which an acoustic wave traverses a medium is determined by the medium's density and stiffness: the greater the stiffness, the higher the velocity, which is why acoustic waves travel faster in solids than in liquids or gases. The speed of acoustic waves in human tissue at body temperature is conventionally taken to be $1540\,\mathrm{m/s}$. As a wave travels through a medium, its intensity and amplitude decrease. This phenomenon is called \emph{attenuation} and is the reason why echoes from deeper structures are weaker than those from superficial areas. The major source of attenuation in soft tissue is absorption, i.e.\ the conversion of acoustic energy into heat; other mechanisms are reflection, refraction and scattering.

When an acoustic wave encounters a boundary between two different media, part of it is reflected back towards the source as an echo, a phenomenon referred to as \emph{reflection}. The law of reflection holds in this scenario, i.e., the angle of incidence equals the angle of reflection. The remainder of the wave travels into the second medium (in this case, the biological tissue) but changes direction because the speed of sound differs between the two media; this phenomenon is called \emph{refraction}. In this case, the angle of incidence differs from the angle of transmission, and the amount of deflection grows with the difference in density and \emph{stiffness} of the two media. Scattering occurs when acoustic waves encounter a medium with a non-homogeneous surface: a small portion of the wave is scattered in random directions while most of the original wave continues along its original path. Figure 2.1 illustrates the various types of interaction between acoustic waves and media.
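The behavior at a boundary can be illustrated numerically. The following sketch (not part of the thesis implementation) computes the fraction of reflected intensity at normal incidence from the acoustic impedances $Z = \rho c$ of the two media, and the transmission angle via Snell's law; the impedance and speed values used below are only rough literature figures.

```python
import math

def intensity_reflection_coefficient(z1, z2):
    """Fraction of incident intensity reflected at a flat boundary
    at normal incidence: R = ((Z2 - Z1) / (Z2 + Z1))**2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

def refraction_angle(theta_i_deg, c1, c2):
    """Snell's law for acoustics: sin(theta_t)/sin(theta_i) = c2/c1.
    Returns the transmission angle in degrees, or None beyond the
    critical angle (total reflection)."""
    s = math.sin(math.radians(theta_i_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Approximate impedances in MRayl (illustrative values only):
z_soft_tissue, z_bone = 1.63, 7.8
print(intensity_reflection_coefficient(z_soft_tissue, z_bone))  # ≈ 0.43
print(refraction_angle(30.0, 1540.0, 1450.0))  # bending toward the normal
```

The large tissue--bone reflection coefficient is why bone casts strong echoes (and shadows) in ultrasound images.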

\begin{figure}[h]
	\centering
	\includegraphics[height=12cm]{figures/physics}
	\caption{Illustration of the various types of interaction of acoustic waves with media.}
	\label{ultrasound_physics}
\end{figure}

The generation of ultrasound waves is based on the so-called \emph{pulse-echo principle}. The source of the ultrasound wave is a piezoelectric crystal placed in the transducer. This crystal can transform an electrical current into acoustic waves and vice versa; thus, a single transducer can serve as both the source and the detector in ultrasound machines. This comparatively simple signal generation and detection makes ultrasound considerably less expensive than other imaging modalities, such as computed tomography and magnetic resonance imaging. Once the ultrasound wave is generated and travels through the medium, the crystal switches from sending to listening mode and awaits the return of ultrasound echoes; in practice, over 99\% of the time is spent ``listening''. This cycle is repeated several thousand times per second, which constitutes the \emph{pulse-echo} principle. Returning sound waves are assembled into an image based on their intensity and the time they take to return: the higher a detected wave's intensity, the higher the density of the tissue by which it was reflected, and a long return time corresponds to a deep structure. Using this principle, brightness mode (B-mode) ultrasound employs a linear array of transducers to simultaneously scan a plane in the body, which can then be viewed as a two-dimensional image on the screen.
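The relation between return time and depth is the pulse-echo range equation: the pulse travels to the reflector and back, so the depth is half the round-trip distance. A minimal sketch, assuming the conventional soft-tissue speed of sound:

```python
SPEED_OF_SOUND = 1540.0  # m/s, conventional value for soft tissue

def depth_from_echo_time(t_seconds):
    """Pulse-echo range equation: depth = c * t / 2, because the
    pulse covers the distance to the reflector twice."""
    return SPEED_OF_SOUND * t_seconds / 2.0

# An echo returning after 65 microseconds originates about 5 cm deep:
print(depth_from_echo_time(65e-6) * 100.0, "cm")  # → ≈ 5.0 cm
```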

Diagnostic ultrasound used for common medical imaging employs frequencies between 2 and 20~MHz. Lower frequencies penetrate deeper into tissue but yield poorer resolution; higher frequencies display more detail at better resolution in exchange for less depth penetration. Physicians choose the appropriate probe and machine based on this trade-off.
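The resolution side of this trade-off follows from the wavelength $\lambda = c/f$: a short sketch, again assuming the conventional $1540\,\mathrm{m/s}$ speed of sound, shows that the diagnostic range spans an order of magnitude in wavelength.

```python
SPEED_OF_SOUND = 1540.0  # m/s in soft tissue

def wavelength_mm(frequency_hz):
    """Wavelength = c / f, converted to millimetres."""
    return SPEED_OF_SOUND / frequency_hz * 1000.0

# Higher-frequency probes have shorter wavelengths and thus
# resolve finer detail:
for f_mhz in (2, 5, 10, 20):
    print(f"{f_mhz:2d} MHz -> {wavelength_mm(f_mhz * 1e6):.3f} mm")
```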

An ultrasound (US) machine typically consists of a transducer array, a processing unit for real-time image reconstruction and a display unit for visualization. A couple of exemplary ultrasound slices are shown in Figure 1.1. In this thesis, 3D ultrasound probes have been used. These consist of a motorized transducer array which records 2D images at various probe positions; the 2D images are then reconstructed into a single volume using the information about the probe position. The important characteristics to note in the images are that the voxel intensities of the needle shaft differ little from the background, while the needle tip is highlighted well by a flare. The flare at the needle tip can be explained by the higher degree of acoustic wave reflection due to the presence of a bevel. The orientation of the ultrasound beam plays a significant part in this procedure: if the beam orientation is the same as that of the needle, nothing will be seen in the image. To produce the best images and thereby the optimal results, it should therefore be ensured that, as far as possible, the beam orientation remains perpendicular to the needle shaft orientation.

Once an ultrasound image has been obtained, the useful information in it needs to be interpreted. For this, several image processing techniques are used, which are explained in the following sections.

%===================== %
\section{Image Processing} 

In this section, the various image processing techniques that are used to enhance and interpret the information in the US image are explained. 

\subsection{Filters}
In this study, two main filters have been used for pre-processing US images, the Gaussian and the Bilateral filters. 

\begin{enumerate}
  \item Gaussian Filter (GF)
  
  The GF is a standard filter in image processing whose impulse response is a Gaussian function. One of its most important advantages is that it exhibits no overshoot in response to a step input while at the same time minimizing the rise and fall time. The GF smooths a function, giving the highest weight to the central value. It was traditionally used in telecommunication systems to reduce the noise level at the receiver, and its weighted-average nature quickly established it as an effective white-noise filter. Its main disadvantage is that it does not take the local distribution of the function into account, so it cannot distinguish between noise and characteristic function values when smoothing.
  
  Mathematically, it is represented as shown in Equation \eqref{gaussian_filter}.
  
  \begin{equation}
  	g(x,y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}} , \label{gaussian_filter}
  \end{equation}
  where $x$ and $y$ are the image coordinates and $\sigma$ is the standard deviation of the GF.
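  A direct (non-separable) NumPy sketch of Equation \eqref{gaussian_filter} is shown below; function names are ours, and production implementations would exploit the kernel's separability for speed.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sample g(x,y) = exp(-(x^2+y^2)/(2 sigma^2)) / (2 pi sigma^2)
    on a (size x size) grid and normalise so the weights sum to one."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()

def gaussian_filter(image, size=5, sigma=1.0):
    """Convolve with the Gaussian kernel using edge-replicating padding."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i+size, j:j+size] * k)
    return out
```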
  
  \item Bilateral Filter (BF)
  
  The highlight of the BF as a pre-processing filter is that it preserves edges in an image while reducing noise. This ensures that the needle geometry is highlighted while artifacts such as echo and speckle are suppressed. With a photometric kernel, when the center of the filter rests on a pixel that is part of an edge, the neighbors on the other (contrasting) side of the edge have an insignificant impact on the center pixel's corresponding output pixel. When the center pixel is not part of an edge, the neighbors contribute equally and the noise is filtered out. Hence, edges maintain their sharpness while noise in other regions is filtered, resulting in image smoothing without loss of important information. The BF combines photometric and spatial kernels and is mathematically represented as follows \cite{Tomasi1998}:
  
  Spatial kernel: 
  \begin{align}
  	h(x)=k_{d}^{-1}(x)\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(\xi)c(\xi,x)d\xi . \label{BIL_spatial}\\
  	k_{d}(x)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}c(\xi,x)d\xi .
  \end{align}
  
  Photometric kernel: 
  \begin{align}
  	h(x)=k_{r}^{-1}(x)\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(\xi)s(f(\xi),f(x))d\xi . \label{BIL_photo}\\
  	k_{r}(x)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}s(f(\xi),f(x))d\xi .
  \end{align}
  
  The combined kernel, which is the bilateral filter, is the product
  of spatial kernel and photometric kernel and is described as: 
  \begin{align}
  	h(x)=k^{-1}(x)\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(\xi)c(\xi,x)s(f(\xi),f(x))d\xi . \label{BIL_full}\\
  	k(x)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}c(\xi,x)s(f(\xi),f(x))d\xi .
  \end{align}

  where $f(x)$ and $h(x)$ represent the input and output images; $c(\xi,x)$ and $s(f(\xi),f(x))$ measure the \emph{geometric} and \emph{photometric} closeness between the center point $x$ and a nearby point $\xi$; and $k_{d}(x)$, $k_{r}(x)$ and $k(x)$ are normalization terms that ensure the weights for all pixels add up to one.
   
\end{enumerate}
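A discrete single-channel sketch of the combined kernel in Equation \eqref{BIL_full} is given below (parameter names and values are illustrative, not those used in the thesis): each output pixel is a normalised sum of its neighbours, weighted by a spatial Gaussian $c$ and a photometric Gaussian $s$.

```python
import numpy as np

def bilateral_filter(image, size=5, sigma_d=2.0, sigma_r=0.1):
    """Discrete bilateral filter: combined spatial/photometric weighting
    with per-pixel normalisation (the k(x) term)."""
    pad = size // 2
    ax = np.arange(size) - pad
    xx, yy = np.meshgrid(ax, ax)
    c = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_d**2))  # spatial kernel
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = padded[i:i+size, j:j+size]
            # photometric kernel: penalise intensity differences
            s = np.exp(-((window - image[i, j]) ** 2) / (2.0 * sigma_r**2))
            w = c * s
            out[i, j] = np.sum(w * window) / np.sum(w)  # normalisation
    return out
```

With a small $\sigma_r$, pixels across a sharp step contribute almost nothing, which is exactly the edge-preserving behavior described above.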

\subsection{Edge Detectors} 

Edge detectors are kernels that respond to abrupt intensity changes in an image. Different edge detectors exist, each suited to a particular set of problems. In this thesis, they have been used to localize the region in which certain structures of interest might be present. A more comprehensive comparison of some of the most widely used kernels is given in \cite{edge_detectors}.

\begin{enumerate}
  \item Sobel Operator (SO)
  
  The SO is based on convolving the image with small, separable, integer-valued filters in the horizontal and vertical directions and is therefore computationally more efficient than most other detectors. On the other hand, the gradient approximation that it produces is relatively crude, in particular for high-frequency variations in the image. For this reason, its use in a high-noise imaging modality like US calls for caution. The operator is defined using two kernels, one for each coordinate axis \cite{gonzalez}, given in Equation \ref{SO_kernels}.
  
  \begin{align}
  \label{SO_kernels}
  G_x = \left( \begin{array}{ccc}
        -1 & 0 & +1 \\
        -2 & 0 & +2 \\
        -1 & 0 & +1 \end{array} \right) * I; \; 
  G_y = \left( \begin{array}{ccc}
        +1 & +2 & +1 \\
        0 & 0 & 0 \\
        -1 & -2 & -1 \end{array} \right) * I
  \end{align}
  
  \begin{equation}
	  	G = \sqrt{G_x^2 + G_y^2}
  \end{equation}
  \begin{equation}
	  	\theta = \arctan\left(\frac{G_y}{G_x}\right)
  \end{equation}
  where $I$ is the input image, $G_x$ and $G_y$ are the edge gradients along the $x$ and $y$ axes, respectively, $G$ is the overall edge gradient magnitude and $\theta$ is the gradient direction.
  
  \item Canny Detector
  
  The Canny detector uses a multi-stage algorithm to detect a wide range of edges in images \cite{canny}. The various stages involved in this detector are:
  \begin{itemize}
	  \item Noise reduction using a Gaussian filter.
	  \item Computation of the image intensity gradient using the Sobel operator.
	  \item Non-maximal suppression along the edge direction to facilitate edge thinning.
	  \item Edge tracking via hysteresis thresholding to remove outliers.
  \end{itemize}
  
\end{enumerate}
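A minimal NumPy sketch of the Sobel operator (valid-mode sliding-window filtering; helper names are ours):

```python
import numpy as np

# Sobel kernels for the horizontal and vertical gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def filter2d(image, kernel):
    """Valid-mode 3x3 sliding-window filtering (cross-correlation)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
    return out

def sobel(image):
    """Return gradient magnitude G and direction theta (radians)."""
    gx = filter2d(image, SOBEL_X)
    gy = filter2d(image, SOBEL_Y)
    return np.sqrt(gx**2 + gy**2), np.arctan2(gy, gx)

# A vertical intensity step produces a purely horizontal gradient:
img = np.zeros((5, 5))
img[:, 2:] = 1.0
G, theta = sobel(img)
print(G[1, 1], theta[1, 1])  # → 4.0 0.0
```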

\subsection{Hough Transform}

The Hough transform (HT) is an efficient algorithm formulated to detect specific geometric shapes in images \cite{SimpleHough}, and it is therefore well suited to the problem addressed in this thesis \cite{Barva2005}. In the HT, a line in the image is modeled using the equation given below,

\begin{equation}
	\label{hough_equation}
	{r} = {x} \cos \theta + {y} \sin \theta .
\end{equation}
where $r$ represents the algebraic distance between the line and the origin of the coordinate system, and $\theta$ is the angle of the vector orthogonal to the line, pointing toward the upper half-plane.

Each line can therefore be associated with a unique pair $(r,\theta)$; the plane spanned by these two parameters constitutes the \emph{Hough space}. Each point in the original image corresponds to a sinusoidal curve in the Hough space, and each line in the image is denoted by a single point there (in the ideal scenario). Using this formulation, lines (and other parametric curves) can be easily detected. This formulation is recognized as the Simple Hough Transform (SHT).
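The voting scheme behind Equation \eqref{hough_equation} can be sketched in a few lines; the accumulator resolution below is illustrative, not tuned.

```python
import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Simple Hough Transform: every edge pixel (x, y) votes for all
    (r, theta) bins satisfying r = x cos(theta) + y sin(theta)."""
    h, w = edge_map.shape
    r_max = int(np.ceil(np.hypot(h, w)))          # largest possible |r|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * r_max + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)  # one sinusoid per pixel
        r_idx = np.round(r).astype(int) + r_max      # shift: r may be negative
        acc[r_idx, np.arange(n_theta)] += 1
    return acc, thetas, r_max

# A vertical line x = 4: all ten pixels vote for the bin (r = 4, theta = 0)
edges = np.zeros((10, 10), dtype=bool)
edges[:, 4] = True
acc, thetas, r_max = hough_lines(edges)
print(acc[r_max + 4, 0])  # → 10
```

Collinear points thus accumulate in a single bin, which is why the method degrades gracefully under moderate noise: spurious edge pixels spread their votes thinly over many bins.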

The SHT has been very successful at detecting single curves in an image \cite{SimpleHough}, but it has no way to recognize a specific object (in image processing terms, a definite collection of curves). This shortcoming was addressed by Ballard \cite{GeneralizedHough}, who devised a template matching approach integrated with the SHT, termed the Generalized Hough Transform (GHT). It requires a sample of the object (a \emph{template}) whose set of curves it stores as a reference and matches against the detected curves, taking rotation and scaling into account. This process, though tedious, is able to detect arbitrary shapes and objects in a scene \cite{GeneralizedHough}.

In another variant devised by Matas et al. \cite{Matas1998}, a voting mechanism in the Hough space selects the best candidate curves in the image. This method takes the noise level into account, adjusts the threshold accordingly and forms a probability distribution for every detected curve, enabling the user to select the best matches while discarding the rest. This formulation is termed the Probabilistic Hough Transform (PHT).

While each variation of the Hough transform has its merits for particular problem statements, the GHT is not well suited to high-noise images (like US images), since the template itself can be corrupted with noise. This was confirmed during our tests, where the GHT performed considerably worse than the SHT and the PHT; the statistical evaluation is presented in Table 3.1. In this thesis, it was hence decided to use the PHT without any prior information about the expected path of the curve.

Using the pre-processing filters (GF and BF), much of the noise can be removed from the image; with the edge detectors, useful structures can be localized; and by employing the Hough transform, these structures can be identified. All of these techniques need to be used cohesively to obtain a working algorithm. In addition, a method is needed that can use the collected prior information in a constructive manner. For this purpose, a particle filter is used, the details of which are given in the following section.

%===================== %
\section{Particle Filter}

To model the noise, an initial set of ultrasound volumes is acquired and the variation of the voxel intensities is mapped across the volumes. The volume regions where the voxel intensity variation is lower are assumed to contain noise, and when the noise area grows beyond a certain threshold (for example, $0.4$ times the total area), a filtering operation is initiated.
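This noise-mapping step can be sketched as follows; the variance threshold and the synthetic data are purely illustrative, and the low-variation-equals-noise convention follows the text above.

```python
import numpy as np

def noise_fraction(volumes, var_threshold=25.0):
    """From a stack of co-registered volumes, compute the fraction of
    voxels whose intensity variation across the stack stays below a
    threshold; such low-variation regions are treated as noise here."""
    stack = np.stack(volumes, axis=0)    # shape: (time, z, y, x)
    variation = stack.var(axis=0)        # per-voxel temporal variance
    noise_mask = variation < var_threshold
    return noise_mask.mean(), noise_mask

# Synthetic volumes with only small random fluctuation: almost every
# voxel is classified as noise, so the 0.4-of-total-area trigger fires.
rng = np.random.default_rng(0)
volumes = [rng.normal(100.0, 2.0, size=(4, 8, 8)) for _ in range(10)]
fraction, mask = noise_fraction(volumes)
print(fraction > 0.4)  # → True
```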


The particle filter employed in this study is based on the sequential importance re-sampling algorithm proposed by Gordon et al. \cite{Gordon1993}. Since the image acquisition is continuous, a recursive background subtraction model was the first choice for segmentation, as it facilitates the removal of relatively stationary ``noise'' while keeping non-stationary structures (like needles) intact. Each voxel in the image is modeled using a mixture of $K$ Gaussian distributions. The probability that a certain voxel has an intensity of $\mathbf{x}_N$ at a given time $N$ can be represented as \cite{Kaewtrakulpong2001}

\begin{equation}
	p(\mathbf{x}_N) = \sum_{k=1}^{K} w_k \, \eta(\mathbf{x}_N; \mu_k, \Sigma_k) ,
\end{equation}
where $w_k$ is the weight of the $k$\textsuperscript{th} Gaussian component and $\eta(\mathbf{x}_N; \mu_k, \Sigma_k)$ is the normal distribution of the $k$\textsuperscript{th} component, given by

\begin{equation}
	\eta(\mathbf{x}; \mu_k, \Sigma_k) = \frac{1}{(2\pi)^{\frac{n}{2}} |\Sigma_k|^{\frac{1}{2}}} \, e^{-\frac{1}{2}(\mathbf{x}-\mu_k)^T \Sigma_k^{-1} (\mathbf{x}-\mu_k)} ,
\end{equation}
where $\mu_k$ is the mean and $\Sigma_k$ is the covariance of the $k$\textsuperscript{th} component, and $n$ is the dimensionality of $\mathbf{x}$.

The $K$ distributions are ordered by a parameter called the \emph{fitness value} ($w_k / \sigma_k$) and the first $B$ distributions are used as a model of the background, where

\begin{equation}
	B = \argmin_b\left(\sum_{j=1}^{b}w_j > T\right)
\end{equation}

The threshold $T$ is the minimum prior probability of background voxels in the image. A foreground voxel is defined as a voxel whose intensity deviates by more than $2.5$ standard deviations from every one of the $B$ background distributions, thereby achieving background subtraction. This is corroborated by Uherc\'{\i}k et al. \cite{Uhercik2010}, who showed that the higher the intensity variance of a particular voxel, the more likely it is that the voxel is part of a specific foreground structure. When the posterior probability of a voxel belonging to the background class exceeds $0.5$, it is segmented out. The Gaussian mixture model is estimated using the expected sufficient statistics update equations \cite{Kaewtrakulpong2001} shown in Equation \eqref{GMM_1}, which provide a good estimate of the foreground and background classes at the beginning.

\begin{equation} \label{GMM_1}
\begin{split}
	\hat{w}_k^{N+1} &= \hat{w}_k^{N} + \frac{1}{N+1}\left(\hat{p}(\omega_k | \mathbf{x}_{N+1}) - \hat{w}_k^{N}\right) ,\\
	\hat{\mu}_k^{N+1} &= \hat{\mu}_k^{N} + \frac{\hat{p}(\omega_k | \mathbf{x}_{N+1})}{\sum_{i=1}^{N+1}\hat{p}(\omega_k | \mathbf{x}_{i})} \left(\mathbf{x}_{N+1} - \hat{\mu}_k^{N}\right) ,\\
	\hat{\Sigma}_k^{N+1} &= \hat{\Sigma}_k^{N} + \frac{\hat{p}(\omega_k | \mathbf{x}_{N+1})}{\sum_{i=1}^{N+1}\hat{p}(\omega_k | \mathbf{x}_{i})} \left(\left(\mathbf{x}_{N+1} - \hat{\mu}_k^{N}\right) \left(\mathbf{x}_{N+1} - \hat{\mu}_k^{N}\right)^{T} - \hat{\Sigma}_k^{N}\right) .
\end{split}
\end{equation}
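A minimal single-voxel sketch of the updates in Equation \eqref{GMM_1}, using scalar intensities and $K$ univariate components (variable names are illustrative; note that the variance update uses the \emph{old} mean, as in the equation):

```python
import numpy as np

def update_gmm(weights, means, variances, post_sums, x, N):
    """One expected-sufficient-statistics step for a single voxel with
    scalar intensity x; post_sums holds the running sums of posteriors."""
    # responsibilities p(omega_k | x) under the current mixture
    lik = weights * np.exp(-0.5 * (x - means) ** 2 / variances) \
          / np.sqrt(2.0 * np.pi * variances)
    post = lik / lik.sum()
    post_sums = post_sums + post                 # running sums over samples
    delta = x - means                            # deviation from the OLD means
    weights = weights + (post - weights) / (N + 1)
    new_means = means + post / post_sums * delta
    variances = variances + post / post_sums * (delta ** 2 - variances)
    return weights, new_means, variances, post_sums

w = np.array([0.5, 0.5])
mu = np.array([0.0, 10.0])
var = np.array([1.0, 1.0])
sums = np.array([5.0, 5.0])
w2, mu2, var2, sums2 = update_gmm(w, mu, var, sums, x=0.2, N=10)
```

An observation near one component pulls that component's mean toward it while leaving the other mode essentially untouched, and the weights remain a valid distribution.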

Once a good approximation of the prior probabilities has been calculated, a stable background model can be estimated. The algorithm then moves on to the $L$-recent window version \cite{Kaewtrakulpong2001}, illustrated in Equation \eqref{GMM_2}, in which only the most recent $L$ samples are processed. This gives the highest priority to the most recent data, keeping the tracker responsive to changes in the environment.


\begin{align} \label{GMM_2}
\begin{split}
	\hat{w}_k^{N+1} &= \hat{w}_k^{N} + \frac{1}{L}\left(\hat{p}(\omega_k | \mathbf{x}_{N+1}) - \hat{w}_k^{N}\right) ,\\
	\hat{\mu}_k^{N+1} &= \hat{\mu}_k^{N} + \frac{1}{L} \left( \frac{\hat{p}(\omega_k | \mathbf{x}_{N+1})\mathbf{x}_{N+1}}{\hat{w}_k^{N+1}} - \hat{\mu}_k^{N}\right) ,\\
	\hat{\Sigma}_k^{N+1} &= \hat{\Sigma}_k^{N} + \frac{1}{L} \left( \frac{\hat{p}(\omega_k | \mathbf{x}_{N+1})\left(\mathbf{x}_{N+1} - \hat{\mu}_k^{N}\right) \left(\mathbf{x}_{N+1} - \hat{\mu}_k^{N}\right)^{T}}{\hat{w}_k^{N+1}} - \hat{\Sigma}_k^{N}\right) .
	\end{split}
\end{align}
In both Equation \eqref{GMM_1} and Equation \eqref{GMM_2}, $\omega_k$ represents the $k$\textsuperscript{th} Gaussian component and the $\hat{\cdot}$ notation denotes the estimated values of the respective quantities.
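The background/foreground decision described above (fitness ordering, selection of the first $B$ components, and the $2.5$-standard-deviation test) can be sketched for a single voxel as follows; the weights, means and deviations below are illustrative values only.

```python
import numpy as np

def is_foreground(x, weights, means, sigmas, T=0.7):
    """Order components by fitness w/sigma, keep the first B whose
    cumulative weight exceeds T as the background model, then flag x
    as foreground if it lies more than 2.5 sigma from every one of
    the B background components."""
    order = np.argsort(weights / sigmas)[::-1]      # descending fitness
    w, mu, s = weights[order], means[order], sigmas[order]
    B = int(np.searchsorted(np.cumsum(w), T)) + 1   # smallest b with cum. weight > T
    matches_background = np.abs(x - mu[:B]) <= 2.5 * s[:B]
    return not matches_background.any()

w = np.array([0.6, 0.3, 0.1])
mu = np.array([50.0, 120.0, 200.0])
s = np.array([5.0, 5.0, 20.0])
print(is_foreground(200.0, w, mu, s))  # → True  (far from both background modes)
print(is_foreground(52.0, w, mu, s))   # → False (within 2.5 sigma of the first)
```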

Once the final algorithm pipeline has been identified, it is imperative that the computational efficiency is maximized. Graphics Processing Units or GPUs have come a long way in fulfilling that role. The basics of GPU computation are explained in the following section.

%===================== %
\section{Computations on Graphics Processing Unit (GPU)}

Central Processing Units (CPUs) have already reached their speed and thermal power limits. Future CPU improvements are expected to be incremental in nature, with emphasis on larger memory caches; the architectural efficiency has reached a plateau based on the current understanding of semiconductor physics. Although multi-core processors have long since become mainstream in the consumer market, many operating systems and software vendors have yet to tap into the potential that this complex system opens up.

Graphics Processing Units (GPUs) were originally built to allow the fast computation of images in a frame buffer intended for output to a display. Modern computer graphics rendering is very intensive in terms of the number of polygons and pixels to be processed in a short time. Therefore, GPU architectures are designed for a highly parallel execution of code and a high data throughput. As illustrated in Figure 2.2, GPUs contain a multitude of Arithmetic Logic Units (ALUs), which can perform computations simultaneously.

\begin{figure}[h]
	\centering
	\includegraphics[height=5.5cm]{figures/gpu_arch}
	\caption{Comparison of CPU and GPU architectures. Image copyright - NVIDIA\textsuperscript{\textregistered}.}
	\label{gpu_arch}
\end{figure}

However, GPU architectures only address problems that can be expressed as data-parallel computations. Unlike multiple CPU cores, GPU frameworks require the same program to be executed on many datasets. This restriction to running the same code at the same time comes with unique advantages: because exactly the same hardware-level instructions are executed on all cores (GPU processing elements), sophisticated flow control is as unnecessary as large data caches, and memory access latency can be hidden by arithmetically demanding calculations. A graph showing the performance comparison between GPUs and CPUs in terms of teraflops is shown in Figure 2.3.

\begin{figure}[h]
	\centering
	\includegraphics[height=6cm]{figures/gpu_cpu}
	\caption{Graphs showing a comparison of computation capabilities of GPUs and CPUs from the leading vendors. Image copyright - NVIDIA\textsuperscript{\textregistered}.}
	\label{gpu_compare}
\end{figure}

Inspired by the tremendous computational advances of modern GPU cards, the scientific community has discovered the potential of general-purpose computing on graphics hardware in recent years. By exploiting their massively parallel architecture, demanding computations can potentially be sped up by several orders of magnitude, depending on the complexity of the algorithm and its implementation. 

The medical imaging community has also started to utilize GPUs for computationally intensive tasks (notably in the reconstruction phase); Men et al. \cite{GPU_CT} and Stone et al. \cite{GPU_MRI} established that both computed tomography and magnetic resonance imaging systems benefit from GPUs (an illustrative result from these papers is presented in Figure 2.4). Encouraged by these results, manufacturers have started to incorporate GPUs into more imaging systems. It is this widely available computational capability that we want to harness in this study, providing a highly efficient algorithm to facilitate robotic surgery.

\begin{figure}[h]
	\centering
	\includegraphics[height=6cm]{figures/gpu_medical}
	\caption{Time required for MRI and CT reconstruction on the GPU.}
	\label{gpu_medical}
\end{figure}

Although GPUs are computationally very advantageous, some basic principles have to be followed for optimal execution of code, which are given below: 

\begin{itemize}
	\item GPU processing elements, also known as cores or stream processors, are organized in groups (multiprocessors). Each core can execute a sequential thread, but all cores of a particular multiprocessor execute in a so-called SIMT (Single Instruction, Multiple Thread) fashion, i.e.\ all cores execute the same instruction at the same time. Therefore, forks such as conditional execution branches should be avoided. For instance, the two code blocks of an \emph{if}-statement will be executed sequentially: first the true block for all cores in which the condition evaluated to true, and then the false block for the remaining cores.
	
	\item For graphics rendering, single-precision floating point operations were traditionally sufficient; double-precision instructions were either not supported at all or came at the cost of bandwidth. Today, the peak double-precision throughput is usually half of the single-precision throughput. Hence, calculations should be performed in single precision where applicable.
	
	\item Regarding memory management, each core has a limited number of very fast registers, and all cores in a multiprocessor share a small software-managed data cache commonly termed as shared memory. With a low latency and high bandwidth, this indexable memory runs essentially at register speeds. Shared memory is also the only possibility to allow communication between cores of the same multiprocessor. Parallel implementations often make extensive use of shared memory for optimal execution patterns.
	
	\item Without a cache memory hierarchy, instructions in threads issuing a device memory operation may stall for hundreds of clock cycles due to the long memory latency. Thus, device memory access should be avoided as much as possible. Aligned memory accesses (thread 1 reads memory block 1, thread 2 reads memory block 2, etc.) can be served faster than random memory accesses (thread 1 reads block 7, thread 2 reads block 53, etc.). Write operations to device memory may be cached and are only guaranteed to be reflected after the end of the current kernel execution. As a result, device memory cannot be used as a means of inter-thread communication.
	
	\item A program that exploits GPU hardware will regularly copy data from host memory (regular RAM) to device memory (GPU card), execute a function (a so-called kernel) in parallel threads on the GPU, and copy resulting data back from the device to host memory, where it can be further processed.
\end{itemize}

Vendors of GPU hardware such as NVIDIA\textsuperscript{\textregistered} or ATI\textsuperscript{\textregistered} provide computing frameworks to facilitate general purpose GPU programming. In this work, the CUDA (Compute Unified Device Architecture) platform is employed for parallel calculation on graphics hardware. It allows developers to use C/C++ as high-level programming language and offers useful abstractions of hardware specifics as a minimal set of language extensions.