\documentclass{article}
\usepackage{graphicx, subfig}
\usepackage{amsmath,amsfonts,amsthm,amssymb,enumerate}
\newcommand{\bvec}[1]{\boldsymbol{#1}} % vectors in bold instead of with an arrow on top

\title{Scientific Visualization -- Medical Data Exploration}
\author{Florian Speelman \& Jannis Teunissen}
\begin{document}
\maketitle

\section{Introduction}
This report is about two visualization methods for medical datasets: contour extraction and volume rendering, both implemented with the VTK library.
We will compare these methods on two datasets, one obtained from a CT scan and the other from an MRI scan.

Although we only study two specific datasets, our findings on the advantages and disadvantages of the methods should generalize well to
any volumetric dataset in which a scalar property is observed.

First we discuss the contour extraction method, describe how we have implemented it and give example results. Then we do the same for
volume rendering. Finally we compare the two methods, which leads to our conclusions.

\section{Contour extraction}
Let us begin by defining what contour extraction means. Given some data $\bvec{X}$ with a scalar value $f(\bvec{x})$ for every point $\bvec{x}$ of $\bvec{X}$,
the contour $f(\bvec{x}) = c$ is the boundary between the regions $f(\bvec{x}) > c$ and $f(\bvec{x}) < c$; the value $c$ is called the isovalue. A boundary defined in this way is not necessarily connected, and some form of interpolation has to be used when $\bvec{X}$ consists of discrete sample points.
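The interpolation step mentioned above can be made concrete for a single edge between two sample points. The following sketch (a minimal illustration, not part of our VTK program) estimates where the contour crosses such an edge by linear interpolation:

```python
def contour_crossing(f0, f1, c):
    """Return the fractional position t in [0, 1] at which the contour
    f = c crosses the edge between two samples f0 and f1, or None if
    the isovalue does not lie between them."""
    if (f0 - c) * (f1 - c) > 0:  # both samples on the same side of c
        return None
    if f0 == f1:                 # degenerate edge: crossing position undefined
        return None
    return (c - f0) / (f1 - f0)  # linear interpolation

# The contour f = 500 crosses a quarter of the way from f0 = 400 to f1 = 800.
print(contour_crossing(400.0, 800.0, 500.0))  # 0.25
```

Contour extraction algorithms apply this edge test to every cell edge of the dataset and connect the resulting crossing points into line segments (2D) or triangles (3D).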

Perhaps the best-known algorithms for contour extraction are Marching Squares (for 2D data) and Marching Cubes (for 3D data). The basic idea is to compare each point with its neighbours to see whether the boundary lies between them; if so, the surrounding points are analyzed to determine how
to draw the boundary.
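The neighbour comparison at the heart of Marching Squares can be sketched as computing a 4-bit case index per cell; the index then selects a segment configuration from a precomputed table. This is only an illustration of the idea (our program relies on VTK's implementation):

```python
def marching_squares_case(corners, c):
    """Classify one 2D cell for Marching Squares.

    corners: scalar values at the four cell corners, ordered
    (bottom-left, bottom-right, top-right, top-left).
    Returns a case index 0..15; bit i is set when corner i lies above
    the isovalue c. Cases 0 and 15 need no contour segment; each of
    the other 14 cases selects a segment layout from a lookup table.
    """
    index = 0
    for i, value in enumerate(corners):
        if value > c:
            index |= 1 << i
    return index

# Both right-hand corners above the isovalue: a vertical-ish segment.
print(marching_squares_case((0.0, 2.0, 2.0, 0.0), 1.0))  # 6
```

Marching Cubes works the same way with an 8-bit index per voxel cell (256 cases), emitting triangles instead of line segments.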
\subsection{VTK implementation}
There are a few filters in VTK that can generate a contour from data. We decided to use \texttt{vtkContourFilter}, because it can identify
different types of input data and dispatch to specialized filters accordingly, for the best performance.

For our 3D datasets we want to generate multiple contours, or isosurfaces. We also want to alter the isovalue interactively, which means the extraction of the isosurface should take little time. The simplest way to achieve this is to sample the input data at a lower resolution, using \texttt{vtkImageShrink3D}.

A more sophisticated way to speed up the contour extraction is to build a tree structure for efficiently looking up the regions through which the contour passes. Such a feature is supported by \texttt{vtkContourFilter}, but in this case we have not used it. This is because we want to give the isosurfaces different colors and opacities, which is most easily done by having a separate contour extraction filter per isosurface. Furthermore, we could not detect a speedup when using it, so it might not yet be implemented in VTK for the type of input we used.

For interactive exploration, render time is important. Sampling the input at a lower resolution also speeds up the rendering, but it is nicer to apply decimation
to the extracted contour. Decimation reduces the number of triangles by removing the visually `least important' fraction, which speeds up rendering; the decimation itself, however, is quite costly, making it ill-suited for interactive control of the isovalue.

The output of \texttt{vtkContourFilter} consists of triangles, and the rendering of these triangles can be made faster by stripping them together. We use \texttt{vtkStripper} for this purpose, which greatly reduces render time while not being too expensive itself.
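The benefit of stripping is easy to quantify: a strip of $n$ connected triangles needs only $n + 2$ vertices, against $3n$ when the triangles are sent individually. A quick back-of-the-envelope sketch (assuming one ideal, maximal strip; real strips from \texttt{vtkStripper} are shorter):

```python
def vertices_sent(n_triangles, stripped):
    """Number of vertices submitted for n connected triangles, either as
    independent triangles (3 per triangle) or as one ideal triangle strip
    (2 starting vertices plus 1 new vertex per triangle)."""
    return n_triangles + 2 if stripped else 3 * n_triangles

n = 100_000
print(vertices_sent(n, False))  # 300000 vertices as separate triangles
print(vertices_sent(n, True))   # 100002 vertices as a single strip
```

So for large meshes, stripping approaches a threefold reduction in vertex traffic, which is why it pays off even though the stripping pass itself takes some time.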

VTK also provides a convenient way to ensure responsive interaction: \texttt{vtkLODActor}. These actors store multiple levels of detail
and automatically switch to a lower quality when the render time becomes too large.

The speedups obtained by these methods depend heavily on the data being visualized; if, for example, there are many small-scale details, then shrinking the input data could remove all of them, giving a great boost in performance, but at the cost of losing exactly those details.

The best way to explain the user interface is probably to display it; see figure \ref{fig:cfinterface}. We kept the interface quite simple. One reason is that constructing a complete GUI takes considerable effort, but just as important is that
a GUI should be designed for its users, and we, as users, like to use the command line. So we constructed a command line menu, with options to add/remove isovalues, set their color and opacity, add/remove a plane in the visualization and adjust the image quality. From this menu the user can choose to explore the data interactively,
for which we use \texttt{vtkRenderWindowInteractor} and a slider widget to adjust the isovalue of the first contour. To get a feeling for appropriate values, there is an on-screen display of a histogram of the scalars in the input data.

The basic visualization pipeline is then as follows:
\begin{enumerate}
 \item Read the dataset with an appropriate reader
 \item Possibly shrink the data (\texttt{vtkImageShrink3D})
 \item Extract contours (\texttt{vtkContourFilter})
 \item Create triangle strips (\texttt{vtkStripper})
 \item Map to primitives supported by hardware (\texttt{vtkPolyDataMapper})
 \item Assign the primitives to actors (\texttt{vtkLODActor})
 \item Add the actors, together with a histogram, to a renderer (\texttt{vtkRenderer})
 \item Add the renderer to the render window (\texttt{vtkRenderWindow})
 \item Set up a render window interactor (\texttt{vtkRenderWindowInteractor})
\end{enumerate}

\begin{figure}
\begin{center}
 \includegraphics[width = 12cm]{cfinterface.png}
 \caption{The interface to our contour extraction program, using a terminal and a slider widget.}
\label{fig:cfinterface}
\end{center}
\end{figure}
\subsection{Example output}
A CT scan allows for a clear distinction between bone and skin, because the absorption of x-rays turns out to be quite different for these tissues. However, it does not reveal the structure of soft tissue. An MRI scan does this quite well, but is not suited for detecting bone.

The first dataset is obtained from a CT scan of a head. Navigating through isovalues between zero and 3000 (easily done with the slider) gives some clues about what could be wrong with this person. There seems to be a hole in the bone structure around his nose, and there is some dense, disc-shaped object in his right eye socket, see figure \ref{fig:bonece}. We also included the possibility of drawing a plane in the visualization, which helps to clearly show the strange object, see figure \ref{fig:planece}.

\begin{figure}
\begin{center}
 \subfloat[]{\includegraphics[width = 6cm]{onlybonece.png}}
 \subfloat[]{\includegraphics[width = 6cm]{skinboneblue.png}}
 \caption{CT dataset. (a) The bone structure, here displayed as a white isosurface at value 1200, shows something like a hole on the right side of this person's nose.
 (b) Multiple isosurfaces are drawn: skin (500), bone (1200) and a blue surface at 2600. Something has gotten into the right eye socket, because the
blue color is not visible in his left eye socket.}
\label{fig:bonece}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
 \includegraphics[width = 10cm]{slicece.png}
 \caption{The head displayed together with a slice, using two isosurfaces (500, brown and 2600, red). A red disc, located in his right eye socket, sticks out of the slice. The slice is colored by hue.}
\label{fig:planece}
\end{center}
\end{figure}

When visualizing the second dataset, obtained from an MRI scan, we noticed that for practically all isovalues there are many separate surfaces. This makes it difficult to visualize anything clearly: a high-opacity surface hides all inner structure, while an almost transparent surface is hard to interpret,
see figure \ref{fig:mriprob}. Using multiple isosurfaces can create a somewhat interesting view, but the result is still hard to interpret, see figure \ref{fig:mrislice}.

\begin{figure}
\begin{center}
 \subfloat[]{\includegraphics[width = 6cm]{mri-trans.png}}
 \subfloat[]{\includegraphics[width = 6cm]{mri-opac.png}}
 \caption{MRI dataset. (a) An isosurface with an isovalue of 1000, drawn with an opacity of $0.2$. Many parts of the head are visible: bone, brain, skin, et cetera. This makes it hard to discern anomalies in the figure.
 (b) The same surface, but now drawn with an opacity of 1. Although easier to interpret, the figure now hides everything inside.}
\label{fig:mriprob}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
 \subfloat[]{\includegraphics[width = 6cm]{mri-ce.png}}
 \subfloat[]{\includegraphics[width = 6cm]{mri-slice.png}}
 \caption{(a) Multiple isosurfaces of increasing level, colored from red to white. All have a low opacity, or else only the outermost one would be visible. (b) Using a slice leads to much better results, an indication that volume rendering with a color transfer function is better suited for analyzing MRI scans.}
\label{fig:mrislice}
\end{center}
\end{figure}

\section{Volume rendering}
Volume rendering is the general term for techniques that create a 2D projection of a 3D dataset. There are many ways of doing volume rendering, but what they have in common is that they try to visualize the whole dataset. Therefore volume rendering is typically more costly than isosurface extraction, because only a small part of the dataset contributes to an isosurface.

If user interaction with the visualization is required, volume rendering becomes a lot slower than the visualization of an isosurface: a new projection of the whole dataset has to be created for every view, whereas a given isosurface can quickly be rendered in a new orientation.

We will use two types of volume rendering: volume ray casting and texture mapping. With volume ray casting, the color of a pixel is determined by shooting a ray through the volume, originating from the viewpoint and passing through that pixel. The volume is sampled at regular intervals along the ray, generating a shading for each sample location, and all sample points along the ray are then combined to determine the color of the pixel. This method gives good image quality but is slow compared to other volume rendering techniques. Recently, implementations have been created that run entirely on GPUs, using their massive parallelism. This was unfortunately not yet available in VTK, so we also used a much faster rendering technique, texture mapping.
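The per-ray combination of sample colors and opacities is usually done with front-to-back alpha compositing; a minimal single-ray sketch, independent of VTK (greyscale colors, and the sample colors and opacities are assumed to come from transfer functions):

```python
def composite_ray(samples):
    """Front-to-back alpha compositing of (color, alpha) samples along one ray.

    samples: list of (color, alpha) pairs ordered front to back, with
    color a greyscale value in [0, 1] and alpha an opacity in [0, 1].
    Returns the accumulated color and opacity of the pixel.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c  # contribution attenuated by what is in front
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:               # early ray termination: ray is nearly opaque
            break
    return color, alpha

# A half-transparent white sample in front of an opaque grey one.
print(composite_ray([(1.0, 0.5), (0.5, 1.0)]))  # (0.75, 1.0)
```

The early-termination test is one of the standard accelerations in ray casting: once a ray is effectively opaque, the samples behind it cannot change the pixel.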

Texture mapping makes use of the graphics hardware by first storing the input data as a 3D texture map. Polygonal slices orthogonal to the viewing direction are then generated and shaded according to the texture map, after which they are rendered by the graphics hardware. The advantage of this method is that it is typically a lot faster, but image quality suffers a bit, because the discrete set of planes often introduces aliasing.

\subsection{VTK implementation}
The VTK library contains a variety of mappers, each having its own advantages and disadvantages. For a ray cast mapper we decided on using the \texttt{vtkFixedPointVolumeRayCastMapper} class. The main alternative in the library is \texttt{vtkVolumeRayCastMapper}, but that mapper can only handle \texttt{unsigned char} or \texttt{unsigned short} data, which would have forced us to convert the MRI dataset.

For texture mapping the \texttt{vtkOpenGLVolumeTextureMapper3D} class was used, gaining much speed while sacrificing a small amount of image quality. This implementation is also less portable than the software ray casting; the VTK library specification states that this mapper supports NVIDIA and ATI graphics cards.

Just like with the contouring, there is the possibility to shrink the data with \texttt{vtkImageShrink3D}. This does increase performance, especially for texture mapping; for ray casting the speedup is small.

Having defined a mapper, the next step is to set properties on the volume defined by the data, enabling the mapper to produce an image. With a \texttt{vtkVolumeProperty} object we set the lighting properties and a set of three transfer functions, which together produce colors and opacities from the scalars in the dataset.

The first is the color transfer function, a \texttt{vtkColorTransferFunction}. This function maps the scalar values of the data to colors. The user supplies the program with a list of intensities and corresponding colors, and the function is then the linear interpolation
between those points. The library also supports more sophisticated interpolation through the \texttt{sharpness} and \texttt{midpoint} parameters, but for our purposes linear interpolation suffices. Different scalar values in the data correspond to different tissue types; with the color transfer function the user can give each of them a different color.
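The piecewise-linear interpolation that both the color and the opacity transfer functions rely on can be sketched as follows (single channel only; the color transfer function applies this per RGB component, and the control points shown are made-up examples):

```python
import bisect

def transfer(points, s):
    """Piecewise-linear transfer function.

    points: list of (scalar, value) control points sorted by scalar,
    e.g. the intensity/opacity pairs the user supplies. Outside the
    covered range the value is clamped to the nearest control point.
    """
    xs = [x for x, _ in points]
    if s <= xs[0]:
        return points[0][1]
    if s >= xs[-1]:
        return points[-1][1]
    i = bisect.bisect_right(xs, s)          # first control point to the right of s
    (x0, y0), (x1, y1) = points[i - 1], points[i]
    return y0 + (y1 - y0) * (s - x0) / (x1 - x0)

# Opacity ramp: fully transparent below 500, fully opaque above 1200.
ramp = [(500, 0.0), (1200, 1.0)]
print(transfer(ramp, 850))  # 0.5
```

A full transfer function set is then just a handful of such point lists, one per color channel plus the two opacity functions, which is also why they are easy to store in and load from a file.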

The scalar opacity transfer function is a \texttt{vtkPiecewiseFunction} and works in almost the same way as the color transfer function: from a list of scalar values with matching opacities, a function is constructed by linear interpolation between those points. The scalar opacity transfer function defines which tissue types the user sees; by changing it, the user could for example choose to look only at certain types of tissue.

The third is the gradient opacity transfer function, which converts the locally computed gradient magnitude to an opacity. This is a \texttt{vtkPiecewiseFunction}, just like the scalar opacity function. The gradient is measured as the amount the intensity increases over one unit of distance, typically one millimeter for medical datasets. This function can for example be used to highlight transitions between different materials in the dataset, by giving high opacity to high gradients.
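VTK computes the gradient internally; the following central-difference sketch only illustrates the quantity that feeds the gradient opacity function (a minimal, assumed implementation, not VTK's actual code):

```python
def gradient_magnitude(f, i, j, k, spacing=1.0):
    """Central-difference gradient magnitude at interior voxel (i, j, k)
    of a 3D scalar field f (indexable as f[i][j][k]); spacing is the
    voxel size in millimeters."""
    gx = (f[i + 1][j][k] - f[i - 1][j][k]) / (2 * spacing)
    gy = (f[i][j + 1][k] - f[i][j - 1][k]) / (2 * spacing)
    gz = (f[i][j][k + 1] - f[i][j][k - 1]) / (2 * spacing)
    return (gx * gx + gy * gy + gz * gz) ** 0.5

# Toy field increasing by 100 per voxel along the x axis only.
f = [[[100.0 * i for k in range(3)] for j in range(3)] for i in range(3)]
print(gradient_magnitude(f, 1, 1, 1))  # 100.0
```

Inside homogeneous tissue this magnitude is near zero, while at a bone/skin or tissue/air transition it is large, which is exactly what makes it useful as input for an opacity function that highlights material boundaries.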

With these transfer functions the program can calculate a color and opacity for each point in the dataset, where the total opacity is the product
of the scalar and gradient opacity transfer functions. Transfer function design is an art in itself; to make a truly good transfer function one needs not only knowledge of how volume rendering works, but also insight into the properties of the scan that produced the data, and considerable medical expertise.
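The product of the two opacity functions can be sketched as follows; the helper functions \texttt{scalar\_tf} and \texttt{gradient\_tf} are hypothetical stand-ins for the two \texttt{vtkPiecewiseFunction}s, with made-up threshold values:

```python
def total_opacity(scalar_opacity, gradient_opacity, s, g):
    """Total opacity at a sample point: the product of the scalar opacity
    transfer function evaluated at intensity s and the gradient opacity
    transfer function evaluated at gradient magnitude g."""
    return scalar_opacity(s) * gradient_opacity(g)

# Hypothetical example functions: bone-range intensities get opacity 0.8,
# and only strong gradients (material transitions) are kept fully visible.
scalar_tf = lambda s: 0.8 if s > 1100 else 0.1
gradient_tf = lambda g: 1.0 if g > 50.0 else 0.2

print(total_opacity(scalar_tf, gradient_tf, 1200.0, 80.0))  # 0.8 at a bone boundary
print(total_opacity(scalar_tf, gradient_tf, 1200.0, 10.0))  # suppressed bone interior
```

The effect of the product is that a material can be selected by intensity while its homogeneous interior is still suppressed, leaving mainly the boundaries visible.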

Just as with the contour extraction program, we have kept the interface as simple as possible by letting the user work in the terminal. First, the user can load, edit and save sets of transfer functions. For convenience, sets of transfer functions can also be given as a parameter when starting the program. It is also possible to go into interactive mode and look around to explore the data, using the \texttt{vtkRenderWindowInteractor}. Finally, the user can make a screenshot at a higher resolution than used in the window; this is especially useful when using the slow ray cast mapper.

The total volume rendering visualization pipeline is now as follows:
\begin{enumerate}
 \item Read the dataset with an appropriate reader
 \item Possibly shrink the data (\texttt{vtkImageShrink3D})
 \item Load a set of transfer functions from file (\texttt{vtkColorTransferFunction} and \texttt{vtkPiecewiseFunction})
 \item Define volume properties (\texttt{vtkVolumeProperty})
 \item Execute a volume rendering algorithm (\texttt{vtkOpenGLVolumeTextureMapper3D} or \texttt{vtkFixedPointVolumeRayCastMapper})
 \item Add the volume rendering to a renderer (\texttt{vtkRenderer})
 \item Add the renderer to the render window (\texttt{vtkRenderWindow})
 \item Set up a render window interactor (\texttt{vtkRenderWindowInteractor})
\end{enumerate}

\subsection{Example output}
We look at the same CT and MRI datasets as with the contour extraction. In figure \ref{fig:volume-ct} the CT dataset is shown with three different transfer functions. The first image gives the most global view. In this view the damage to the bones near the right eye is clearly visible; looking closer in the program also reveals a vertical cut in the skin around that area. In the second view mainly the bones are visible, because low intensities are given a very low opacity. The final example emphasizes the presence of the foreign object and the cut in the patient's skin. Because it is possible to load and edit transfer functions on the fly, a user can quickly switch between these views when working with the program.

An MRI scan is good at detecting structure in soft tissue, but making a good transfer function for the MRI dataset also seems to be a bit harder. Two examples are shown in figure \ref{fig:volume-mri}. The first image gives a general overview of the person in the scanner; even the water in the woman's breath is visible. The second image focuses on a smaller range of values to give a clear image of the brain structure; this view could for example help in examining a brain hemorrhage or looking for structural abnormalities.

\begin{figure}
\begin{center}
 \includegraphics[width = 4.5cm]{screenshot1-texture.png}\includegraphics[width = 4.5cm]{screenshot2-texture.png}\includegraphics[width=4.5cm]{screenshot3-texture.png}
 \caption{Three different sets of transfer functions on the same dataset, each showing different aspects of the dataset. The first image uses slightly transparent skin, giving a general look of the patient, with bones underneath.  The second picture illustrates how changing the opacity transfer function can give a very different view; here the emphasis is just on the bones. The third example transfer function gives a combination of skin and high intensity points.}
\label{fig:volume-ct}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
 \includegraphics[width = 6cm]{screenshot1-mri.png}\includegraphics[width = 6cm]{screenshot2-mri.png}
 \caption{Visualizing the MRI dataset with two sets of transfer functions. The first transfer function shows both skin and denser material,
while the second focuses on a smaller range of values.}
\label{fig:volume-mri}
\end{center}
\end{figure}


\section{Comparison of contour extraction and volume rendering}
Having experimented with both contour extraction and volume rendering, we now describe the relative advantages and disadvantages of the methods.
\begin{itemize}
 \item Performance: Contour extraction is computationally much cheaper than volume rendering, so for very high resolutions there is only one choice. Volume rendering time can be decreased by using the GPU, while this is generally not necessary for contour extraction. The polygonal surface created during contour extraction can be rendered more quickly through decimation, shrinking of the input data, or stripping. Performance is an important consideration for the visualization of medical data, because analysis works better when the user can inspect the data in real time. The easiest way to make volume rendering faster is to reduce the screen resolution.
 \item Interpretation of data: Volume rendering is much more powerful than contour extraction, as every contour can be simulated by some special transfer function. Using many contours, one can approximate volume rendering, but this is usually a lot harder than defining a color/opacity transfer function, and it does not allow for features like gradient opacity. We think that contour extraction is appropriate if the resulting isosurface is mostly connected (so it does not contain too many separate regions). This is typically the case for CT scans, where bone and skin are the two dominant visible tissue types. MRI scans are best visualized using volume rendering, or slices with an appropriate color map.

There are also applications where it is very convenient to have a polygonal model to work with, for example creating a 3D model of a face from scan data, or computing the volume of a tumour.
\item Ease of use: Contour extraction allows for interactively setting the contour value, which makes it easy to search for interesting values. The difficult aspect of volume rendering is creating the transfer functions. If nothing is known about the input data, we would prefer to start with contour extraction and slices, and only then create a transfer function.
\end{itemize}

\section{Conclusion}
Both volume rendering and contour extraction are powerful methods for the exploration of 3D medical data. Contour extraction has two clear advantages: performance and ease of use. It works quite well for data obtained from CT scans, because the isosurfaces mostly show the boundary between bone and skin. When details in soft tissue are important (and MRI scans are used), contours become harder to interpret. Then the more versatile volume rendering can give much better results, provided `good' transfer functions are used. Interactive exploration is possible with the texture mapping technique when supported graphics hardware is present, but ray casting turned out to be too slow for smooth interaction.
\end{document}
