\documentclass[journal,12pt]{IEEEtran}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{float}
\usepackage{cite}
\makeindex

\begin{document}
\title{Tracking Particles}
\author{
Wesley~Alvaro
\thanks{W. Alvaro is with the Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996 USA (e-mail: alvaro@eecs.utk.edu).} and
Johnathan~Sparger
\thanks{J. Sparger is with the Department of Nuclear Engineering, University of Tennessee, Knoxville, TN 37996 USA (e-mail: jsparger@utk.edu).}
}

% The paper headers
\markboth{Industrial Mathematics - M475}%
{Alvaro: Software Dependability Targeting Scientific Computing}
% make the title area
\maketitle

\begin{abstract}
In this report we considered two methods of extracting velocity data from image sequences of particles in a flow field. One method used cross-correlation to find similar sections in adjacent frames, and the other considered the paths and locations of individual particles in the images. Na\"{i}ve implementations of these algorithms were coded in the C\# programming language and tested with a 241-particle image sequence, each extracting velocities with about 90\% accuracy. The algorithms were qualitatively tested on various other flow fields, including real ultrasound imagery of bubble movement in water, to determine their limitations. The algorithms were found to be extremely versatile and often exceeded performance expectations in analysis of complicated image sequences and flow fields for which they were originally expected to be poorly suited.

\end{abstract}

\begin{IEEEkeywords}
Particle, Tracking, PIV, PTV, Image, Velocimetry
\end{IEEEkeywords}

\section{Introduction}
\IEEEPARstart{A}{s} computational technology advances, it is becoming more and more feasible to use visual data in the analysis of complex problems. One of the most useful data sets that can be obtained through analysis of digital imagery is the velocity field of a fluid or particle system as captured in a sequence of images. Because knowledge of velocity fields and particle movements can provide useful information and a unique perspective on a system, particle tracking techniques have found widespread use in subjects ranging from pipe flow to micro-organism mobility. For these reasons, the techniques involved in extracting such data from raw digital imagery have, not surprisingly, been the focus of much study in the scientific community.


In this report we explore two basic methods of extracting velocities from particle imagery which take fundamentally different approaches from one another in analyzing the visual data. One method, 2-Frame Cross Correlation, determines velocities and displacements by trying to locate similar pieces of the image in the next frame. Since this approach considers how similar two image segments are, the algorithm does not concern itself with a physical interpretation of the underlying structure in the image. This type of interrogation is referred to in this report as Particle Image Velocimetry (PIV), because it considers the conglomerate characteristics of the images. The second method of determining particle velocity explored herein is 4-Frame Particle Tracking Velocimetry (PTV). In stark contrast to 2-Frame PIV, 4-Frame PTV works by individually identifying and tracking every particle in the image, linking them through the frames by matching the particle images via extrapolated velocities.


In order to determine the characteristics of each method and discover the conditions under which each approach is most applicable, PIV and PTV algorithms were coded in the C\# programming language along with related image manipulation functions. This velocimetry package was finally fitted with a Graphical User Interface (GUI) for easy interrogation of image sequences.

The discussion that follows describes the theory and implementation of both velocity extraction algorithms, the static and adaptive thresholding algorithms used to prepare image sequences for analysis, and a quick overview of the GUI class structure. Following this, we present the results of a simple test to demonstrate that our algorithms can extract accurate velocity data from an image sequence and discuss what factors are expected to contribute or detract from calculation accuracy.



\section{Thresholding}
One major issue to overcome before particle tracking algorithms can do their work is noise. Noise can obscure particles, distort images, and make it impossible to derive meaningful velocities from an image sequence. There are also times when background textures, or even dynamic particles that are not of interest, coexist with and obscure the features being tracked. Often, the intensity or power of the extraneous noise or particles is either appreciably higher or appreciably lower than that of the particles of interest. In these fortunate cases, images can be cleaned up considerably by applying a threshold.

A threshold is some level of intensity above which (or below which) pixels will be considered ``on''. Applying a threshold makes all pixels below the threshold, for instance, black, and all pixels with intensities exceeding the threshold white. This turns complicated color or grayscale image data into much simpler, nuance-free binary data. Not only can this help decrease noise, it can also help eliminate uncertainties about particle boundaries caused by gradients at particle edges, by defining a specific intensity accepted as the edge.


There are two main types of thresholding: static and dynamic.  Each pixel in most computer images is composed of three color channels: red, green, and blue.  Each color channel is made up of eight bits, which encode an unsigned integer from 0 to 255.  In a color image, before thresholding can occur, the image must be converted to monochrome.  This process involves calculating a gray value for each pixel~\eqref{monochrome}.
\begin{equation}
G_{(x,y)} = 0.299\cdot P^r_{(x,y)} + 0.587\cdot P^g_{(x,y)} + 0.114\cdot P^b_{(x,y)}
\label{monochrome}
\end{equation}
Once the image has been transformed into monochrome, thresholding is well defined and much more accurate.
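The conversion in equation~\eqref{monochrome} takes only a few lines of code. The project itself is written in C\#; the sketch below (like the others in this report) is an illustrative Python translation, with the function name chosen for exposition only:

```python
def to_gray(r, g, b):
    # Luma weights from the monochrome conversion equation;
    # the result is clamped to the valid 0-255 range.
    return min(255, round(0.299 * r + 0.587 * g + 0.114 * b))
```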
\subsection{Static Method}
In static thresholding, a threshold value $\mathcal{T}$ from 1 to 254 is chosen.  All pixels with a value less than this are changed to black, and all pixels with a value greater than this are changed to white. There is no principled way to choose the threshold value, so several values may need to be tried before a suitable one is found.
\subsection{Dynamic Method}
In dynamic thresholding, an arbitrary value $\mathcal{T}$ from 0--255 is chosen to begin. A value close to the optimal threshold would be the best choice, but it may not be known. Pixels are collected into two groups $\mathcal{G}_1$ and $\mathcal{G}_2$.  The values of the pixels in each group are accumulated, and each sum is divided by the number of pixels in its respective group as in equation~\eqref{dynamic}.  These two resulting means are averaged to give $\mathcal{T}'$.  If $\mathcal{T}'$ is equal to $\mathcal{T}$, then the correct threshold value has been found.  If not, $\mathcal{T} := \mathcal{T}'$ and the process is iterated again.

\begin{eqnarray}
G_{(x,y)} \in\left\{\begin{array}{ll}
\mathcal{G}_1&\text{if}\ G_{(x,y)} > \mathcal{T}\\
\mathcal{G}_2&\text{if}\ G_{(x,y)} \le \mathcal{T}
\end{array}\right.\\
m_1 = \frac{1}{|\mathcal{G}_1|}\sum_{i=1}^{|\mathcal{G}_1|}\mathcal{G}_1^i\\
m_2 = \frac{1}{|\mathcal{G}_2|}\sum_{i=1}^{|\mathcal{G}_2|}\mathcal{G}_2^i\\
\mathcal{T}' = \left\lfloor\frac{m_1 + m_2}{2}\right\rfloor
\label{dynamic}
\end{eqnarray}
The process ends when the optimal threshold has been found; the static thresholding process is then applied with this value.  The result is an image that more accurately distinguishes between foreground objects (white pixels) and background objects (black pixels).
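The iteration described above can be sketched as follows (Python for brevity; the project code is C\#, and the function name is illustrative):

```python
import math

def dynamic_threshold(gray, t=128):
    # Iterate until T equals the floored average of the two group means.
    while True:
        g1 = [p for p in gray if p > t]   # foreground candidates
        g2 = [p for p in gray if p <= t]  # background candidates
        m1 = sum(g1) / len(g1) if g1 else 0.0
        m2 = sum(g2) / len(g2) if g2 else 0.0
        t_new = math.floor((m1 + m2) / 2)
        if t_new == t:
            return t
        t = t_new
```

For a bimodal image (say, half the pixels near 10 and half near 200) the fixed point lands between the two modes.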

\section{Particle Image Velocimetry}
\label{PIV}

\subsection{Information}
Another method employed to find flow velocities in this project is Particle Image Velocimetry (or PIV), specifically 2-Frame Cross-correlation PIV. This method tracks groups of particles by trying to locate the image of the group somewhere in the next frame \cite{wester}.

\subsection{Theory}
\subsubsection{Cross-correlation:}
The cross-correlation is a method of determining how similar two signals are to each other. It is defined for continuous functions in one dimension as:

\begin{equation}
(f\star g)(t) = \int_{-\infty}^\infty f^*(\tau)\cdot g(t + \tau)d\tau
\end{equation}
and for discrete functions in one dimension as:
\begin{equation}
(f\star g)[n] = \sum_{m=-\infty}^\infty f^*[m]\cdot g[n+m]
\end{equation}
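The discrete definition translates directly into a (slow) sliding dot product. As a sketch, assuming real-valued signals so the conjugate is a no-op (Python here; the project code is C\#):

```python
def xcorr(f, g):
    # Full discrete cross-correlation of two real 1-D signals:
    # slide f across g through every shift with any overlap.
    M, N = len(f), len(g)
    out = []
    for n in range(-(M - 1), N):          # shifts -(M-1) .. N-1
        s = 0
        for m in range(M):
            if 0 <= n + m < N:            # only overlapping samples contribute
                s += f[m] * g[n + m]
        out.append(s)
    return out
```

For the auto-correlation of [1, 2, 3] this reproduces the values worked out in Figure 1.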

Perhaps a more descriptive name for this operation is the sliding dot product, because that is precisely its effect. Shown below is an illustration of the cross-correlation of the array [1, 2, 3] with itself. (When a signal is correlated with itself, it is called an auto-correlation.)

\begin{figure}[H]
\[
\begin{array}{rrl}

m=&-2&
\left|\begin{tabular}{ccccccc}
 &  &  &  &  &  &  \\ 
\textbf{1} & \textbf{2} & \textbf{3} &  &  &  &  \\ 
 &  & \textbf{1} & \textbf{2} & \textbf{3} &  &  \\ 
\hline
 &  & 3 & 0 & 0 &  & =3
\end{tabular}\right.\\
\hline
m=&-1&
\left|\begin{tabular}{ccccccc}
 &  &  &  &  &  &  \\ 
 & \textbf{1} & \textbf{2} & \textbf{3} &  &  &  \\ 
 &  & \textbf{1} & \textbf{2} & \textbf{3} &  &  \\ 
\hline
 &  & 2 & 6 & 0 &  & =8
\end{tabular}\right.\\
\hline
m=&0&
\left|\begin{tabular}{ccccccc}
 &  &  &  &  &  &  \\ 
 &  & \textbf{1} & \textbf{2} & \textbf{3} &  &  \\ 
 &  & \textbf{1} & \textbf{2} & \textbf{3} &  &  \\ 
\hline
 &  & 1 & 4 & 9 &  & =14
\end{tabular}\right.\\
\hline
m=&1&
\left|\begin{tabular}{ccccccc}
 &  &  &  &  &  &  \\ 
 &  &  & \textbf{1} & \textbf{2} & \textbf{3} &  \\ 
 &  & \textbf{1} & \textbf{2} & \textbf{3} &  &  \\ 
\hline
 &  & 0 & 2 & 6 &  & =8
\end{tabular}\right.\\
\hline
m=&2&
\left|\begin{tabular}{ccccccc}
 &  &  &  &  &  &  \\ 
 &  &  &  & \textbf{1} & \textbf{2} & \textbf{3} \\ 
 &  & \textbf{1} & \textbf{2} & \textbf{3} &  &  \\ 
\hline
 &  & 0 & 0 & 3 &  & =3
\end{tabular}\right.
\end{array}
\]
\caption{Illustration of calculation of auto-correlation for the array [1, 2, 3].}
\end{figure}

From the illustration in Figure 1, we can see how the cross-correlation would be useful for comparing two signals. The maximum value occurs at the shift for which the signals are the most similar, which in the case of auto-correlation is a shift of zero places. By finding the index of the maximum value, we can determine how far the signal has shifted, or in the case of an image (which requires a 2D cross-correlation), how far and in what direction a particle, or group of particles, has moved.

As a real world example of how this property would be useful, consider a radar transmitter. A pulse ``A'' is emitted by the radar station and then the station listens for an echo from an object, recording a signal ``B''. If the pulse ``A'' is cross-correlated with ``B'', then the shift required to obtain the maximum value will be directly related to the time it took for the signal to reach the object, be reflected, and return, which can be used to determine how far away the object is.

\subsubsection{Convolution Theorem}
The cross-correlation is a computationally expensive operation. Fortunately, though, we can reduce some of the expense by making use of the convolution theorem.

\begin{equation}
\mathcal{F}\left\{f * g\right\} = \mathcal{F}\left\{f\right\}\cdot\mathcal{F}\left\{g\right\}
\end{equation}

The convolution theorem says the Fourier Transform of the convolution of two signals is equal to the product of the Fourier Transforms of the signals. Because the cross-correlation is a very similar operation to the convolution, only differing in that it requires a complex conjugate, this theorem also applies to it, though sometimes under the special name ``the cross-correlation theorem.''

This is important because, whereas the direct cross-correlation requires $O(N^2)$ operations, where $N$ is the number of samples in the signal, the Fast Fourier Transform (FFT) of the signal requires only $O(N \log N)$ operations, which allows us to calculate the cross-correlation as

\begin{equation}
\label{crosscorr}
f \star g = \mathcal{F}^{-1}\left\{ \overline{\mathcal{F}\left\{f\right\}}\cdot\mathcal{F}\left\{g\right\}
\right\}
\end{equation}

thereby using far fewer operations than in a direct evaluation.
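The frequency-domain route can be demonstrated with a toy transform. Here a naive $O(N^2)$ DFT stands in for a real FFT library such as the MathNet Iridium routines used by the project (which is written in C\#; this Python sketch is illustrative only). Note that the result is the \emph{circular} correlation, a point that becomes important later:

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform (stand-in for a real FFT).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_xcorr(f, g):
    # f (star) g = inverse transform of conj(F{f}) * F{g}.
    F, G = dft(f), dft(g)
    return [round(v.real) for v in idft([a.conjugate() * b
                                         for a, b in zip(F, G)])]
```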

\subsection{Methods}

\subsubsection{Subdivision of Frames}
\paragraph{subFrames1}
The first operation performed is the division of the frame into sub-frames. Each sub-frame is represented by a rectangle whose size is chosen by the user. These rectangles will be used to grab pieces of the images in the flow movie being analyzed. These rectangles are stored in a list called \textbf{subFrames1}, and are only calculated once, as the same subsections are used for each image in the movie. Two options are provided for the user to determine how the sub-frames are constructed. 

For the typical sub-frames, the user may choose between OVERLAP and NONOVERLAP. If NONOVERLAP is selected, the rectangles are created side by side like a grid across the width and height of the frame. If OVERLAP is selected, rectangles overlap each other in each dimension as specified by the user in a constant called Overlap, which is a percentage. OVERLAP is more expensive, but allows a finer mesh of velocities to be returned while still using a large area for each sub-frame, which can increase accuracy.

Where sub-frames along the edge of the image are concerned, the user can specify either MINIATURE or OVERLAP. MINIATURE causes whatever amount of the edge rectangle that is not contained within the image to be cut off, so that a sub-frame smaller than the typical sub-frame is considered at this location. OVERLAP causes the border rectangle to be shifted left, right, up, or down, so that its entire area is contained within the frame.
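The sub-frame construction might be sketched as follows (Python, not the project's C\# code; this version clips edge rectangles MINIATURE-style, and the overlap fraction is a hypothetical stand-in for the Overlap constant):

```python
def make_subframes(width, height, size, overlap=0.0):
    # Grid of (x, y, w, h) sub-frames covering a width x height frame.
    # overlap is the fraction of `size` shared by neighbors (0.0 = NONOVERLAP).
    step = max(1, int(size * (1.0 - overlap)))
    frames = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            # Edge rectangles are clipped to the image (MINIATURE behavior).
            frames.append((x, y, min(size, width - x), min(size, height - y)))
    return frames
```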

\begin{figure}[H]
\caption{(Coming Soon) explains these ideas visually.}
\end{figure}

\paragraph{subFrames2}
In order to determine the velocity of each sub-frame in \textbf{subframes1} for a particular frame in our flow movie, we need to search the next frame in the movie for the sub-frames to determine how far they have moved. For the sake of illustration, we will consider only one sub-frame from \textbf{subframes1}, which we will refer to as \textbf{sub1}, and a set of two images in series, Frame1, and Frame2.

After we use the information contained in \textbf{sub1} to obtain a piece of the total image in Frame1, which we will call piece1, we need to search Frame2 to see if we can find any part of it in piece2 -- that is similar enough to indicate that the particles imaged in piece1 moved to the location of piece2 in the time between the recording of Frame1 and the recording of Frame2.

But where should we look? It would be inefficient to do an expensive computation like the cross-correlation over the entire frame, so we instead ask the user to supply a guess at the maximum distance in pixels the particles in the movie are expected to move between frames. This displacement is called \textbf{maxVel}. Since it is not specified in what direction the particles are expected to move (and shouldn't be, because that is partly what we are trying to find out), we must search in a circular sweep around the location of \textbf{sub1}. This is accomplished by creating a set of rectangles, corresponding to \textbf{sub1}, which are the same size as \textbf{sub1}, but whose centers, measured relative to the center of \textbf{sub1}, are given by

\begin{eqnarray}
x_c = \mathit{maxVel} \cdot \cos\theta,& 0 \le \theta < 2\pi\\
y_c = \mathit{maxVel} \cdot \sin\theta,& 0 \le \theta < 2\pi
\end{eqnarray}

The sweep is performed by incrementing $\theta$ by a small, currently arbitrary amount (code is in place to prevent duplicate rectangles).

This new set of rectangles that covers the search area for \textbf{sub1} is stored as a list of rectangles (\texttt{list<rectangle>}), inside of a list of lists of rectangles (\texttt{list<list<rectangle>>}) called \textbf{subframes2}. In this way, each sub-frame from \textbf{subframes1} has a corresponding list of rectangles that dictate its search area in the next Frame stored as a list in \textbf{subframes2}. Since the sub-frames for which we get velocities, those dictated by rectangles in \textbf{subframes1}, don't change throughout the analysis of the movie, their search areas won't either. Therefore, \textbf{subframes2} only needs to be constructed one time. 
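A minimal sketch of the circular sweep, with duplicate suppression, might look like this (Python; the project code is C\#, and the step count is a hypothetical stand-in for the arbitrary $\theta$ increment):

```python
import math

def search_rects(cx, cy, w, h, max_vel, steps=16):
    # Rectangles the same size as sub1 whose centers sweep a circle of
    # radius max_vel around (cx, cy); duplicates after rounding are skipped.
    rects, seen = [], set()
    for k in range(steps):
        theta = 2 * math.pi * k / steps
        x = int(cx + max_vel * math.cos(theta)) - w // 2
        y = int(cy + max_vel * math.sin(theta)) - h // 2
        if (x, y) not in seen:
            seen.add((x, y))
            rects.append((x, y, w, h))
    return rects
```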

\begin{figure}[H]
\caption{(Coming Soon) explains this construction visually.}
\end{figure}

\subsubsection{Finding the Velocities}
For illustration, we will describe the determination of the velocity for one subsection of one frame of our movie, called piece1, whose location and size will be determined by a sub-frame from \textbf{subframes1}, which we will call \textbf{sub1}. The frame of the movie for which we will be calculating a velocity will be referred to as Frame1, and the next frame in the movie will be referred to as Frame2.

The first task is to perform the cross-correlation (xCorr) between piece1 and all the regions in the corresponding search area, as dictated by the corresponding list of rectangles in \textbf{subframes2}. 

\paragraph{FFT2 and iFFT2}
Recall that we will be computing the cross-correlation by method \eqref{crosscorr}:

\[
f \star g = \mathcal{F}^{-1}\left\{ \overline{\mathcal{F}\left\{f\right\}}\cdot\mathcal{F}\left\{g\right\}
\right\}
\]
 
Our program implements a math library called MathNet Iridium which contains a method for performing the 1D FFT, which is to say, it can transform a single row or column vector into the frequency domain. Our data, however, is in the form of 2D arrays of pixels obtained from our images Frame1 and Frame2 by use of our sub-frame rectangles.

The first step is to turn the pixel data into a complex number format also supplied by MathNet. Next we must use the 1D FFT algorithm to perform the 2D FFT. This is accomplished by first taking the 1D FFT of every row of data, storing the complex result, and then taking the 1D FFT of every column.

The inverse Fourier transform can be accomplished similarly by simply taking the 1D inverse FFT of each row, and then taking the 1D inverse FFT of each column.
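The row-then-column construction of the 2D transform can be sketched as follows (Python; the project uses MathNet's 1D FFT from C\#, and here a naive DFT stands in for it):

```python
import cmath

def dft(x):
    # Naive 1-D transform standing in for the library FFT.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft2(matrix):
    # Row pass: 1-D transform of every row.
    rows = [dft(row) for row in matrix]
    # Column pass: 1-D transform of every resulting column.
    ncols = len(rows[0])
    cols = [dft([rows[i][j] for i in range(len(rows))]) for j in range(ncols)]
    # cols[j][i] holds element (i, j) of the 2-D transform; transpose back.
    return [[cols[j][i] for j in range(ncols)] for i in range(len(rows))]
```

The 2D transform of an impulse at the origin is a constant array, a quick sanity check on the separability argument.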

This is the most computationally expensive section of the code. Unfortunately, though, there is not much room for optimization due to the fact that a third party FFT algorithm was required.

\paragraph{XCorr}
Before performing the cross-correlation on the data we must consider two consequences of how we are performing the operation. Fourier transforms work on periodic data. As shown in Figure 4 below, when we take the Fourier transform of something like $A$, what we are really doing is taking the Fourier transform of $B$, because the basis functions upon which the Fourier transform is based, sines and cosines, are infinite, and not localized. 

\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.5]{periodicity.eps}
\caption{Periodicity in Fourier Transform.}
\end{center}
\end{figure}

This is important, because it means that a cross-correlation of a piece of Frame1, piece1, with a piece of Frame2, piece2, will be like sliding piece1 across an infinitely repeating quilt made of piece2.

The second problem is that the FFT method only slides forward. This means that the two arrays start lined up at zero shift, and the shift is only increased, so we lose information about the similarity of the signals for negative shifts.

See Figure~\ref{fft}.

\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{frames.eps}
\[
\begin{array}{rrl}
m=&
-2^\dagger&
\left|\begin{tabular}{cccccccccc}
 & &  &  &  &  &  &  & \\ 
 & \textbf{1} & \textbf{2} & \textbf{3} &  &  &  & & \\ 
1 & 2 & 3 & \textbf{1} & \textbf{2} & \textbf{3} & 1 & 2 & 3 \\ 
\hline
& 2 & 6 & 3 &  &  &  & & &=11
\end{tabular}\right.\\
\hline
m=&
-1^\dagger&
\left|\begin{tabular}{cccccccccc}
 & &  &  &  &  &  &  & \\ 
 & & \textbf{1} & \textbf{2} & \textbf{3} &  &  &  & \\ 
1 & 2 & 3 & \textbf{1} & \textbf{2} & \textbf{3} & 1 & 2 & 3 \\
\hline
& & 3 & 2 & 6 &  &  & & &=11
\end{tabular}\right.\\
\hline
m=&0&
\left|\begin{tabular}{cccccccccc}
 & &  &  &  &  &  &  & \\ 
 & &  & \textbf{1} & \textbf{2} & \textbf{3} &  &  & \\ 
1 & 2 & 3 & \textbf{1} & \textbf{2} & \textbf{3} & 1 & 2 & 3 \\
\hline
& &  & 1 & 4 & 9 &  & & &=14
\end{tabular}\right.\\
\hline
m=&1&
\left|\begin{tabular}{cccccccccc}
 & &  &  &  &  &  &  & \\ 
 & &  &  & \textbf{1} & \textbf{2} & \textbf{3} &  & \\ 
1 & 2 & 3 & \textbf{1} & \textbf{2} & \textbf{3} & 1 & 2 & 3 \\
\hline
& &  &  & 2 & 6 & 3 & & &=11
\end{tabular}\right.\\
\hline
m=&2&
\left|\begin{tabular}{cccccccccc}
 & &  &  &  &  &  &  & \\ 
 & &  &  &  & \textbf{1} & \textbf{2} & \textbf{3} & \\ 
1 & 2 & 3 & \textbf{1} & \textbf{2} & \textbf{3} & 1 & 2 & 3 \\
\hline
& &  &  &  & 3 & 2 & 6 & &=11
\end{tabular}\right.
\end{array}
\]\\
$^\dagger$ : Not~evaluated\\
Result = [14,11,11], Desired Results = [3, 8, 14, 8, 3]
\caption{Illustration of the results of the auto-correlation of the 1D array [1 2 3] via FFT.}
\end{center}
\label{fft}
\end{figure}

We can see, then, that the cross-correlation derived from the FFT is neither complete nor physical. It is incomplete because it lacks information about leftward shifts. It is not physical because it is based on a distorted version of space where one piece of space periodically extends to infinity. Just as we do not experience the sensation of leaving a room only to enter the same room from the other side, neither do particles teleport back to the other side of an image subsection when they hit a boundary.

The solution to this problem was to zero pad the arrays. See the figure below.

\begin{figure}
\begin{center}
\[
\begin{array}{ccc}
	\left[\begin{array}{cc}
		1 & 2\\
		3 & 4
	\end{array}\right]&

	\begin{array}{c}
		\star\\
		\mathit{fft}
	\end{array}&

	\left[\begin{array}{cc}
		1 & 2\\
		3 & 4
	\end{array}\right]\\
	&\Downarrow&
\end{array}
\]
\[
\hspace*{2cm}
\left[\begin{array}{cc}
30 & 28\\
22 & 20
\end{array}\right] \Leftarrow \textrm{WRONG}
\]
\vspace*{1cm}
\[
\begin{array}{ccc}
\left[\begin{array}{cccc}
1 & 2 & 0 & 0 \\ 
3 & 4 & 0 & 0 \\ 
0 & 0 & 0 & 0 \\ 
0 & 0 & 0 & 0
\end{array}\right]&  
 
\begin{array}{c}
\star\\
\mathit{fft}
\end{array}&  
 
\left[\begin{array}{cccc}
0 & 0 & 0 & 0 \\ 
0 & 0 & 0 & 0 \\ 
0 & 0 & 1 & 2 \\ 
0 & 0 & 3 & 4
\end{array}\right]\\

&\Downarrow&
\end{array}
\]
\[
\hspace*{2cm}
\left[\begin{array}{cccc}
0 & 0 & 0 & 0 \\ 
0 & 4 & 11 & 6 \\ 
0 & 14 & 30 & 14 \\ 
0 & 6 & 11 & 4
\end{array}\right] \Leftarrow \textrm{RIGHT!}
\]
\caption{This figure illustrates that the matrices must be padded before applying the $\mathit{fft}$ operation for the correct result.}
\end{center}
\end{figure}

By zero padding the arrays to twice the length of the array (actually the next power of 2 above twice the length of the array due to FFT requirements), we can recover the lost negative shifts and make sure no unnatural values creep into our results. And, since we know the cross-correlation of a vector of length $M$ with a vector of length $N$ should return a vector of length $(M+N-1)$, it is easy to extract the useful data from the zero-padded cross-correlation.
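In one dimension, the padding and unwrapping can be sketched as follows (Python; a naive DFT again stands in for the project's C\# FFT, and the power-of-2 padding mirrors the FFT requirement noted above):

```python
import cmath

def dft(x, inverse=False):
    # Naive DFT/IDFT standing in for a radix-2 FFT.
    N, s = len(x), (1 if inverse else -1)
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def full_xcorr(f, g):
    M, N = len(f), len(g)
    size = 1
    while size < 2 * max(M, N):   # next power of 2 above twice the length
        size *= 2
    fp = f + [0] * (size - M)     # zero pad both inputs
    gp = g + [0] * (size - N)
    F, G = dft(fp), dft(gp)
    c = [round(v.real) for v in dft([a.conjugate() * b for a, b in zip(F, G)],
                                    inverse=True)]
    # Negative shifts wrap to the end of the circular result; unwrap them
    # and keep the M+N-1 meaningful samples.
    return c[size - (M - 1):] + c[:N]
```

With padding, the FFT route recovers the full result [3, 8, 14, 8, 3] for the auto-correlation of [1, 2, 3], rather than the circular [14, 11, 11].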

Another useful consequence of zero padding is that we can now compare arrays of different sizes with the cross-correlation. As long as the zero padded arrays have the same dimensions, we can mix and match the sizes of our pieces (the parts of the movie frames) in any way we please. This is important because, as you recall, we created some sub-frames with smaller than average size due to the fact that they were at the boundary of the frame.

\paragraph{Interpretation of XCorr}
So, remember that we are to search an area of Frame2 for an image matching piece1 from Frame1. The process now is to grab piece1 out of Frame1 using our rectangle sub1. Next we grab pieces of Frame2 from the search area using the search area rectangles we defined for sub1 in a list in subFrames2. Now we perform the cross-correlation between sub1 and each of the pieces of the search area, noting the maximum value of the cross correlation and its indices (the shift). The largest cross-correlation value indicates the best match, and so the area (piece) from Frame2 that yielded the largest maximum cross-correlation value is selected as the most probable destination of the particles from Frame1.

Since the matching piece is defined by one of the search area rectangles, which we will call sub2, we know the location of search area our particles moved to. Knowing that our particles have moved from the rectangle sub1 to somewhere inside the rectangle sub2 gives us a coarse measure of displacement. Now we can examine the indices of the maximum value in the cross-correlation to determine how far within sub2 the particle image has shifted. 

The zero displacement index (zero based) for the cross-correlation of a vector of length M with a vector of length $N$ is $(M+N-1)-N = M-1$. Zero displacement in our situation means that the upper left corners of a 2D array are aligned. Using this rule we can easily find the shift from zero displacement in sub2, which we can add into the displacement of sub2 from sub1 to find the total displacement of our particle image.
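A minimal sketch of the peak search and index-to-shift conversion, assuming a full 2D cross-correlation array and the row/column dimensions of piece1 (Python; names are illustrative, not the project's C\# identifiers):

```python
def displacement(xcorr2d, m_rows, m_cols):
    # Locate the cross-correlation peak and report its shift from the
    # zero-displacement index, which is (M-1) along each axis.
    best, br, bc = None, 0, 0
    for r, row in enumerate(xcorr2d):
        for c, v in enumerate(row):
            if best is None or v > best:
                best, br, bc = v, r, c
    return br - (m_rows - 1), bc - (m_cols - 1)
```

Applied to the auto-correlation from the zero-padding figure, the peak sits at zero shift, as expected.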

\paragraph{Confidence}
As a means of assessing the confidence of the match between piece1 and piece2, the maximum value of the cross correlation of piece1 and piece2 is divided by the auto-correlation maximum of piece1 with itself. Since we, ideally, expect to find an exact match to piece1 somewhere in the search region, and since the maximum value we can obtain from the cross-correlation, assuming we use thresholded values, is the autocorrelation maximum, this serves as a good indicator of how complete the match was between piece1 and piece2. This can be used later to discriminate between values expected to be more accurate and values expected to be less accurate.
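The confidence ratio can be sketched directly from the definition above (Python, with a direct sliding-dot-product peak for brevity; the project computes these peaks via its FFT machinery in C\#):

```python
def confidence(piece1, piece2):
    # Peak cross-correlation of piece1 with piece2, normalized by
    # piece1's auto-correlation peak; 1.0 indicates a perfect match
    # (assuming thresholded, binary-valued pixels).
    def peak(f, g):
        M, N = len(f), len(g)
        return max(sum(f[m] * g[n + m] for m in range(M) if 0 <= n + m < N)
                   for n in range(-(M - 1), N))
    return peak(piece1, piece2) / peak(piece1, piece1)
```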

\section{Particle Tracking Velocimetry}

\subsection{Information}
One of the methods used to track the movement of particles in this project is Particle Tracking Velocimetry (or PTV).  This method tracks individual particles in each frame, attempting to match them up with their probable selves in a set of four frames: the current frame, two subsequent frames, and the preceding frame. PTV attempts to track and link particles through neighboring frames using a strong assumption: that the particle path will be approximately linear throughout the four linked frames. With reasonable frame rates, however, this assumption does not prevent the method from being effective in a wide range of situations.

\subsection{Theory}
The algorithm attempts to detect each particle in four sequential frames in order to chart its path.  This method allows the user to follow individual particles, creating a path for each particle as it moves in and out of the field. Potential future particle locations are found for an original particle by looking in a search area around the particle.  The trajectory between each candidate pair of particles is extrapolated to the next frame in order to confirm that the potential particle is actually the original particle. This is accomplished by using a much smaller secondary search area. The algorithm also tries to look into the past to confirm the particle's path.

If particles are moving especially sporadically, the algorithm can encounter problems with linking the particles from frame to frame, given that it requires particles to be collinear in at least three frames. This can usually be solved by increasing the frame rate. 

\begin{figure}[H]
\begin{center}
\includegraphics{ptv.eps}
\caption{This shows how the PTV algorithm works on an example particle in a set of images.}
\end{center}
\end{figure}

\subsection{Models}
In order to apply this method, the user must have at least 4 images representing chronological movement of the particle in a medium.  The user must also have an initial guess as to the limit of the particles' speed.  The method will approximate particle velocities for particles within a frame $i$. This means that a user must have frames $i$, $i+1$, $i+2$, and $i-1$ available for analysis \cite{vuk}. 

\subsection{Methods}
For best results from the PTV method, it is suggested that the images used first be processed by an appropriate thresholding algorithm.  This will yield the best outcome in the detection of the particles in the frame because it will reduce overlap of particles and false inclusion of pixels into the particle set.  This is, however, not a requirement, as it will be seen in the following section.

\subsubsection{Gathering Particles in Frame $i$}
White pixels are recognized as particles by the program.  The currently implemented version of the program inspects only the red color channel for white pixels in the image.  This is why it is best to threshold the image so that the program can easily find particles and so that false positives are not a problem.

\paragraph{Collecting Pixels}
The program iterates over the entire image, collecting contiguous areas of white pixels as particles in the frame. When a white pixel is encountered, the program checks all of the previous neighboring pixel locations for other white pixels, and if found, the current pixel is added to the previously found set of pixels.
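The pixel-collection step can be sketched as a connected-component search. The sketch below uses an explicit flood fill rather than the single-pass neighbor-merging described above, and is in Python rather than the project's C\# (a simplification, but it produces the same particle groupings):

```python
def collect_particles(image):
    # image: rows of 0/1 values; returns one set of (x, y) coordinates
    # per 8-connected blob of white (1) pixels.
    seen, particles = set(), []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if not v or (x, y) in seen:
                continue
            stack, blob = [(x, y)], set()
            while stack:
                px, py = stack.pop()
                if (px, py) in seen:
                    continue
                seen.add((px, py))
                blob.add((px, py))
                for dy in (-1, 0, 1):          # visit all 8 neighbors
                    for dx in (-1, 0, 1):
                        nx, ny = px + dx, py + dy
                        if (0 <= ny < len(image) and 0 <= nx < len(image[ny])
                                and image[ny][nx]):
                            stack.append((nx, ny))
            particles.append(blob)
    return particles
```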

\subsubsection{Find Particles in Frame $i+1$}
Once all of the particles have been found in the $i^{th}$ frame, the program will inspect the chronologically succeeding image for the possible future positions of the particles.

\paragraph{Search Area}
The search area is dependent on input given by the user. The user must specify a radius $R_1$ that denotes the maximum travelling distance for a particle in any frame.  A square with sides of length $2R_1$ is drawn around the particle's position, and all particles found within this search area in the next frame are considered as the particle's potential next position.

\paragraph{Calculating Velocities}
For all of the particles found in the search area, a simple velocity is calculated from the difference of the particles' center positions. For a particle $p$ in frame $i$ and a candidate particle $q$ in frame $i+1$:
\begin{eqnarray}
\overrightarrow{v}_x^{(p,q)} := & q_x^{i+1} - p_x^i \\
\overrightarrow{v}_y^{(p,q)} := & q_y^{i+1} - p_y^i
\end{eqnarray}
\subsubsection{Find Particles in Frame $i+2$}
This step of the method tries to confirm that the particles in each of frames $i$, $i+1$, and $i+2$ are collinear.  After all the possible velocities have been found for a particle $p$, the next frame is examined in order to {\it confirm} the true velocity of the particle.  The principle is that if the particle can be found with the same velocity vector in three consecutive frames, then it must be the same particle.  This step also requires input from the user.  Another radius $R_2$ must be specified such that $R_2 < R_1$ to reduce the search area and the chance of false positives.
\paragraph{Extrapolating Movement}
Instead of basing the search area around the initial position of the particle, the search area is moved a distance (the suspected velocity) away from the particle found in frame $i+1$.  
\paragraph{Search Area}
The area for searching has been reduced to a square with sides the size of $2R_2$.  This is due to the fact that we have exhausted the {\it search} part of this method.  This step will only serve to confirm our suspected particle movement.
\paragraph{Confirmation}
If a particle is found within the restricted search area, the original particle has its velocity set to the newly confirmed velocity.
\paragraph{No Confirmation}
If no particle is found within the restricted search area, the method continues to the next step of the process, attempting to confirm the suspected velocity with a particle found in the past.
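This confirmation step can be sketched as below (Python with illustrative names; the report's implementation is in C\#). The extrapolated position is the particle's position in frame $i$ advanced by twice the suspected velocity, i.e. one step beyond the candidate found in frame $i+1$:

```python
def confirm_forward(p, v, frame_i2, R2):
    """Confirm a suspected velocity v for particle p by looking in frame
    i+2 for any particle inside the 2*R2 x 2*R2 square centered on the
    extrapolated position p + 2*v."""
    ex, ey = p[0] + 2 * v[0], p[1] + 2 * v[1]
    return any(abs(qx - ex) <= R2 and abs(qy - ey) <= R2
               for qx, qy in frame_i2)
```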
\subsubsection{Find Particles in Frame $i-1$}
As mentioned in the previous section, sometimes a particle cannot be located according to a suspected velocity two frames into the future.  This can be explained in one of three ways:
\begin{itemize}
\item {\it The suspected velocity is incorrect.} This may be due to a false candidate particle admitted by a search region that is too large, or to particles that are too close to each other.
\item {\it The search areas are too restrictive to find the particles.} The user may run a sample sequence to tune the $R_{1,2}$ values for optimal search areas.
\item {\it The particle has simply changed directions and cannot be tracked.}
\end{itemize}
If either of the first two conditions holds, the program can recover by inspecting the previous frame, $i-1$. This allows the same speculation to be performed to confirm the velocity of the particle.
\paragraph{Assuming Previous Location}
This step of the method tries to confirm that the positions of the particle in frames $i-1$, $i$, and $i+1$ are collinear.  It is very similar to the last step, except for the placement of the search area.  In this case the velocity $\overrightarrow{v}$ is subtracted from the center of the particle $p$ in frame $i$, so that the search area $a$, with sides of length $2R_2$, is centered on the point:
\begin{eqnarray}
a_x^{i-1} := p_x^i - \overrightarrow{v}_x \\
a_y^{i-1} := p_y^i - \overrightarrow{v}_y
\end{eqnarray}
Again, the search area is reduced and should tightly surround the particle's previous position.  The program attempts to find a particle in this area; if one is found, the velocity of the particle is confirmed.
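This backward check can be sketched as follows (Python with illustrative names; the report's implementation is in C\#):

```python
def confirm_backward(p, v, frame_prev, R2):
    """Fall-back confirmation: look in frame i-1 for a particle inside
    the 2*R2 x 2*R2 square centered on p - v, i.e. where a particle
    moving with velocity v would have been one frame earlier."""
    ax, ay = p[0] - v[0], p[1] - v[1]
    return any(abs(qx - ax) <= R2 and abs(qy - ay) <= R2
               for qx, qy in frame_prev)
```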

\subsection{Implementation}
\subsubsection{Improving Performance}
Locating particles within the image frames is the largest consumer of time during execution of the algorithm.  Particles are contiguous blocks of white pixels within an image.  To speed up this process, each white pixel location is inserted into an instance of C\#'s {\tt Dictionary} class, associated with the {\tt Particle} object to which it belongs.  The {\tt Dictionary} class implements a hash map, so insertions and retrievals are both done in constant time, $O(1)$.  This eliminates the need to repeatedly iterate over a list of objects, an $O(n)$ operation, and dramatically improves performance. This {\tt Dictionary} is built once per frame, after which requests for the particle at a given location in that frame are answered very quickly.  The hash map is key to the performance of the algorithm, which spends most of its time searching repeatedly for potential particles in all evaluated frames.
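The idea can be sketched in Python, where a dict plays the role of C\#'s {\tt Dictionary}; the grouping of contiguous white pixels is done here with a simple flood fill, and all names are illustrative rather than taken from the program:

```python
def build_particle_map(white_pixels):
    """Map each white pixel (x, y) to a particle id, grouping
    4-connected pixels into the same particle. Lookups against the
    returned dict are O(1) on average."""
    pixel_to_particle = {}
    next_id = 0
    for start in white_pixels:
        if start in pixel_to_particle:
            continue                      # pixel already assigned
        pixel_to_particle[start] = next_id
        stack = [start]                   # flood-fill one particle
        while stack:
            x, y = stack.pop()
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in white_pixels and n not in pixel_to_particle:
                    pixel_to_particle[n] = next_id
                    stack.append(n)
        next_id += 1
    return pixel_to_particle
```

Asking ``which particle covers pixel $(x, y)$?'' then costs a single hash lookup instead of a scan over all particles.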






\section{Application}

\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{gui.eps}
\caption{The graphical user interface (Section~\ref{GUI}) developed for the project, which allowed us to easily define all of the parameters needed to fine-tune the execution of the two particle tracking methods. Within the program, a frame is displayed with its particles and corresponding overlaid velocities.}
\end{center}
\end{figure}

\subsection{Class Structure}
Because C\# is an object-oriented programming (OOP) language, it was easy to implement the program in such a way that switching between the two particle tracking methods was seamless: both implement the {\tt ParticleTracking} interface.  This homology extends throughout the implementations: {\tt SubFrame} and {\tt Particle} extend the abstract class {\tt Piece} for Particle Image Velocimetry and Particle Tracking Velocimetry, respectively. Using OOP helped speed up development and let us focus more time on the underlying algorithms.
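These relationships can be sketched roughly as follows (rendered in Python rather than the original C\#; the class names mirror the report's, but the method signature is our own illustration):

```python
from abc import ABC, abstractmethod

class Piece(ABC):
    """Common base for anything that occupies a position in a frame."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class SubFrame(Piece):   # the unit of work for PIV (cross correlation)
    pass

class Particle(Piece):   # the unit of work for PTV (4-frame tracking)
    pass

class ParticleTracking(ABC):
    """Interface shared by both velocimetry methods, so the GUI can
    swap one for the other seamlessly."""
    @abstractmethod
    def work(self, frames):
        """Return per-frame velocity data extracted from the sequence."""
```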

\subsection{Graphical User Interface}
\label{GUI}
The graphical user interface (or GUI) of the program contains several different elements.  The top left corner displays the current frame.  Frames are loaded from an images folder and the slider below the display selects the current frame. An options panel to the right of the window allows inputting parameters into both particle tracking codes.  The code to be run is selected via a dropdown box. These controls allow us to easily change the values and rerun the selected method with different parameter values.  Each code contains default parameter values, so detailed option configuration is not strictly necessary.  The user runs the selected code by clicking on the ``Work'' button, which freezes input until the code has finished its calculations.  After the work has been done by the velocimetry algorithm, the program iterates over all the calculated velocities in the frames and draws them to the window.  The user can press the ``Play'' button to play (and pause) an animation of the frames with an overlay of colored velocity vectors.




\section{Test of Algorithms}
As a preliminary test of whether the algorithms could accurately extract velocities from images, a simple test-case image sequence was constructed and analyzed using default settings. In this sequence, 241 particles were moved to the right at a velocity of 10 pixels per frame for seven frames. Both algorithms performed very well, but not perfectly. The visual representations of the velocity field generated by each algorithm are shown in figures \ref{ptv:ltr} and \ref{piv:ltr}, and tables \ref{ptv:tb} and \ref{piv:tb} summarize the velocity fields they calculated. Keep in mind that zero values are also possible correct answers for the PIV sequence, because that algorithm also considers the empty space in the images.

\begin{figure}
\begin{center}
\includegraphics[]{ptv-ltr.eps}
\caption{The graphical output of the PTV algorithm when tracking 241 particles, each moving to the right with a velocity of 10 pixels per frame. Notice how the algorithm occasionally gets confused and incorrectly links the particles (non-horizontal lines). This is due to random alignment of unrelated particles with the secondary search radius. The PTV algorithm chooses the first extended link it finds since it is fairly unlikely another particle will meet the same conditions. In higher density fields, however, this can cause occasional errors.}
\label{ptv:ltr}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[]{piv-ltr.eps}
\caption{The graphical output of the PIV algorithm when tracking 241 particles, each moving to the right with a velocity of 10 pixels per frame. Without periodicity in the spacing of particles, the PIV algorithm is very accurate for such a simple velocity field. Errors are most likely generated at the edges of the image or in regions where no particles exist, since velocities are still chosen, even if the cross correlation just returns near zero values on the order of round off error.}
\label{piv:ltr}
\end{center}
\end{figure}

\begin{table}
\begin{center}
	\begin{tabular}{|l||c|c|}
	\hline
	Velocity (px/frame) & Confidence & Percentage \\
	\hline
	10 & $1$ & 86.8\% \\
	other & $1$ & 9.8\% \\
	other & $<1$ & 3.4\% \\
	\hline
	\end{tabular}
\end{center}
\caption{Velocities calculated by the PTV algorithm for the 241-particle test sequence (true velocity: 10 pixels per frame to the right), grouped by reported value and confidence.}
\label{ptv:tb}
\end{table}

\begin{table}
\begin{center}
	\begin{tabular}{|l||c|c|}
	\hline
	Velocity (px/frame) & Confidence & Percentage \\
	\hline
	10 & $1$ & 1.6\% \\
	10 & $<1$ & 90.6\% \\
	0 & $<1$ & 4.7\% \\
	other & $1$ & 3.1\% \\
	\hline
	\end{tabular}
\end{center}
\caption{Velocities calculated by the PIV algorithm for the 241-particle test sequence (true velocity: 10 pixels per frame to the right), grouped by reported value and confidence. Zero velocities can be correct, since PIV also evaluates empty regions of the images.}
\label{piv:tb}
\end{table}

\begin{table}
\begin{tabular}{c|c|c}
Method & Time (s/frame) & Image Size\\
\hline
PIV & 8.72 & 200$\times$200\\
PTV & 0.2382 & 200$\times$200
\end{tabular}
\caption{Time comparison of PTV and PIV algorithms on a large particle region moving right at 10 pixels per frame.  Parameters to the algorithms were adjusted for the best results.  Time is in seconds per frame.}
\end{table}

\begin{table}
\begin{tabular}{c|c|c}
Method & Time (s/frame) & Image Size\\
\hline
PIV & 5,031.34 & 832$\times$836\\
PTV & 1.958 & 832$\times$836
\end{tabular}
\caption{Time comparison of PTV and PIV algorithms on an actual ultrasound video.  Parameters were adjusted for best output.  Time is in seconds per frame.}
\end{table}

\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{ptv-us.eps}
\caption{In this actual ultrasound image sequence, the PTV algorithm performs well in the low-density, slow-moving particle areas to the lower right.  It has trouble producing intelligible velocities for the high-density, fast-moving particle regions in the corners of the images.}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{piv-us.eps}
\caption{In this actual ultrasound image sequence, the PIV algorithm performs just as well in the low-density, slow-moving particle areas to the lower right.  It also has trouble producing intelligible velocities for the high-density, fast-moving particle regions in the corners of the images.}
\end{center}
\end{figure}



\begin{figure}
\begin{center}
\includegraphics[]{ptv-curve.eps}
\caption{A fundamental assumption of the PTV algorithm is that particles move in a linear fashion.  A small secondary search radius $R_2$ is thus unable to locate the correct positions of particles on a curved path. The secondary radius can be increased, but this can cause false linkages.}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[]{ptv-prob-cap.eps}
\includegraphics[scale=0.7]{ptv-prob-ill.eps}
\caption{This illustrates one of the major flaws of the PTV method.  Given a grid-like arrangement of particles, the algorithm is incapable of determining which path is the true path without intelligent inferences and history that are not present in this na\"{i}ve implementation. In the upper figure, the highlighted portions are examples of the problem; the lower figure illustrates what is happening over the three frames.}
\end{center}
\end{figure}


\section{Performance Expectations for PIV and PTV and Test Case Imagery}

\subsection{Speed of Execution}
It is generally expected that the PTV algorithm will execute much faster than the PIV algorithm. Although the PTV algorithm has to consider every pixel of every image individually, this is only done one time up front for every frame in the sequence. This process uses simple logic to map important (white) pixels into a hash table linked to the particles to which they belong. Since hash tables provide for very quick ways to look up data, and because the bulk of the mathematics involved in this algorithm is simple addition and subtraction, the overall cost of 4-Frame PTV is very low, even for high particle densities, though it does scale with the number of particles considered.

The PIV algorithm, on the other hand, must, at the very least, compare every pixel of every frame at every possible displacement. Oftentimes this comparison is done more than once for a given set of pixels, due to overlapping subframes or the use of smaller and circular subframes when a non-zero maximum velocity parameter is given. Despite the speed gained by using the FFT to perform the cross correlation, PIV cannot compete with the speed of PTV. Since PIV must consider the entire image, the cost of the algorithm scales with the size of the images being considered and with the number and size of the subframes.
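The core of the FFT-based cross correlation can be sketched as follows (Python with NumPy; subframe selection, confidence values, and the circular-subframe handling of the actual C\# implementation are omitted):

```python
import numpy as np

def displacement(prev, curr):
    """Estimate the integer (dx, dy) shift of subframe `curr` relative
    to `prev` from the peak of their circular cross correlation,
    computed via the FFT."""
    spectrum = np.fft.fft2(curr) * np.conj(np.fft.fft2(prev))
    corr = np.real(np.fft.ifft2(spectrum))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Correlation indices wrap around; map the upper half of each axis
    # to negative shifts.
    rows, cols = corr.shape
    if dy > rows // 2:
        dy -= rows
    if dx > cols // 2:
        dx -= cols
    return dx, dy
```

Computing the correlation in the frequency domain replaces the brute-force comparison at every displacement with two forward transforms, a pointwise product, and one inverse transform.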

\subsection{Particle Density}
As particle density increases in the same velocity field, the accuracy of the PIV algorithm should generally increase as long as periodic patterns are not present in the particle positions. Since the PIV algorithm works by finding similar looking groupings of particles in different images, its accuracy will increase as it is given a more complex and therefore harder to match image segment. It is easy to see how two unrelated particles could much more easily appear in close to exactly the same configuration within a subframe than 10 particles. Also, when the particle density is small, a large percentage of particles can drift out of the subsection of the image being considered, whereas with a large number of particles, it is more likely that an identifiable portion of the group will remain in the search area.

For the PTV algorithm, an increase in particle density can mean a decrease in accuracy because it presents more possibilities for false linkages between frames. The PTV algorithm works by searching a set radius for possibly linked particles in frame $i+1$, extrapolating the linked position into frame $i+2$, and judging the accuracy of the linkage by the presence of a particle at this predicted location; an increase in the density of the particles therefore presents more opportunities for unrelated particles to appear in these locations purely by chance, resulting in false linkages and bad velocities.

\subsection{Complicated Velocity Fields}
The accuracy of the PTV algorithm should be largely unaffected by complex velocity fields provided the particles spend enough time (at least three frames) travelling in any one direction, which is generally not a hard condition to meet given typical velocimetry frame rates. The PTV algorithm would likely not be able to link particles as they cross from one section of the image to another if there are sharp changes in the direction of particle flow, but this would not constitute a large loss of information, as the bulk of each section of the image would still have many velocities associated with it.

Complicated velocity fields do pose a problem for the PIV algorithm, however. Because groups of particles would not stay in the same configuration due to entering and exiting different velocity fields at different times, the algorithm may not be able to make good matches from frame to frame. To overcome this problem, the size of the subframes would need to be reduced to avoid having multiple flow directions and vorticity within the same search frame (the algorithm performs best when evaluating a clean, single-velocity subsection). Since reducing the size of the subframe reduces the amount of information and the uniqueness of the pattern, as well as the amount of time particles spend within the subframe, false matches become more probable.

\subsection{Curvature in Particle Paths (Rotation in the Flow)}
Curvature in particle paths or rotation in the flow field can be very detrimental to the accuracy of the PTV algorithm if the radius of curvature is small and/or the frame rate is low. Since the PTV algorithm looks for collinear particle images across a minimum of three frames, a small radius of curvature will make linkage with typical search radius values impossible unless the frame rate and resolution are high enough that the motion is approximately linear across the frames used for particle identification. Linkage can still be achieved at a lower frame rate, but this requires increasing the secondary search radius to allow for more deviation from linearity. Since the presence of a particle in the secondary search radius is all that is required for linkage, increasing the size of the secondary radius also greatly increases the chance of false linkage.

Since curved velocity fields will distort the shapes of particle groups, the PIV algorithm can also have difficulty with rotational flows. However, if particle spacing is fairly tight, the increase or decrease in the spacing may be small enough that partial overlap of many of the particles is still possible when considering an image of the same group in two adjacent frames, so that accurate velocities can still be reported with appreciable confidence.

Figures \ref{ptv:rotate} and \ref{piv:rotate} show the responses of the two algorithms to an image sequence containing a rotating group of particles. Despite our predictions, both algorithms extracted very believable velocity data from the rotating particle field.


\begin{figure}
\begin{center}
\includegraphics[]{ptv-rotate.eps}
\caption{The PTV algorithm is able to accurately determine the velocities of each particle in this frame as they rotate.  Particles on the outside rotate faster, orbiting the stationary particle at the center.}
\label{ptv:rotate}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[]{piv-rotate.eps}
\caption{The PIV algorithm also handles this particle rotation instance without any problems.}
\label{piv:rotate}
\end{center}
\end{figure}

\section{Conclusions}
Particle image and particle tracking velocimetry are two widely used and heavily researched methods of interrogating digital imagery. Both the 2-Frame Cross Correlation and 4-Frame Particle Tracking velocimetry algorithms performed beyond expectations and were able to extract complicated velocity fields with accurate results, even with the low level of logic present in this implementation.

Both velocimetry approaches performed well in rotational flows; however, the PIV algorithm seemed more robust at higher particle densities. Nevertheless, the PTV algorithm is expected to perform better in more complicated velocity fields, and it executed orders of magnitude faster than its counterpart. Since the calculation costs of both algorithms are relatively fixed, any future work in this area would likely focus on PTV, with attempts to improve the linkage logic toward an enhanced 4-Frame algorithm. In this way, an interrogation method could be developed that would be robust while still benefiting from the execution speed of 4-Frame PTV.


\listoffigures
\listoftables

\bibliography{bib}{}
\bibliographystyle{IEEEtran}

\end{document}
