\documentclass[10pt]{amsart}
\usepackage{amsmath,amssymb,amsthm,graphicx,enumerate,color,bbm}

% Scientific Notation
\providecommand{\e}[1]{\ensuremath{\times 10^{#1}}}

\oddsidemargin=17pt \evensidemargin=17pt
\headheight=9pt     \topmargin=26pt
\textheight=576pt   \textwidth=433.8pt

\begin{document}

\title{A Low-Budget Environment for Radiation Detection}
\author{Jeremy Davis}
\address{MSC 261\\California Institute of Technology\\Pasadena, CA 91125}
\email{scezumin@caltech.edu}
\author{Sean Choi}
\address{California Institute of Technology\\Pasadena, CA 91125}
\email{yo2seol@caltech.edu}

\date{August 27, 2010}

\begin{abstract}
Detecting and locating a dirty bomb or other radioactive source is a problem with many modern applications.  Conducting experiments with real radiation sources and detectors in order to devise strategies that minimize the damage in such a catastrophe is very expensive; a virtual environment that closely simulates these scenarios is far cheaper.  It should be possible to simulate almost any kind of environment with such a virtual modeling system and obtain experimental results for any hypothetical strategy.  We have been working to implement such an engine.  We expand upon its capabilities and use it to verify the findings of other researchers using the k-sigma method of detection.  We also present new findings from adding multiple obstacles that shield the radiation sources.  This is an ongoing project which will support various location and detection algorithms.
\end{abstract}

\maketitle


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Radioactive materials are more easily available to individuals today than ever before.  Some sources are benign, and even helpful; a swallow of barium sulfate can be useful in diagnosing gastrointestinal difficulties.  The by-products of a nuclear reactor, however, are not so beneficial, and must be closely monitored and contained.  Such materials could leak out and adversely affect humans or wildlife, or worse, be fabricated into weapons.  Because of this, radiation detection is a major problem, with important applications in preventing terrorist activity.  One goal is to efficiently locate and intercept a terrorist who may have a `dirty bomb' in a complex structure, such as an airport, a cargo ship, or an office building.  Another is to ensure that an armored transport vehicle reaches its destination without losing any of its radioactive payload.  Both of these problems involve one or many emitters, detectors, and obstacles in the environment, which produce additional problems regarding detection time, intrusiveness, and cost.  We seek, therefore, to create an engine which simulates such an environment, as well as the calculations undergone in pinpointing a radioactive source.

The presence and intensity of radioactive material is measured by detectors which register photons striking their surface.  Measurements of radioactivity are typically made with a Geiger counter, which utilizes a chamber of inert gas that is briefly rendered conductive by the high-energy photons from the radioactive source.  Over the past decade, however, Caltech, Smith Technology, and Motorola have been phasing these out with the introduction of a new device with a detector crystal made of Cadmium Zinc Telluride (CdZnTe) and GPS, gyroscope, and wireless capabilities \cite{ipsn}.  These crystals are a mere 20mm $\times$ 20mm $\times$ 5mm in size, a significant improvement over the relatively large Geiger counters.  It is this type of device which will hopefully be integrated into stationary, man-portable, and even robotic detectors, and it is therefore the type of device our simulations are based on.

In a vacuum, a sensor $r$ meters away from a radiation source detects photons at a rate proportional to $1/r^2$; a detector ten meters from the source will give a readout expected to be one one-hundredth of a readout at one meter.  This drop-off implies that even very sensitive detectors must be arranged in an array which gives coverage appropriate to the region being monitored.  More detectors allow for superior resolution when scanning; however, each additional detector increases monetary and computational cost.  Additional difficulties include intentional or unintentional shielding of a radioactive source, which may mask or lessen its radiation signature, and background radiation, which can interfere with array readings and may not be uniform across the environment.  A mobile source complicates things further, and the most complicated situations of all arise from multiple mobile detectors and emitters.

Caltech researchers have investigated the efficiency of their detectors \cite{dano,ipsn} using simulations driven by Valve Corporation's Source engine.  This video game engine, popular among game modifiers, offered several existing advantages, such as a level editor, useful raytracing functions, and a very professional appearance when observing simulations.  The research done dealt mainly with the `open field' model: a two-dimensional area devoid of obstacles, with various arrangements of stationary or mobile detectors seeking to determine the existence and/or position of a radioactive source randomly placed in the environment.  In this report, the `standard model' refers to a $3\times3$ or $4\times4$ arrangement of detectors in an open, unobstructed field.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Goals}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Our goal in writing the HAL-BERD \footnote{Hopefully, A Low-Budget Engine for Radiation Detection.} engine was to streamline many aspects of conducting such tests and extend the environment's flexibility.  We aim for:
\begin{itemize}
\item A small, cross-platform suite of classes and functions with few dependencies.  Previous simulations depended on the installation and maintenance of Half-Life 2, the Microsoft Windows operating system, and Microsoft Visual Studio.  This requires several gigabytes of space on one's computer and several hundred dollars.  By comparison, we will be using a brand new engine, coded in Python, which takes up only a few hundred kilobytes. It makes use of the free Numerical Python (numpy) library, the standard Python interface to the Tk GUI toolkit, Tkinter, and a JavaScript Object Notation (JSON) encoder/decoder, simplejson.
\item High readability and maintainability through use of well-documented Python.  Much of the work being done deals with fairly simple computations; there is no need to obscure them with a more complicated syntax.  Additionally, high readability should aid portability; for example, if  this code were to be parallelized and run on a GPU.
\item Rapid development speed with a fast learning curve for new contributors.  The modularity of our engine makes it easy to swap out or edit specific algorithms or formulae, only loading the features necessary for the current simulation.  The Half-Life 2 engine, while powerful, contained thousands of lines of code which in no way aided simulation.
\item Intuitive environment specification using JavaScript Object Notation (JSON).  Files written in the JSON format are easily readable by man and machine alike.  This allows users to quickly edit the properties of their scenario with nothing more than a text editor\footnote{though a graphical editor may be on the horizon.}, instead of a proprietary map editor.
\item The introduction of radiation-realistic obstacles to the environment.  Previous research has dealt largely with open environments, with simplistic shielding or no shielding implemented.
\item Exploration of non-isotropic detectors, that is, the introduction of directionally-dependent detectors to the system.  These allow for a very different style of detection and location algorithm, with interesting implications for detection efficiency and the sensing of multiple targets.
\item A graph system allowing for mobile detectors to traverse known environments.  In order to allow for reasonable movement within a structure and determine shortest paths from one location to another, we will implement a nodegraph system, which can be thought of as a mesh of rails along which mobile detectors and emitters may travel.
\end{itemize}
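To illustrate the JSON-based environment specification, the following is a minimal sketch of what a scenario file might look like; the key names (\texttt{emitters}, \texttt{detectors}, \texttt{obstacles}, and so on) are illustrative assumptions, not the engine's actual schema.

```python
import json

# A hypothetical scenario file; every key name here is an assumption
# made for illustration, not HAL-BERD's actual schema.
scenario_text = """
{
  "emitters":  [{"position": [5.0, 5.0, 1.0], "intensity_mCi": 100}],
  "detectors": [{"position": [0.0, 0.0, 1.0], "isotropic": true}],
  "obstacles": [{"min": [2, 2, 0], "max": [3, 3, 3], "material": "concrete"}]
}
"""

# JSON maps directly onto Python dictionaries and lists, so a plain
# text file like this loads in one call.
env = json.loads(scenario_text)
```

Because the format is plain text, a user can edit such a scenario with any text editor and load it with a single \texttt{json.loads} call, with no proprietary map editor required.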

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Methods}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Modeling photon emission}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Perfectly true-to-life radiation is beyond the abilities of computer simulation; the sheer number of photons, and the complexity of their interactions with the environment, are prohibitive.  However, several approximations can be made to simplify the process.  First, we model emitters as point sources.  At reasonable distances ($>1$ meter) this is reasonably true-to-life, and enables us to define a source as a single point, $(x, y, z)$.  Radioactive decay is a classical example of a Poisson process, and so photon emission is modeled as Poisson with mean equal to the rate of emission times the window of time for which the detector is active.  For simplicity, we model the emitters as isotropic; they emit photons in all directions with equal probability.  Therefore, the probability that a given photon collides with a detector is proportional to $1/r^2$, the fraction of the spherical isosurface occupied by the detector crystal.  A more realistic simulation would also take into account the dot product of the incoming photon's path and the normal of the crystal\footnote{A crystal is most likely to detect photons if its face is oriented directly toward the source.}; however, for the purposes of validating the previous standard model, detectors are isotropic.  $\Lambda$, the expected rate of photons detected by a detector at distance $r$ from a source emitting at rate $\mu$ (adapted from \cite{jj}), is
\begin{equation}\label{eq:Lambda}
\Lambda = \frac{A \mu \prod_{i=1}^n e^{- \alpha_i r_i}}{r^2}
\end{equation}
$A$ is a proportionality constant related to the area of the detector: in isotropic emission, the fraction of photons reaching the target is the target's area divided by the area of a spherical shell at radius $r$.  The product in the numerator is the attenuation factor due to shielding.  Within it, $n$ is the number of distinct obstacles (including air and vacuum) the photon must pass through, $i$ indexes the obstacles, $r_i$ is the total length of the ray within the $i^{th}$ obstacle\footnote{Therefore, $\sum_{i=1}^n r_i = r$.}, and $\alpha_i$ is the linear attenuation coefficient of the $i^{th}$ obstacle's material.\footnote{For pure vacuum, this product would be 1, indicating no attenuation.  For air, the attenuation coefficient is roughly 0.0159 m$^{-1}$ at 200 keV, so the radiation would be reduced by a factor of roughly $e^{-0.0159r}$ in air, with $r$ in meters.}
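Equation \eqref{eq:Lambda} translates almost directly into code.  The function below is a minimal sketch rather than the engine's actual implementation; the name \texttt{expected\_rate} and the representation of obstacles as $(\alpha_i, r_i)$ pairs are assumptions made for illustration.

```python
import math

def expected_rate(A, mu, r, segments):
    """Expected photon detection rate Lambda from equation (1).

    A        -- proportionality constant tied to the detector area
    mu       -- emission rate of the source
    r        -- total source-to-detector distance, in meters
    segments -- (alpha_i, r_i) pairs: linear attenuation coefficient and
                path length through the i-th obstacle; the r_i sum to r
    """
    attenuation = 1.0
    for alpha_i, r_i in segments:
        # Each obstacle contributes an exponential decay factor.
        attenuation *= math.exp(-alpha_i * r_i)
    return A * mu * attenuation / r ** 2
```

In a pure vacuum the product collapses to 1 and only the familiar $1/r^2$ falloff remains; for example, \texttt{expected\_rate(1.0, 1.0, 10.0, [(0.0, 10.0)])} returns $0.01$.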

However, non-isotropic emitters are not as easy to model.  With an isotropic emitter, the formula for $\Lambda$ depends solely on $r$; to simulate a non-isotropic source, one would have to introduce a directionally-dependent term.  Alternatively, one could encase the point source in simulated shielding material -- for example, a shielded briefcase intended for transporting radioactive samples.
    
We use spectral data collected by Bozhil.  These data comprise the mean numbers of photons detected per second in each of many frequency bins, measured using a one millicurie source one meter from an actual detector.  Bozhil collected data on Cesium-137, and unless otherwise indicated, all experiments are performed using the $^{137}$Cs emission spectrum, which can easily be scaled by the detector size and source intensity.  Recall that while photons are assumed to be emitted uniformly in all directions, the emission is still probabilistic.  Therefore, the counts received in a timestep $dt$ are distributed as $\mathrm{Poisson}(\Lambda\,dt)$, provided that $\mu$ has been set to the correct intensity, in mCi.
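Sampling the counts for a timestep is then a single Poisson draw.  The engine relies on numpy's generator for this; since the Python standard library has no Poisson sampler, the sketch below uses Knuth's method purely for illustration (it is adequate for the small $\Lambda\,dt$ values involved, but slow for large means).

```python
import math
import random

def poisson_counts(lam_dt, rng=random):
    """One draw from Poisson(Lambda * dt) via Knuth's method.
    Illustrative stdlib substitute for numpy's Poisson generator."""
    threshold = math.exp(-lam_dt)
    k, p = 0, 1.0
    while True:
        # Multiply uniform draws until the running product drops
        # below e^{-lambda}; the number of draws needed is Poisson.
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1
```

For a detector with expected rate $\Lambda = 1.5$ counts per second and a three-second timestep, each simulated step would report \texttt{poisson\_counts(4.5)} hits.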

\subsection{Modeling shielding parameters and obstacles}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
In real-world situations, there is no limit to the shapes or materials of the objects that may shield an emission source.  Given this multitude of possibilities, it is beyond the scope of this simulator to model shielding down to every minute detail.  However, with a few assumptions, it is possible to closely approximate the attenuation term using only the given parameters, with minimal computational overhead.  The first assumption is that each object is composed of a uniform material with a uniform overall density.  In practice, it is very hard to know the composition and density of an existing object, let alone of the particular portion of the object that shields the emission source, so this assumption greatly simplifies the modeling task.  Under this assumption, in order to compute the attenuation factor $e^{- \alpha_i r_i}$ in equation \eqref{eq:Lambda}, we first need to compute the linear attenuation coefficient, $\alpha$, from mass attenuation coefficient data, which are readily available.  Some sample mass attenuation coefficients are listed below.\footnote{Data provided by NIST (the National Institute of Standards and Technology).  Unused data are omitted from the table.}
\begin{table}[ht]
\caption{Sample mass attenuation coefficients $\mu/\rho$ $(\mathrm{cm^2/g})$}
\centering
\begin{tabular}{c|c c c}
\hline\hline
Material & 100 keV & 200 keV & 500 keV\\
\hline
Air & 1.541\e{-01} & 1.233\e{-01} & 8.712\e{-02}\\
Water & 1.707\e{-01} & 1.370 \e{-01} & 9.687\e{-02}\\
Concrete & 1.738 \e{-01} & 1.282\e{-01} & 8.915\e{-02}\\
\hline
\end{tabular}
\end{table}

As shown above, each material is listed with three values, since attenuation varies with the energy of the emission.  For the sake of simplicity, we use only the mass attenuation coefficient at 200 keV, owing to the wide energy spectrum of Bozhil's $^{137}$Cs source.  Given these values, the linear attenuation coefficient, $\alpha$, is simply the product of the mass attenuation coefficient and the density of the material, followed by minor unit conversions.\footnote{Densities are assumed to be 0.001292 $\mathrm{g/cm^3}$ for air and 2.3 $\mathrm{g/cm^3}$ for concrete.}
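The conversion is one multiplication plus a unit change.  The sketch below uses the 200 keV values quoted above and the assumed densities; the dictionary layout and function name are ours, for illustration, not the engine's.

```python
# Mass attenuation coefficients at 200 keV (cm^2/g, from the table above)
# and assumed densities (g/cm^3).
MASS_ATTEN_200KEV = {"air": 1.233e-01, "water": 1.370e-01, "concrete": 1.282e-01}
DENSITY = {"air": 0.001292, "water": 1.0, "concrete": 2.3}

def linear_attenuation(material, per_meter=True):
    """Linear attenuation coefficient alpha = (mu/rho) * rho.

    The product is in cm^-1; multiplying by 100 converts to m^-1
    to match distances measured in meters.
    """
    alpha_cm = MASS_ATTEN_200KEV[material] * DENSITY[material]
    return alpha_cm * 100.0 if per_meter else alpha_cm
```

For air this yields roughly $0.0159\ \mathrm{m^{-1}}$, matching the coefficient quoted in the footnote to equation \eqref{eq:Lambda}.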

To simplify the calculation of the distances travelled through an attenuating medium, each shielding object is partitioned into, and approximated by, many smaller rectangular prisms, similar to the idea of pixels.  Finding the points of intersection with uniform boxes is far simpler than doing so with arbitrarily shaped objects.  The distance is then calculated using the line-box intersection algorithm \cite{linebox}, which gives an effective way to find all of the points of intersection between a line and a box.  Given the two intersection points, the distance travelled through the obstacle is the magnitude of the vector between them.
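A minimal version of that distance computation follows, using the standard slab method for line-box intersection; the function name and argument layout are illustrative assumptions.

```python
import math

def path_length_in_box(p0, p1, box_min, box_max):
    """Length of segment p0 -> p1 lying inside an axis-aligned box.

    Slab method: clip the segment's parameter t in [0, 1] against the
    min/max planes of each axis; what survives is the portion inside.
    """
    t_near, t_far = 0.0, 1.0
    for a in range(3):
        d = p1[a] - p0[a]
        if abs(d) < 1e-12:
            # Segment parallel to this axis: inside the slab or a miss.
            if p0[a] < box_min[a] or p0[a] > box_max[a]:
                return 0.0
            continue
        t0 = (box_min[a] - p0[a]) / d
        t1 = (box_max[a] - p0[a]) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return 0.0  # slabs do not overlap: no intersection
    return (t_far - t_near) * math.dist(p0, p1)
```

Summing this length over all prisms of a given material produces the $r_i$ terms of equation \eqref{eq:Lambda}.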

Finally, given an emission source and a detector, all of the obstacles that lie between them are precomputed before the simulation, so that the counts are always attenuated by the precomputed factor.  This method breaks down once either the source or the detector is given motion, since the obstacles, and the distances travelled through them, change over time.  Such considerations are beyond the scope of this project and are left for future work.

\subsection{The k-sigma method of detection and location}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
The primary method of detection and location implemented in our engine \cite{jj} is so named because it measures how many $(k)$ standard deviations $(\sigma)$ a detector's readout lies above the background radiation.  Prior to the simulation starting, the detectors are organized into groups of one, two, three, and four detectors.  Any group of four is possible as long as the distance between any two detectors in a group does not exceed a predetermined value.\footnote{For example, when detectors are arranged in a uniform grid with spacing $s$, the distance limit for a group of four would be set to $\lceil s\sqrt{2}\rceil$, which is slightly more than the diagonal of a square in the grid.}  Each timestep, the simulation computes the radiation counts from the emitter at the detectors, and the counts are summed within each group.  If one or more groups report a great enough deviation from the background intensity, it is reported that there is a point source of radiation somewhere in the environment.  The threshold for this deviation is one area of interest, and so we run simulations on sources of various intensities to determine appropriate threshold values of $k$.
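In code, the test reduces to comparing a group's summed counts against the expected background, using the Poisson property that the variance equals the mean.  This is a sketch under the assumption of a known, uniform background rate (eight counts per second per crystal is the figure used elsewhere in this report), not the engine's exact implementation.

```python
import math

def k_sigma(group_counts, background_rate, dt):
    """Standard deviations by which a group's summed counts exceed the
    expected background.  For Poisson background, variance == mean."""
    expected = len(group_counts) * background_rate * dt
    return (sum(group_counts) - expected) / math.sqrt(expected)

def alarm(group_counts, background_rate, dt, k_threshold):
    """Report a source if the group's deviation exceeds k_threshold."""
    return k_sigma(group_counts, background_rate, dt) > k_threshold
```

For instance, a group of four detectors each reading 20 counts against an expected background of 8 counts per second over one second sits about $8.5\sigma$ above background, well past any reasonable threshold.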

It is then assumed that the group with the highest aggregate count is closest to the source, or at least able to provide the data which best predict the source's location.  Then, using the counts from this group, we evaluate the following likelihood estimator, $L$ \cite{jj}:
\begin{equation}\label{L}
L(x,y,z) = \frac{1}{1 + \displaystyle\sum_{i=1}^{4} (\Upsilon_i(x,y,z) - \overline{\Upsilon})^2}
\end{equation}
where, for $C_i$ the cumulative photon hits at the $i^{th}$ detector, $\Gamma$ the expected number of counts due to background radiation, $T$ the elapsed time, $x$, $y$, and $z$ the coordinates of the location being tested, $x_i$, $y_i$, and $z_i$ the coordinates of the $i^{th}$ detector, and $\mathit{decay}$ the attenuation term discussed in equation \eqref{eq:Lambda}:
\begin{equation}\label{Upsilon}
\Upsilon_i = (C_i / T - \Gamma)\left((x_i - x)^2 + (y_i - y)^2 + (z_i - z)^2\right)(\mathit{decay})
\end{equation}
This equation uses $n=4$; that is, it locates based on a group of size four.  It is possible to use different group sizes for location, of course, though at the cost of accuracy, since smaller groups leave the problem under-constrained.  A single detector, for example, can only estimate a distance, and in a vacuum, each point on any spherical shell centered on the detector has equal likelihood.  With two detectors, the spherical shell collapses to concentric circles about the line connecting the detectors, and with three or more coplanar but non-collinear detectors, any point and its mirror image about the detectors' plane have the same likelihood.
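Equations \eqref{L} and \eqref{Upsilon} can be sketched as follows.  The argument layout is an illustrative assumption, and the attenuation factor defaults to 1 (vacuum) for brevity.

```python
def upsilon(c_i, T, gamma, det_pos, test_pos, decay=1.0):
    """Equation (3): background-corrected rate, scaled by the squared
    distance to the test point and the attenuation factor of eq. (1)."""
    x, y, z = test_pos
    xi, yi, zi = det_pos
    r2 = (xi - x) ** 2 + (yi - y) ** 2 + (zi - z) ** 2
    return (c_i / T - gamma) * r2 * decay

def likelihood(test_pos, detectors, T, gamma):
    """Equation (2); detectors is a list of (counts, position) pairs.

    If the test point is the true source, every Upsilon_i recovers the
    same underlying emission strength, the deviations vanish, and L -> 1.
    """
    ups = [upsilon(c, T, gamma, pos, test_pos) for c, pos in detectors]
    mean = sum(ups) / len(ups)
    return 1.0 / (1.0 + sum((u - mean) ** 2 for u in ups))
```

With four detectors at the corners of a square and counts consistent with the $1/r^2$ law, the likelihood is exactly 1 at the square's center and falls off at any offset test point.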

% there is a sizing issue here -- give visuals print capabilities or take more precise screenshots.
\begin{figure}[h!]
 \centering
  \includegraphics[width=0.4\textwidth]{ksig100.pdf}
  \includegraphics[width=0.401\textwidth]{ksig1.pdf}
   \caption{An example of the k-sigma heatmap visualizer for two sources of different intensities.}\label{ksigmaexamples}
\end{figure}
 
Given a list of locations, it is possible to determine the value of $L$ for each of them.  It is concluded that the source is most likely to be closest to the point with the highest likelihood.  In our simulation, we grid the region in three dimensions and test the center of each voxel.  It is the value of this likelihood estimator, normalized to its highest return value, which determines the color intensity in our visual representation.  The visualization module accepts user-defined slices and positions of the heatmap for effective representation of, for example, the floors of a building, an overview of a large area, or larger views of regions which should be closely monitored.  Figure \ref{ksigmaexamples} shows two simulations after about ten seconds.  The left heatmap corresponds to a source with intensity 100 mCi, while the right has an intensity of only one mCi.  The high intensity causes the likelihood estimates to differ to a greater degree, resulting in a much smaller red region than for the low-intensity source.

\subsection{Simulating directionally-dependent detectors}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
An array of directionally-dependent detectors with the ability to sweep their area can, in theory, provide superior detection and location compared to a similar array of isotropic detectors.  Whereas an isotropic detector can estimate distance but not direction, our algorithm for directed detectors makes no assumptions about distance, seeking to pinpoint the location based on the crossing of the detectors' cones of vision.

In reality, each detector would be rendered insensitive to radiation in all directions but one, and then rotated so that its window of detection points at all relevant areas over the course of its sweep.  In our simulation, it is assumed that the detectors sweep over the entire area at a constant but arbitrarily high speed.  Therefore, when sweeping the horizontal plane a detector occupies, each voxel is within the cone of sensitivity for a fraction of the time corresponding directly to the detector's view arc.  We examined several methods of updating the heatmap.

The first, and simplest, method represents only the cone of vision of a detector which is perfectly shielded on all sides, with a hole bored in one side.  Figure \ref{flatsweepexample} shows four detectors, each with a 15$^{\circ}$ arc of vision, `illuminating' a source.  These are very precise, because background radiation, which is assumed to come from all directions equally, is reduced by over 99.5\%.  For the assumed average background we use, which is eight counts per second per crystal, this means that virtually no background is detected, and we can assume that when the detector receives photons, it is pointed towards a source.  By averaging the positions of the voxels with the maximum values, we can approximate the location of the source.  Cursory tests have revealed that as long as the region of maximum values is completely contained within the heatmap, that is, as long as the source is not located too close to the faces of the heatmap, this approximation is accurate to within the size of one voxel.
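The position estimate at the end of that procedure is simply the centroid of the brightest voxels.  A sketch follows, assuming (for illustration) that the heatmap is stored as a mapping from voxel-center coordinates to likelihood values.

```python
def estimate_source(heatmap):
    """Average position of the maximum-valued voxels.

    heatmap maps (x, y, z) voxel-center tuples to likelihood values;
    the centroid of the tied maxima approximates the source location.
    """
    peak = max(heatmap.values())
    maxima = [pos for pos, value in heatmap.items() if value == peak]
    n = len(maxima)
    return tuple(sum(p[axis] for p in maxima) / n for axis in range(3))
```

If two voxels centered at $(1,0,0)$ and $(3,0,0)$ tie for the peak value, the estimate is their midpoint, $(2,0,0)$.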

\begin{figure}[h!]
 \centering
  \includegraphics[width=0.4\textwidth]{flat1.pdf}
   \caption{The simplest method, which resembles spotlights converging on a target.}\label{flatsweepexample}
\end{figure}

The second method also assumes a 15$^{\circ}$ arc of vision and perfect shielding elsewhere.  However, instead of painting a flat-intensity cone on the heatmap, we allow the cone of vision to rotate across the source.  When we consider the sum of the cones of vision which can contain the source, we realize that the intensity actually peaks at the angle which points directly at the source.  Allowing the sweeping detector to sum sequential `snapshots' gives this same effect, shown in Figure \ref{sweepexample}.  As seen in the figure, this very clearly illuminates the location of the source.  This method produces a very accurate estimate of location, again, within one voxel of the source.  When only a single source is present on the field, the source intensity can be estimated very accurately, using this location and the hit counts of each detector.  When the source is on the order of 100 mCi or higher, the accuracy is generally within one percent of the source intensity.  The tests used to produce this result did not include obstacles due to computational intensity, but obstacles do not prevent intensity estimation.

The last set of experiments is different, in that it actually allows the detectors to rotate.  Over the course of 36 seconds, the detectors were rotated 360 degrees.  Before the simulation began, the heatmap was partitioned into 360 wedges.\footnote{The wedges' points are located at the detector.  In a twelve-wedge partitioning, the wedges would be analogous to the areas swept by the hour hand of a clock each hour.}  The plates were not assumed to be shielded, but instead, considered most sensitive when the source was normal to the surface of the crystal.  However, these simulations were computationally costly and produced only washed-out, inaccurate heatmaps.

See Results for a proposed solution.

\begin{figure}[h!]
 \centering
  \includegraphics[width=0.4\textwidth]{sweep.pdf}
   \caption{An example of the visualizer for an array of sweeping detectors shielded on all sides except the direction in which they are facing.}\label{sweepexample}
\end{figure}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Results}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

With the exception of a fleshed-out nodegraph system, we have accomplished each of the goals for HAL-BERD, to the degree that it is possible to collect meaningful data on simulations.  We deal with two different types of simulation: isotropic detectors with obstacles, and collimated (directionally-dependent) detectors.

\subsection{K-Sigma detection with isotropic detectors with obstacles}

First, the validity of our simulation engine without obstacles has been verified against the results in \cite{jj}. However, due to the lack of a standard model or reference data for simulations with obstacles, we were only able to obtain data for an arbitrary structure. The structure we modeled resembles the third floor of the Millikan Library at the California Institute of Technology (Figure \ref{obstacle}). For simplicity, each wall of the structure is assumed to be made of concrete.
\begin{figure}[h!]
 \centering
  \includegraphics[width=0.4\textwidth]{obs1.pdf}
   \caption{K-sigma simulation with obstacles}\label{obstacle}
\end{figure}

We first analyzed the efficiency of the k-sigma method for the given structure by obtaining ROC curves, which model the relationship between the true positive and false positive rates, and the distance estimation error, which is the error in the location estimate produced by the k-sigma simulation. We obtained two sets of data for comparison by varying the thickness of the walls in the structure: the first set uses 20cm concrete walls, whereas the second uses 40cm concrete walls. All walls are assumed to have equal thickness for simplicity. The data are shown in Figure \ref{obsdata}.
\begin{figure}[h!]
 \centering
  \includegraphics[width=0.4\textwidth]{roc20cm.pdf}
    \includegraphics[width=0.4\textwidth]{roc40cm.pdf}
  \includegraphics[width=0.4\textwidth]{plot.pdf}
    \includegraphics[width=0.4\textwidth]{plot40cm.pdf}
       \caption{ROC curves and distance estimation error}
    \label{obsdata}
\end{figure}

Although the simulator is capable of producing meaningful data, as in Figure \ref{obsdata}, we were not yet able to analyze its significance in depth. We were, however, able to verify the expected trend: the k-sigma statistic and its distance estimate are more reliable when the radiation travels through fewer obstacles. Further experiments and analysis, with varying structures and experimental models, are to come.

\subsection{Sweep detection with directed detectors}

The sweeping detector simulations have not been quantitatively tested, as we believe the existing implementations are not sufficiently realistic.  The more `spotlight-like' simulations, however, do produce extremely high-accuracy estimates of position and intensity.  It is our belief that with a superior weighting scheme, we can collapse the sweep of each detector into a binary function, as follows.  Each sweep, the detector effectively receives a vector of data: the number of photons detected while looking in each direction.  If we find the local maxima of this vector, we can impose threshold values and collapse a vector with many different values into a sparse vector which is nonzero only at angles close to the local maxima.  Then we can use the method described in the first directionally-dependent detector attempt to pinpoint the radioactive sources.  Figure \ref{sweepcollapse} gives a crude example of a hypothetical detector with two nearby sources, and the data before and after simplification.
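A sketch of the proposed collapse follows, treating the sweep as a circular per-angle count vector.  The choice of cutoff as a fixed fraction of the largest local maximum is an assumption made for illustration; any function of the local-maxima values could serve as the threshold.

```python
def collapse_sweep(counts, fraction=0.5):
    """Collapse a per-angle count vector into a sparse vector.

    Angles whose counts fall below a cutoff -- here, an assumed fixed
    fraction of the largest local maximum -- are zeroed out, leaving
    nonzero entries only near the peaks that point toward sources.
    """
    n = len(counts)
    # Local maxima on a circular vector (the sweep wraps around).
    maxima = [counts[i] for i in range(n)
              if counts[i] >= counts[(i - 1) % n]
              and counts[i] >= counts[(i + 1) % n]]
    cutoff = fraction * max(maxima)
    return [c if c >= cutoff else 0 for c in counts]
```

For example, a sweep reading \texttt{[0, 1, 5, 1, 0, 0, 4, 1, 0]} collapses to \texttt{[0, 0, 5, 0, 0, 0, 4, 0, 0]}, isolating the two bearings at which sources lie.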

\begin{figure}[h!]
 \centering
  \includegraphics[width=0.7\textwidth]{sweepcollapse.pdf}
   \caption{A crude example of how one might simplify sweeping detector data.  The black line represents actual photon counts, while the red has been subjected to a threshold which is a function of the values at local maxima.}\label{sweepcollapse}
\end{figure}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Discussion}%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\subsection{Future Goals}

We believe HAL-BERD has been successful in laying a foundation for various ways of simulating radiation-realistic environments.  It does allow for quick development and rapid changes to be implemented.  However, it is not perfect, and there are several areas in which we will continue to improve it.  Presented in no particular order are some goals for the future:
\begin{itemize}
\item 3D visualization:  Though the heatmap is specifiable and sliceable to the user's content, there are group members who believe that the ability to render the structure in 3D would be of use.
\item Parallelization:  As it is written in Python, HAL-BERD is intended for rapid development, but not necessarily rapid execution.  With no obstacles in the field, it can perform roughly ten $k$-threshold tests per second on a three-by-three array of detectors, with timesteps of three seconds, over a thirty-second simulation.  With a 100x100 grid of points to test and a roughly three-second timestep, it can compute the likelihoods and render them to an image at approximately real time.

One way to get a large speed boost would be to shunt the likelihood estimation code off to a graphics processing unit, or GPU.  The GPU is designed to perform a multitude of small computations in parallel, such as the calculations that determine what color every pixel on a screen should be, given many lighting equations or overlapping entities in space.  Conveniently, the number of pixels on a screen is comparable to the number of voxels in a reasonable simulation.\footnote{A computer screen with a modest resolution might be 1024x768, which is 786432 pixels.  Assuming resolution accurate to half a meter, this is on the order of the number of voxels in a three-story, 100m x 100m building.}  Considering that modern graphics cards easily achieve refresh rates of thirty frames per second, a modern GPU would be well-suited to the task of updating an actual heatmap readout in smooth realtime.  For this, we would consider NVIDIA's Compute Unified Device Architecture (CUDA) as a tool for parallelization.

\item Multiple emitters in a single region.  The k-sigma detection algorithm is not really designed for locating more than one source.  It is conceivable that, given an estimate of the number of sources, the simulation could attempt to partition the region into sections with a single emitter in each, thereby reducing the problem to one more similar to the one-emitter case.  This division could be performed along the regions where the counts would be high given a single emitter, but are instead low, suggesting that no source is nearby.

A detector which is able to sweep a region is significantly more able to discern multiple targets in its environment, provided that they are not collinear with the detector itself.  Therefore, an appropriate array of rotating detectors shows great theoretical promise for detecting one or more sources.

\item Radiation-emitting obstacles (varying background radiation).  In reality, each obstacle is also capable of emitting background radiation, which can be significant in aggregate.  In the future, we can add functionality such that every obstacle emits its own background radiation in the simulation.

\item Mobile emitters.  The obvious issue here is that data from a given timestep become less and less useful as time goes on.  However, it should be quite possible to weight old data with a reasonable decay rate and still yield a decent estimate of the source's position, as long as there are enough detectors in the nearby region.  This would require us to finish the nodegraph module.

\item Mobile detectors.  Theoretically, detectors which can cooperate to form an optimal detection array would improve detection times.  Alternatively, this could be used to plan search paths for security teams armed with man-portable detection equipment linked to a central computer.  Research on this optimal arrangement already exists \cite{dano}.

\end{itemize}

\section*{Acknowledgements}
Special thanks to \O istein and Rita A. Skjellum for helping to fund this research project.

\begin{thebibliography}{5}

\bibitem{jj}
Chandy, K. M., J. J. Bunn, et al. (2010). Models and Algorithms for Radiation Detection. Pasadena, California Institute of Technology.

\bibitem{cortez}
Cortez, R. A., H. G. Tanner, et al. "Information Surfing for Radiation Map Building." 28.

\bibitem{ipsn}
Liu, A., M. Wu, et al. (2008). Design Tradeoffs for Radiation Detection Sensor Networks. Pasadena, California Institute of Technology.

\bibitem{fusion}
Liu, A. H., J. J. Bunn, et al. (2010). An Analysis of Data Fusion For Radiation Detection and Localization. Pasadena, California Institute of Technology.

\bibitem{ristic07}
Ristic, B., A. Gunatilaka, et al. (2007). "An Information Gain Driven Search for a Radioactive Point Source." Information Fusion.

\bibitem{ristic08}
Ristic, B., M. Morelande, et al. (2008). "A Controlled Search for Radioactive Point Sources." Information Fusion.

\bibitem{dano}
Wu, M. and D. Obenshain. Using Submodular Function Optimization for Mobile Radiation Detector Path Planning. Pasadena, California Institute of Technology.

\bibitem{linebox}
Kreuzer, J. and Hess, J. 3D Programming Weekly: Graphics, Games. www.3dkingdoms.com.
\end{thebibliography}

\end{document}
