\documentclass[pra, twocolumn,amsmath,amssymb]{revtex4}

\usepackage{graphicx}
\usepackage{color}
\usepackage{bm}
\usepackage{amssymb}
\usepackage{epsfig}

\begin{document}

\title{N-Body Simulation with Hierarchical Force Calculation and Adaptive Timestepping}
\author{Gregory R. Meece, Jeremey Stevens}
\affiliation{Department of Physics and Astronomy, Michigan State University, East Lansing, Michigan, USA}

\begin{abstract}
\noindent The core of modern astrophysical dynamics simulations is the solution of the nonlinear equations of motion which arise from the gravitational interaction between masses. The goal of N-body simulations is to numerically integrate the resulting equations through time in a way which is numerically accurate, preserves physical invariants, and is computationally efficient. In this project, we implement a tree gravity solver modeled after the method described by Barnes and Hut\cite{Hut} in 1986. This method reduces the number of calculations required to $O(n \log n)$ while maintaining accuracy. We integrate the equations using a leapfrog (Verlet) method. To improve computational speed, we use an adaptive timestep, allowing a longer base timestep to be used while keeping error within a specified tolerance.

\noindent In order to test the correctness of our code, we compare the results of our simulations to known setups with analytical solutions. Specifically, we simulate an Earth-Sun system, a planet in an elliptical orbit, collisions between two clusters, and randomly generated clusters of many particles. We also use randomly generated clusters to test the convergence properties of our simulation.\end{abstract}

\maketitle

\section{Introduction}
In recent decades, computational modeling has become increasingly valuable as a tool for investigating diverse astrophysical phenomena. Such problems are often modeled as a system of discrete components interacting under Newton's law of gravitation. The goal of an N-body simulation is to accurately simulate the motion of a system of $n$ collisionless particles acting under a gravitational force. This paper describes our implementation of the Barnes-Hut algorithm for solving the N-body problem.

N-body simulations have a wide variety of applications in astrophysics research. Most obviously, they can be used to model the behavior of individual masses, such as planets or stars. When the number of masses exceeds two, the equations of motion do not generally have analytic or periodic solutions. In this case, a numerical solution of the motion is required. N-body simulations are also employed to study the behavior of continuous matter distributions, such as fluids, where the interacting particles represent a Monte Carlo sampling of the continuous field. This technique is widely used to simulate the dynamics of dark matter and is employed in several widely used astrophysics codes, including the adaptive mesh code Enzo\cite{Enzo} and the smoothed particle code GADGET\cite{Springel}.

Modeling gravitationally interacting systems presents several challenges. First, as the force on each particle is determined through interactions with all other particles, the computational time required for a full calculation of the forces on the system scales as $n^2$. Second, the timescales present in an N-body simulation can vary over a wide dynamic range, as the timescale for gravity to alter the velocity of close particles may be much smaller than the orbital period of isolated masses. Finally, close interactions between particles which are meant to represent samples of a continuous fluid may introduce unphysical effects which must be accounted for.

In recent decades, different methods have been proposed for solving the N-body problem. These methods share the goal of improving computational efficiency while maintaining a desired level of accuracy. One method, implemented in this project, is that of Barnes and Hut. The main idea of the algorithm is to hierarchically subdivide space into octants. Rather than calculating the force from each individual interaction, the Barnes-Hut method (henceforth referred to as the Octree method) calculates the force on a particle by hierarchically grouping distant particles together. With the Octree method, the number of force calculations scales as $O(n \log n)$ rather than quadratically. A second method of solving the N-body problem is to place particles on a discretized grid and compute the global gravitational potential using a discrete Fourier transform. This method, known as particle-mesh, is highly efficient but has trouble with close interactions. Algorithms have been developed which combine particle-mesh and tree codes, generally known as particle-particle-particle-mesh algorithms, or P$^3$M.

This paper describes our implementation of the Barnes-Hut Octree method for solving the N-body problem. Section 2 describes the design of the code. In particular, we describe the use of an adaptive time stepping scheme and the implementation of the octree itself. In Section 3, we present some sample results of our simulation and discuss the validation and convergence properties of the code. We end with some concluding remarks and a brief mention of possible improvements to the code.

\section{Method}
Our simulation solves the N-body problem by implementing the Barnes-Hut algorithm. To boost computational efficiency while preserving accuracy, we introduce an adaptive timestep which allows particles undergoing close interactions to be updated more frequently. Masses are represented as collisionless particles which interact only through gravity and move in three-dimensional space. The only properties particles possess are mass, position, and velocity.

\subsection{Code Design}
The overall design of our code is based on the Model-View-Controller paradigm and is intended to be modular. The model part of the code holds the particles and updates their positions according to Newton's laws. The controller module handles file input and output and initializes the simulation. The view module visualizes the output data. We chose to code the model and controller in C++, since the language is well suited to number crunching. Additionally, an object-oriented language is very useful for implementing complex structures such as an octree. The view was scripted in Python and utilizes the Matplotlib library and its Pyplot interface for plotting. We chose Python for ease of use and familiarity.

Simulations are initialized from files. A system of particles is stored in a comma delimited file which stores initial masses, positions, and velocities. Various parameters can be read in from a text file with a simple ``parameter = value'' format. Simulation data is output in the same format as the input, meaning that output data can be used to initialize a new simulation. These features were inspired by the similar initialization method of Enzo.
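As a minimal illustration of the ``parameter = value'' format, the parameter reader can be sketched in a few lines (a Python sketch with hypothetical key names; the actual reader is part of the C++ controller):

```python
def read_parameters(lines):
    # Parse simple "parameter = value" lines into a dictionary of strings.
    # Blank or malformed lines (no '=') are skipped.  Key names used in
    # the test below are hypothetical examples.
    params = {}
    for line in lines:
        if '=' not in line:
            continue
        key, _, value = line.partition('=')
        params[key.strip()] = value.strip()
    return params
```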

Once initialized, the system is iteratively updated by a Simulation module, while particles are stored in a Holder module. The holder module is also responsible for calculating the force on individual particles. By design, the simulation has no knowledge of how the holder stores the particles, and treats the holder like an array. The goal of abstracting the storage mechanism is to make it easy to change how particles are stored without affecting the rest of the code.

\subsection{Particle Storage}
The bodies are stored in an octree data structure based on the hierarchical scheme of Barnes and Hut. An octree is chosen over a flat list to improve the force calculation time from $O(N^2)$ to $O(N \log N)$ while maintaining accurate forces. The tree partitions a three-dimensional region of space, represented by the root node of the tree, by recursively subdividing each internal node into eight octants. Each node holds a pointer to at most one body, an array of eight pointers to its sub-octants, the total mass of all bodies in itself and the nodes below it, the position of the center of mass of the octant, the side length of the octant, and the position of the center of the octant.

The octree is initialized from a file containing data for $N$ bodies. Bodies are read in and added recursively to the tree. The addBody function maintains the invariant that each node points to at most one body directly; in addition, each node stores a count of the bodies contained in the node and all of the nodes below it. The root volume and its subdivisions are resized if a body being added lies outside the current bounds. If a node already contains one or more bodies, the new body (along with any body stored directly in the node) is added recursively to the children of the node, and the node's body pointer is set to null. Otherwise, the body is simply stored in the node.
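The insertion logic described above can be sketched as follows (a simplified Python version with hypothetical names; the production code is C++, and for brevity this sketch omits the resizing of the root volume):

```python
class Body:
    def __init__(self, mass, pos):
        self.mass = mass
        self.pos = pos  # (x, y, z)

class Node:
    def __init__(self, center, length):
        self.center = center        # center of this octant
        self.length = length        # side length of this octant
        self.body = None            # at most one body stored directly
        self.children = [None] * 8  # eight sub-octants
        self.count = 0              # bodies in this node and below
        self.mass = 0.0             # total mass of bodies below
        self.com = (0.0, 0.0, 0.0)  # center of mass

    def octant(self, pos):
        # Octant index 0-7 from the sign of each coordinate
        # relative to the node center.
        return sum(1 << i for i in range(3) if pos[i] >= self.center[i])

    def add_body(self, body):
        if self.count >= 1:
            # Node occupied: push the stored body (if any) and the
            # new body down one level.
            if self.body is not None:
                self._add_to_child(self.body)
                self.body = None
            self._add_to_child(body)
        else:
            self.body = body
        # Update aggregate mass, center of mass, and count on the way down.
        m = self.mass + body.mass
        self.com = tuple((self.com[i] * self.mass + body.pos[i] * body.mass) / m
                         for i in range(3))
        self.mass = m
        self.count += 1

    def _add_to_child(self, body):
        i = self.octant(body.pos)
        if self.children[i] is None:
            off = self.length / 4.0
            c = tuple(self.center[k] + (off if (i >> k) & 1 else -off)
                      for k in range(3))
            self.children[i] = Node(c, self.length / 2.0)
        self.children[i].add_body(body)
```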

The octree determines the force on bodies using the nodes rather than individual bodies. Rather than iterating over all of the bodies, the force from a node can be computed using its total mass and center of mass whenever the side length of the box, $l$, divided by the distance from the body to the center of the box, $D$, satisfies the criterion

\begin{equation}
   \frac{l}{D} < \alpha
\end{equation}

\noindent where $\alpha$ is a tolerance parameter. This reduces the amount of computation needed by allowing the force from a cluster of bodies far away to act as one massive body at the center of mass of the cluster.
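The resulting tree walk can be sketched as follows (a Python sketch of the C++ logic, with hypothetical names; each node is assumed to carry the aggregate quantities described above, and a small softening term is folded into the point-mass force):

```python
import math
from dataclasses import dataclass, field

G = 1.0  # gravitational constant in code units (an assumption)

@dataclass
class Node:
    mass: float    # total mass of bodies in and below this node
    com: tuple     # center of mass of the octant
    length: float  # side length l of the octant
    count: int     # number of bodies in and below this node
    children: list = field(default_factory=list)

def accel(node, pos, alpha=0.5, eps=0.005):
    # Recursive tree walk: if the node is far enough away (l/D < alpha),
    # treat all of its bodies as one point mass at the center of mass;
    # otherwise descend into its children.
    if node is None or node.count == 0:
        return (0.0, 0.0, 0.0)
    d = [node.com[i] - pos[i] for i in range(3)]
    D = math.sqrt(sum(x * x for x in d))
    if node.count == 1 or (D > 0.0 and node.length / D < alpha):
        if D == 0.0:
            return (0.0, 0.0, 0.0)  # skip self-interaction
        f = G * node.mass / (D * D + eps * eps)  # softened magnitude
        return tuple(f * x / D for x in d)
    total = [0.0, 0.0, 0.0]
    for child in node.children:
        a = accel(child, pos, alpha, eps)
        for i in range(3):
            total[i] += a[i]
    return tuple(total)
```

A node containing a single body is always evaluated directly, so the criterion only ever groups genuinely distant clusters.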

\subsection{Updating Positions and Velocities}
The positions and velocities of particles are updated iteratively using a second-order accurate leapfrog method. The algorithm is known as the ``kick-drift-kick'' (KDK) scheme, and is based on the work of Springel\cite{Springel}.

For an exact calculation, the force would be computed according to Newton's law

\begin{equation}
   \vec{F} = \frac{G m_1 m_2}{r^2} \hat{r}
\end{equation}

A direct calculation, however, leads to singularities if particles pass very close to one another: the acceleration diverges, and our discretization of time yields inaccurate results. Additionally, if the particles are meant to be Monte Carlo samples from a continuous distribution, these close interactions represent forces not present in the physical system. To circumvent these problems, we introduce a softening parameter $\epsilon$ which modifies the force at close distances

\begin{equation}
   \vec{F} = \frac{G m_1 m_2}{r^2 + \epsilon^2} \hat{r}
\end{equation}

At the beginning of the timestep, the acceleration of each particle is computed based on the original configuration of the system, and the velocities are updated over half of a timestep (the kick).

\begin{equation}
   \vec{v}_{1/2} = \vec{v}_0 + \vec{a}_0 \frac{dt}{2}
\end{equation}

At the half timestep mark, particle positions are updated using the computed velocities over a full timestep (the drift).

\begin{equation}
   \vec{x}_{1} = \vec{x}_0 + \vec{v}_{1/2} \, dt
\end{equation}

Finally, the acceleration is recalculated using the new positions, and the result is used to update the velocities over the second half of the timestep (the second kick).

\begin{equation}
   \vec{v}_1 = \vec{v}_{1/2} + \vec{a}_1 \frac{dt}{2}
\end{equation}

This iterative scheme has the advantage of being symplectic, which gives good energy and momentum conservation properties, while only requiring two force calculations per particle per iteration. As force calculations are the most time consuming part of the calculation, reducing their frequency is important for efficiency.
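One full KDK iteration can be summarized in code (a Python sketch with hypothetical names; \texttt{accel} is assumed to return the acceleration on each particle from the tree-based force calculation):

```python
def kdk_step(pos, vel, accel, dt):
    # One kick-drift-kick step for a list of 3D positions and velocities.
    a0 = accel(pos)
    # First kick: advance velocities by half a timestep.
    v_half = [[v[i] + a[i] * dt / 2 for i in range(3)]
              for v, a in zip(vel, a0)]
    # Drift: advance positions by a full timestep at half-step velocities.
    x1 = [[x[i] + v[i] * dt for i in range(3)]
          for x, v in zip(pos, v_half)]
    # Second kick: recompute accelerations at the new positions.
    a1 = accel(x1)
    v1 = [[v[i] + a[i] * dt / 2 for i in range(3)]
          for v, a in zip(v_half, a1)]
    return x1, v1
```

Because the second-kick acceleration can be reused as the first-kick acceleration of the next step, the scheme stays cheap in practice.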

\subsection{Adaptive Time Stepping}
The computational cost of the force calculations introduces an important shortcoming of the iterative approach. An accurate simulation requires a timestep smaller than the timescale over which the acceleration changes for the particle undergoing the fastest such change. Other particles, however, may have much longer timescales for changes in acceleration, meaning that frequent force calculations are not necessary for them. We therefore adopt an adaptive timestep scheme, again inspired by Springel\cite{Springel}, which allows particles requiring more frequent force updates to use a smaller timestep while isolated particles are updated less frequently.

First, the acceleration of all particles is computed at the beginning of the time step. Next, an individual timestep is calculated for each particle according to the formula

\begin{equation}
   \Delta t = \sqrt{\frac{\eta \epsilon}{\mid \vec{a} \mid}}
\end{equation}

\noindent where $\eta$ is an accuracy parameter and $\epsilon$ is the softening parameter introduced previously.

The individual timestep is compared to the base timestep, and the particle is assigned the smaller of the two. For simplicity, timesteps are stored as fractions of the full timestep and are rounded to the nearest (negative) power of two (e.g., $\frac{1}{2}dt$, $\frac{1}{4}dt$, etc.).

The simulation now advances over the base timestep in intervals of the smallest fractional timestep. Each particle is evolved according to the KDK scheme, but is updated according to its own fractional timestep. The particles with the smallest fractional timestep are thus kicked at every interval, while those with a longer timestep are simply allowed to drift. Thus, although the total number of iterations is increased, only a small number of particles require force calculations in each iteration. Discretizing the timesteps in powers of two ensures that all particles are synchronized at the end of the base timestep, at which point all particles are kicked. The adaptive timestep allows us to choose a fairly long base timestep, limiting the total number of force calculations necessary.
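The timestep assignment can be sketched as follows (a Python sketch; here the fraction is rounded down to the next power-of-two fraction of the base timestep, a conservative variant of the nearest-power rounding described above):

```python
import math

def fractional_timestep(a_mag, dt_base, eta=0.01, eps=0.005):
    # Per-particle timestep from the acceleration criterion
    # dt = sqrt(eta * eps / |a|), clamped to the base timestep and
    # rounded down to a power-of-two fraction of it so that all
    # particles resynchronize at the end of the base step.
    dt = math.sqrt(eta * eps / a_mag)
    if dt >= dt_base:
        return dt_base
    # Smallest k such that dt_base / 2**k <= dt.
    k = math.ceil(math.log2(dt_base / dt))
    return dt_base / 2 ** k
```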

\begin{figure}
\includegraphics[scale=0.4,angle=-0]{energy.png}
\caption{Energy of the system as a function of time for a cluster of 200 randomly generated masses. Run with $\epsilon=0.005$, $\eta=0.01$, and a base timestep of $dt=0.01$.}\label{energy}
\end{figure}

\begin{figure}
\includegraphics[scale=0.4,angle=-0]{timing.png}
\caption{Run time to advance a randomly generated cluster of $n$ bodies for one year with and without adaptive timestepping. Run with $\epsilon=0.005$, $\eta=0.01$, and a base timestep of $dt=0.01$ for the ADT case, and $dt=0.0001$ for the non-ADT case.}\label{timing}
\end{figure}

\begin{figure}
\includegraphics[scale=0.4,angle=-0]{ellipse.png}
\caption{Several orbits of an Earth-mass planet around a solar-mass star in an elliptical orbit. Note the unphysical precession of the orbit, a result of the error introduced by the softening parameter.}\label{ellipse}
\end{figure}

\section{Performance}
The N-body simulation works as intended and has been tested for systems of up to 500 particles. In order to verify that the code solves the problem correctly, our team ran a number of test problems and compared the simulation's output to the expected results. Test cases included a system modeling the Earth and Sun, an Earth-sized planet orbiting a solar-mass star in an ellipse, several isolated spherical clusters of multiple bodies, and a pair of colliding clusters. In all cases, results agreed with expectations, and no consequential unrealistic effects were observed. Some minor deviations which do arise, and possible sources of error, are discussed below.

\subsection{Verification}
No numerical model is perfect. However, it is possible to check that a computer simulation correctly solves the equations which it is designed to model, limits errors to within a known tolerance, and preserves certain invariants.

In an isolated, non-radiating system, total energy must be conserved. Figure \ref{energy} shows the energy of a spherical cluster of 200 bodies as a function of time, each body having a small random initial velocity. As expected, the total energy remains nearly constant over the course of the simulation. A few minor fluctuations are observed (most notably around 0.3 years), but these tend to be quickly corrected and do not have a major impact on the dynamics of the cluster. It is important to note that while the cluster does not start out in equilibrium, the spatial and velocity distributions adjust until the system settles into virial equilibrium. The cluster spends roughly two years collapsing, settles, and after three years fluctuates around a configuration in which the negative potential energy is equal to twice the kinetic energy. The excellent energy preservation properties are a consequence of the symplectic property of the KDK leapfrog method.
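The energy diagnostic underlying this check amounts to summing kinetic and softened pairwise potential energies by direct summation (a Python sketch with hypothetical names; at $O(n^2)$ it is suitable only as an occasional diagnostic, not for the force calculation):

```python
import math

def energies(masses, pos, vel, G=1.0, eps=0.005):
    # Total kinetic energy and softened pairwise potential energy
    # of a particle set, by direct O(n^2) summation.
    K = sum(0.5 * m * sum(v * v for v in vv)
            for m, vv in zip(masses, vel))
    U = 0.0
    n = len(masses)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((pos[i][k] - pos[j][k]) ** 2 for k in range(3))
            U -= G * masses[i] * masses[j] / math.sqrt(r2 + eps * eps)
    return K, U
```

Virial equilibrium then corresponds to the ratio $-U / K$ fluctuating around 2.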

\subsection{Sources of Error}
A simple check on whether the simulation is working correctly is to place an Earth-like body in an elliptical orbit around a solar-mass star. Clearly, the planet should orbit the star in a periodic orbit with a period of somewhat less than a year. The results of such a simulation are shown in Figure \ref{ellipse}. While the size of the orbit remains constant over several periods, the axis of the orbit is clearly rotating around the star. This phantom torque is a result of the departure from Newtonian gravity created by the softening parameter and the discretization of time. Importantly, the torque does not change the energy or the period of the orbit, again a result of symplecticity.

There are three main sources of error in this simulation: the softening parameter, the discretization of time, and the approximations made in the Octree method. All three sources are predictable, and can be controlled via simulation parameters.

As seen with the ellipse, the introduction of the softening parameter leads to an error in the force calculation. Normally, $\epsilon$ is much smaller than $r$, in which case the deviation from Newtonian gravity is minimal. For close encounters, however, the softening parameter is no longer negligible, and thus has an important effect on the dynamics. The overall effect of the softening parameter is to decrease the force of gravity at small distances. However, close collisions are generally not important (and indeed undesirable) in many N-body simulations, so this effect can be ignored. The softening parameter can be adjusted in the parameter file.

Another unavoidable source of error is the effect of time discretization. In the simulation, the assumption is that the velocity of a particle is constant between kicks. In reality, the particle is constantly being accelerated. However, the timestep is generally much smaller than the timescale of changes in the velocity, meaning that this assumption is appropriate (though not perfect). In close encounters, on the other hand, it may not be valid to assume that the velocity is constant over short times. For example, a particle orbiting within one solar radius of a solar mass object would have its velocity reversed over the course of an hour. This error can never be eliminated, but may be minimized by using a shorter timestep and by introducing the softening parameter, which eliminates close encounters entirely.

Finally, the Octree method introduces errors in the force calculation. Within the hierarchical force calculation, multiple distant particles are approximated as a single mass at the group's center of mass, leading to small errors in the inter-particle distance. If the opening angle is small, however, the effect of this approximation is small. More importantly, the force calculation is no longer symmetric: the force on particle A from particle B is no longer equal and opposite to the force on B from A. This implies a violation of Newton's third law, and therefore of the conservation of momentum. Fortunately, these errors tend to cancel one another, and total momentum is conserved to good accuracy. These errors may also be minimized by using a smaller opening angle.

\subsection{Convergence}
A second verification technique is to check that the simulation scales correctly as the number of particles increases. Theoretically, the scaling should be $O(n \log n)$. Scaling was tested by running a cluster of $n$ bodies for one year, both with and without adaptive timestepping, with $n$ varying from 10 to 400.

After our first test, we found the simulation to be scaling as $n^2$, contrary to the expected scaling of the Barnes-Hut algorithm. Following exhaustive investigation, we determined that the cause of this behavior was the adaptive timestep and the nature of our test case. In the randomly generated cluster, particles were placed randomly within a sphere with a radius of 1 AU. As the number of particles increased, the number of close encounters rose. This in turn increased the number of particles taking smaller timesteps, leading to a greater number of force calculations. When we reran the timing test without adaptive timestepping, the scaling was somewhat better. Importantly, the scaling is subquadratic, indicating that the Barnes-Hut scheme is performing better than direct summation.

\section{Conclusion}
We have implemented an N-body code which combines the Barnes-Hut octree force calculation with a symplectic kick-drift-kick integrator and an adaptive, power-of-two timestep scheme. Tests against systems with known behavior, including two-body orbits and collapsing clusters, show good energy conservation and the expected settling into virial equilibrium, while timing tests confirm subquadratic scaling with particle number. The main sources of error, namely the softening parameter, the time discretization, and the octree approximation, are predictable and controllable through the parameters $\epsilon$, $\eta$, and $\alpha$. A natural avenue for future improvement would be to combine the tree with a particle-mesh solver in a P$^3$M scheme, as discussed in the introduction.

\begin{thebibliography}{99}

\bibitem{Hut} Barnes, J. and Hut, P., ``A hierarchical O(N log N) force-calculation algorithm,'' Nature {\bf 324}, 446-449 (1986).
\bibitem{Springel} Springel, V., ``The cosmological simulation code GADGET-2,'' Monthly Notices of the Royal Astronomical Society {\bf 364}, 1105-1134 (2005).
\bibitem{Enzo} O'Shea, B., et al., ``Introducing Enzo, an AMR Cosmology Application,'' arXiv:astro-ph/0403044 (2004).
\bibitem{Oshea} O'Shea, B., personal communication, April 16, 2012.

\end{thebibliography}

\end{document}
