\documentclass[12pt,fleqn]{article}

\usepackage{graphicx}
\usepackage[small,bf]{caption}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{enumitem}
\usepackage[margin=1.5in]{geometry}
\usepackage{booktabs}
\usepackage{subfigure}
\usepackage[dvipdfm]{hyperref}
\usepackage{algorithm}
\usepackage{algorithmic}

\setlength{\captionmargin}{0.1\textwidth}
\setlength\parskip{0.05in}
\setlength\parindent{0.2in}

\numberwithin{equation}{section}

\newcommand{\del}{\partial}
% For Mermin-style matrices
\newcommand{\m}[1]{\boldsymbol{\mathsf{#1}}}
\renewcommand{\v}[1]{\boldsymbol{#1}}
\renewcommand{\d}[1]{\,d#1}
% For easy Qbit notating
\newcommand{\ket}[1]{{|{#1} \rangle}}
\newcommand{\bra}[1]{{\langle {#1}|}} 
\newcommand{\braket}[2]{\langle {#1} | {#2} \rangle}
\newcommand{\ave}[1]{\langle {#1} \rangle}
\newcommand{\magnitude}[1]{\|#1\|}
\newcommand{\dotprod}{\boldsymbol\cdot}

\DeclareMathOperator{\sech}{sech}
\makeatletter
\def\imod#1{\allowbreak\mkern5mu({\operator@font mod}\,\,#1)}
\makeatother

\title{Nucleation in High Dimensional Ising Models}
\author{Jake Ellowitz}
\date{\today}

\begin{document}
\maketitle

\begin{abstract}
We simulate nucleation in the nearest-neighbor Ising model in five and seven dimensions. The upper critical dimension is $d_c = 6$, above which nucleation is predicted to be qualitatively different than below $d_c$. We find nucleation in the $d=7$ Ising model, find evidence for a pseudospinodal in both $d=5$ and $d=7$ from the behavior of the susceptibility, and find that the lifetime of the metastable state shows qualitatively different behavior below and above $d_c$. We also identify tentative evidence of a difference in the geometry of the nucleating droplet above and below $d_c$. To carry out our intensive computations we employ methods of massively parallel computing using CUDA and graphics cards.
\end{abstract}


\section{Introduction}
\subsection{Overview}\label{sec:overview}
We are interested in the nature of phase transitions in high dimensional nearest-neighbor Ising models. It is predicted that the susceptibility diverges at the limit of stability of the metastable state in these high dimensional ($d>6$) nearest neighbor models~\cite{monette}. This behavior is associated with what is called a spinodal. Formally, the spinodal is a point (or line) at which there is a sharp division between the metastable and unstable regimes. Of particular interest is the divergence of the magnetic susceptibility $\chi$ as the spinodal is approached. A well defined spinodal exists only in the mean-field limit, but spinodal-like effects can be observed in systems with long- but finite-range interactions, and possibly in high dimensional systems. 

In classical nucleation the droplets are compact and independent, and grow by adding single spins~\cite{monette}. Near the spinodal, we observe a different type of nucleation (called spinodal nucleation) in which the surface tension of the nucleating droplet vanishes (the free energy cost of being antialigned with interacting spins is equal to the gain from aligning with the external field), so the droplets become ramified, and grow by coalescence~\cite{bigklein}. 

Spinodal nucleation has been observed in $d=2$ and $d=3$ for long-range Ising models~\cite{herrmann}. A long range interaction is necessary to more  closely approximate mean field. We will test mean field theoretic predictions by increasing the dimensionality, while keeping nearest neighbor spin interactions. The theoretical prediction for the free energy barrier at $d=6$ differs from that below $d=6$ (see Section~\ref{sec:spinodalNucleation}). Previous work by Ray~\cite{ray} on such systems found nucleation in $d=5$ and not $d=7$, but his results were limited by finite size effects and by the low speed of computers nearly two decades ago.

Recently there have been advances in graphics processing technology, which has made parallel computing much faster and affordable. In addition to the hardware development, NVIDIA has developed a C-like programming language (CUDA) which makes taking advantage of graphics cards for general purpose parallel computing much more accessible. We simulated the nearest neighbor Ising model in parallel by using the checkerboard update method (see Section~\ref{sec:checkerboard}). The ability to simulate the dynamics in a parallel fashion is what allows us to make efficient use of the graphics cards.

Because each spin has many nearest neighbors in high dimensions, we find that a long range interaction is not needed for the system to behave as if it were close to the mean-field limit. We confirm nucleation in the $d=5$ Ising model and discover nucleation in the $d=7$ model. By measuring the magnetic susceptibility $\chi$ as a function of the external field, we can extrapolate to the field at which it diverges and thereby find a pseudospinodal external field $H_s$ in both $d=5$ and $d=7$. We also investigate the qualitative behavior of the nucleation rate.

We find nucleation in $d=7$ by investigating the distribution of nucleation times, and we find pseudospinodals in both $d=5$ and $d=7$ by identifying a divergence in the susceptibility. The nucleation rate in $d=7$ differs qualitatively from that in $d=5$. Finally, by examining finite size effects, we provide evidence that the geometry of the nucleating droplet near the spinodal differs above and below the critical dimension.

\subsection{Introduction to the Ising Model}
The Ising model is arguably the most important model of interacting spins due to its simplicity and broad applicability; it is among the simplest systems which exhibit a phase transition.

The Ising model consists of $N$ spins on a lattice of linear dimension $L$ ($N = L^d$ for a hypercube). The Hamiltonian is specified by
    \begin{equation}\label{eq:isingHamiltonian} 
    \mathcal H = -J \sum_{\ave{i,j}} \sigma_i\sigma_j - H\sum_i \sigma_i,
    \end{equation}
where $\sigma_i$ is the spin at lattice site  $i$, $J$ is the exchange constant,  and $H$ is the value of the uniform external magnetic field; $\ave{i,j}$ denotes that the sum is over nearest neighbors. The magnetization is defined
    \begin{equation}
    M \equiv \sum_i \sigma_i.
    \end{equation}
Often it is convenient to normalize the magnetization. We denote the magnetization per spin 
    \begin{equation}
    m \equiv \frac M N,
    \end{equation}
where $N$ is the number of particles.

\section{Theory of Nucleation}
\subsection{Metastability}
Many systems in nature are not in a stable phase, but in a metastable phase, which corresponds to a local (rather than global) minimum of the free energy. For example a diamond at room temperature and pressure is in a metastable phase: graphite has a slightly lower free energy. However we never observe diamonds turning into graphite because the free energy barrier for this conversion is very high. Perhaps a more elucidating example of metastability is a supercooled liquid, since this system realistically exhibits a phase transition. If we put water into a smooth container and quickly lower its temperature, we have prepared water below its freezing temperature. Eventually a crystalline structure (ice) will form, minimizing the free energy of the water and taking the system out of the metastable supercooled state. Thus a metastable state is an equilibrium-like state which is susceptible to falling into a lower free energy state due to thermal fluctuations or small perturbations.

We are interested in the transition between the metastable and stable phases. These transitions occur because of fluctuations: as the system evolves, it will eventually fluctuate into its lowest free energy state (though this decay process can take a very long time).

One way to induce metastability in the Ising model is by equilibrating the system at $T< T_c$ with the external field $H$ up, then \emph{quenching} the magnetic field ($H:=-H$), allowing a time for the system to relax into the metastable state. After the quench  we find that most of the spins are antialigned with $H$. They do not immediately align with $H$ because of the free energy cost of forming a nucleating droplet. This free energy cost is easily understood by realizing that it is favorable for the spins to stay aligned with each other, even though there is a penalty from being against the field, so if just one spin is aligned with the field, it is penalized by being antialigned with all of its neighbors. Thus we must wait for a large enough thermal fluctuation (for a local region of the lattice to uniformly align with the field) to initiate the decay of the metastable state. See Figure~\ref{fig:2dnucleate} for an example of nucleation in the $d=2$ Ising model.

    \begin{figure}
    \centering
    \includegraphics[width=0.4\textwidth]{figs/2dnucleate}
    \caption{\label{fig:2dnucleate}An example of nucleation in the $d=2$ nearest-neighbor Ising model. The lighter pixels are aligned with the external field.}
    \end{figure}

\subsubsection{Metastable Decay as a Poisson Process}\label{sec:metastablePoisson}
The decay of the metastable state is due to random thermal fluctuations which induce nucleation. These fluctuations are history independent, so nucleation events occur at a constant rate, making the decay of the metastable state a Poisson process. The time until nucleation is therefore given by an exponential distribution
    \begin{equation}
    p(t) \propto \exp(-rt),
    \end{equation}
where $p$ is the probability density of an ``event'' (nucleation) and $r$ is the nucleation rate. Recall for the exponential distribution that
    \begin{equation}
    \ave t = \sigma_t = \frac{1}{r}.
    \end{equation}

\subsection{Mean-Field Theory for the Ising Model}\label{sec:meanFieldTheory}
Each spin $\sigma_i$ sees an effective field
    \begin{equation}
    H_{{\rm eff},i} = J \sum_{j=1}^q \sigma_j + H,
    \end{equation}
where the summation is over all $q$ spins interacting with $\sigma_i$. The effective field depends on the neighboring spins, so $H_{{\rm eff},i}$ will not be constant. However, we can write the average effective field as
    \begin{equation}
    \label{eq:aveHeff}
    \ave{H_{{\rm eff},i}} = 
    \bigg\langle J\sum_{j=1}^q \sigma_j + H\bigg\rangle
    = J \sum_{j=1}^q \ave{\sigma_j} + H.
    \end{equation}
Now let $q\rightarrow \infty$ and $J\rightarrow 0$ such that $J q$ remains a (nonzero) constant, corresponding to the limit of mean-field theory. In this limit $\ave{\sigma_j} = m$ for every site, so every spin sees the same mean effective field. Thus Eq.~\eqref{eq:aveHeff} can be simplified as
    \begin{equation}
    H_{\rm eff} = \ave{H_{{\rm eff},i}} = Jqm + H.
    \end{equation}
The energy of spin $i$ is therefore given by $-H_{\rm eff}\sigma_i$.

Because the system is at constant temperature, we can write the partition function for the $i$th spin as
    \begin{equation}
    Z_i = \sum_{\sigma_i\in\{\pm 1\}}\exp\left\{\beta H_{\rm eff}\sigma_i\right\},
    \end{equation}
and hence the free energy from a single spin is given by 
    \begin{equation}
    \label{eq:freeEnergyPerSpin}
    f = -kT\ln Z = -kT\ln\left[2\cosh \beta H_{\rm eff}\right] = -kT\ln\left[2\cosh\beta(Jqm + H)\right].
    \end{equation}
We can now find the magnetization per particle from Eq.~\eqref{eq:freeEnergyPerSpin}:
    \begin{equation}
    \label{eq:mPerSpin}
    m = -\frac{\del f}{\del H} = \tanh \beta(Jqm + H).
    \end{equation}

Consider Eq.~\eqref{eq:mPerSpin} in zero external field. The transcendental equation \eqref{eq:mPerSpin} has a single solution for $\beta J q < 1$, namely $m=0$, and three solutions for $\beta J q > 1$. If the system is cooled so that $\beta J q > 1$, there is a positive, a zero, and a negative solution for $m$. The solution $m=0$ is unstable when $\beta J q > 1$, and the other two (stable) solutions occur in a physical system with equal probability, as the system spontaneously becomes positively or negatively magnetized. See Figure~\ref{fig:transcendental}.

    \begin{figure}
    \centering
    \includegraphics[width=0.5\textwidth]{figs/tc}
    \caption{\label{fig:transcendental}Solutions to Eq.~\eqref{eq:mPerSpin} for different values of $\beta J q$. Observe the bifurcation at $\beta J q = 1$: the stable solution $m=0$ splits into two stable values as $\beta J q$ exceeds one, leaving an unstable solution at $m=0$.}
    \end{figure}

Hence $\beta J q > 1$ is a {\em ferromagnetic phase}, and $\beta J q < 1$ is a {\em paramagnetic phase}. Thus mean field theory predicts a phase transition in the order parameter $m$ for some critical $\beta_c J q = J q/k T_c$ \footnote{Only $\beta$ and $T$ are subscripted as critical values since $J$, $q$, and $k$ are all constants.} when $\beta_c J q = 1$. Therefore, we find the critical temperature $T_c$ to be given by the relation
    \begin{equation}
    \label{eq:meanFieldTc}
    k T_c = q J.
    \end{equation}

\subsubsection{Critical Dimension and Mean Field Approximations}
Mean field theory assumes that the fluctuations of the order parameter are much smaller than its mean value. When this condition holds, the fluctuations can be ignored and the order parameter can be replaced by its mean value.

Mean field theory breaks down near critical points (when the fluctuations of the order parameter become large). For example, in the Ising model, the isothermal susceptibility is given by
    \begin{equation}
    \chi = \frac{1}{kT}\sigma_m^2,
    \end{equation}
where $\sigma^2_m$ is the variance of the magnetization. Since $\chi\rightarrow\infty$ near the critical point, the fluctuations in the order parameter $m$ become very large there. However, according to the Ginzburg criterion, these fluctuations become negligible relative to the mean near the critical temperature $T_c$ for the high dimensional nearest neighbor Ising model: the fluctuations of $m$ are much smaller than $\ave m$ for $d>d_c = 4$~\cite{gould}. That is, for $d>4$ the critical exponents given by mean field theory are exact, so behavior near the critical point (but not the spinodal) can be described exactly by mean field theory.
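In a simulation, this fluctuation relation is what lets us measure $\chi$ directly from a time series of magnetization samples. A sketch of the estimator in C (the per-spin normalization convention, with $J$ and $k$ absorbed into $\beta$, is an assumption made for this example):

```c
/* Estimate the isothermal susceptibility from samples of the magnetization
 * per spin: chi = beta * N * (<m^2> - <m>^2), i.e. beta times the variance
 * of the total magnetization, divided by N. */
double susceptibility(const double *m, int nsamples, double beta, int nspins)
{
    double s = 0.0, s2 = 0.0;
    for (int i = 0; i < nsamples; ++i) {
        s += m[i];
        s2 += m[i]*m[i];
    }
    double mean = s / nsamples;
    return beta * nspins * (s2/nsamples - mean*mean);
}
```

A divergence of this estimator as $H$ approaches some value is precisely the pseudospinodal signature we look for.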

The upper critical dimension for the spinodal is $d=6$~\cite{muratov, bigklein}, hence near the spinodal in $d>6$, the behavior of the Ising model should be mean field.

\subsubsection{\texorpdfstring{Predicting $H_s$}{Predicting Hs}}\label{sec:predictHs1}
It is conventional when studying nucleation in the Ising model to choose $T = \frac49 T_c$, meaning we have  from Eq.~\eqref{eq:meanFieldTc}
    \begin{equation}
    \label{eq:meanFieldRunT}
    \beta J q = \frac{9}{4}.
    \end{equation}
Consider the isothermal magnetic susceptibility calculated from \eqref{eq:mPerSpin}
    \begin{equation}
    \label{eq:meanFieldChi}
    \chi = \frac{\del m}{\del H}
    = \frac{\beta \sech^2 \beta(qJm+H)}{1-\beta J q\sech^2\beta(qJm+H)}.
    \end{equation}
As the spinodal is approached, we expect that the metastable well will disappear, leaving a saddle point in the free energy (see Section~\ref{sec:freeEnergy}). Under these circumstances, the magnetization should fluctuate greatly, hence we expect $\chi \rightarrow \infty$. Thus the denominator in \eqref{eq:meanFieldChi} should approach zero, yielding
    \begin{equation}
    \label{eq:geths1}
    \beta J q \sech^2 \beta(qJm_s + H_s) = 1,
    \end{equation}
where the subscript $s$ indicates that the corresponding values are taken at the spinodal. Noting the identity $\sech^2\alpha = 1 - \tanh^2\alpha$, Eq.~\eqref{eq:mPerSpin} gives us 
    \begin{equation}
    \label{eq:geths2}
    m_s^2 = 1-\sech^2\beta(Jqm_s + H_s).
    \end{equation}
Substituting \eqref{eq:geths1} into \eqref{eq:geths2} and solving for $m_s$ yields
    \begin{equation}
    m_s = \left\{\frac{\beta J q - 1}{\beta J q}\right\}^{1/2}.
    \end{equation}
Notice that if we have $T=\frac{4}{9}T_c$, \eqref{eq:meanFieldRunT} implies that the spinodal magnetization is a constant $m_s = \sqrt{5/9}$ in all dimensions. Solving for $H_s$ in Eq.~\eqref{eq:geths1} gives us
    \begin{equation}
    \label{eq:meanFieldHs}
    H_s 
    = \frac1\beta \cosh^{-1}\sqrt{\beta J q} - qJm_s
    = \frac1\beta \cosh^{-1}\sqrt{\beta J q} - qJ\left\{\frac{\beta J q - 1}{\beta J q}\right\}^{1/2}.
    \end{equation}
Hence at $T=\frac49T_c$,
    \begin{equation}
    \label{eq:meanFieldHsFixT}
    H_s = qJ \left\{\frac49\cosh^{-1}\frac{3}{2} - \left(\frac59\right)^{1/2} \right\} \approx -0.3176\, qJ.
    \end{equation}

Although Eq.~\eqref{eq:meanFieldHs} assumes $q\rightarrow\infty$, we can use the mean field result to make predictions in the nearest neighbor system in which $q = 2d$ and $J=1$. Some predictions for $H_s$ when $T = \frac49 T_c$ (using \eqref{eq:meanFieldHsFixT}) can be found in Table~\ref{tab:hs}.

    \begin{table}
    \centering
        \begin{tabular}{cc}\toprule
        $d$ & $|H_s|$ \\ \midrule
        2 & 1.2704 \\
        5 & 3.1761 \\
        7 & 4.4466 \\
        \bottomrule
        \end{tabular}
        \caption{\label{tab:hs}Mean field predictions for $H_s$ in the nearest neighbor Ising model at $T=\frac{4}{9}T_c$.}
    \end{table}
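The entries of Table~\ref{tab:hs} follow from Eq.~\eqref{eq:meanFieldHsFixT} with $q = 2d$ and $J=1$, and are easy to reproduce numerically. A short sketch (the function name is ours):

```c
#include <math.h>

/* Mean-field spinodal field magnitude at T = (4/9) Tc, i.e. beta*J*q = 9/4,
 * for the nearest-neighbor model with J = 1 and q = 2d neighbors:
 * |H_s| = q * [ sqrt(5/9) - (4/9)*acosh(3/2) ]. */
double spinodal_field(int d)
{
    double q = 2.0 * d;   /* nearest neighbors on a d-dimensional hypercube */
    return q * (sqrt(5.0/9.0) - (4.0/9.0)*acosh(1.5));
}
```

Evaluating at $d = 2$, $5$, and $7$ reproduces the tabulated magnitudes $1.2704$, $3.1761$, and $4.4466$.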

\subsection{Landau Theory for the Ising Model}\label{sec:freeEnergy}
Using phenomenological arguments, I will derive an expression for the free energy of the Ising model. This expression will reveal the existence of a metastable state. 

We consider the free energy per spin $f$ with the magnetization per particle $m$ as our order parameter\footnote{The mean magnetization is chosen as our order parameter as it has been predicted to have interesting behavior, namely a phase transition, from considerations in Section~\ref{sec:meanFieldTheory}.}. In zero external field ($H=0$), we assume that $f$ should be symmetric under $m\rightarrow -m$ because there is no preferred orientation. Thus when $H=0$, $f$ must be an {\em even} function of $m$:
    \begin{equation}\label{eq:fEven}
    H=0\Rightarrow f(m) = f(-m)
    \end{equation}

If we do a power series expansion of $f$ in $m$, we have
    \begin{equation}
    f(m) = a_0 + a_1 m + \frac{a_2}{2}m^2 +\frac{a_3}{3}m^3 + \frac{a_4}{4} m^4 + O(m^5).
    \end{equation}
Let us now consider $H=0$. The condition implied by \eqref{eq:fEven} leaves us with
    \begin{equation}
    f(m) = a_0 + \frac{a_2}{2} m^2 + \frac{a_4}{4} m^4 + O(m^6).
    \end{equation}
It turns out that a fourth-order term is all we need to display a phase transition in $m$, thus we consider all of the higher order terms as corrections, and we can write
    \begin{equation}\label{eq:fEvenReduced}
    f(m) \approx a_0 + \frac{a_2}{2} m^2 + \frac{a_4}{4} m^4.
    \end{equation}
Finally, taking into account the external field (which we know contributes $-Hm$ to the free energy), we write
    \begin{equation}\label{eq:f}
    f(m) = a_0 + \frac{a_2}{2} m^2 + \frac{a_4}{4} m^4 - Hm.
    \end{equation}
For a more rigorous derivation of the phenomenological free energy, see Ref.~\cite{monette}.

In order to minimize the free energy (which is what the physical system will aim to do), the following will occur:
    \begin{enumerate}\itemsep -2pt
    \item The system will try to minimize its total energy.
    \item The system will try to maximize its entropy.
    \end{enumerate}
Let us consider the case in which $H=0$. As $m\rightarrow\pm1$, the system reaches its lowest entropy configuration, which is highly undesirable according to the second condition above. This implies that $a_4$ in \eqref{eq:fEvenReduced} \emph{must} be positive, because for large values of $|m|$ the quartic term in Eq.~\eqref{eq:fEvenReduced} dominates and must penalize these low entropy configurations. 

We now turn our consideration to the parameters $a_0$ and $a_2$. We see that $a_0$ cannot change the qualitative shape of $f$, just shift it up or down. For this reason I will presume it not to be interesting and set $a_0 = 0$. Now, consider the extrema of $f$, which are found by
    \begin{equation}
    \frac{df}{dm} = a_2m + a_4m^3 = 0.
    \end{equation}
We find the critical points of $m$ ($m_c$) to be
    \begin{equation}
    m_c \in \left\{0,\pm\left(-\frac{a_2}{a_4}\right)^{1/2}\right\}.
    \end{equation}
Thus for $a_2 > 0$ there is only one critical point, while for $a_2 < 0$ there are three. Hence there is a phase transition when $a_2 = 0$, which corresponds to the system at the critical temperature, and we write
    \begin{equation}
    a_2 (T) = a_{2,0} (T-T_c),
    \end{equation}
with $a_{2,0} > 0$, so that $a_2$ changes sign at $T_c$. See Figure~\ref{fig:fh0} for a visualization of the above discussion of the free energy when $H=0$.

When we turn on the external field, we observe that the system loses its symmetry, and we observe a global minimum as well as a local minimum, see Figure~\ref{fig:fhpos}. The local minimum corresponds to the metastable state. If we continue to ramp up $H$, eventually we reach the point in which the metastable state can no longer exist. This happens at the spinodal field $H=H_s$, see Figure~\ref{fig:fhs}.

    \begin{figure}
    \centering
    \subfigure[\label{fig:fh0}$H=0$]{\includegraphics[width=0.3\textwidth]{figs/f1}}
    \subfigure[\label{fig:fhpos}$0<H<H_s$]{\includegraphics[width=0.3\textwidth]{figs/f2}}
    \subfigure[\label{fig:fhs}$H=H_s$]{\includegraphics[width=0.3\textwidth]{figs/f3}}
    \caption{\label{fig:f}Plot of the phenomenological free energy. At $H=H_s$ we see one of the critical points disappears as the metastable state can no longer exist, leaving us with a saddle point.}
    \end{figure}

\subsubsection{\texorpdfstring{Another way of Predicting $H_s$}{Another way of Predicting Hs}}
Let us make a linear transformation $L:m\rightarrow\phi$ in \eqref{eq:f} such that
    \begin{equation}\label{eq:fphi}
    f(\phi) = C - |\epsilon|\phi^2 + \alpha\phi^4 - H_s\phi
    \end{equation}
for $\epsilon = T_c-T$ and $\alpha>0$. Though this transformation may seem arbitrary, its relevance will become apparent shortly. The spinodal field $H_s$ appearing in \eqref{eq:fphi} is the field at which there is a critical $\phi_s$ such that
    \begin{align}
    &\left.\left\{\frac{df}{d\phi}\right\}\right|_{\phi=\phi_s} = 0 \label{eq:fCritical}\\
    &\left.\left\{\frac{d^2f}{d\phi^2}\right\}\right|_{\phi=\phi_s} = 0\label{eq:fInflection};
    \end{align}
that is, $H_s$ gives us a critical point which is also an inflection point. Solving Eq.s \eqref{eq:fCritical} and \eqref{eq:fInflection} (and choosing the negative solution in order to correspond to Figure~\ref{fig:fhs}), we obtain
    \begin{equation}
    |H_s| = \frac{4}{3}\frac{|\epsilon|^{3/2}}{\sqrt{6\alpha}}.
    \end{equation}
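For completeness, the algebra is short. Condition \eqref{eq:fInflection} fixes $\phi_s$, and substituting into \eqref{eq:fCritical} fixes $H_s$:
    \begin{align*}
    -2|\epsilon| + 12\alpha\phi_s^2 = 0
        &\quad\Rightarrow\quad \phi_s = \pm\left\{\frac{|\epsilon|}{6\alpha}\right\}^{1/2},\\
    -2|\epsilon|\phi_s + 4\alpha\phi_s^3 - H_s = 0
        &\quad\Rightarrow\quad H_s = -\frac43|\epsilon|\phi_s,
    \end{align*}
so that $|H_s| = \frac43\,|\epsilon|^{3/2}/\sqrt{6\alpha}$.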

In Ref.~\cite{monette}, it is shown that $\alpha$ above satisfies
    \begin{equation}
    2 - \alpha = \frac32,
    \end{equation}
that is, $\alpha = \frac12$.
Setting $T=\frac49T_c$, we obtain $|\epsilon| = \frac59T_c$, thus the spinodal field is given by
    \begin{equation}
    |H_s| = 4\left\{\frac{5}{27}T_c\right\}^{3/2} \approx 0.319\,T_c^{3/2}.
    \end{equation}
This prediction does not apply directly to the nearest neighbor Ising model, though it gives us an idea of how the spinodal field might scale with $T_c$.

\subsection{Nucleation Near the Spinodal}\label{sec:spinodalNucleation}
Here I give an introduction to why classical nucleation theory does not adequately explain nucleation in high dimensions. We define
    \begin{equation}
    \Delta H \equiv \left|\frac{H_s - H}{H_s}\right|.
    \end{equation}
In Ref.~\cite{monette} the following (classical) relations are shown:
    \begin{subequations}
        \label{eq:monetteRelations}
        \begin{align}
        &\chi \sim (\Delta H)^{-\gamma}\label{eq:monettaChi} \\
        &\Delta F \sim (\Delta H)^{3/2-d/4} \label{eq:monetteDeltaF}\\
        &r \sim \exp\left(-\beta\Delta F\right),
        \end{align}
    \end{subequations}
where $\Delta F$ is the free energy cost of the nucleating droplet, $r$ is the nucleation rate (the inverse of the expected nucleation time $\ave t$), and $\chi$ is the isothermal susceptibility with critical exponent $\gamma=1/2$. 

From Eq.~\eqref{eq:monetteDeltaF} we see that there is a critical dimension $d_c = 6$ above which the form of the free energy cost of a droplet no longer makes sense: if we set $d < d_c$ in \eqref{eq:monetteDeltaF}, we find that as $\Delta H \rightarrow 0$, nucleation becomes more frequent because it requires less free energy for a droplet to form. However, if we set $d>d_c$ in \eqref{eq:monetteDeltaF}, we find that $\Delta F\rightarrow\infty$ as $\Delta H\rightarrow 0$. This would mean that as we increase $H$ toward $H_s$, nucleation becomes less frequent, which makes no sense. From this we conclude that the scaling relation \eqref{eq:monetteDeltaF} is invalid in high dimensions ($d > d_c$).
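The sign change in the exponent at $d_c$ is the crux of the argument, and is trivial to check (a throwaway sketch; the function name is ours):

```c
#include <math.h>

/* Exponent of Delta H in the droplet free-energy cost,
 * Delta F ~ (Delta H)^(3/2 - d/4): positive below d_c = 6 (barrier
 * vanishes as H -> H_s), negative above (barrier would diverge). */
double deltaF_exponent(int d)
{
    return 1.5 - d/4.0;
}
```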

\section{GPUs and CUDA}
\subsection{Overview}
Moore's law, which specifies that the number of transistors per chip doubles roughly every two years, has held for decades. In the past these gains were delivered through faster clock rates and larger caches. Recently chip manufacturers have had trouble continuing this trend: for example, clock rates have gotten so high that an impractical amount of heat dissipation is required to prevent the CPU from overheating. CPU manufacturers have instead begun manufacturing multi-core CPUs in order to continue the doubling trend. Each one of these cores still maintains, however, a high clock rate and a large cache. This is not necessarily the best architecture for large workloads, since the large caches are built at the expense of having more cores.

GPUs are specialized for compute-intensive, highly parallel computation, and are excellent for number crunching. Graphics cards are optimized for 32 bit floating point arithmetic; they can conduct 64 bit floating point arithmetic as well, though not yet efficiently. They achieve their number crunching ability by having high memory bandwidth and small caches (that is, more room on the chip for more cores). But realize: GPUs are only advantageous when the computation can be done in a highly parallel manner. GPUs also follow a Moore's-law-like doubling, and they have now greatly surpassed CPUs in terms of raw computational power, see Figure~\ref{fig:cudamoore}.

    \begin{figure}
    \centering
    \includegraphics[scale=0.5]{figs/cudamoore}
    \caption{\label{fig:cudamoore}The current trend in GPU GFLOPS versus time, relative to CPUs~\cite{cuda}.}
    \end{figure}

By reducing the cache, more of the chip is available for processing cores, so it is no surprise that a GPU has more peak processing power than a CPU: it simply devotes more transistors to arithmetic. But a GPU does not just compute more FLOPS than a CPU; it gets more FLOPS per transistor as well. A CPU hides memory latency with cache. A GPU hides memory latency with {\em parallelism}: instead of accessing a cache while waiting on the next memory read, a GPU simply schedules the next thread that is ready to be processed. 

In the past, to conduct a general purpose calculation on a GPU as a parallel computing device, one needed to map the problem onto the graphics API, and the program output took the form of conventional graphics card output: pixels. Furthermore, previous programming tools and hardware did not support high speed data sharing between threads or fast thread synchronization. NVIDIA's CUDA (Compute Unified Device Architecture) provides both, and also allows read and write access to any on-board memory location from any thread. These tools are crucial in allowing a wider variety of problems to be implemented on GPUs.

The Tesla C1060 (the card used for my simulations) contains 30 multiprocessors, each containing 8 thread processors. For a visualization, see Figure~\ref{fig:cudaproc}. This means we have $30\cdot 8 = 240$ cores, each clocked at $1.3\,{\rm GHz}$, which results in a peak of roughly $930\,{\rm GFLOPS}$! For a unit which costs just a little over \$1000, a graphics card provides a lot of bang for the buck.
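The quoted peak follows from simple arithmetic; the factor of 3 floating point operations per cycle (a dual-issued multiply-add plus a multiply) is my assumption about how NVIDIA counts peak throughput for this generation of hardware:

```c
/* Peak single-precision throughput estimate:
 * multiprocessors x cores-per-multiprocessor x clock (GHz) x flops-per-cycle.
 * For the Tesla C1060: 30 x 8 x 1.3 GHz x 3 = 936 GFLOPS. */
double peak_gflops(int mps, int cores_per_mp, double ghz, int flops_per_cycle)
{
    return mps * cores_per_mp * ghz * flops_per_cycle;
}
```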

    \begin{figure}
    \centering
    \includegraphics[scale=0.45]{figs/cudaproc}
    \caption{\label{fig:cudaproc}A schematic of the hierarchy of processing in the GPU~\cite{cuda}.}
    \end{figure}

    \begin{figure}
    \centering
    %\subfigure{\includegraphics[scale=0.41]{figs/cudamem2}}
    %\subfigure{\includegraphics[scale=0.55]{figs/cudamem}}
    \includegraphics[scale=0.41]{figs/cudamem2}
    \caption{\label{fig:cudamem}The CUDA memory model. Notice that in order for two blocks to communicate they must go through global memory; since blocks cannot synchronize with one another, inter-block communication is unreliable and should not be attempted~\cite{cuda}.}
    \end{figure}

In depth documentation about CUDA features, coding, and hardware implementation is readily found in Ref.~\cite{cuda}.

\subsection{Some CUDA Optimization Techniques}
In the following I outline some basic concepts which are not complicated to understand or implement in order to write a relatively fast program. I discuss two aspects of CUDA optimization: making the most of the memory bandwidth (by reading the graphics card memory efficiently), and making the most of the GPU (by making use of the multiprocessors efficiently). 

Contiguous memory access is dramatically faster than random memory access. For this reason, the programmer is encouraged to make sequential array accesses. Even though the threads execute in parallel, a group of threads will issue its memory requests together in a single instruction, so parallel threads accessing consecutive addresses can have those accesses served as one contiguous transfer.

Each multiprocessor has a single instruction, multiple data (SIMD) architecture: at a given clock cycle, every processor in a multiprocessor executes the same instruction on different data. This is perfect for parallel programming: the same operation is carried out on many data at once. Each block (see Figure~\ref{fig:cudaproc}) is split into SIMD groups called warps; the number of threads in a warp is called the {\em warp size}, which is 32 on current hardware. When choosing the number of threads per block, it is best to choose a multiple of the warp size, so that no computation is wasted on underpopulated warps; in practice at least 64 threads per block are recommended to keep each multiprocessor busy. More threads per block means more hidden memory latency, though too many threads will occupy too many registers in the multiprocessor. The optimal number of threads per block depends on the computational task at hand, but it should be a multiple of the warp size.
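In practice this advice reduces to two small helper calculations when configuring a kernel launch. A sketch (the helper names are ours, not part of the CUDA API):

```c
/* Round a requested thread count up to a multiple of the warp size, so no
 * warp is left underpopulated. */
int round_to_warps(int threads, int warp_size)
{
    return ((threads + warp_size - 1) / warp_size) * warp_size;
}

/* Number of blocks needed to cover n work items with a given block size
 * (ceiling division). */
int blocks_needed(int n, int threads_per_block)
{
    return (n + threads_per_block - 1) / threads_per_block;
}
```

The same ceiling-division pattern appears in the matrix-addition example below, where the kernel guards against the extra threads in the final, partially filled block.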

\subsection{CUDA Example}
As you will soon see, CUDA is a C-like programming language with some added functions. There is not a very big syntactical leap from C to CUDA (though there is a big leap in program design: work in parallel instead of sequentially). Below we use a {\tt \_\_global\_\_} function, which is the primary functional interface between the host computer and the graphics card; {\tt \_\_global\_\_} functions are invoked when you want the graphics card to start doing some computation.

Below is a complete program which creates two large matrices, then adds them together on the graphics card. We must first initialize arrays on the computer, then copy these arrays to memory on the graphics card. We then set the dimensions of the grids and blocks to specify the number of concurrent threads which are running, and the number which can intercommunicate through shared memory (all threads in a single block can readily communicate with each other through shared memory and thread synchronization, but there is no contact with other blocks). In this case, since matrix addition does not require cross-index communication, this feature is somewhat irrelevant. After executing the calculation (or the {\em kernel}) on the graphics card, we copy the output back to host memory for any subsequent computations one might be interested in. At the end of the program, we employ good programming practice and free all of the memory which has been allocated. Refer to the in-line comments for an explanation of how the program works.

\begin{footnotesize}
\begin{verbatim}

    /**
     * Matrix addition in CUDA
     * Jake Ellowitz
     */

    #include <stdio.h>
    #include <stdlib.h>

    const int L = (1 << 10);
    // 16*16 = 256 threads per block; devices of compute capability 1.x
    // (e.g. the Tesla C1060) allow at most 512 threads per block, so a
    // 32*32 block would fail to launch.
    const int blocksize = 16;

    // __global__ functions are executed on the graphics device. We take 
    // two input matrices a, b (represented in single dimensional arrays)
    // and add them into an output matrix c.
    __global__ void add_matrix (float * a, float * b, float * c, int L)
    {
        // Each thread has access to its thread number. We can use these
        // thread numbers as matrix indexes.
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        int index = i + j*L;
        // Guard in case L is not a multiple of the block dimension
        if (i < L && j < L)
            c[index] = a[index] + b[index];
    }

    int main (int argc, char ** argv)
    {
        const int size = L*L*sizeof (float);

        // Allocate CPU memory for the matrices
        float * a = new float[L*L];
        float * b = new float[L*L];
        float * c = new float[L*L];
        // Initialize the CPU matrix values
        for (int i=0; i<L*L; ++i)
        {
            a[i] = 1.9f; // Just some numbers to add...
            b[i] = 2.7f;
        }

        // Allocate GPU memory for the matrices
        float * ad, * bd, *cd;
        cudaMalloc ((void **)&ad, size); 
        cudaMalloc ((void **)&bd, size);
        cudaMalloc ((void **)&cd, size);

        // Copy initialized matrices to the GPU
        cudaMemcpy (ad, a, size, cudaMemcpyHostToDevice);
        cudaMemcpy (bd, b, size, cudaMemcpyHostToDevice);

        // Set kernel parameters:
        // Each block is blocksize by blocksize threads
        dim3 dimBlock (blocksize, blocksize);
        // Each grid has L*L/(numThreadsPerBlock) blocks to complete
        // the L*L total desired independent calculations.
        dim3 dimGrid  (L/dimBlock.x, L/dimBlock.y);
        // Add the matrices in the kernel, computing c = a + b
        add_matrix<<<dimGrid,dimBlock>>> (ad, bd, cd, L);

        // Retrieve the computed data
        cudaMemcpy (c, cd, size, cudaMemcpyDeviceToHost);

        /** Do something with computed data
         * ...
         */

        // Prevent memory leaks
        cudaFree (ad); cudaFree (bd); cudaFree (cd);
        delete [] a; delete [] b; delete [] c;

        return EXIT_SUCCESS;
    }

\end{verbatim}
\end{footnotesize}

\subsection{OpenCL}
OpenCL (Open Computing Language) is a next-generation heterogeneous programming platform. It is an open standard developed by dozens of influential corporations including NVIDIA, and is designed to allow general purpose computing on a wide range of GPUs, including AMD's models. It is similar to CUDA in syntax, but extends CUDA's capability since it is designed to allow easy access to hardware beyond graphics cards.

\section{Methods}
\subsection{Metropolis Algorithm}\label{sec:metropolis}
The Metropolis algorithm (also sometimes called the Metropolis-Hastings algorithm) is a simple method for sampling states from a distribution when all we know is the probability density function. This algorithm is useful when we have a very large number of states. For example, a relatively small Ising model will have $10^6$ spins, yet it corresponds to $\Omega = 2^{10^6}$ states. Sampling all of those states is impossible. The Metropolis algorithm gives us a way of sampling the distribution by visiting the more probable states more often than the less probable ones.

    \begin{algorithm}[t]
    \caption{\label{alg:metropolis}The Metropolis algorithm.}
        \begin{algorithmic}
        \REQUIRE {Knowledge of $p(S)$}
        \STATE Initialize the system in state $A$ 
        \STATE\COMMENT {Our current state will always be denoted by $A$}
        \LOOP
            \STATE Find a new state $B$
            \STATE Calculate a random number $r\in[0,1)$
            \IF {$r<\min\left\{1, \frac{p(B)}{p(A)}\right\}$}
                \STATE Set $A = B$
            \ELSE 
                \STATE Retain state $A$
            \ENDIF
        \ENDLOOP
        \end{algorithmic}
    \end{algorithm}

The algorithm is outlined in Algorithm~\ref{alg:metropolis}. Note that in an isothermal system with a fixed number of particles, we know that the probability of a given state $S$ is given by the Boltzmann distribution
    \begin{equation}
    p(S) = \frac{1}{Z}\exp(-\beta E_S)
    \end{equation}
where $E_S$ is the energy of state $S$. It is important to note that the lowest energy state is therefore the most probable state. Since the Metropolis algorithm samples more probable states more frequently, we spend more time in low energy states.

Relevant to Algorithm~\ref{alg:metropolis} is the computation of the relative probabilities of two states $S$ and $S'$:
    \begin{equation}
    \frac{p(S')}{p(S)} = \frac{\exp(-\beta E_{S'})}{\exp(-\beta E_S)}
    = \exp(-\beta(E_{S'} - E_S)) = \exp(-\beta\Delta E),
    \end{equation}
with $\Delta E$ denoting the change in energy due to accepting the new state $S'$.

You might be wondering how to generate new useful states. In the Ising model, we can select a random lattice site, flip it to generate a new state, and accept the change according to the Metropolis algorithm. This process is called a single spin update. Though flipping single spins at random works, it cannot be implemented in parallel: if we pick two adjacent sites, neither can be updated independently, because each must wait for the other's update in order to calculate its own $\Delta E$ (which defeats the purpose of running in parallel in the first place). See Section~\ref{sec:checkerboard} for a method of parallel spin configuration sampling.

\subsubsection{Energy Change Due to Trial Flip}
If we are flipping any given spin, we can calculate the change in energy from this trial flip simply. We gather the magnetization of all spins interacting with spin $\sigma_i$, denoted $m_{\ave i}$ (in our case, just the nearest neighbors). The original energy $E_0$ is the energy of all bonds not involving $\sigma_i$ and their interaction with $H$ (call this energy $E_{j\ne i}$) added to the energy contribution of $\sigma_i$:
    \begin{equation}
    E_0 = E_{j\ne i} + E_{i} = E_{j\ne i} - (Jm_{\ave i}+H)\sigma_i.
    \end{equation}
When we do a trial flip of spin $i$, we have $\sigma_i \leftarrow -\sigma_i$, and the final energy is given by
    \begin{equation}
    E_f =  E_{j\ne i}- (Jm_{\ave i}+H)(-\sigma_i).
    \end{equation}
Thus the change in energy (defined as $\Delta E \equiv E_f - E_0$) is given by
    \begin{equation}
    \Delta E = 2(Jm_{\ave i}+H)\sigma_i.
    \end{equation}

\subsection{Checkerboard Spin Updating}\label{sec:checkerboard}
Our goal is to devise a spin updating scheme which overcomes the problem posed in Section~\ref{sec:metropolis}: a way to update spins in parallel. To accomplish this task we divide the lattice into black and red sites, where no black site is adjacent to another black site (just like on a checkerboard). It turns out that it is fine to update all of the black sites, then all of the red sites.

In order to construct a generalized checkerboard, or a $d$-dimensional hypercheckerboard (if you will), we need to distinguish the ``black'' spots from the ``red'' spots (see Figure~\ref{fig:checkerboard}). Suppose some spin $\sigma$ is located at the location $(x_0, x_1,\cdots,x_{d-1})\in \mathbb Z^d$ on a $d$-dimensional lattice with linear dimension $L$. Define the {\em parity} of a spin
    \begin{equation}\label{eq:coordinateParity}
    P = \sum_{i=0}^{d-1} x_i \pmod 2.
    \end{equation}
Hence if a spin $\sigma_a$ is adjacent to $\sigma_b$, the sums of their coordinates differ by 1 ($P_a - P_b \equiv 1\imod 2$), and therefore the parity of a spin differs from that of all of its neighbors.

In order to do a parallel update on a set of spins $\{\sigma\}$, the update of each spin $\sigma_i$ must be independent of all of the other spins in the set. I will denote the set of spins with parity $P$ by $\{\sigma^{(P)}\}$. We partition the lattice into the two parities, and call an update of all $N=L^d$ spins a Monte Carlo step (MCS). First we update the $N/2$ spins in $\{\sigma^{(0)}\}$, then we update $\{\sigma^{(1)}\}$. We can update all of the spins within either disjoint parity set in parallel since, with nearest neighbor interactions, updating the blacks depends only on the state of the reds.

This method might seem too systematic to give the same lattice dynamics as picking random spins. Fortunately, Ref.~\cite{checkerboard} shows that a wide variety of spin updating schemes still sample configuration space according to the Boltzmann distribution; among this general class of applicable updating algorithms is the checkerboard updating scheme. Equilibrium quantities are unaffected by checkerboard updating, but nonequilibrium processes (such as the kinetics of the nucleation process) {\em will} be affected by choosing checkerboard updating. The only slightly nonequilibrium measurement we make is that of $t$ (the time until nucleation); in this case the checkerboard update only affects $t$ by a constant factor, so it does not distort our simulation results.

    \begin{figure}
    \centering
    \includegraphics[height=0.35\textwidth,width=0.35\textwidth]{figs/checkerboard}
    \caption{\label{fig:checkerboard}Checkerboard even parity partitioned $d=2$ lattice. If the interaction is only with nearest neighbors, we can update all of the black locations, then all of the red locations.}
    \end{figure}

\subsubsection{Indexing and Parity Calculations}
It is more convenient and faster to index the $L^d$ spins in a single array with $L^d$ elements as opposed to using a $d$-dimensional coordinate array. We can transform a coordinate representation to a linear representation quite simply through the following spin indexing scheme:
    \begin{equation}\label{eq:indexTransform}
    \phi(x_0, x_1,\cdots,x_{d-1}) = \sum_{i=0}^{d-1}x_iL^i = k.
    \end{equation}
Equation~\eqref{eq:indexTransform} defines a map 
    \begin{equation}
    \phi\ :\ \mathbb Z_L^{\times d} \rightarrow \mathbb Z_{L^d} = \mathbb Z_N
    \end{equation}
Let us call $G = \mathbb Z_L^{\times d}$ and $K = \mathbb Z_{L^d}$. Notice that $\phi$ simply reads the coordinates $(x_0,\cdots,x_{d-1})$ as the digits of $k$ in base $L$. Since every integer $0\le k < L^d$ has a unique $d$-digit representation in base $L$, $\phi$ is one-to-one; and since $|G| = |K| = L^d$, $\phi$ is also onto, and therefore bijective. This means we can index the spins in a single dimensional array without losing track of any of them, or using extraneous indexes.

So, how are we to calculate the parity from \eqref{eq:coordinateParity} without coordinates? The answer is fairly simple: the coordinate information can actually be extracted from the indexing scheme. Since the transformation $\phi$ is bijective, we can always recover the original coordinates from the linear index number $k$. Namely, $\phi$ has an inverse
    \begin{equation}
    \phi^{-1} (k) = (k\imod L, \lfloor k/L\rfloor\imod L,\cdots,\lfloor k/L^{d-1}\rfloor\imod L).
    \end{equation}
Thus given spin number $k\in\mathbb Z_{L^d}$, we can calculate its parity from \eqref{eq:coordinateParity} with
    \begin{equation}\label{eq:linearParityPreview}
    P = \sum_{i=0}^{d-1}\left\{\left\lfloor\frac{k}{L^i}\right\rfloor\imod L\right\} \imod 2.
    \end{equation}
    %\begin{align}\label{eq:linearParityPreview}
    %P &= \sum_{i=0}^{d-1}\left\{\left\lfloor\frac{k}{L^i}\right\rfloor\imod L\right\} \imod 2 \notag \\ &= 
    %\left(\left\{\sum_{i=0}^{d-1}\left\lfloor\frac{k}{L^i}\right\rfloor\right\}\imod L\right) \imod 2.
    %\end{align}
However, notice that in order for a hypercheckerboard not to have adjacent blacks or reds, $L$ must be even. This implies that the $\!\!\imod L$ statement is redundant in the presence of the $\!\!\imod 2$ statement, and \eqref{eq:linearParityPreview} can be written
    \begin{equation}\label{eq:linearParity}
    P = \sum_{i=0}^{d-1}\left\lfloor\frac{k}{L^i}\right\rfloor \pmod 2.
    \end{equation}

\subsection{Data Acquisition and Analysis}
We are primarily interested in the magnetic susceptibility $\chi$ and the nucleation rate $r$. We can measure $\chi$ by noting that the isothermal susceptibility (per spin) is given by
    \begin{equation}
    \chi = \frac{1}{kT}\sigma_m^2.
    \end{equation}
We can measure the nucleation rate $r$ by noting that it is the inverse of the average nucleation time $\ave t$ (see Section~\ref{sec:metastablePoisson})
    \begin{equation}
    r = \frac{1}{\ave t}.
    \end{equation}
The program records the magnetization $m$ after every MCS. In addition, the program records the lifetime of the metastable state for each run. This information is written to separate files.


\subsubsection{Detecting Nucleation}\label{sec:nucleationDetection}
    \begin{figure}
        \centering
        \includegraphics[width=0.75\textwidth]{figs/threshhold.eps}
        \caption{\label{fig:threshhold}Typical behavior of metastable state. We observe $t_0\sim 10\,{\rm MCS}$. The decay is detected using our nucleation detection algorithm. The darker green band is $\sigma_m$ as determined in the beginning of the algorithm, the lighter green band is $2\sigma_m$. Notice how this method tends to be accurate to within $10\,{\rm MCS}$.}
    \end{figure}

The method we use to detect nucleation is simple in concept: when $m$ changes dramatically, nucleation has occurred. We first prepare the lattice by flipping the external field and relaxing for $t_0$ steps (where $t_0$ is typically $40$\,MCS). When the value $|m-\ave m|$ exceeds a reasonably large multiple of the fluctuations in $m$ (as measured by $\sigma_m$) and remains so for some time, we call this a nucleation event. More quantitatively, we compute $\ave m$ and $\sigma_m$ for a given run and assume that these values do not vary appreciably from one run to the next. When $|m-\ave m|$ exceeds $\alpha\sigma_m$ (where $\alpha$ is typically $5$), we are not observing a normal fluctuation in $m$, and a possible candidate for nucleation has occurred (say, at time $t_1$). If so, then shortly thereafter $m(t)$ should change rapidly. If $m$ at time $t_1+t_a$ (where $t_a$ is typically $15$\,MCS) is aligned with the external field, then we say nucleation has occurred at time $t_1$. To avoid detecting premature nucleation, if $|m-\ave m| > \alpha \sigma_m$ at (or before) $t_0$, or if $m$ at $t_1+t_a$ does not align with $H$, we throw away the trial.

Notice that this method requires knowledge of $\ave m$ and $\sigma_m$ in the metastable state. These quantities are calculated during a test trial of $t_{\rm test}$ steps, run before a series of many production trials. We repeat the test trial until we have one in which we are sure there was no metastable decay.

\subsubsection{Data Protocol}
We run the simulation repeatedly, collecting magnetization data between nucleation events. The two pieces of data being gathered directly from the simulation are the magnetization of a trial, and the nucleation time of that trial.

    \begin{figure}
        \centering
        \includegraphics[scale=0.4]{figs/data}
        \caption{\label{fig:data}Schematic of the data output protocol.}
    \end{figure}

As mentioned before, in order to avoid complications in the code which executes the simulation, all data analysis is done on an output file by an external program. The simulation outputs data in single precision floating point in order to avoid the large data files which would be incurred by using double precision numbers. In an output file we have a header, which contains relevant information about a trial, and a flag to signify the beginning of trial data. The end of the trial data is signified by a footer, which contains a flag indicating the end of the trial data. This makes writing data analysis programs much simpler. For a diagram of the data protocol, see Figure~\ref{fig:data}.

The header is 10 values: (flag, $L$, $d$, $H$, $T$, [4 auxiliary bits], flag). The footer is just a flag. The header and footer flags are different values. This protocol makes parsing the data by an external data analysis program more convenient. Each flag is a number which will never be encountered in the simulation data.

\subsubsection{\texorpdfstring{Calculating $\chi$}{Calculating chi}}\label{sec:calculatingChi}
We calculate the magnetic susceptibility from the large data files which contain millions of measurements of the magnetization $m$. Each measurement of the magnetic susceptibility for a given set of parameters $(L, d, T, H)$ is given by
    \begin{equation}
    \chi = \frac{\sigma^2_m}{kT}.
    \end{equation}
However, we must {\em average} these measurements, so we calculate the weighted mean
    \begin{equation}
    \ave{\chi} = \frac{\sum_i w_i \chi_i}{\sum_i w_i},
    \end{equation}
where $w_i$ are the weights of each measurement of the susceptibility. The weights $w_i$ simply correspond to the number of measurements of $m$ for a single measurement of $\chi$. For example, if we made two measurements of the susceptibility, $\chi_1$ deduced from $N_1$ measurements of $m$, and $\chi_2$ deduced from $N_2$ measurements of $m$, then $w_1$ and $w_2$ are equal to $N_1/(N_1+N_2)$ and $N_2/(N_1+N_2)$ respectively.

To get the standard deviation of these measurements of $\chi$ for a given set of parameters $(L, d, T, H)$ we must calculate the weighted standard deviation for the weighted mean $\ave{\chi}$. The value for $\sigma^2_{\ave\chi}$ can be derived using well known properties of the variance, namely
    \begin{equation}
    \sigma^2_{a\chi} = a^2\sigma^2_\chi,
    \quad
    \sigma^2_{\chi_1 + \chi_2} = \sigma^2_{\chi_1} + \sigma^2_{\chi_2},
    \end{equation}
the second property relies on independence of the random variables $\chi_1$ and $\chi_2$. Hence it is straightforward to calculate
    \begin{align}
    \sigma^2_{\ave\chi}
    &= \sigma^2_{\frac{\sum_i w_i \chi_i}{\sum_i w_i}} \notag\\
    &= \left\{\frac{1}{\sum_i w_i}\right\}^2 \sigma^2_{\sum_iw_i\chi_i} \notag\\
    &= \frac{1}{N^2\ave w^2}\sum_iw_i^2\sigma^2_{\chi_i} \notag\\
    &= \frac{1}{N^2\ave w^2}\left\{\sum_iw_i^2 \right\}\sigma^2_{\chi}
        \tag{Since $\sigma_{\chi_i} = \sigma_{\chi}$ for all $i$}\\
    %&= \frac1N\frac{\ave{\chi^2}}{\ave{\chi}^2}\sigma^2_\chi \label{eq:sigma2chi},
    &= \frac{1}{N'}\sigma^2_\chi \label{eq:sigma2chi},
    \end{align}
where
    \begin{equation}
    N^\prime = N \frac{\ave{w}^2}{\ave{w^2}}.
    \end{equation}
We can check the validity of \eqref{eq:sigma2chi} by noting that this reduces to the standard error when all of the weights are equal. For an unbiased estimator of $\sigma^2_{\ave\chi}$, we make  a slight modification to \eqref{eq:sigma2chi} and compute
    \begin{equation}
    \sigma^2_{\ave\chi} = \frac{\ave{\chi^2}-\ave{\chi}^2}{N^\prime-1}.
    \end{equation}

I would like to remark that in addition to neglecting the data during the transient/relaxation period after reversing the external field in the calculations of $\chi$, I impose further criteria on a single simulation trial before determining whether its magnetization data is good enough to give a decent measurement of the susceptibility. For the following, suppose we have $n$ measurements of $m$ between times $t_0$ and $t_1$:
    \begin{enumerate}
    \itemsep -2pt
    \item We ignore the first $n_1$ measurements of $m$ in the trial in case the transient period has not passed.
    
    \item Ignore the final $n_2$ measurements of $m$ in the trial so that we are sure to avoid measurements of the magnetization too close to the decay of the metastable state.
    \item Make sure that $n-n_1-n_2>n_3$ so that we get a good estimate of the variance of $m$ for the particular trial.
    \end{enumerate}
I set $n_1=60$, $n_2 = 20$, and $n_3 = 30$. Note that this condition means we do not include any trial in which $t<n_1+n_2+n_3 = 110$ in our calculations for $\chi$. When $\ave t$ gets small, we end up losing almost all of our measurements of $\chi$ and therefore cannot measure the susceptibility when $\ave t$ (and consequently $\Delta H$) is small.

\section{Results}

\subsection{Tests of the Simulation}
To ensure that the simulation works properly, I made a rough estimate of the critical temperature in $d=7$. We know that $\chi\rightarrow\infty$ as $T\rightarrow T_c$. Accordingly, I varied $T$ and looked for a sharp increase in the magnetic susceptibility. Though I did not do finite size scaling, it is apparent in Figure~\ref{fig:7dtc} that my measurements of $T_c$ approach Stauffer's measurement of $T_c$ in $d=7$ of $T_c \approx 12.869$ as $L$ is increased~\cite{stauffer}.

    \begin{table}
    \centering
        \begin{tabular}{ccccc}
        \toprule
        $d$ & $L$ & GPU & CPU & GPU Speedup \\ \midrule
        2 & 128  & $1.60\cdot10^8$ & $3.61\cdot10^6$ & 44.32  \\
        2 & 256  & $3.34\cdot10^8$ & $3.63\cdot10^6$ & 92.01  \\
        5 & 10   & $5.26\cdot10^8$ & $6.42\cdot10^6$ & 81.93  \\
        5 & 20   & $9.50\cdot10^8$ & N/A              & N/A    \\
        7 & 6    & $6.29\cdot10^8$ & $5.69\cdot10^6$ & 110.54 \\
        7 & 10   & $8.04\cdot10^8$ & N/A              & N/A    \\
        \bottomrule
        \end{tabular}
        \caption{\label{tab:gpucputiming}Timing results (number of spin updates per second) for GTX 280 and Xeon $2\,{\rm GHz}$. Though we did not use the GTX for the simulations, both the GTX 280 and the Tesla C1060 have 240 cores: the GTX performance is similar to the Tesla.}
    \end{table}

I was also interested in how the simulation performs on the graphics card compared to the simulation run on the CPU. Kip Barros, who wrote the CUDA implementation of the nearest neighbor Ising model, implemented the same simulation (the same spin updating algorithms, indexing scheme, and so on) in C++ to be run on the CPU. The test is simple: execute $M\sim 10^4\,{\rm MCS}$; time how long all of the updates take ($\tau$); compute $NM/\tau$ to get the number of spin updates per second. We observe an enormous speedup on the GPU: in $d=7$ we find that the GPU can update spins over 100 times faster than the CPU. See Table~\ref{tab:gpucputiming} for the speedup measurements.

The CPU implementation is {\em single threaded}, though the Xeon processor used in the timing tests actually has 4 cores. So utilizing all threading capabilities of the Xeon processor would result in a speedup of somewhere between 2 and 3 on the CPU. Nevertheless, we still see an enormous performance increase when switching to the GPU. Some timing tests were not done on the CPU because I did not want to wait a long time for the CPU timing trials to complete.

    \begin{figure}
    \centering
    \includegraphics[width=0.7\textwidth]{figs/results/tc_gpu_7d}
    \caption{\label{fig:7dtc}Critical temperature measurements in $d=7$ ($T_c$ corresponds to the maximum in $\chi$). For different $L$ we scale $\chi$ so that the trend in the maxima for $\chi$ is more apparent. For $L=8$ we see the maximum for $\chi$ at $T_c=12.865$, which is notably close to a previous measurement of $T_c = 12.869$~\cite{stauffer}.}
    \end{figure}

I was also interested in choosing a linear dimension $L$ for the simulations in $d=5$ and $d=7$. We expect that $\ave t \propto 1/V$ since nucleation is a random process. Thus we should expect
    \begin{equation}\label{eq:rOverV}
r \propto V.
    \end{equation}
If \eqref{eq:rOverV} is not satisfied, then finite size effects are influencing the nucleation process. See Figure~\ref{fig:scale} for the scaling results when we fix all parameters and vary $L$. I find that \eqref{eq:rOverV} is satisfied when $L\geq 16$ in $d=5$, and for $L\geq 8$ in $d=7$. Note that the range of $L$ when other parameters are fixed is limited by $\ave t$ becoming very small as $L$ increases since
    \begin{equation}
    \ave t \propto L^{-d}.
    \end{equation}
Corresponding to the trials shown in Figure~\ref{fig:scale}, we find the following range for $\ave t$ when investigating finite size effects:
    \begin{align}
    &d=5:\quad L=8 \Rightarrow \ave t \sim 10^4,\ L=18\Rightarrow \ave t \sim 200 \\
    &d=7:\quad L=6 \Rightarrow \ave t \sim 10^4,\ L=14\Rightarrow \ave t \sim 60.
    \end{align}
Thus the range of values for $L$ at fixed $(d,T,H)$ in which we can investigate finite size effects is limited for low $L$ because run time becomes too long, and for high $L$ because run time becomes too short.
    \begin{figure}
    \centering
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_scale}}
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_scale}}
    \caption{\label{fig:scale}The nucleation rate per unit volume $r/V$ versus $L$ for $d=5$ (left) and $d=7$. Finite size effects appear to be unimportant in nucleation for $L\geq 16$ in $d=5$ and for $L\geq 8$ in $d=7$.}
    \end{figure}

In Figure~\ref{fig:scale}, notice how $r/V$ becomes constant gradually in $d=5$, and abruptly in $d=7$. This could suggest that the nucleating droplet is more spread out in $d=5$ than in $d=7$.

\subsection{Nucleation in High Dimensions}
As explained in Section~\ref{sec:metastablePoisson}, nucleation is a Poisson process. Thus after waiting $t_0\,{\rm MCS}$ (the transient time, allowing the system to enter the metastable state), an exponential distribution of the time $t$ until the decay of the metastable state implies that the decays are nucleation events.

I measured the time until the decay of the metastable state using the method outlined in Section~\ref{sec:nucleationDetection}. A histogram of the decay times for a fixed set of parameters is found in Figure~\ref{fig:foundNucleation}. The distribution fits an exponential extremely well, and therefore we conclude that nucleation exists in $d=5$ and $d=7$. 

Before these simulations, nobody had seen a distribution of nucleation times in the Ising model for $d=5$ (and certainly not for $d=7$). We are also the first to provide convincing evidence of nucleation in the $d=7$ Ising model.

    \begin{figure}
    \centering
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_nucleationExample}}
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_nucleationExample}}
    \caption{\label{fig:foundNucleation}The distribution of nucleation times $t$ in $d=5$ (left) and $d=7$. The distribution is exponential, and therefore evidence of nucleation in $d=5,7$. The straight line is a guide to the eye. The time $t$ in the figure is not measured from the time of the quench, but rather from $t_0=40$.}
    \end{figure}

\subsection{Reliability of Data when Fitting Curves}\label{sec:reliableData}
I chose $t_0=40\,{\rm MCS}$ after looking at various runs of $m(t)$ (such as Figure~\ref{fig:threshhold}), in which I observed that after roughly $10\,{\rm MCS}$ the mean value of the magnetization is seemingly constant. As $\Delta H$ gets very small, we encounter situations in which $\ave t < t_0$. I will now address how to deal with such data, namely why I will (for the most part) ignore data which corresponds to $\ave t < t_0$.

%In Ref.~\cite{ray}, it is mentioned that the correlation length diverges near the spinodal as
%    \begin{equation}
%    \xi \sim (\Delta H)^{-1/4}.
%    \end{equation}
%When $\ave t < t_0$, $\Delta H$ will be small and $\xi$ will be larger, signifying that finite size effects will be more prevalent. Essentially when spins that are farther apart become correlated, the nucleating droplet will see itself across the periodic boundaries. This effect will interfere with the nucleation process and disturb the results. As a precaution against complications or problems due to the droplet seeing itself, I ignore (for the most part) data which corresponds to $\ave t < t_0$ in my analysis and curve fits.

When $\ave t < t_0$, Hui Wang {\em et al.} find that nucleation is different~\cite{wang}. More specifically, it is found that though $m$ and $E$ appear to reach time-independent values, the nucleation rate is reduced from its equilibrium value. For this reason we do the curve fits on the data corresponding to $\ave t > t_0$. Refer to Table~\ref{tab:hRelax} for the values of $H$ in $d=5$ and $d=7$ corresponding to the limit of the reliability of the data (the largest external field for which $\ave t \geq t_0$).

    \begin{table}
    \centering
    \begin{tabular}{cc}
    \toprule 
    $d$ & $H$ \\ \midrule
    $5$ & $2.597$ \\
    $7$ & $4.0245$ \\ 
    \bottomrule
    \end{tabular}
    \caption{\label{tab:hRelax}Values for $H$ such that $\ave t\approx t_0$. These are the cutoff values for most of the reliable data (data in which the transient period is not longer than the average lifetime of the metastable state).}
    \end{table}

\subsection{Evidence of Pseudospinodals}\label{sec:pseudospinodal}
We noted in Section~\ref{sec:overview} and Section~\ref{sec:predictHs1} that a (pseudo)spinodal will show up as a divergence in the magnetic susceptibility $\chi$, and that the divergence will go according to a power law with a critical exponent $\gamma = 1/2$ (see Section~\ref{sec:spinodalNucleation}): 
    \begin{equation}\label{eq:chiDivergence}
    \chi = a|H-H_s|^{-\gamma}.
    \end{equation}
We measure $\chi$ according to the method outlined in Section~\ref{sec:calculatingChi}. We fit our data to \eqref{eq:chiDivergence} for the three parameters $a$, $H_s$, and $\gamma$ to see if the data suggests a pseudospinodal extrapolation. Note that as mentioned in Section~\ref{sec:reliableData} we only use values of $\chi$ corresponding to $\ave t > t_0$.

In Figure~\ref{fig:chi1}, after fitting the parameters mentioned above in both $d=5$ and $d=7$, we observe that $\ln\chi$ appears linear in $\ln|H-H_s|$ where $H_s$ is obtained from the fit. The numerical values for the fit parameters are found in Table~\ref{tab:chi1}. Our measurements of the critical exponent $\gamma$ are consistent with the predicted (dimensionally independent) value $\gamma = 1/2$. 

    \begin{figure}
    \centering
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_chi1}}
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_chi1}}
    \caption{\label{fig:chi1}We find evidence of a pseudospinodal in $d=5$ (left) and $d=7$ through increase of $\chi$. We fit the three parameters in $\chi = a|H-H_s|^{-\gamma}$; see Table~\ref{tab:chi1} for parameters from curve fitting.}
    \end{figure}

    \begin{table}
    \centering
    \begin{tabular}{ccccc}
    \toprule
    $d$ & $a$ & $\gamma$ & $H_s$ & $H_s$ (mean field)\\ \midrule
    $5$ & $3.31\cdot 10^{-8}$ & $0.439\pm0.04$ & $2.602\pm0.004$ & 3.176 \\
    $7$ & $2.78\cdot 10^{-9}$ & $0.658\pm0.16$ & $4.028\pm0.004$ & 4.447 \\
    \bottomrule
    \end{tabular}
    \caption{\label{tab:chi1}Parameters found from fitting $\chi = a|H-H_s|^{-\gamma}$; see Figure~\ref{fig:chi1}. We only fit to measurements of $\chi$ which correspond to $\ave t > t_0$ (see Table~\ref{tab:hRelax}). The spinodal field determined from the fits is reasonably close to the mean field predictions from Table~\ref{tab:hs}.}
    \end{table}

\subsection{Nucleation Rate}
In Section~\ref{sec:spinodalNucleation} it is noted that near the spinodal,
    \begin{equation}
    \label{eq:rFit}
    %r \sim \exp(-\beta(\Delta F))
    %\Rightarrow 
    r = \exp(a|H-H_s|^b + c),
    \end{equation}
where $a$ should be negative (corresponding to the factor $-\beta$) and $b$ should be positive (corresponding to a convergent $\Delta F$). Because Eq.~\eqref{eq:rFit} has so many parameters, we make the curve fitting more feasible by fixing $H_s$ to the values obtained in Section~\ref{sec:pseudospinodal} from fitting Eq.~\eqref{eq:chiDivergence}.\footnote{We first attempted to fit $H_s$ along with $a$, $b$, and $c$, but the fit did not converge, which is why the fits were done with $H_s$ fixed.}
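With $H_s$ held fixed, the remaining three-parameter fit is much better behaved. A minimal sketch follows, again on placeholder data (generated from the model itself) with $H_s$ set to the $d=5$ value from Table~\ref{tab:chi1}:

```python
import numpy as np
from scipy.optimize import curve_fit

H_S = 2.602  # held fixed at the value obtained from the chi fit

def log_rate(H, a, b, c):
    """ln(r) = a|H - H_s|^b + c; expect a < 0 and b > 0 for a convergent Delta F."""
    return a * np.abs(H - H_S) ** b + c

# Noiseless placeholder data generated from the model itself,
# using parameters of the same order as the d=5 fit.
H = np.linspace(2.40, 2.58, 12)
ln_r = log_rate(H, -76.9, 1.08, -3.39)

popt, _ = curve_fit(log_rate, H, ln_r, p0=[-50.0, 1.0, -3.0])
a_fit, b_fit, c_fit = popt
```

Fitting $\ln r$ rather than $r$ itself avoids the wide dynamic range of the rates; the sign constraints on $a$ and $b$ can then be checked on the fitted values directly.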

The curve fits pertaining to \eqref{eq:rFit} are plotted in Figure~\ref{fig:rFit} for both $d=5$ and $d=7$, with the numerical values from the fits in Table~\ref{tab:rFit}. Notice that the parameters fit in Table~\ref{tab:rFit} all satisfy the conditions above ($a<0$, $b>0$). 

It has been conjectured that $b = 3/2-d/4$ when $d<d_c = 6$~\cite{monette}, but in $d=5$ we do not find $b=1/4$. However, we cannot take these numerical values too seriously, since we are fitting many nonlinear parameters to extract subtleties from the data. Rather, we can merely discuss the qualitative behavior of the fits. In $d=5$ we observe that the fit is concave down, whereas the fit in $d=7$ is much straighter, or even slightly concave up.

Our results suggest that the critical exponent for $\Delta F$ is smaller in $d=7$ than it is in $d=5$.

    \begin{figure}
    \centering
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_rFit}}
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_rFit}}
    \caption{\label{fig:rFit}Fits of $r = \exp(a|H-H_s|^b+c)$. The value of $H_s$ is obtained from fitting the power law divergence in $\chi$ (see Figure~\ref{fig:chi1}). All data included satisfy $\ave t > t_0$.}
    \end{figure}

    \begin{table}
    \centering
    \begin{tabular}{cccc}
    \toprule
    $d$ & $a$ & $b$ & $c$ \\ \midrule
    $5$ & $-76.9\pm1.9$ & $1.08\pm0.01$ & $-3.39\pm0.02$ \\
    $7$ & $-214.8\pm 8.3$ & $0.914\pm0.01$ & $-2.44\pm0.03$ \\
    \bottomrule
    \end{tabular}
    \caption{\label{tab:rFit}Parameters from fitting $r=\exp(a|H-H_s|^b + c)$ when $H_s$ is fixed to the fitted value from $\chi$ (see Table~\ref{tab:chi1}). These fits are highly nonlinear, and we are fitting many parameters, so the obtained values should be taken with a grain of salt.}
    \end{table}

%\subsection{Simulated Limit of Metastability}\label{sec:findHPrime}
%Aside from identifying singularities in a thermodynamic quantity using data satisfying criterion outlined in Section~\ref{sec:reliableData}, we can brush aside any of the criterion and curve fits by looking directly at the nucleation process. More specifically, we can state that nucleation occurs when the time until metastable decay follows an exponential distribution. If we can identify the value for $H$ at which the distribution for $t$ changes dramatically (and is no longer exponential), then we can consider this value of $H$ an alternative measurement of $H_s$ (denoted $\tilde H_s$) obtained purely through simulation, independent of any curve fits.
%
%I analyzed many distributions of nucleation times in both $d=5$ and $d=7$. The values for $H$ which correspond to a dramatic change in the distribution of $t$ are $\tilde H_s = 2.641$ in $d=5$ and $\tilde H_s =4.0335$ in $d=7$. These values were identified by eye and purely qualitatively. Figure~\ref{fig:limit} displays the qualitative difference once $H=\tilde H_s$.
%
%    \begin{figure}
%    \centering
%    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_nearLimit}}
%    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_atLimit}}
%    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_nearLimit}}
%    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_atLimit}}
%    \caption{\label{fig:limit}Distribution of nucleation times for values of $H$ at which the  distribution of nucleation times is  no longer exponential. We can consider these values alternative measurements of $H_s$ than those obtained from curve fits (see Table~\ref{tab:chi1}). The line is a guide to the eye.}
%    \end{figure}
%
%A graph of $r(H)$ for $H$ varied up to $\tilde H_s$ in both $d=5$ and $d=7$ is found in Figure~\ref{fig:limit}. Note that in Figure~\ref{fig:chi2} we cannot extend $H$ to $\tilde H_s$ because when $\ave t < t_0$ we cannot get many trials which are useful for measuring $\chi$ as they would require long lifetimes (see Section~\ref{sec:calculatingChi}). 
%
%%We see that both values of $\tilde H_s$ go past the measurement of $H_s$ found in Section~\ref{sec:pseudospinodal}. {\em How far} each trial goes past $H_s$ in $d=5$ versus $d=7$ is a tricky question, since sensitivity to $H$ is not constant across dimensions. 
%
%We are also interested in the critical exponent $\gamma$ which can be gotten through a curve fit when fixing the value of $H_s$ in Eq.~\eqref{eq:chiDivergence} (by setting $H_s = \tilde H_s$). However, when setting $H_s = \tilde H_s$, we see that the data do not fall on a straight line in the log-log plot for both $d=5$ and $d=7$ (see Figure~\ref{fig:chi2}), suggesting that $\chi$ does not diverge as a power law in $|H-\tilde H_s|$ ($\tilde H_s$ is not a good value for the pseudospinodal field, and the value of $H_s$ found in Section~\ref{sec:pseudospinodal} should be taken as the pseudospinodal field). The fit parameters corresponding to the line in Figure~\ref{fig:chi2} are found in Table~\ref{tab:chi2}. Note that the value for $\gamma$ is far off from the theoretical prediction, giving us another indication that this method for finding the spinodal field is worse than the curve fit for $H_s$.
%
%    \begin{figure}
%    \centering
%    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_limit}}
%    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_limit}}
%    \caption{\label{fig:rLimit}Nucleation rates for $d=5$ (left) and $d=7$ when we increment $H$ until the distribution of nucleation times is no longer exponential (see Figure~\ref{fig:limit}). The spinodal field $H_s$ is obtained from curve fitting (see Table~\ref{tab:chi1}). The straight line is a guide to the eye for noticing curvature in $r$.}
%    \end{figure}
%   
%    \begin{figure}
%    \centering
%    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_chi2}}
%    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_chi2}}
%    \caption{\label{fig:chi2}Results to fitting $a$ and $\gamma$ when fixing $\tilde H_s$ (see Table~\ref{tab:chi2}). We see that the data in both $d=5$ (left) and $d=7$ does not appear to be as linear as in Figure~\ref{fig:chi1}, which suggests that the value $\tilde H_s$ is not an accurate measurement of the spinodal field compared to the fitted $H_s$.}
%    \end{figure}
%
%    \begin{table}
%    \centering
%    \begin{tabular}{cccc}
%    \toprule
%    $d$ & $a$ & $\tilde H_s$ & $\gamma$ \\ \midrule
%    $5$ & $1.38\cdot 10^{-8}$ & $2.641\pm0.004$ & $0.926\pm0.02$ \\
%    $7$ & $7.087\cdot 10^{-10}$ & $4.0335\pm0.0005$ & $1.056\pm0.03$ \\
%    \bottomrule
%    \end{tabular}
%    \caption{\label{tab:chi2}Parameters from fitting $\chi = a|H-\tilde H_s|^{-\gamma}$, where $\tilde H_s$ is determined from Figure~\ref{fig:limit}; also see Figure~\ref{fig:chi2}. We only fit to measurements of $\chi$ which correspond to $\ave t > t_0$. The error estimate on $\tilde H_s$ is determined by the increment of $H$ used when searching for the point in which the distribution of $t$ is no longer exponential.}
%    \end{table}

\subsection{Nonequilibrium Decay Region}\label{sec:unstableDecay}
If we increase the external field $H$ enough, the lifetime of the metastable state becomes shorter than the relaxation period. Though I use the empirical relaxation time $t_0 = 40$, the system almost certainly has a somewhat different true relaxation period. Regardless, if $H$ is large enough, $\ave t$ will be shorter than the {\em actual} relaxation time (which was never directly measured). From \cite{wang} we expect to observe the unstable decay regime as a decrease in the nucleation rate, which is exactly what we find (see Figure~\ref{fig:rLimit}). In $d=5$, the concave down behavior of $r(H)$ becomes exaggerated, and the concave up behavior in $d=7$ becomes linear. Both observations imply that $r$ decreases as we encounter nonequilibrium decays.

    \begin{figure}
    \centering
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_limit}}
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_limit}}
    \caption{\label{fig:rLimit}Nucleation rates for $d=5$ (left) and $d=7$ for all trials, including those in which $\ave t < t_0$. Notice that the concave down trend is exaggerated as $H$ extends to the nonequilibrium decay regime for $d=5$, and the concave up trend in $d=7$ becomes linear.}
    \end{figure}

But why do we observe a nonzero metastable lifetime past $H_s$, as is clearly displayed in Figure~\ref{fig:rLimit}? Simply put: $H_s$ is not a real spinodal.

\subsubsection{\texorpdfstring{A Quantitative Measure of the Quality of the $t$ Distribution}{A Quantitative Measure of the Quality of the t Distribution}}
\label{sec:rQuality}
One notable characteristic of the exponential distribution is that the standard deviation is equal to the mean. Hence if the random variable $t$ is exponentially distributed, then
    \begin{equation}
    \label{eq:exponentialProperty}
    \ave t = \sigma_t.
    \end{equation}
A quantitative way of determining the quality of an exponential distribution is by looking at the normalized stray from the property in \eqref{eq:exponentialProperty}:
    \begin{equation}
    Q \equiv \frac{\ave t - \sigma_t}{\sigma_t}.
    \end{equation}
If $Q$ strays from zero, then the distribution is, in a sense, ``less'' of an exponential distribution. This corresponds to nonequilibrium decay, as equilibrium decay events (nucleation) {\em must} be exponentially distributed.
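As a sanity check of this diagnostic (a sketch using synthetic samples, not our simulation output), $Q$ is near zero for exponentially distributed times and of order one for a distinctly non-exponential distribution with the same mean:

```python
import numpy as np

def Q(t):
    """Normalized deviation from the exponential property <t> = sigma_t."""
    t = np.asarray(t, dtype=float)
    s = t.std(ddof=1)
    return (t.mean() - s) / s

rng = np.random.default_rng(1)
t_exp = rng.exponential(scale=40.0, size=100_000)  # equilibrium-like decay times
t_uni = rng.uniform(0.0, 80.0, size=100_000)       # same mean, non-exponential

# Q(t_exp) is close to 0, while Q(t_uni) is roughly
# (40 - 80/sqrt(12)) / (80/sqrt(12)), i.e., about 0.73.
```

In the simulation, $t$ is the measured decay time of each trial at fixed $H$, so $Q(H)$ is computed from the ensemble of trials at each field value.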

In Figure~\ref{fig:distQuality} we plot $Q(H)$. Notice that at $H=H_s$ (obtained in Section~\ref{sec:pseudospinodal}) $Q$ becomes nonzero, and it increases as we go further past the pseudospinodal field $H_s$. This means that as the fitted $H_s$ is approached and exceeded, the distribution of $t$ is no longer exponential, corresponding to a nonequilibrium state prior to decay. Hence the criterion proposed in Section~\ref{sec:reliableData} for which data we can use (that corresponding to $\ave t > t_0$) is indeed confirmed here. The measurements of $H_s$ are also confirmed: as we exceed $H_s$, our metastable state is no longer metastable.

    \begin{figure}
    \centering
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/5d_distQuality}}
    \subfigure{\includegraphics[width=0.45\textwidth]{figs/results/7d_distQuality}}
    \caption{\label{fig:distQuality}We gauge the quality of the exponential distribution by looking at $(\ave t-\sigma_t)/\sigma_t$ which is zero for an exponential distribution. The spinodal field $H_s$ is obtained from curve fitting (see Table~\ref{tab:chi1}).}
    \end{figure}

\section{Conclusions}
One primary accomplishment of this work was demonstrating the use of graphics cards as effective scientific computing tools, made possible by NVIDIA's recent development of CUDA. Though I did not mention this earlier, we observed $441,630$ metastable decays in five dimensions and $486,018$ decays in seven dimensions (across all values of $H$)\footnote{With just one more week of computation on our Tesla C1070, we could have tripled these figures.}. This wealth of results was obtainable due to the enormous speedup over conventional computing, though it is important to note that the nearest-neighbor Ising model is easily implemented in a parallel fashion; other simulations and models are unlikely to experience the same speedup we found in our simulation.

We discovered nucleation in the $d=5$ and $d=7$ Ising models by noting that the time until the metastable state decays follows an exponential distribution. We also discovered a pseudospinodal external field in both $d=5$ and $d=7$. This can be understood by noting that in higher dimensions each spin has more neighbors, so higher dimensional models should behave in a more mean-field manner than lower dimensional models. We measured the pseudospinodal field using curve fitting. From these fits we also obtain a numerical estimate of the critical exponent $\gamma$ (which characterizes how $\chi$ diverges near $H_s$). Our numerical value is consistent with the theoretical prediction in both $d=5$ and $d=7$. We also found a qualitative difference in the nucleation rate in $d=5$ versus $d=7$: in $d=5$, $r$ is concave down, whereas in $d=7$, $r$ is concave up.

We also considered a quantitative measure $Q$ of the quality of an exponential distribution. This measure shows that the distribution of nucleation times gradually drifts away from a ``good'' exponential in both $d=5$ and $d=7$. By examining $Q$, we observe that the regime $H>H_s$ is a nonequilibrium region in which the metastable state does not exist, consistent with the effects of a pseudospinodal. Furthermore, this result is consistent with those of Ref.~\cite{wang}, which found that for $\ave t < t_0$, $r$ decreases (which is what we observe in Figure~\ref{fig:rLimit}) and metastable decay is no longer an equilibrium process.

\subsection{Future Work}
There is still much work to be done classifying the nucleating droplet. If the nucleation process in $d=7$ differs significantly from that for $d<d_c=6$, then the droplet is the best candidate for identifying what the difference is. Though we do not find evidence of a real spinodal in $d=7$, we do have evidence that nucleation is different in $d=7$ than in $d=5$. In Figure~\ref{fig:scale} we see a gradual decline of $r/V$ as $L$ is increased before $r/V$ approaches a constant value in $d=5$, whereas for $d=7$ we observe a steep drop in $r/V$ directly to a constant value. This is evidence that the droplet may be {\em ramified} (spread out) in $d=5$ and {\em compact} in $d=7$.

    \begin{table}[p]
    \centering
    \begin{tabular}{ll}
    \toprule
    \multicolumn{2}{c}{\large\bf Glossary} \\
    \midrule
    $r$ & Nucleation rate ($1/\ave t$) \\
    $t$ & Time until nucleation \\
    $t_0$ & Transient/relaxation time \\
    $m$ & Magnetization per spin \\
    $\chi$ & Magnetic susceptibility ($\frac{\del m}{\del H}$) \\
    $H$ & External field \\
    $H_s$ & Spinodal field (really the pseudospinodal field) \\
    $T_c$ & Critical temperature \\
    $\Delta F$ & Free energy cost of a nucleating droplet \\
    $d$ & Spatial dimensionality \\
    $L$ & Linear length of one lattice side \\
    $N$ & Number of spins ($L^d$) \\
    $\ave{x}$ & Mean of $x$, or $x$'s interacting spins (context dependent)  \\
    $\sigma$ & Spin or standard deviation (context dependent) \\
    MCS & Monte Carlo Step ($N$ spin updates) \\
    FLOPS & Floating point operations per second \\
    CPU & Central processing unit \\
    GPU & Graphics processing unit \\
    \bottomrule
    \end{tabular}
    \end{table}

\pagebreak
\section*{Acknowledgments}
I would like to thank my advisor Harvey Gould for pairing me up with this interesting and stimulating project, as well as for his guidance and instruction. I would also like to thank collaborators Bill Klein and particularly Kip Barros, my effective boss, for designing the primary simulation engine, and for his constant assistance in acquiring and understanding results. Thanks also go to Ranjit Chacko for frequent elucidating discussions. Furthermore, I would like to thank Ron Babich and Mike Clark at the Center for Computational Science at Boston University for being generous enough to allow me access to their graphics cards for the simulations. I would lastly like to thank Sarah Brent for ice cream.

\pagebreak

\bibliographystyle{unsrt}
\bibliography{thesisRefs}

\end{document}
