\documentclass[11pt,a4paper]{book}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage{siunitx}
\usepackage[version=4]{mhchem}
\usepackage{amssymb}
\usepackage{authblk}
\usepackage{bm}
\usepackage{graphicx}
\usepackage{hyperref}
\hypersetup{hypertex=true,
            colorlinks=true,
            linkcolor=blue,
            anchorcolor=blue,
            citecolor=blue}
\usepackage{yhmath}
\usepackage{braket}
\usepackage{mathrsfs}
\usepackage{booktabs}


\title{Eikonal Model for One-nucleon Knockout}
\author{Yazhou Sun}
\affil{Anyang Normal University}
\affil{Institute of Modern Physics}

\begin{document}
\maketitle
\tableofcontents

\chapter{Introduction}

The eikonal model approximates the relative motion of a two-body scattering
system with an eikonal wavefunction: the second-derivative (kinetic-energy) term
in the Schr\"odinger equation is omitted, provided the incident energy is
sufficiently high. This omission, called the eikonal approximation, reduces the
equation to a first-order differential equation and makes it analytically
tractable. The model is widely used for intermediate-energy collisions, typically
\SI{30}{\MeV} $\leqslant$ $E_{\rm lab}$/nucleon $\leqslant$ \SI{2}{\GeV}, the
energy range in which nucleon knockout reactions take place.

While the eikonal approximation describes two-body scattering, the Glauber model
extends it to scattering between two composite systems, such as nuclei.
The gist is rather intuitive: Glauber treated the scattering of the two packets
of nucleons as the scattering of each nucleon in the projectile off each nucleon
in the target. After neglecting multiple scattering, i.e. assuming that each pair
of nucleons scatters at most once, the scattering phase shift can be expressed in
terms of the nucleon-density distributions of the projectile and the target.
This is called the optical limit of the Glauber model, or the $t$-$\rho$-$\rho$
approximation.

As one of the prevalent reaction models for nucleon knockout in intermediate-energy
collisions, the Glauber model has been implemented in dedicated
codes\footnote{e.g. Comput. Phys. Commun. 175 (2006) 372--380}. To the author's
knowledge, most of them are written in FORTRAN, which can feel ``esoteric'' to
young students, especially non-theorists. Experimentalists mostly use C++, for
compatibility with the high-energy data-analysis framework \emph{ROOT} and the
simulation toolkit \emph{Geant4}. A theoretical code written in C++ is easier for
them to understand and customize. Most importantly, it improves the independence
of experimentalists, whose ambitions and efficiency are otherwise strongly tied to
the extent of their cooperation with theorists. This in turn benefits the theorists
as well: they can focus on theoretical innovation and have in-depth discussions
with the experimentalists on the details of the calculations.

As part of the project for a C++ implementation of the reaction code, this document
gives the theoretical derivations that underlie the algorithms of the code.

\chapter{Eikonal Model for Two-body Scattering}

\section{Eikonal Wavefunction}
\label{sec:2.1}

Consider the scattering of two particles. One is designated the projectile, and
the other the target. Both are treated as having no inner structure, given the
incident energy range. Denote the wavefunction of the projectile with respect to
(w.r.t.) the target as $\psi(\bm r)$; the Schr\"odinger equation it obeys is

\begin{equation}
  \label{eq:2.1}
  \mathrm i\hbar\frac{\partial \psi}{\partial t}=
  -\frac{\hbar^2}{2\mu}\nabla^2\psi+V({\bm r})\psi
\end{equation}

$\mu\equiv\dfrac{m_{\rm p}m_{\rm t}}{m_{\rm p}+m_{\rm t}}$ is the reduced mass,
where $m_{\rm p}$ and $m_{\rm t}$ are the masses of the projectile and the target.
$V(\bm r)$ is the scattering potential, which is essentially the interaction between
the projectile and the target. Since $V(\bm r)$ does not contain time,
Eq.~\ref{eq:2.1} can be reduced to the stationary Schr\"odinger equation, i.e. the
eigenvalue equation of the Hamiltonian:

\begin{equation}
  \label{eq:2.2}
  \left[-\frac{\hbar^2}{2\mu}\nabla^2+V({\bm r})\right]\psi=E\psi
\end{equation}

Denote $k\equiv\dfrac{\sqrt{2\mu E}}{\hbar}=\dfrac{\mu v}{\hbar}$ as the wave
number of the relative motion of the projectile, where $v$ is the incident velocity
of the projectile w.r.t.\ the target (note that before the collision,
$V(r\to\infty)=0$, so $E=\dfrac{p^2}{2\mu}=\dfrac{1}{2}\mu v^2$), and let
$U(\bm r)\equiv\dfrac{2\mu}{\hbar^2}V(\bm r)$. Eq.~\ref{eq:2.2} is then cast into
a simpler form:

\begin{equation}\label{eq:2.3}
  [\nabla^2+k^2-U(\bm r)]\psi=0
\end{equation}
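Before proceeding, it helps to fix the numerical scale of $k$. Below is a minimal
C++ sketch in nuclear units (MeV, fm), assuming the standard values
$\hbar c \approx 197.327~\mathrm{MeV\,fm}$ and $\mu c^2 \approx 938.272~\mathrm{MeV}$
for a nucleon on a heavy target; the function name and the test energy are
illustrative only, not part of the actual reaction code:

```cpp
#include <cmath>

// Wave number k = sqrt(2*mu*E)/hbar of the relative motion, in fm^-1.
// Inputs are in MeV; hbar*c = 197.327 MeV*fm handles the unit conversion.
double wave_number(double mu_c2_MeV, double E_MeV) {
    const double hbar_c = 197.327; // MeV*fm
    return std::sqrt(2.0 * mu_c2_MeV * E_MeV) / hbar_c;
}
```

For a \SI{100}{\MeV} nucleon this gives $k\approx 2.2~\mathrm{fm^{-1}}$, i.e. a
reduced wavelength $1/k\approx 0.46$~fm, small compared with typical nuclear
dimensions, which is the regime where the eikonal approximation is expected to hold.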

The eikonal model does not simply drop the second-derivative term of
Eq.~\ref{eq:2.3}: since the wavefunction oscillates violently for high-energy
scattering, that term cannot just be ignored. Instead, we separate out the
oscillating factor by rewriting the wavefunction as

\begin{equation}\label{eq:2.4}
  \psi(\bm r)=(2\pi)^{-3/2}\exp(\mathrm{i}\bm k\cdot \bm r)\phi(\bm r)
\end{equation}

$\phi(\bm r)$ is the so-called modulation wavefunction, which we expect to vary
slowly with $\bm r$, justifying the omission of its second derivative. The
terminology ``eikonal'' comes from the resemblance of Eq.~\ref{eq:2.4} to the
eikonal function used in optics. The wavefunction of an electromagnetic wave
in vacuum is $A(\bm r, t)=A_0\exp[\mathrm{i}(\bm k_0\cdot\bm r-\omega t)]$, or
$A(\bm r)=A_0\exp[\mathrm{i}(\bm k_0\cdot\bm r)]$. For propagation in a medium
with isotropic and uniform refractive index $n$, we have $\bm k=n\bm k_0$ and
$A(\bm r)=A_0\exp[\mathrm{i}(n\bm k_0\cdot\bm r)]$. When $n$ is a function of
$\bm r$, the wavefunction is written as

\begin{equation}\label{eq:2.5}
  A(\bm r)=A_0\exp\left[\mathrm{i}\int_{L}{n(\bm r')\bm k_{0}\cdot d\bm r'}\right]
\end{equation}

The path integral is along the optical path $L$ from $\bm 0$ to $\bm r$. Denoting
$\chi(\bm r)\equiv\int_{L}{[n(\bm r')-1]\bm k_{0}\cdot \mathrm{d}\bm r'}$,
we write Eq.~\ref{eq:2.5} in a more explicit form:

\begin{equation}\label{eq:2.6}
  A(\bm r)\equiv A_0\exp(\mathrm{i}\bm{k_0}\cdot\bm r)\exp[\mathrm{i}\chi(\bm r)]
\end{equation}

It is then seen that the influence of the scattering potential on the wavefunction
is wrapped in the phase $\chi(\bm r)$. This is how Eq.~\ref{eq:2.4} is inspired.
Taking $z$ direction the same as $\bm k$, and substituting
$\psi(\bm r)=(2\pi)^{-3/2}\exp(\mathrm{i}kz)\phi(\bm r)$ in Eq.~\ref{eq:2.3}, we have

\begin{equation}\label{eq:2.7}
  [2\mathrm{i}\bm k\cdot\nabla+\nabla^2]\phi(\bm r)=U(\bm r)\phi(\bm r)
\end{equation}

$\nabla^2\phi(\bm r)$ is dropped because $\bm k$ is sufficiently large and $U(\bm r)$
changes slowly in terms of the wavelength of the projectile, so that

\begin{equation}\label{eq:2.8}
  |\nabla^2\phi(\bm r)|\ll|2\mathrm{i}\bm k\cdot\nabla\phi(\bm r)|
\end{equation}

To understand this, let's treat $\nabla^2\phi(\bm r)$ as a perturbation, and estimate
this term with the solved (0-order) wavefunction. Eq.~\ref{eq:2.7} now reduces to

\begin{equation}\label{eq:2.9}
  2\mathrm{i}k\frac{\partial\phi}{\partial z}\approx U(\bm r)\phi(\bm r)
\end{equation}

With the boundary condition $\phi(\bm r)\vert_{z\to-\infty}=1$, the solution is

\begin{equation}\label{eq:2.10}
  \phi(\bm r)=\exp\left[-\frac{\mathrm{i}}{2k}
  \int_{-\infty}^{z}U(x,y,z')\mathrm{d}z'\right]
  =\exp\left[-\frac{\mathrm{i}}{\hbar v}
  \int_{-\infty}^{z}V(x,y,z')\mathrm{d}z'\right]
\end{equation}
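In practice the phase integral in Eq.~\ref{eq:2.10} is evaluated numerically. The
following self-contained C++ sketch (a toy Gaussian potential with arbitrary
parameters, not the project code) integrates it with the trapezoidal rule and
checks against the closed form
$\int_{-\infty}^{z}V_0\exp(-z'^2/a^2)\,\mathrm{d}z'
=V_0 a\frac{\sqrt{\pi}}{2}\left[1+\operatorname{erf}(z/a)\right]$;
it also illustrates that $|\phi|=1$ for a real potential:

```cpp
#include <cmath>
#include <complex>

// Modulation function phi(z) = exp[-(i/(hbar*v)) * Int_{-inf}^{z} V(z') dz']
// (Eq. 2.10) for a toy Gaussian potential V(z) = V0*exp(-z^2/a^2).
// The z'-integral is done with the trapezoidal rule, starting far upstream.
std::complex<double> eikonal_phi(double V0, double a, double hbar_v, double z) {
    const double z0 = -12.0 * a;   // effectively "-infinity" (assumes z > z0)
    const int    n  = 20000;
    const double h  = (z - z0) / n;
    double integral = 0.0;
    for (int i = 0; i <= n; ++i) {
        double zp = z0 + i * h;
        double w  = (i == 0 || i == n) ? 0.5 : 1.0; // trapezoid end weights
        integral += w * V0 * std::exp(-zp * zp / (a * a));
    }
    integral *= h;
    return std::exp(std::complex<double>(0.0, -integral / hbar_v));
}

// Same quantity from the closed form Int = V0*a*sqrt(pi)/2 * (1 + erf(z/a)).
std::complex<double> eikonal_phi_exact(double V0, double a, double hbar_v, double z) {
    const double pi = std::acos(-1.0);
    double integral = V0 * a * std::sqrt(pi) / 2.0 * (1.0 + std::erf(z / a));
    return std::exp(std::complex<double>(0.0, -integral / hbar_v));
}
```

For a real potential the exponent is purely imaginary, so only the phase of
$\phi$ is modified; an absorptive (complex) potential would reduce $|\phi|$ below 1.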

From Eq.~\ref{eq:2.10}, the derivatives of $\phi(\bm r)$ fall on derivatives of
$U(\bm r)$. One can verify that

\begin{equation*}
  \left|\frac{\nabla^2\phi}{\bm k\cdot\nabla\phi}\right|
  =\left|\frac{\int_{-\infty}^z{\nabla^2 U\mathrm{d}z'}-\frac{\mathrm i}{2k}
  \left[
  \left(\int_{-\infty}^z{\frac{\partial U}{\partial x}\mathrm{d}z'}\right)^2+
  \left(\int_{-\infty}^z{\frac{\partial U}{\partial y}\mathrm{d}z'}\right)^2+U^2
  \right]
  }{kU}\right|
\end{equation*}

Assuming $U(\bm r)$ takes some wave-like form, each derivative contributes
a factor of its wave number $k_U$. Eq.~\ref{eq:2.8} is valid as long as $k\gg k_U$,
i.e. $\lambda\ll \lambda_U$, or, put another way, as long as $U(\bm r)$ changes
slowly within one $\lambda$, where $\lambda=2\pi/k$ and $\lambda_U=2\pi/k_U$.
This is well satisfied for the nuclear potential, which often takes the form of a
Woods-Saxon (WS) well with a flat bottom and a slowly varying edge. It is not
the case for the Coulomb potential $V\propto r^{-1}$, which varies rapidly
around its singularity at $r=0$. Since the nuclear force dominates over the Coulomb
interaction inside atomic nuclei, the eikonal model still applies approximately.
The effect of the Coulomb interaction in the eikonal model is treated separately
in Sec.~\ref{sec:coul}.

Ultimately, the validity of the eikonal approximation must be judged by
experiment.

\section{Derivation of Scattering Amplitude}
\label{sec:2.2}
This section gives the derivation of the general scattering amplitude using Green's
function. The process is quite typical and can be found in most quantum mechanics
textbooks. The solved eikonal wavefunction would then be inserted in the results
to yield the expressions of scattering amplitude and cross sections in eikonal model.

Let's take a second look at Eq.~\ref{eq:2.3} in Sec.~\ref{sec:2.1},

\begin{equation}
\label{eq:2.11}
  (\nabla^2+k^2)\psi(\bm r)=U(\bm r)\psi(\bm r)
\end{equation}
We introduce the Green's function $G_0(\bm r, \bm r')$ corresponding to operator
$(\nabla^2+k^2)$, i.e.

\begin{equation}\label{eq:2.12}
  (\nabla^2+k^2)G_0(\bm r, \bm r')=\delta^3(\bm r-\bm r')
\end{equation}
$\nabla$ denotes differentiation w.r.t.\ $\bm r$, unless otherwise specified.
In terms of $G_0$, a particular solution to Eq.~\ref{eq:2.11} is

\begin{equation}\label{eq:2.13}
  \psi(\bm r)=\int{G_0(\bm r, \bm r')U(\bm r')\psi(\bm r')\mathrm{d}\bm r'}
\end{equation}

To get an explicit form of $G_0(\bm r, \bm r')$, let

\begin{equation}\label{eq:2.14}
  G_0(\bm r, \bm r')\equiv\int{g_0(\bm k', \bm r')\exp(\mathrm{i}\bm k'\cdot \bm r)
  \mathrm{d}\bm k'}
\end{equation}
Then we have

\begin{equation}\label{eq:2.15}
  (\nabla^2+k^2)G_0(\bm r, \bm r')=\int{(k^2-k'^2)g_0(\bm k', \bm r')\exp(\mathrm{i}
  \bm k'\cdot\bm r)\mathrm{d}\bm k'}=\delta^3(\bm r-\bm r')
\end{equation}
A reasonable guess is

\begin{equation}\label{eq:2.16}
  g_0(\bm k', \bm r')=(2\pi)^{-3}\frac{\exp(-\mathrm{i}\bm k'\cdot\bm r')}{k^2-k'^2}
\end{equation}
which gives

\begin{equation}\label{eq:2.17}
  G_0(\bm r, \bm r')=(2\pi)^{-3}\int{\frac{\exp[\mathrm{i}\bm k'
  \cdot(\bm r-\bm r')]}{k^2-k'^2}\mathrm{d}\bm k'}
\end{equation}
Let $\bm R\equiv\bm r-\bm r'$, then

\begin{equation}\label{eq:2.18}
\begin{aligned}
  G_0(\bm r, \bm r') &= (2\pi)^{-3}\int_0^{+\infty}k'^2\mathrm{d}k'
  \int_0^\pi{\sin{\theta}\mathrm{d}\theta}\int_0^{2\pi}\mathrm{d}\varphi
  {\frac{\exp(\mathrm{i}k'R\cos{\theta})}{k^2-k'^2}} \\
  &= \frac{\mathrm{i}}{(2\pi)^{2}R}\int_{0}^{+\infty}
  {\frac{k'\mathrm{d}k'}{k'^2-k^2}}
  [\exp(\mathrm{i}k'R)-\exp(-\mathrm{i}k'R)] \\
  &= \frac{\mathrm{i}}{(2\pi)^{2}R}\int_{-\infty}^{+\infty}
  {\frac{\exp(\mathrm{i}k'R)k'\mathrm{d}k'}{k'^2-k^2}} \\
  &= \frac{\mathrm{i}}{(2\pi)^{2}2R}\int_{-\infty}^{+\infty}
  {\mathrm{d}k'\left[\frac{1}{k'-k}+\frac{1}{k'+k}\right]\exp(\mathrm{i}k'R)}
\end{aligned}
\end{equation}

The integral can be evaluated with the residue theorem. The contour is a semicircle
centered at $k'=0$ in the upper half of the complex $k'$ plane, closed by the
segment $AOB$ on the real $k'$ axis, as represented in Fig.~\ref{fig:2.1}.
Unfortunately, $AOB$ passes through two poles at $k'=\pm k$, where the integral
diverges if the poles are approached along the $\operatorname{Re} k'$ axis. This
is avoided by deforming the contour to bypass the two poles with a small
semicircle of radius $\varepsilon$ at each pole. As $\varepsilon\to 0$ and the
radius of the large semicircle goes to infinity, the integral along the arc
$\wideparen{BDA}$ vanishes, so the integral over the closed contour reduces to the
integral over the real axis, which is evaluated from the residues of the
integrand. Taking the contour in Fig.~\ref{fig:2.1}, we get

\begin{figure}
  \begin{center}
    \includegraphics[scale=0.4]{residue.pdf}
    \caption{The enclosed path to evaluate the integral of Eq.~\ref{eq:2.18}.}
    \label{fig:2.1}
  \end{center}
\end{figure}

\begin{equation}
  \label{eq:2.19}
  G_0(\bm r, \bm r')=\frac{\mathrm{i}}{(2\pi)^{2}2R} \cdot 2\pi\mathrm{i}
  \exp(\mathrm{i}kR)=-\frac{\exp(\mathrm{i}kR)}{4\pi R}
\end{equation}

Other ways of bypassing the two poles are also allowed mathematically, yet the
current one is adopted because it gives a physical outgoing scattering wave.
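Eq.~\ref{eq:2.19} is easy to verify numerically: with
$u(R)\equiv R\,G_0(\bm r,\bm r')=-\exp(\mathrm{i}kR)/4\pi$, the free Helmholtz
equation away from the source reduces to $u''+k^2u=0$. A small finite-difference
sketch (the values of $k$, $R$ and $h$ used in the check are arbitrary):

```cpp
#include <cmath>
#include <complex>

// Residual of the radial Helmholtz equation u'' + k^2*u = 0 for
// u(R) = R*G0(R) = -exp(i*k*R)/(4*pi), using a central finite difference.
// The residual is O(h^2) and should vanish as h -> 0.
std::complex<double> helmholtz_residual(double k, double R, double h) {
    const double pi = std::acos(-1.0);
    auto u = [&](double r) {
        return -std::exp(std::complex<double>(0.0, k * r)) / (4.0 * pi);
    };
    return (u(R + h) - 2.0 * u(R) + u(R - h)) / (h * h) + k * k * u(R);
}
```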

Inserting Eq.~\ref{eq:2.19} in Eq.~\ref{eq:2.13} gives

\begin{equation}
  \label{eq:2.20}
  \psi(\bm r)=-\frac{1}{4\pi}\int{\frac{\exp(\mathrm{i}kR)}{R}U(\bm r')
  \psi(\bm r')\mathrm{d}\bm r'}
\end{equation}

This expression can be further transformed to reveal more physical meaning. Note
that $\bm r$ is the observation point, far outside the range of the potential
$U(\bm r')$, i.e. $r'\ll r$, from which follows

\begin{equation}
  \label{eq:2.21}
  R=|\bm r-\bm r'|\approx r-\bm r'\cdot\bm{\hat r}
\end{equation}
In fact,
\begin{equation}
  \label{eq:2.22}
  \begin{aligned}
    |\bm r-\bm r'|&=\sqrt{r^2+r'^2-2\bm r'\cdot\bm r} \\
    &=r\sqrt{1-2\frac{\bm r'\cdot\bm r}{r^2}+\left(\frac{r'}{r}\right)^2} \\
    &\approx r\left[1+\frac{1}{2}\left(-2\frac{\bm r'\cdot\bm r}{r^2}\right)\right] \\
    &=r-\bm r'\cdot\bm{\hat r}
  \end{aligned}
\end{equation}
where $\bm{\hat r}\equiv\dfrac{\bm r}{r}$ is the unit vector along $\bm r$.
\emph{To first order in $r^{-1}$}, we can replace
$R^{-1}$ with $r^{-1}$ in Eq.~\ref{eq:2.20}.
The situation is more delicate for $\exp(\mathrm{i}kR)$, where $k$ is a large
number: it is necessary, and does not complicate the problem much, to keep the
first-order term in $r'$ in the exponent.
The above considerations give

\begin{equation}
  \label{eq:2.23}
  \psi(\bm r)=-\frac{1}{4\pi}\frac{\exp(\mathrm{i}kr)}{r}
  \int{\exp(-\mathrm{i}\bm k'\cdot\bm r')U(\bm r')\psi(\bm r')\mathrm{d}\bm r'}
\end{equation}
The dependence of $\psi(\bm r)$ on the direction of $\bm r$ is wrapped in the
integral of Eq.~\ref{eq:2.23} by $\bm k'\equiv k\hat{\bm r}$. So $\bm k'$ represents
the outgoing momentum.
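The quality of the expansion in Eq.~\ref{eq:2.22} can be checked directly; the
neglected term is of order $r'^2/r$. A small self-contained sketch (the vectors
and numbers are arbitrary):

```cpp
#include <cmath>

// Far-field expansion check for Eq. (2.22): |r - r'| vs r - r'.rhat.
// The neglected term is of order r'^2/r, so the error must be bounded by it
// and shrink as the observation distance r grows.
struct Vec3 { double x, y, z; };

double norm(const Vec3& v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

double expansion_error(const Vec3& r, const Vec3& rp) {
    double R_exact  = norm({r.x - rp.x, r.y - rp.y, r.z - rp.z});
    double R_approx = norm(r) - dot(rp, r) / norm(r);
    return std::fabs(R_exact - R_approx);
}
```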

The general solution to Eq.~\ref{eq:2.11} includes a homogeneous term, so finally
we have
\begin{equation}
  \label{eq:2.24}
  \begin{aligned}
    \psi(\bm r)&=(2\pi)^{-3/2}\left[\exp(\mathrm{i}\bm k\cdot\bm r)-
    \frac{(2\pi)^{3}}{4\pi}\frac{\exp(\mathrm{i}kr)}{r}
    \int{(2\pi)^{-3/2}\exp(-\mathrm{i}\bm k'\cdot\bm r')U(\bm r')\psi(\bm r')
    \mathrm{d}\bm r'}\right] \\
    &=(2\pi)^{-3/2}\left[\exp(\mathrm{i}\bm k\cdot\bm r)-
    \frac{\exp(\mathrm{i}kr)}{r}\cdot 2\pi^2\Braket{\bm k'|U|\psi}\right] \\
    &\equiv(2\pi)^{-3/2}\left[\exp(\mathrm{i}\bm k\cdot\bm r)+
    \frac{\exp(\mathrm{i}kr)}{r}f(\theta,\varphi)\right]
  \end{aligned}
\end{equation}
Note that the normalization factor need not remain $(2\pi)^{-3/2}$ after
adding the scattered wave to $\psi(\bm r)$. Since the scattered part constitutes
a very small portion of the total wave, the difference in the normalization factor
is ignored, and it is expedient to keep the normalization factor the same as
before the scattering happens. $\dfrac{\exp(\mathrm{i}kr)}{r}$ represents an
outgoing spherical wave: its momentum is aligned with $\bm r$, and the factor
$\dfrac{1}{r}$ keeps the flux per solid angle constant. The spherical wave is
modulated by
\begin{equation}
  \label{eq:2.25}
  f(\theta,\varphi)=-2\pi^2\Braket{\bm k'|U|\psi}
  =-2\pi^2\left(\frac{2\mu}{\hbar^2}\right)\Braket{\bm k'|V|\psi}
\end{equation}
$f(\theta,\varphi)$ has the dimension of length and is called the
\emph{scattering amplitude}. For a potential with rotational symmetry,
$U(\bm r)=U(r)$, the dependence of $f(\theta,\varphi)$ on the azimuthal angle
$\varphi$ vanishes: $f(\theta,\varphi)=f(\theta)$.

It turns out that the scattering state is the sum of the incident wave and an
outgoing spherical scattered wave, whose angular strength distribution is
described by $f(\theta)$. Note that Eq.~\ref{eq:2.24} is an asymptotic form for
observers at a large distance from the scatterer, i.e. the target, as can be seen
from the approximations adopted to reach it. One can also verify that the
scattered wave $\dfrac{\exp(\mathrm{i}kr)}{r}f(\theta)$ satisfies
Eq.~\ref{eq:2.3}, $[\nabla^2+k^2-U(\bm r)]\psi=0$, at large $r$, where $U(r)\to 0$,
using $\nabla^2$ in the spherical coordinate system
\begin{equation}
  \label{eq:2.26}
  \nabla^2=\dfrac{1}{r^2}\dfrac{\partial}{\partial r}r^2\dfrac{\partial}
  {\partial r}+\dfrac{1}{r^2\sin{\theta}}\dfrac{\partial}{\partial\theta}\sin{\theta}
  \dfrac{\partial}{\partial\theta}+\dfrac{1}{r^2\sin^2{\theta}}
  \dfrac{\partial^2}{\partial\varphi^2}
\end{equation}

$f(\theta)$ plays a pivotal role in the calculation of the differential scattering
cross sections. The rate $\mathrm{d}n$ of detected scattered particles depends on
the scattering angles $\Omega(\theta, \varphi)$ and is proportional to the flux
of the incident particles. To be precise, let us define the problem as follows:
for an incident flux $\bm j_{\rm in}$ at impact parameter $\bm b$, the scattered
flux into the solid angle $(\Omega,\mathrm{d}\Omega)$ is

\begin{equation}
  \label{eq:2.27}
  \mathrm{d}n=\sigma(\Omega)j_{\rm in}(\bm b)\mathrm{d}\Omega
\end{equation}

We denote $A\equiv(2\pi)^{-3/2}$, $v\equiv\dfrac{k\hbar}{\mu}$, and write c.c. for
``complex conjugate'' as a matter of convenience. For the incident flux,
\begin{equation}
  \label{eq:2.28}
  j_{\rm in}(\bm b)=\hat{\bm z}\cdot\bm{j_{\rm in}}(\bm b)
  =|A|^2\frac{\hbar}{2\mu\mathrm{i}}
  \left[\exp(-\mathrm{i}kz)\frac{\mathrm{d}}{\mathrm{d}z}\exp(\mathrm{i}kz)
  -\mathrm{c.c.}\right]=v|A|^2
\end{equation}
$j_{\rm in}$ is independent of the impact parameter $\bm b$ for a plane wave.

As for the outgoing flux, things are a bit more complicated, since the wavefunction
is the sum of the incident wave $\phi_{\bm k}=A\exp(\mathrm{i}kz)$ and the
scattered wave $\psi_{\rm sc}=A\dfrac{\exp(\mathrm{i}kr)}{r}f(\theta)$. The total
outgoing current is
\begin{equation}
  \label{eq:2.29}
  \begin{aligned}
    \bm j_{\rm out}&=\frac{\hbar}{2\mu\mathrm{i}}\left[
    \psi^{\ast}(\bm r)\nabla\psi(\bm r)-\mathrm{c.c.}\right] \\
    &=\frac{\hbar}{\mu}\operatorname{Im}[\phi_{\bm k}^{\ast}\nabla\phi_{\bm k}+
    \psi_{\rm sc}^{\ast}\nabla\psi_{\rm sc}+
    \phi_{\bm k}^{\ast}\nabla\psi_{\rm sc}+\psi_{\rm sc}^{\ast}\nabla\phi_{\bm k}] \\
    &\equiv\bm{j}_{\rm in}+\bm{j}_{\rm sc}+\bm{j}_{\rm ex}
  \end{aligned}
\end{equation}
$\bm{j}_{\rm ex}=\dfrac{\hbar}{\mu}\operatorname{Im}[\phi_{\bm k}^{\ast}
\nabla\psi_{\rm sc}+\psi_{\rm sc}^{\ast}\nabla\phi_{\bm k}]$ is the exchange term.
We have worked out $\bm{j}_{\rm in}$ in Eq.~\ref{eq:2.28}; its $(x,y)$ components
vanish. A physical particle is a wavepacket of roughly the (small) size of that
particle, so $\bm{j}_{\rm in}$ only matters at very forward angles and is usually
neglected for $\theta\neq0$.
As with $\bm{j}_{\rm sc}$, since detection is made in radial direction, only projection
on $\hat{\bm r}$ is needed,
\begin{equation}
  \label{eq:2.30}
  \begin{aligned}
    \bm{j}_{\rm sc}\cdot\hat{\bm r}
    &=\frac{\hbar}{\mu}\operatorname{Im}\left(\psi_{\rm sc}^{\ast}
    \frac{\partial}{\partial r}\psi_{\rm sc}\right) \\
    &=\frac{\hbar}{\mu}|A|^2|f(\theta)|^2\operatorname{Im}\left[
    \frac{\exp(-\mathrm{i}kr)}{r}
    \frac{\partial}{\partial r}\frac{\exp(\mathrm{i}kr)}{r}\right] \\
    &=\frac{\hbar}{\mu}|A|^2|f(\theta)|^2\frac{k}{r^2} \\
    &=v|A|^2|f(\theta)|^{2}r^{-2}\\
  \end{aligned}
\end{equation}

A similar derivation for $\bm{j}_{\rm ex}$ leads to
\begin{equation}
  \label{eq:2.31}
  \begin{aligned}
    \bm{j}_{\rm ex}\cdot\hat{\bm r}
    &=\frac{\hbar}{\mu}\operatorname{Im}
    \left(\phi_{\bm k}^{\ast}\frac{\partial}{\partial r}\psi_{\rm sc}+
    \psi_{\rm sc}^{\ast}\frac{\partial}{\partial r}\phi_{\bm k}\right) \\
    &=\frac{v|A|^2}{r^2}\operatorname{Im}\left[
    f(\theta)(\mathrm{i}r-k^{-1})\exp(\mathrm{i}k(r-z))+
    f^{\ast}(\theta)\mathrm{i}r\cos\theta\exp(\mathrm{i}k(z-r))\right] \\
  \end{aligned}
\end{equation}
Because $\exp[\mathrm{i}k(r-z)]=\exp[\mathrm{i}kr(1-\cos\theta)]$ oscillates
violently for any $\theta\neq0$, the contribution of $\bm{j}_{\rm ex}\cdot\hat{\bm r}$
integrated over the finite sensitive area of a detector averages to zero by
destructive interference\footnote
{Introduction to Nuclear Reactions, C.A. Bertulani, P. Danielewicz,
Institute of Physics Publishing, Bristol, UK, 2004, p.~30}, and is therefore ignored.

Eventually, for detection at $\theta\neq0$, only $\bm{j}_{\rm sc}\cdot\hat{\bm r}$ remains, which gives
\begin{equation}
  \label{eq:2.32}
  \mathrm{d}n=\bm{j}_{\rm out}\cdot\mathrm{d}\bm{S}
  \approx(r^2\mathrm{d}\Omega)\bm{j}_{\rm sc}\cdot\hat{\bm r}
  =v|A|^2|f(\theta)|^2\mathrm{d}\Omega
\end{equation}

Inserting Eq.~\ref{eq:2.32} and Eq.~\ref{eq:2.28} into Eq.~\ref{eq:2.27}, we get
a simple expression for the differential cross section

\begin{equation}
  \label{eq:2.33}
  \sigma(\Omega)=|f(\theta)|^2
\end{equation}
Note that $|f(\theta)|^2$ describes only the elastic channel, as the magnitude
of the momentum $\bm k$ does not change during the scattering.

\section{Optical Theorem}

Since we will use the optical theorem to calculate the total reaction cross
section, it is convenient to prove it here. The optical theorem states that the
total cross section of elastic and inelastic channels, $\sigma_{\rm tot}$, is
related to the imaginary part of the scattering amplitude $f(\theta)$ by
\begin{equation}
  \label{eq:op0}
  \sigma_{\rm tot}=\frac{4\pi}{k}\operatorname{Im}f(\theta=0^\circ)
\end{equation}

There are many ways to prove this theorem. The most common is the partial-wave
method for scattering in a central potential, which entails lengthy formulae
and derivations. Here we give a short and cogent proof from the
\href{https://www.classe.cornell.edu/~dlr/teaching/p6574/lectures/lecture9.pdf}
{Internet\footnote{https://www.classe.cornell.edu/~dlr/teaching/p6574/lectures/lecture9.pdf}}.
The basic idea is that the incident plane wave $\phi_{\bm k}$ and the scattered
spherical wave $\psi_{\rm sc}$ interfere, producing a destructive interference
current $j_{\rm ex}$ in the forward direction, or, put classically, the shadow
of the target. The lost current does not simply disappear: it accounts for all
the scattered flux $j_{\rm in}\sigma_{\rm tot}$ through elastic and inelastic
channels, as we show below.

We have the total flux $\bm j_{\rm out}=\bm j_{\rm in}+\bm j_{\rm sc}+\bm j_{\rm ex}$.
As discussed above, the integration of $\bm j_{\rm ex}$ at $\theta\neq0^\circ$
over a finite area vanishes by destructive interference. The current lost from
$\bm j_{\rm ex}$ at $\theta=0^\circ$ must therefore compensate for the scattered
flux $\bm j_{\rm sc}$. In analogy to Eq.~\ref{eq:2.27} and Eq.~\ref{eq:2.32}, we have
\begin{equation}
  \label{eq:op1}
  \mathrm{d}n_{\rm tot}=\sigma_{\rm tot}(\Omega)j_{\rm in}\mathrm{d}{\Omega}
  \approx\bm j_{\rm sc}\cdot\mathrm{d}\bm S
\end{equation}
from which follows
\begin{equation}
  \label{eq:op2}
  j_{\rm in}\sigma_{\rm tot}
  =\oint{\bm j_{\rm sc}\cdot\mathrm{d}\bm S}
  =\int{\nabla\cdot\bm j_{\rm sc}\mathrm{d}v}
  =\int{\frac{\partial}{\partial t}|\psi_{\rm sc}|^2\mathrm{d}v}
\end{equation}
where we have used Gauss's theorem. The integration is over a sphere centered at
the scatterer with radius $\to\infty$; $\mathrm{d}\bm S$ points out of the sphere.
By the equation of continuity, some readers may expect a minus sign in front of
the last term of Eq.~\ref{eq:op2}. However, the scattered wave $\psi_{\rm sc}$
alone is not subject to the equation of continuity. The integral of
$|\psi_{\rm sc}|^2$ over the whole space grows steadily with time, not the other
way around, fed by the ever-outgoing $\bm j_{\rm sc}$ and thereby accounting for
$j_{\rm in}\sigma_{\rm tot}$.
Substituting $\psi=\phi_{\bm k}+\psi_{\rm sc}$ in Eq.~\ref{eq:op2}, we have
\begin{equation}
  \label{eq:op3}
  \begin{aligned}
    \sigma_{\rm tot}
    &=\frac{1}{j_{\rm in}}\frac{\partial}{\partial t}\int{\mathrm{d}v
    |\psi-\phi_{\bm k}|^2} \\
    &=\frac{1}{j_{\rm in}}\frac{\partial}{\partial t}\int{\mathrm{d}v
    \left[|\psi|^2+|\phi_{\bm k}|^2-2\operatorname{Re}(\phi_{\bm k}^\ast\psi)
    \right]} \\
    &=\frac{1}{j_{\rm in}}\frac{\partial}{\partial t}\left[1+1-
    2\operatorname{Re}\int{\mathrm{d}v(\phi_{\bm k}^\ast\psi)}\right] \\
    &=-\frac{2}{j_{\rm in}}\operatorname{Re}\int{\mathrm{d}v\left(
    \dot\phi_{\bm k}^\ast\psi+\phi_{\bm k}^\ast\dot\psi\right)} \\
    &=-\frac{2}{j_{\rm in}}\operatorname{Re}\int{\mathrm{d}v\left[
    \left(\frac{H_0\phi_{\bm k}}{\mathrm{i}\hbar}\right)^\ast\psi+
    \phi_{\bm k}^\ast\left(\frac{H\psi}{\mathrm{i}\hbar}\right)\right]} \\
    &=-\frac{2}{j_{\rm in}}\operatorname{Re}\frac{1}{\mathrm{i}\hbar}\left[
    -\Braket{\phi_{\bm k}|H_0|\psi}+\Braket{\phi_{\bm k}|H|\psi}\right] \\
    &=-\frac{2}{j_{\rm in}\hbar}\operatorname{Im}\Braket{\phi_{\bm k}|V|\psi}
  \end{aligned}
\end{equation}
where $H\equiv H_0+V$. Inserting Eq.~\ref{eq:2.25} and
Eq.~\ref{eq:2.28} into Eq.~\ref{eq:op3} and noting that $A=(2\pi)^{-3/2}$,
$v=\dfrac{k\hbar}{\mu}$ and $f(0)
=-2\pi^2\dfrac{2\mu}{\hbar^2}\Braket{\phi_{\bm k}|V|\psi}$,
we obtain the optical theorem:
\begin{equation}
  \label{eq:op4}
  \sigma_{\rm tot}=\frac{2}{\hbar}\frac{(2\pi)^3\mu}{k\hbar}
  \frac{\hbar^2}{2\pi^2 2\mu}\operatorname{Im}f(0)
  =\frac{4\pi}{k}\operatorname{Im}f(0)
\end{equation}
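The chain of prefactors leading to Eq.~\ref{eq:op4} is easy to get wrong in an
implementation, so a direct numerical check that
$\dfrac{2}{\hbar}\cdot\dfrac{(2\pi)^3\mu}{k\hbar}\cdot\dfrac{\hbar^2}{2\pi^2\,2\mu}
=\dfrac{4\pi}{k}$ for arbitrary positive $\mu$, $k$ and $\hbar$ may be useful
(a trivial sketch, not part of the derivation itself):

```cpp
#include <cmath>

// Check that the prefactors of Eq. (op4) collapse to 4*pi/k:
// (2/hbar) * ((2*pi)^3 * mu/(k*hbar)) * (hbar^2/(2*pi^2 * 2*mu)) == 4*pi/k,
// independently of mu and hbar (they cancel).
double op4_prefactor(double mu, double k, double hbar) {
    const double pi = std::acos(-1.0);
    return (2.0 / hbar) * (std::pow(2.0 * pi, 3) * mu / (k * hbar))
           * (hbar * hbar / (2.0 * pi * pi * 2.0 * mu));
}
```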

Then what about the contribution of $\bm j_{\rm ex}$ at $\theta=0^\circ$? It is
expected to balance the outgoing current $\bm j_{\rm sc}$, conserving the total
probability in the whole space. We keep only the leading order in $r$ in
Eq.~\ref{eq:2.31} (dropping the $k^{-1}$ term) in the limit $r\to\infty$, and
integrate $\bm j_{\rm ex}\cdot\hat{\bm r}$ over a forward cone
($\varepsilon$ is a small angle)
\begin{equation}
  \label{eq:op5}
  \begin{aligned}
    \int_{\varepsilon}{\bm j_{\rm ex}\cdot\hat{\bm r}r^2\mathrm{d}\Omega}
    &=\int_0^{2\pi}{\mathrm{d}\varphi}\int_0^{\varepsilon}
    {r^2\bm j_{\rm ex}\cdot\hat{\bm r}\sin\theta\mathrm{d}\theta} \\
    &=2\pi v|A|^2 r\int_0^{\varepsilon}{\sin\theta\mathrm{d}\theta}\times \\
    &\quad\quad\operatorname{Im}\mathrm{i}\left\{f(\theta)
    \exp[\mathrm{i}kr(1-\cos\theta)]+
    f^\ast(\theta)\cos\theta\exp[\mathrm{i}kr(\cos\theta-1)]\right\} \\
    &\approx 2\pi v|A|^2 r\int_0^{\varepsilon}{\mathrm{d}(-\cos\theta)} \\
    &\quad\quad\operatorname{Im}\mathrm{i}\left\{f(\theta)
    \exp[\mathrm{i}kr(1-\cos\theta)]+
    f^\ast(\theta)\exp[\mathrm{i}kr(\cos\theta-1)]\right\} \\
    &\approx -2\pi v|A|^2 r\cdot2\operatorname{Re}\left\{f^\ast(0)
    \exp(-\mathrm{i}kr)\int_0^{\varepsilon}
    {\exp[\mathrm{i}kr(\cos\theta)]\mathrm{d}(\cos\theta)}\right\} \\
    &=-2\pi v|A|^2 r\cdot2\operatorname{Re}\left\{f^\ast(0)
    \frac{\exp(-\mathrm{i}kr)}{\mathrm{i}kr}
    \exp[\mathrm{i}kr(\cos\theta)]|_0^{\varepsilon}\right\} \\
    &=-\frac{2\pi v|A|^2}{k}\cdot2\operatorname{Im}\left\{
    f^\ast(0)\left[\exp[\mathrm{i}kr(\cos\varepsilon-1)]-1\right]\right\} \\
    &\approx-\frac{2\pi v|A|^2}{k}\cdot2\operatorname{Im}\left\{
    f^\ast(0)\left[\exp\left(-\mathrm{i}kr\frac{\varepsilon^2}{2}\right)-1\right]
    \right\} \\
  \end{aligned}
\end{equation}
Although $\varepsilon$ is a small number, it is not 0, otherwise the integral
would vanish: the cone has to include the shadow of the target. Moreover, both
$k$ and $r$ are large numbers, so $\exp(-\mathrm{i}kr\varepsilon^2/2)$ oscillates
rapidly w.r.t.\ $\varepsilon$, and we replace it by its average value, namely 0.
Inserting Eq.~\ref{eq:2.28} in Eq.~\ref{eq:op5} gives
\begin{equation}
  \label{eq:op6}
  \int_{\varepsilon}{\bm j_{\rm ex}\cdot\hat{\bm r}r^2\mathrm{d}\Omega}
  =-j_{\rm in}\cdot\frac{4\pi}{k}\operatorname{Im}f(0)
\end{equation}
Comparing it with Eq.~\ref{eq:op2}, it is immediately obvious that the cross section
for $j_{\rm ex}$ at $\varepsilon\sim0$ is
\begin{equation}
  \label{eq:op7}
  \sigma_{\rm ex}=-\frac{4\pi}{k}\operatorname{Im}f(0)=-\sigma_{\rm tot}
\end{equation}
which just compensates for the total cross section $\sigma_{\rm tot}$ and conserves
the total probability in whole space.

\section{Eikonal Reaction Cross Sections}
\label{sec:2.4}
Formulae in Sec.~\ref{sec:2.2} are general for two-body scattering. By substituting
eikonal wavefunction Eq.~\ref{eq:2.4} and Eq.~\ref{eq:2.10} in $f(\theta)$
(Eq.~\ref{eq:2.25}), eikonal scattering amplitude is obtained:

\begin{equation}
  \label{eq:2.34}
  f(\theta,\varphi)=-\frac{1}{4\pi}\int{\exp(-\mathrm{i}\bm\kappa\cdot\bm r)U(\bm r)
  \exp\left[-\frac{\mathrm{i}}{2k}\int^{z}_{-\infty}
  {U(x,y,z')\mathrm{d}z'}\right]\mathrm{d}\bm r}
\end{equation}
where $\bm \kappa\equiv\bm k'-\bm k$ is the momentum transfer. Let
$\bm r=z\hat{\bm k}+\bm b$, with $\bm b$ being the component of $\bm r$ in the
plane perpendicular to the incident direction, i.e. the impact parameter. Since
the scattering angle $\theta$ is very small for high-energy scattering
($\kappa\ll k$), the longitudinal momentum transfer
$\kappa_{\parallel}=k(1-\cos\theta)\sim k\dfrac{\theta^2}{2}$ is neglected, and we
keep only the transverse component, $-\bm q\equiv\bm \kappa_{\perp}$,
$q=\kappa_{\perp}=2k\sin(\theta/2)\approx k\sin\theta\approx k\theta$, so

\begin{equation}
  \label{eq:2.35}
  \bm\kappa\cdot\bm r\approx-\bm q\cdot\bm b
\end{equation}
which gives
\begin{equation}
  \label{eq:2.36}
  f(\theta,\varphi)=-\frac{1}{4\pi}
  \int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)
  \int_{-\infty}^{\infty}{\mathrm{d}z\,U(\bm b,z)
  \exp\left[-\frac{\mathrm{i}}{2k}\int^{z}_{-\infty}{U(\bm b,z')\mathrm{d}z'}\right]}}
\end{equation}
The integral over $z$ can be done analytically, which simplifies $f(\theta,\varphi)$ as
\begin{equation}
  \label{eq:2.37}
  f(\theta,\varphi)=\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b
  \exp(\mathrm{i}\bm q\cdot\bm b)\Gamma(\bm b)}
\end{equation}
where
\begin{equation}
  \label{eq:2.38}
  \Gamma(\bm b)\equiv 1-\exp\left[\mathrm{i}\chi(\bm b)\right]
\end{equation}
is usually called the profile function, and
\begin{equation}
  \label{eq:2.39}
  \chi(\bm b)\equiv-\frac{1}{2k}\int^{\infty}_{-\infty}{U(\bm b,z')\mathrm{d}z'}
  =-\frac{1}{\hbar v}\int^{\infty}_{-\infty}{V(\bm b,z')\mathrm{d}z'}
\end{equation}
is the phase shift. Eq.~\ref{eq:2.37} shows that the scattering amplitude is
proportional to the Fourier transform of the profile function.
For spherically symmetric potentials, $U(\bm b,z)=U(b,z)$, the integration over
$\varphi$ can be carried out
\begin{equation}
  \label{eq:2.40}
  f(\theta)=\frac{\mathrm{i}k}{2\pi}\int_{0}^{\infty}{b\mathrm{d}b\Gamma(b)
  \int_{0}^{2\pi}{\mathrm{d}\varphi\exp(\mathrm{i}qb\cos\varphi)}}
  =\mathrm{i}k\int_{0}^{\infty}{b\mathrm{d}b\Gamma(b)J_{0}(qb)}
\end{equation}
using
\begin{equation}
  \label{eq:2.41}
  \int_{0}^{2\pi}{d\varphi\exp(\mathrm{i}qb\cos\varphi)}=2\pi J_0(qb)
\end{equation}
$J_{0}(qb)$ is the zeroth-order Bessel function of the first kind.
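Eq.~\ref{eq:2.41} is what reduces the two-dimensional $\bm b$-integral to a
one-dimensional one in Eq.~\ref{eq:2.40}, so it deserves a quick self-contained
check. The sketch below compares the angular integral with $2\pi J_0(qb)$, where
$J_0$ is computed from its power series
$J_0(x)=\sum_m(-1)^m(x/2)^{2m}/(m!)^2$ (the test value $qb=3.5$ is arbitrary):

```cpp
#include <cmath>
#include <complex>

// Numerical check of Eq. (2.41):
// Int_0^{2pi} exp(i*q*b*cos(phi)) dphi = 2*pi*J0(q*b).
// J0 is evaluated from its power series, so the check is fully self-contained.
double bessel_j0(double x) {
    double term = 1.0, sum = 1.0;
    for (int m = 1; m < 40; ++m) {
        term *= -(x * x) / (4.0 * m * m);  // builds (-1)^m (x/2)^{2m} / (m!)^2
        sum  += term;
    }
    return sum;
}

std::complex<double> angular_integral(double qb, int n = 100000) {
    const double pi = std::acos(-1.0);
    const double h  = 2.0 * pi / n;
    std::complex<double> sum = 0.0;
    // plain Riemann sum: spectrally accurate for a smooth periodic integrand
    for (int i = 0; i < n; ++i) {
        sum += std::exp(std::complex<double>(0.0, qb * std::cos(i * h)));
    }
    return sum * h;
}
```

The imaginary part of the angular integral vanishes by symmetry, consistent with
the result being $2\pi J_0(qb)$, a real number.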

The total cross section is given by the optical theorem

\begin{equation}
  \label{eq:2.42}
  \sigma_{\rm tot}=\frac{4\pi}{k}\operatorname{Im}f(\theta= 0^{\circ})
  =\int{\mathrm{d}\bm b\left\{2-
  2\operatorname{Re}\exp\left[\mathrm{i}\chi(\bm b)\right]\right\}}
\end{equation}

The total elastic cross section is $\sigma_{\rm el}=
\int{\mathrm{d}\Omega|f(\theta,\varphi)|^2}$. To facilitate the calculation,
let us first transform the measure $\int{\mathrm{d}\Omega}$:

\begin{equation}
  \label{eq:2.43}
  \begin{aligned}
    \int{\mathrm{d}\Omega}&=\frac{1}{k^2}\int{k^2\sin\theta
    \mathrm{d}\theta\mathrm{d}\varphi} \\
    &=\frac{1}{k^2}\int{2k^2\sin\frac{\theta}{2}
    \cos\frac{\theta}{2}\mathrm{d}\theta\mathrm{d}\varphi} \\
    &=\frac{1}{k^2}\int{2k\sin\frac{\theta}{2}
    \mathrm{d}\left(2k\sin\frac{\theta}{2}\right)\mathrm{d}\varphi} \\
    &\approx\frac{1}{k^2}\int{q\mathrm{d}q\mathrm{d}\varphi}
    =\frac{1}{k^2}\int{\mathrm{d}\bm q}
  \end{aligned}
\end{equation}
Then the integration can be carried out,
\begin{equation}
  \label{eq:2.44}
  \begin{aligned}
    \sigma_{\rm el}&=\frac{1}{k^2}\frac{k^2}{(2\pi)^2}\int{\mathrm{d}\bm q
    \int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)
    \left\{1-\exp[\mathrm{i}\chi(\bm b)]\right\}}
    \int{\mathrm{d}\bm b'\exp(-\mathrm{i}\bm q\cdot\bm b')
    \left\{1-\exp[-\mathrm{i}\chi^{\ast}(\bm b')]\right\}}} \\
    &=\int{\mathrm{d}\bm b\mathrm{d}\bm b'\left\{1-\exp[\mathrm{i}\chi(\bm b)]\right\}
    \left\{1-\exp[-\mathrm{i}\chi^\ast(\bm b')]\right\}
    \frac{1}{(2\pi)^2}\int{\mathrm{d}\bm q\exp(\mathrm{i}\bm q\cdot(\bm b-\bm b'))}} \\
    &=\int{\mathrm{d}\bm b\mathrm{d}\bm b'\left\{1-\exp[\mathrm{i}\chi(\bm b)]\right\}
    \left\{1-\exp[-\mathrm{i}\chi^\ast(\bm b')]\right\}\delta^2(\bm b-\bm b')} \\
    &=\int{\mathrm{d}\bm b\left|1-\exp[\mathrm{i}\chi(\bm b)]\right|^2} \\
    &=\int{\mathrm{d}\bm b\left\{1-2\operatorname{Re}\exp[\mathrm{i}\chi(\bm b)]+
    \left|\exp[\mathrm{i}\chi(\bm b)]\right|^2\right\}}
  \end{aligned}
\end{equation}

Finally we obtain the reaction cross section, which collects all non-elastic
channels, e.g., excitation or breakup,
\begin{equation}
  \label{eq:2.45}
  \sigma_{\rm rec}=\sigma_{\rm tot}-\sigma_{\rm el}
  =\int{\mathrm{d}\bm b\left\{1-\left|\exp[\mathrm{i}\chi(\bm b)]\right|^2\right\}}
\end{equation}

This equation has a significant physical interpretation. The total reaction cross
section is the sum of contributions from all possible impact parameters. The
contribution for impact parameter $(\bm b,\bm b+\mathrm{d}\bm b)$ is
$\mathrm{d}\sigma_{\rm rec}=
\mathrm{d}\bm b\{1-\left|\exp[\mathrm{i}\chi(\bm b)]\right|^2\}$,
from which we have
\begin{equation}
  \label{eq:2.46}
  P(\bm b)\equiv\frac{\mathrm{d}\sigma_{\rm rec}}{\mathrm{d}\bm b}
  =\frac{\mathrm{d}\sigma_{\rm rec}}{b\mathrm{d}b\mathrm{d}\theta}
  =1-\left|\exp[\mathrm{i}\chi(\bm b)]\right|^2
\end{equation}
The left-hand side (LHS) is a ratio of two areas, representing the probability
(or the portion) of the incident flux through the cross section
$b\mathrm{d}b\mathrm{d}\theta$ that is inelastically scattered. Accordingly, we take
\begin{equation}
  \label{eq:2.47}
  P_{\rm SV}(\bm b)\equiv|S(\bm b)|^2\equiv1-P(\bm b)
  =\left|\exp[\mathrm{i}\chi(\bm b)]\right|^2
\end{equation}
as the probability that the particle remains ``intact'', or ``survives'' the scattering,
where
\begin{equation}
  \label{eq:2.48}
  S(\bm b)=\exp[\mathrm{i}\chi(\bm b)]
\end{equation}
is the so-called $S$-matrix. What Eqs.~\ref{eq:2.45}$\sim$\ref{eq:2.47} reveal
will spare us much painful derivation when deducing the formulae for all kinds of
cross sections, including those of the nucleon-knockout channels, for nucleon-nucleus
and nucleus-nucleus scattering.

It is also seen from Eq.~\ref{eq:2.45} that $\operatorname{Im}\chi(\bm b)\neq0$ is
necessary for a non-trivial $\sigma_{\rm rec}$. Combined with Eq.~\ref{eq:2.39},
this is equivalent to requiring that $U(\bm r)$ be complex, which signifies that
inelastic scattering originates from $\operatorname{Im}U(\bm r)$.
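The bookkeeping $\sigma_{\rm tot}=\sigma_{\rm el}+\sigma_{\rm rec}$ implied by Eqs.~\ref{eq:2.42}, \ref{eq:2.44} and \ref{eq:2.45} can be checked numerically. The sketch below uses a hypothetical complex Gaussian phase shift; the strengths $v$, $w$ and the range $a$ are illustrative values only.

```python
import numpy as np

# Hypothetical complex phase shift chi(b) = (v + i*w) exp(-b^2/a^2):
# v refracts, w (> 0) absorbs. All values illustrative.
v, w, a = 0.8, 1.2, 3.0

b = np.linspace(0.0, 10.0 * a, 8000)
chi = (v + 1j * w) * np.exp(-(b / a) ** 2)
S = np.exp(1j * chi)             # S-matrix, Eq. (2.48)

def db_int(y):
    # int d^2b (...) = 2*pi * int_0^inf b db (...) for azimuthal symmetry
    return 2.0 * np.pi * float(np.sum(0.5 * (y[1:] + y[:-1]) * b[1:] * 0 + 0.5 * (b[1:] * y[1:] + b[:-1] * y[:-1]) * np.diff(b)))

sigma_tot = db_int(2.0 - 2.0 * S.real)          # optical theorem, Eq. (2.42)
sigma_el  = db_int(np.abs(1.0 - S) ** 2)        # Eq. (2.44)
sigma_rec = db_int(1.0 - np.abs(S) ** 2)        # Eq. (2.45)
print(sigma_tot, sigma_el, sigma_rec)
```

Since $2(1-\operatorname{Re}S)-|1-S|^2=1-|S|^2$ holds pointwise, the three numbers agree to machine precision.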

\chapter{Glauber Model for Nucleus-Nucleus Scattering}

More often than not, the inner structures of the projectile or the target cannot
be ignored at the incident energy of interest, e.g., the scattering of two nuclei
at intermediate energies, where the kinetic energy per nucleon exceeds the
separation energy of the valence nucleons.

One of the overarching goals of a scattering theory is to calculate the cross
sections of various reaction channels. We have seen the pivotal role of the
scattering amplitude in the calculation of cross sections in Sec.~\ref{sec:2.4}.
Despite the fact that $f(\theta,\varphi)$ (Eq.~\ref{eq:2.25}) is deduced assuming
structureless projectile and target, Glauber model directly extends Eq.~\ref{eq:2.25}
to the scattering of two many-body systems, with $\ket{\bm k'}$ and $\ket{\psi}$
the eikonal many-body wavefunctions and potential $V$ being the sum of that between
each pair of constituent particles from the projectile and the target.
The validity of the theory should be judged by comparison with experiment.
Today it is one of the prevalent reaction models for intermediate-energy
nucleon-nucleus and nucleus-nucleus scattering. So let us first go through the
derivations and formulas of the model.

\section{Nucleon-Nucleus Scattering}
\label{sec:3.1}
We take care of the nucleon-nucleus scattering first before digging into
nucleus-nucleus scattering. Our starting point is Eq.~\ref{eq:2.25}, but
inserting many-body eikonal wavefunction
\begin{equation}
  \label{eq:3.1}
  \Psi(\bm r;\bm\eta')=(2\pi)^{-3/2}\exp(\mathrm{i}\bm k\cdot\bm r)
  \exp\left[-\frac{\mathrm{i}}{2k}\int_{-\infty}^z{U(\bm b,z',\bm\eta')\mathrm{d}z'}
  \right]\theta(\bm\eta')
\end{equation}
$\bm r$ and $\bm k$ are the position and momentum of the incident nucleon w.r.t.
the mass center of the target nucleus. $\bm\eta'\equiv\{\bm\eta_i=\bm z_i+\bm t_i\}$
represents all the position vectors of the constituent nucleons of the target in the
target-local frame, where $i=1,2,...,A_{\rm T}$, with $A_{\rm T}$ the mass number
of the target. $\bm t_i$ is the projection of $\bm \eta_i$ on the transverse plane,
and $\bm t'\equiv\{\bm t_i\}$ denotes the set of these transverse components. Strictly
speaking, the center-of-mass coordinates of the target nucleus have been excluded
from $\{\bm \eta_i\}$. These vectors are illustrated in Fig.~\ref{fig:3.1}.
\begin{figure}
\begin{center}
  \includegraphics[scale=0.6]{Nn.pdf}
  \caption{Position vectors of the nucleon-nucleus system used in text.}
  \label{fig:3.1}
\end{center}
\end{figure}
The nucleonic structure of the target is accommodated in $\Psi(\bm r,\bm\eta')$
by the intrinsic many-body wavefunction of the target $\theta(\bm\eta')$.

The scattering amplitude is then expressed as
\begin{equation}
  \label{eq:3.2}
  f(\theta,\varphi)=-2\pi^2\Braket{\Psi_\beta|U|\Psi_0}
\end{equation}
After similar derivations done through Eq.~\ref{eq:2.34} to Eq.~\ref{eq:2.39}, we get
\begin{equation}
  \label{eq:3.3}
  f(\theta,\varphi)=\frac{\mathrm{i}k}{2\pi}
  \int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
  \Braket{\theta_{\beta}|\Gamma_{\rm tot}(\bm b,\bm\eta')|\theta_0}
\end{equation}
where $\theta_0$ and $\theta_\beta$ represent the initial and final intrinsic
target states. Similarly,
\begin{equation}
  \label{eq:3.4}
  \Gamma_{\rm tot}(\bm b,\bm\eta')\equiv 1-\exp\left[\mathrm{i}
  \chi_{\rm tot}(\bm b,\bm\eta')\right]
\end{equation}
is the total profile function, and
\begin{equation}
  \label{eq:3.5}
  \chi_{\rm tot}(\bm b,\bm\eta')\equiv-\frac{1}{2k}
  \int_{-\infty}^{\infty}{U(\bm b,z,\bm\eta')\mathrm{d}z}
\end{equation}
is the total phase shift.

As the total interaction is the sum of that between the incident nucleon and each
of the nucleons in the target nucleus, Glauber model takes the total phase
shift as the sum of phase shifts between the incident nucleon and each nucleon
in the target nucleus
\begin{equation}
  \label{eq:3.6}
  \chi_{\rm tot}(\bm b,\bm\eta')=\sum_{i=1}^{A_{\rm T}}{\chi_i(\bm b,\bm\eta_i)}
  =\sum_{i=1}^{A_{\rm T}}{\chi_i(\bm b-\bm t_i,z_i)}
\end{equation}
where the correlations between target nucleons are ignored, since the scattering
interaction dominates in intermediate-energy collisions. The substitution of
$(\bm b-\bm t_i,z_i)$ for $(\bm b,\bm\eta_i)$ is due to the translational symmetry
of $U(\bm b,z,\bm\eta')$.
$\chi_i(\bm b-\bm t_i,z_i)$ is the phase shift between the incident nucleon and
the $i$-th nucleon in the target, which, similar to Eq.~\ref{eq:2.39}, is
\begin{equation}
  \label{eq:3.7}
  \chi_i(\bm b-\bm t_i,z_i)=-\frac{1}{2k_{\rm NN}}
  \int_{-\infty}^{\infty}{U_{\rm NN}(\bm b-\bm t_i,z-z_i)\mathrm{d}z}
\end{equation}
$U_{\rm NN}$ is the nucleon-nucleon interaction potential, and $k_{\rm NN}$ is
the relative momentum of the colliding nucleon pair. So the Glauber model dissects
the nucleon-nucleus scattering problem into a set of nucleon-nucleon (NN) ones.
$U_{\rm NN}$ is known from NN scattering experimental data at various incident
energies. Since the integral runs over $(-\infty,\infty)$, we can always remove
the $z_i$ dependence of $\chi_i$ by a variable substitution, i.e.,
$\chi_i(\bm b-\bm t_i,z_i)=\chi_i(\bm b-\bm t_i)$. Still, Eq.~\ref{eq:3.4} is
unwieldy for calculating the profile function when it comes to the integration over
$\bm\eta'$ in $\Braket{\theta_{\beta}|\Gamma_{\rm tot}(\bm b,\bm \eta')|\theta_0}$.
We thus seek a further simplification of Eq.~\ref{eq:3.4}. With the help of
Eqs.~\ref{eq:3.5}$\sim$\ref{eq:3.7}, we have
\begin{equation}
  \label{eq:3.8}
  \begin{aligned}
    \Gamma_{\rm tot}(\bm b,\bm\eta')
    &=\Gamma_{\rm tot}(\bm b-\bm t')
    =1-\exp\left[\mathrm{i}\chi_{\rm tot}(\bm b-\bm t')\right] \\
    &=1-\exp\left[\mathrm{i}\sum_{i=1}^{A_{\rm T}}{\chi_i(\bm b-\bm t_i)}\right] \\
    &=1-\prod_{i=1}^{A_{\rm T}}{\exp\left[\mathrm{i}\chi_i(\bm b-\bm t_i)\right]} \\
    &\equiv 1-\prod_{i=1}^{A_{\rm T}}{[1-\Gamma_i(\bm b-\bm t_i)]} \\
    &=\sum_{i=1}^{A_{\rm T}}{\Gamma_i(\bm b-\bm t_i)}-
    \sum_{i<j}^{A_{\rm T}}{\Gamma_i(\bm b-\bm t_i)\Gamma_j(\bm b-\bm t_j)}+\cdots \\
    &\approx\sum_{i=1}^{A_{\rm T}}{\Gamma_i(\bm b-\bm t_i)}
  \end{aligned}
\end{equation}
$\Gamma_i(\bm b-\bm t_i)\equiv 1-\exp\left[\mathrm{i}\chi_i(\bm b-\bm t_i)\right]$
is the profile function between the incident nucleon and the $i$-th nucleon in
the target. Terms of $\Gamma_i$ of second or higher order are dropped in the
last step of Eq.~\ref{eq:3.8}; they represent multiple scattering, i.e.,
events where the incident nucleon experiences a cascade of collisions with
several nucleons in the target, which become even less probable at
intermediate energies. This approximation is called the \emph{optical limit} of
the Glauber model, and it greatly simplifies the calculation.
Inserting Eq.~\ref{eq:3.8} in Eq.~\ref{eq:2.37} gives
\begin{equation}
  \label{eq:3.9}
  \begin{aligned}
    f(\theta,\varphi)&\approx\frac{\mathrm{i}k}{2\pi}
    \int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \Braket{\theta_\beta|\sum_{i=1}^{A_{\rm T}}{\Gamma_i(\bm b-\bm t_i)}|\theta_0} \\
    &=\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \sum_{i=1}^{A_{\rm T}}{\left[\prod_{j\neq i}^{A_{\rm T}}{\int{\mathrm{d}\bm\eta_j
    \rho_j(\bm\eta_j)}}\right]\int{\mathrm{d}\bm\eta_i
    \rho_i(\bm\eta_i)\Gamma_i(\bm b-\bm t_i)}} \\
    &=\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \sum_{i=1}^{A_{\rm T}}{\int{\mathrm{d}\bm\eta_i}
    {\rho_i(\bm\eta_i)\Gamma_i(\bm b-\bm t_i)}} \\
    &\approx\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \int{\mathrm{d}\bm\eta}{\sum_{i=1}^{A_{\rm T}}{\rho_i(\bm\eta)}
    \Gamma_{\rm NN}(\bm b-\bm t)} \\
    &\equiv\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \int{\mathrm{d}\bm\eta}{\rho_{\rm T}(\bm\eta)\Gamma_{\rm NN}(\bm b-\bm t)}
  \end{aligned}
\end{equation}
As far as the probability distribution of the target is concerned, the difference
between the initial ($\theta_0$) and final ($\theta_\beta$) target states can be
safely neglected, and $\theta\equiv\theta_0\approx\theta_\beta$ is used instead.
$\rho_i(\bm\eta_i)$ is the density distribution of the $i$-th nucleon in the target,
so that $|\theta(\bm\eta')|^2=\prod_{i=1}^{A_{\rm T}}{\rho_i(\bm\eta_i)}$ and
$\int{\rho_i(\bm\eta_i)\mathrm{d}\bm\eta_i}=1$. And since the nuclear force
dominates over the Coulomb one, the profile functions are treated as symmetric in
isospin, i.e. $\forall i\in[1,A_{\rm T}]$, $\Gamma_i\approx\Gamma_{\rm NN}$ and
$\chi_i\approx\chi_{\rm NN}$.

$\rho_{\rm T}(\bm\eta)\equiv\sum_{i=1}^{A_{\rm T}}{\rho_i(\bm\eta)}$ is the
nucleonic density distribution of the target, and we have
\begin{equation}
  \label{eq:3.10}
  \int{\mathrm{d}\bm\eta\rho_{\rm T}(\bm\eta)}=A_{\rm T}
\end{equation}

To calculate the cross sections of various channels using
Eqs.~\ref{eq:2.42}$\sim$\ref{eq:2.45}, we need the $S$-matrix, or the total phase
shift $\chi_{\rm tot}$ according to Eq.~\ref{eq:2.48}. A comparison of
Eq.~\ref{eq:2.37} and Eq.~\ref{eq:2.38} with Eq.~\ref{eq:3.9} gives (note that
the profile function is the inverse Fourier transform of the scattering amplitude,
see Eq.~\ref{eq:2.37})
\begin{equation}
  \label{eq:3.11}
  \begin{aligned}
    \Gamma_{\rm tot}(\bm b)
    &\equiv1-\exp\left[\mathrm{i}\chi_{\rm tot}(\bm b)\right]
    \approx-\mathrm{i}\chi_{\rm tot}(\bm b) \\
    &=\int{\mathrm{d}\bm \eta\rho_{\rm T}(\bm\eta)\Gamma_{\rm NN}(\bm b-\bm t)} \\
    &=\int{\mathrm{d}\bm \eta\rho_{\rm T}(\bm\eta)\cdot\frac{1}{2\pi\mathrm{i}k_{\rm NN}}
    \int{\mathrm{d}\bm q\exp[-\mathrm{i}\bm q\cdot(\bm b-\bm t)]f_{\rm NN}(\bm q)}} \\
    &=\frac{1}{2\pi\mathrm{i}k_{\rm NN}}
    \int{\mathrm{d}\bm q\exp(-\mathrm{i}\bm q\cdot\bm b)f_{\rm NN}(\bm q)
    \int{\mathrm{d}\bm\eta\exp(\mathrm{i}\bm q\cdot\bm t)\rho_{\rm T}(\bm\eta)}} \\
  \end{aligned}
\end{equation}
$f_{\rm NN}(\bm q)$ is the NN scattering amplitude.
Since $q=2k_{\rm NN}\sin(\theta/2)$, and the direction of $\bm q$ gives the azimuthal
angle $\varphi$, $f_{\rm NN}(\bm q)$ is equivalent to $f_{\rm NN}(\theta,\varphi)$.
Either one of them will be used according to context. Denoting the 2-D Fourier
transform of $\rho_{\rm T}(\bm\eta)$ as
\begin{equation}
  \label{eq:3.12}
  \rho_{\rm T}(\bm q)\equiv\int{\mathrm{d}\bm\eta\exp(\mathrm{i}\bm q\cdot\bm t)
  \rho_{\rm T}(\bm\eta)}
\end{equation}
Eq.~\ref{eq:3.11} gives
\begin{equation}
  \label{eq:3.13}
  \chi_{\rm tot}(\bm b)=\frac{1}{2\pi k_{\rm NN}}
  \int{\mathrm{d}\bm q\exp(-\mathrm{i}\bm q\cdot\bm b)\rho_{\rm T}(\bm q)
  f_{\rm NN}(\bm q)}
\end{equation}
If the NN potential and $\rho_{\rm T}(\bm\eta)$ are spherically symmetric, the
integration over $\varphi$ is worked out using Eq.~\ref{eq:2.41}
\begin{equation}
  \label{eq:3.14}
  \chi_{\rm tot}(b)=\frac{1}{k_{\rm NN}}\int_0^{\infty}{J_0(qb)\rho_{\rm T}(q)
  f_{\rm NN}(q)q\mathrm{d}q}
\end{equation}
Eq.~\ref{eq:3.14} is a frequently used formula in practice, because it is simple
and applicable in most circumstances. A widely used parameterization for
$f_{\rm NN}(q)$ is
\begin{equation}
  \label{eq:3.15}
  f_{\rm NN}(q)=\frac{k_{\rm NN}}{4\pi}\sigma_{\rm NN}(\mathrm{i}+\alpha_{\rm NN})
  \exp\left(-\beta_{\rm NN}q^2\right)
\end{equation}
$\sigma_{\rm NN}$ is the total NN cross section, as can be verified using the
optical theorem. $\alpha_{\rm NN}$ and $\beta_{\rm NN}$ are obtained by fitting
NN scattering experimental data at intermediate energies, as in the Ray
parameterization\footnote{L.~Ray, Phys. Rev. C $\bm{20}$, 1857 (1979)} and the
Horiuchi parameterization\footnote{W. Horiuchi, \emph{et al}, Phys. Rev. C $\bm{75}$,
044607 (2007)}. We will discuss the choice of parameterization later.
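For a Gaussian target density, the integral in Eq.~\ref{eq:3.14} with the parameterization of Eq.~\ref{eq:3.15} has a closed form, which makes a convenient check of any numerical implementation. The sketch below uses illustrative parameter values, not the Ray or Horiuchi fits.

```python
import numpy as np

# Sketch of Eq. (3.14) with Eq. (3.15) for a Gaussian target density.
# All parameter values (A_T, a, sigma_NN, alpha_NN, beta_NN, k_NN) are
# illustrative, not fitted values.
A_T, a = 12.0, 2.0                             # mass number; density range (fm)
sigma_NN, alpha_NN, beta_NN = 4.0, 0.8, 0.2    # fm^2, dimensionless, fm^2
k_NN = 2.0                                     # fm^-1; cancels between the two equations

def j0(x):
    # J0 via the angular integral of Eq. (2.41)
    phi = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
    return np.exp(1j * np.multiply.outer(x, np.cos(phi))).mean(axis=-1).real

def chi_tot(b):
    # Eq. (3.14); rho_T(q) = A_T exp(-q^2 a^2/4) is the 2-D transform
    # (Eq. 3.12) of a Gaussian density normalized to A_T (Eq. 3.10).
    q = np.linspace(0.0, 8.0, 4000)
    rho_q = A_T * np.exp(-(q * a) ** 2 / 4.0)
    f_q = k_NN / (4.0 * np.pi) * sigma_NN * (1j + alpha_NN) * np.exp(-beta_NN * q**2)
    y = j0(q * b) * rho_q * f_q * q
    return complex(np.sum(0.5 * (y[1:] + y[:-1])) * (q[1] - q[0])) / k_NN

def chi_exact(b):
    # Closed form for the all-Gaussian case, with c = a^2/4 + beta_NN:
    # chi_tot(b) = A_T sigma_NN (i + alpha_NN) / (8 pi c) * exp(-b^2/(4c))
    c = a**2 / 4.0 + beta_NN
    return A_T * sigma_NN * (1j + alpha_NN) / (8.0 * np.pi * c) * np.exp(-b**2 / (4.0 * c))
```

The closed form follows from the Hankel-transform pair $\int_0^\infty J_0(qb)\,e^{-cq^2}q\,\mathrm{d}q=e^{-b^2/(4c)}/(2c)$.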

The $S$-matrix, total cross section $\sigma_{\rm tot}$ and the reaction cross
section $\sigma_{\rm rec}$ for nucleon-nucleus scattering are expressed as
Eq.~\ref{eq:2.48}, Eq.~\ref{eq:2.42} and Eq.~\ref{eq:2.45} respectively, by
substituting $\chi$ with $\chi_{\rm tot}$.

\section{Nucleus-Nucleus Scattering}
We take care of the nucleus-nucleus scattering case in this section.
The formulae for the scattering amplitude, the total phase shift, the $S$-matrix,
and the expressions for all kinds of cross sections will be obtained by simply
extending the approach employed in Sec.~\ref{sec:3.1}, i.e. by introducing the
intrinsic wavefunction of the incident nucleus, in addition to that of the target
nucleus. Correspondingly, we denote $\bm r_i$ as the position vector of the
$i$-th nucleon in the incident nucleus, and $\bm s_i$ its transverse component,
as represented by Fig.~\ref{fig:3.2}. Similarly, we define $\bm r'\equiv\{\bm r_i\}$
and $\bm s'\equiv\{\bm s_i\}$.

\begin{figure}
\begin{center}
  \includegraphics[scale=0.5]{PT.pdf}
  \caption{Position vectors of the nucleus-nucleus system used in text.}
  \label{fig:3.2}
\end{center}
\end{figure}

Now we can start just from Eq.~\ref{eq:3.9} by inserting the projectile intrinsic
wavefunction $\psi(\bm r')$:
\begin{equation}
  \label{eq:3.16}
  \begin{aligned}
    f(\theta,\varphi)&\approx\frac{\mathrm{i}k}{2\pi}
    \int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \Braket{\theta_\beta\psi_\beta|
    \sum_{i,j=1}^{A_{\rm P},A_{\rm T}}{\Gamma_{i,j}(\bm b,\bm s_i,\bm t_j)}
    |\theta_0\psi_0} \\
    &=\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \sum_{i,j=1}^{A_{\rm P},A_{\rm T}}{\left[\prod_{i'\neq i,j'\neq j}
    {\int{\mathrm{d}\bm r_{i'}\mathrm{d}\bm\eta_{j'}\rho_{i'}^{\rm P}(\bm r_{i'})
    \rho_{j'}^{\rm T}(\bm\eta_{j'})}}\right]
    \int{\mathrm{d}\bm r_i\mathrm{d}\bm\eta_j\rho_i^{\rm P}(\bm r_i)
    \rho_j^{\rm T}(\bm\eta_j)\Gamma_{i,j}(\bm b+\bm s_i-\bm t_j)}} \\
    &=\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \sum_{i,j=1}^{A_{\rm P},A_{\rm T}}{\int{\mathrm{d}\bm r_i\mathrm{d}\bm\eta_j
    \rho_i^{\rm P}(\bm r_i)\rho_j^{\rm T}(\bm\eta_j)
    \Gamma_{i,j}(\bm b+\bm s_i-\bm t_j)}} \\
    &\approx\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \int{\mathrm{d}\bm r\mathrm{d}\bm\eta\sum_{i=1}^{A_{\rm P}}{\rho_i^{\rm P}(\bm r)}
    \sum_{i=1}^{A_{\rm T}}{\rho_i^{\rm T}(\bm\eta)}\Gamma_{\rm NN}(\bm b+\bm s-\bm t)} \\
    &\equiv\frac{\mathrm{i}k}{2\pi}\int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \int{\mathrm{d}\bm r\mathrm{d}\bm\eta\rho_{\rm P}(\bm r)\rho_{\rm T}(\bm\eta)
    \Gamma_{\rm NN}(\bm b+\bm s-\bm t)} \\
  \end{aligned}
\end{equation}
Note that $\bm r$ in Eq.~\ref{eq:3.16} is used as an integration variable for
convenience, not to be confused with the relative position of the projectile w.r.t.
the target. As with Eq.~\ref{eq:3.10}, $\rho_{\rm P}$ is the nucleonic density
of the projectile and is normalized to the projectile mass number $A_{\rm P}$.

Now that Eq.~\ref{eq:3.9} is generalized to Eq.~\ref{eq:3.16}, we can of course
easily do the same to generalize Eq.~\ref{eq:3.13} and Eq.~\ref{eq:3.14}. The
resulting expression for the total phase shift of nucleus-nucleus scattering is
\begin{equation}
  \label{eq:3.17}
  \chi_{\rm tot}(\bm b)=\frac{1}{2\pi k_{\rm NN}}
  \int{\mathrm{d}\bm q\exp(-\mathrm{i}\bm q\cdot\bm b)\rho_{\rm P}^\ast(\bm q)
  \rho_{\rm T}(\bm q)f_{\rm NN}(\bm q)}
\end{equation}
and for a central NN potential we have
\begin{equation}
  \label{eq:3.18}
  \chi_{\rm tot}(b)=\frac{1}{k_{\rm NN}}\int_0^{\infty}{J_0(qb)
  \rho_{\rm P}^\ast(q)\rho_{\rm T}(q)f_{\rm NN}(q)q\mathrm{d}q}
\end{equation}
Similar to Eq.~\ref{eq:3.12}, $\rho_{\rm P}(\bm q)$ is the 2-D Fourier transform
of $\rho_{\rm P}(\bm r)$:
\begin{equation}
  \label{eq:3.19}
  \rho_{\rm P}(\bm q)\equiv\int{\mathrm{d}\bm r\exp(\mathrm{i}\bm q\cdot\bm s)
  \rho_{\rm P}(\bm r)}
\end{equation}


\chapter{One-nucleon Knockout Reaction}
Now we have prepared the building blocks to construct the formulae for one-nucleon
knockout at intermediate energies within the framework of the eikonal approximation
and the Glauber model. First, it is necessary to specify the physical problem. The
one-nucleon knockout reaction to be studied is of the type
\begin{equation*}
  (P=c+v)+T\to c+X
\end{equation*}
$P$, $c$, $v$ and $T$ are the projectile nucleus, the core, the valence nucleon
and the target nucleus, respectively. Since exotic nuclei are rich in species and
provide a vast and attractive testing ground for both nuclear reaction models and
nuclear structure theories, radioactive ion beams are widely used nowadays. On the
other hand, the target nuclei are tightly bound and stable. At least for the
knockout of a loosely bound valence nucleon, and as far as the one-nucleon knockout
cross sections are concerned, the excitation of the target nucleus can be ignored.
Under this physical picture, the one-nucleon knockout reaction cross sections are
deduced in the sections to follow.

\section{One-nucleon Knockout Cross Section}

Since the effect of the inner coordinates of the nuclei has been taken into account
in the nucleonic density distributions, we change the coordinate notation to better
describe the situation where one valence nucleon in the projectile is knocked out
by the target, as depicted in Fig.~\ref{fig:4.1}. To be clear, the projectile is
treated as a core plus valence nucleon ($\rm c+\rm v$) system. As has been mentioned
in Eq.~\ref{eq:2.47} of Sec.~\ref{sec:2.4}, $|S|^2$ is interpreted as the probability
that the projectile is elastically scattered, which makes $1-|S|^2$ the probability
that the projectile is inelastically scattered. Since the phase shift is additive,
we have
\begin{equation}
  \label{eq:4.1}
  |S|^2=\prod_{i=1}^{A}{|S_i|^2}
\end{equation}
which signifies that $|S|^2$ is the probability that all the constituent nucleons
in a nucleus are at most elastically scattered, so that the whole nucleus loses
or gains no energy during the process. On the other hand, if either the valence
nucleon or the core is inelastically scattered, the two will differ in velocity in
the exit channel, which separates them from one another, achieving inelastic breakup
(or stripping) of the valence nucleon from the core. Since experimentally the
one-nucleon knockout channel is identified by the detection of the core, we
take the expectation value of $P_{-\rm v}=|S_{\rm c}|^2(1-|S_{\rm v}|^2)$ as the
probability that the valence nucleon is \emph{absorbed} while the core reaches
the detectors intact. Integration over the impact parameter $\bm b$ gives the
total cross section of the stripping channel
\begin{equation}
  \label{eq:4.2}
  \sigma_{\rm str}=\int{\mathrm{d}\bm b_{\rm v}
  \Braket{\phi_0|\left(1-\left|S_{\rm v}\right|^2\right)\left|S_{\rm c}\right|^2|\phi_0}
  }
\end{equation}
$\phi_0(\bm r)$ is the ground state relative wavefunction of the valence nucleon
w.r.t. the core. $\bm b_{\rm {v(c)}}$ represents the transverse component of
$\bm r_{\rm {v(c)}}$. As $\bm r_{\rm v}=\bm r_{\rm c}+\bm r$ and $\bm R=\bm r_{\rm c}+
\dfrac{\bm r}{A_{\rm P}}$, the integration over the impact parameter can be done
over $\bm b$, $\bm b_{\rm v}$ or $\bm b_{\rm c}$. $S_{\rm c}$ and $S_{\rm v}$ are
calculated following Eq.~\ref{eq:2.48}, where the total phase shifts are given by
Eq.~\ref{eq:3.13}(\ref{eq:3.14}) and Eq.~\ref{eq:3.17}(\ref{eq:3.18}) respectively.
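To see Eq.~\ref{eq:4.2} at work, the toy model below takes Gaussian absorption profiles for $|S_{\rm v}|^2$ and $|S_{\rm c}|^2$ and a Gaussian transverse density for the valence nucleon relative to the core; every parameter value is hypothetical. For this all-Gaussian choice $\sigma_{\rm str}$ has a closed form, which the numerical folding reproduces.

```python
import numpy as np

# Toy evaluation of Eq. (4.2). Hypothetical model: 1-|S_v(b)|^2 = exp(-b^2/R_v^2),
# |S_c(b)|^2 = 1 - exp(-b^2/R_c^2), and a Gaussian transverse density
# P(s) = exp(-s^2/a^2)/(pi a^2) for the valence nucleon w.r.t. the core
# (b_c = b_v - s). All values in fm.
R_v, R_c, a = 2.0, 3.0, 1.5

b   = np.linspace(0.0, 12.0, 300)          # |b_v|
s   = np.linspace(0.0, 9.0, 240)           # |s|
phi = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)

# |b_c|^2 = b_v^2 + s^2 - 2 b_v s cos(phi) on a (b, s, phi) grid
bc2 = (b[:, None, None]**2 + s[None, :, None]**2
       - 2.0 * b[:, None, None] * s[None, :, None] * np.cos(phi))
Sc2_phi = (1.0 - np.exp(-bc2 / R_c**2)).mean(axis=2)   # angular average of |S_c|^2

# Fold with the valence transverse density: weight 2 pi s P(s)
w_s = (2.0 / a**2) * s * np.exp(-(s / a) ** 2)
ds = s[1] - s[0]
y = Sc2_phi * w_s
Sc2_avg = np.sum(0.5 * (y[:, 1:] + y[:, :-1]), axis=1) * ds   # <|S_c|^2>(b_v)

# sigma_str = int d^2 b_v (1 - |S_v|^2) <|S_c|^2>, Eq. (4.2)
g = b * np.exp(-(b / R_v) ** 2) * Sc2_avg
db = b[1] - b[0]
sigma_str = 2.0 * np.pi * np.sum(0.5 * (g[1:] + g[:-1])) * db

# Closed form for this all-Gaussian toy model:
sigma_exact = np.pi * R_v**2 * (1.0 - R_c**2 / (a**2 + R_v**2 + R_c**2))
print(sigma_str, sigma_exact)
```

The closed form follows from the Gaussian convolution $\int\mathrm{d}^2b\,e^{-b^2/A}e^{-|\bm b-\bm s|^2/B}=\pi AB/(A+B)\,e^{-s^2/(A+B)}$.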

\begin{figure}
  \includegraphics[scale=0.5]{vcT.pdf}
  \caption{The position vectors used in one-nucleon knockout scattering.
  $v$ and $c$ stand for valence and core, respectively. $\bm R$ is the position
  vector pointing from the center of mass of the target to that of the projectile.}
  \label{fig:4.1}
\end{figure}

It is worthwhile to verify that Eq.~\ref{eq:4.2} includes all the cross section
contributed by inelastic breakup. Following Eq.~\ref{eq:4.2}, it is easy to give
the probability for the knockout of the core as $P_{-\rm c}=(1-|S_{\rm c}|^2)|S_{\rm v}|^2$,
and that where both the core and the valence nucleon are inelastically scattered,
$P_{-\rm {cv}}=(1-|S_{\rm c}|^2)(1-|S_{\rm v}|^2)$. For the projectile to be
absorbed, with probability $P_{-\rm P}=1-|S_{\rm c} S_{\rm v}|^2$, either $v$ or $c$,
or both, must be inelastically scattered. It is natural to require that
$P_{-\rm P}=P_{-\rm c}+P_{-\rm v}+P_{-\rm{cv}}$, which, as can be easily verified,
is true. So $P_{-\rm v}=|S_{\rm c}|^2(1-|S_{\rm v}|^2)$ indeed includes all the
contribution that can arise from the channel where the valence nucleon is
inelastically scattered by the target nucleus while the core remains intact, i.e.
is elastically scattered by the target.
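The bookkeeping $P_{-\rm P}=P_{-\rm c}+P_{-\rm v}+P_{-\rm{cv}}$ is an algebraic identity in $|S_{\rm c}|^2$ and $|S_{\rm v}|^2$; a few lines of code confirm it for arbitrary sub-unitary $S$-matrix values (the random numbers below are of course only a check, not physics).

```python
import numpy as np

# Verify P_{-P} = P_{-c} + P_{-v} + P_{-cv} for random complex S-matrix
# elements with |S| <= 1 (illustrative values only).
rng = np.random.default_rng(0)
def rand_S(n):
    return rng.random(n) * np.exp(2j * np.pi * rng.random(n))

Sc, Sv = rand_S(1000), rand_S(1000)
P_v  = np.abs(Sc)**2 * (1.0 - np.abs(Sv)**2)          # valence stripped, core intact
P_c  = (1.0 - np.abs(Sc)**2) * np.abs(Sv)**2          # core stripped, valence intact
P_cv = (1.0 - np.abs(Sc)**2) * (1.0 - np.abs(Sv)**2)  # both inelastically scattered
P_P  = 1.0 - np.abs(Sc * Sv)**2                       # projectile absorbed
print(np.max(np.abs(P_P - (P_c + P_v + P_cv))))       # ~ machine precision
```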

As is endorsed by several references\footnote{P.G. Hansen, J.A. Tostevin, Direct
reactions with exotic nuclei, Annu. Rev. Nucl. Sci.: 2003, 53:219-261\label{ftnt:DRXN}}
\footnote{C.A. Bertulani, A. Gade, MOMDIS: a Glauber model computer code for knockout
reactions, Comput. Phys. Commun.: 2006, 175: 372-380\label{momdis}}, there is another
mechanism for the removal of the valence nucleon, i.e. the diffractive dissociation,
or elastic breakup. The mechanism specifies a reaction channel where the valence
nucleon ``dissociates from the residue through their two-body interaction with the
target, each being at most elastically scattered. As a result, the removed nucleon
is present in the forward beam with essentially the beam velocity, and the target
remains in its ground state''\textsuperscript{\ref{ftnt:DRXN}}. The cross section
of the diffraction channel is obtained by generalizing the formula for the elastic
scattering cross section, Eq.~\ref{eq:2.44}. Following derivations similar to those
of Eq.~\ref{eq:2.44}, and with the help of Eq.~\ref{eq:3.16}, we have
\begin{equation}
  \label{eq:4.3}
  \begin{aligned}
    \sigma_{\rm diff}&=\int{\mathrm{d}\bm k\int{\mathrm{d}\Omega
    \left|f_{\bm k0}(\theta,\varphi)\right|^2}} \\
    &=\int{\mathrm{d}\bm k\frac{1}{k_{\rm NN}^2}\int{\mathrm{d}\bm q
    \left|
    \frac{\mathrm{i}k_{\rm NN}}{2\pi}
    \int{\mathrm{d}\bm b\exp(\mathrm{i}\bm q\cdot\bm b)}
    \Braket{\theta_\beta\psi_\beta^{\rm c}\phi_{\bm k}|
    \sum_{i,j=1}^{A_{\rm P},A_{\rm T}}{\Gamma_{i,j}(\bm b,\bm s_i,\bm t_j)}
    |\theta_0\psi_0^{\rm c}\phi_0}
    \right|^2}}\\
    &=\int{\mathrm{d}\bm k\int{\mathrm{d}\bm b\left|
    \Braket{\theta_\beta\psi_\beta^{\rm c}\phi_{\bm k}|
    \sum_{i,j=1}^{A_{\rm P},A_{\rm T}}{\Gamma_{i,j}(\bm b,\bm s_i,\bm t_j)}
    |\theta_0\psi_0^{\rm c}\phi_0}\right|^2 }}\\
    &=\int{\mathrm{d}\bm k\int{\mathrm{d}\bm b\left|
    \Braket{\phi_{\bm k}|[1-\exp(\mathrm{i}\chi_{\rm tot}(\bm b_{\rm c},\bm b_{\rm v}))]
    |\phi_0}\right|^2}} \\
    &=\int{\mathrm{d}\bm k\int{\mathrm{d}\bm b_{\rm v}
    \left|\Braket{\phi_{\bm k}|(1-S_{\rm c}S_{\rm v})|\phi_0}\right|^2}} \\
  \end{aligned}
\end{equation}
$\phi_{\bm k}$ represents the continuum state of $\rm c$ and $\rm v$, i.e. the
two parts are separated and free. $\psi_{\rm 0(\beta)}^{\rm c}$ is the ground
(final) intrinsic state of the core. To avoid an explicit integration over the
continuum states, we \emph{lump the bound excited states of the $\rm c+\rm v$ system
into its continuum states} to obtain the closure relation
\begin{equation}
  \label{eq:4.4}
  \Ket{\phi_0}\Bra{\phi_0}+\int{\mathrm{d}\bm k\Ket{\phi_{\bm k}}\Bra{\phi_{\bm k}}}
  \approx{\bm 1}
\end{equation}
Inserting Eq.~\ref{eq:4.4} into Eq.~\ref{eq:4.3} gives
\begin{equation}
  \label{eq:4.5}
  \begin{aligned}
    \sigma_{\rm diff}&\approx\int{\mathrm{d}\bm b_{\rm v}
    \Bra{\phi_0}(1-S_{\rm c}^\ast S_{\rm v}^\ast)
    \left(\bm 1-\Ket{\phi_0}\Bra{\phi_0}\right)
    (1-S_{\rm c}S_{\rm v})\Ket{\phi_0}} \\
    &=\int{\mathrm{d}\bm b_{\rm v}
    \left[\Braket{\phi_0|\left|1-S_{\rm c}S_{\rm v}\right|^2|\phi_0}-
    \left|\Braket{\phi_0|(1-S_{\rm c}S_{\rm v})|\phi_0}\right|^2\right]} \\
    &=\int{\mathrm{d}\bm b_{\rm v}
    \left[\Braket{\phi_0|\left|S_{\rm c}S_{\rm v}\right|^2|\phi_0}-
    \left|\Braket{\phi_0|S_{\rm c}S_{\rm v}|\phi_0}\right|^2\right]} \\
  \end{aligned}
\end{equation}

Eq.~\ref{eq:4.5} is the formula used in practice to evaluate the diffractive
dissociation cross section of one-nucleon knockout. It contains extra channels where
the valence nucleon is not separated from the core but the two are left in bound
excited states, and thus overestimates $\sigma_{\rm diff}$, though the overestimation
is believed to be insignificant ($\sim$10\% of the total one-nucleon knockout cross
section\textsuperscript{\ref{ftnt:DRXN}}).
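Note that the integrand of Eq.~\ref{eq:4.5} is exactly the variance of $S_{\rm c}S_{\rm v}$ over the valence-nucleon density, and is therefore non-negative at every $\bm b_{\rm v}$. The sketch below checks this in a hypothetical all-Gaussian toy model (real $S$-matrices, made-up radii).

```python
import numpy as np

# Integrand of Eq. (4.5), <|S_c S_v|^2> - |<S_c S_v>|^2, in a toy model with
# real S-matrices S(b) = sqrt(1 - exp(-b^2/R^2)) (hypothetical) and a
# Gaussian transverse valence density of range a. All values in fm.
R_v, R_c, a = 2.0, 3.0, 1.5

s   = np.linspace(0.0, 9.0, 240)
phi = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)

# Normalized probability weights over the (s, phi) grid
tw = np.full(s.size, s[1] - s[0]); tw[0] *= 0.5; tw[-1] *= 0.5   # trapezoid
W = (2.0 / a**2) * s * np.exp(-(s / a) ** 2) * tw
W2 = np.repeat(W[:, None], phi.size, axis=1) / phi.size
W2 /= W2.sum()                                  # exact normalization

def S(b2, R):
    return np.sqrt(1.0 - np.exp(-b2 / R**2))

def diff_integrand(bv):
    bc2 = bv**2 + s[:, None]**2 - 2.0 * bv * s[:, None] * np.cos(phi)
    X = S(bv**2, R_v) * S(bc2, R_c)             # S_v(b_v) S_c(|b_v - s|)
    m1 = np.sum(W2 * X)                         # <S_c S_v>
    m2 = np.sum(W2 * X**2)                      # <|S_c S_v|^2>
    return m2 - m1**2                           # a variance, hence >= 0

bv = np.linspace(0.0, 12.0, 150)
y = np.array([b * diff_integrand(b) for b in bv])
sigma_diff = 2.0 * np.pi * np.sum(0.5 * (y[1:] + y[:-1])) * (bv[1] - bv[0])
print(sigma_diff)
```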

Besides, one may wonder where the energy comes from to counterbalance the
separation energy of the valence nucleon in diffractive dissociation, as the whole
process is stated to be ``elastic''. If the target remains in its ground state,
the energy most probably comes from the kinetic energy of the projectile, and since
kinetic energy is transferred to the inner excitation of the projectile,
it is not strictly appropriate to call the diffractive dissociation channel elastic.
Nevertheless, it remains true that the valence nucleon and the core are each at
most elastically scattered by the target, which distinguishes this channel from
inelastic breakup.

\section{Momentum Distribution}
\label{sec:momdis}
The high incident energy makes one observable carry important structural information
about the projectile: the differential cross section with respect to the momentum of
the removed nucleon in the projectile-rest frame, called the \emph{momentum distribution}.
It includes the longitudinal momentum distribution $\dfrac{\mathrm{d}\sigma}{\mathrm{d}k_z}$
and the transverse momentum distribution $\dfrac{\mathrm{d}\sigma}{\mathrm{d}\bm k_\perp}$
(or a single component such as $\dfrac{\mathrm{d}\sigma}{\mathrm{d}k_y}$).
The reaction cross section is a direct measure of the reaction probability, so this
distribution reflects the momentum distribution of the struck (valence) nucleon,
hence carrying information about its quantum state in the projectile. In fact, the
momentum distribution is a vital indicator of the orbital angular momentum quantum
number $l$, as can be told from the shape of the peak of the momentum distribution
of the core.

Experimentally, usually only the residue, namely the core, is detected in a one-nucleon
removal reaction, because identifying the one-nucleon removal channel through the core
is easier and more reliable than through the light fragment (the removed nucleon).
Still, coincident detection of the light fragment is feasible and useful,
and has been implemented in several experiments. So the measured
momentum distribution is actually that of the core momentum $\bm k_{\rm c}$, which
is closely related to the momentum of the valence nucleon.

Denote the momentum of the projectile as $\bm k_{\rm P}$, that of the core and the
valence nucleon in lab and projectile-rest frame as $\bm k_{\rm c}$,
$\bm k_{\rm c}^{\rm{c.m.}}$, $\bm k_{\rm v}$, and $\bm k_{\rm v}^{\rm{c.m.}}$, and
$\beta,\gamma$ as the velocity and the Lorentz factor associated with $\bm k_{\rm P}$.
``c.m.'' stands for center of mass. We have
\begin{equation}
  \label{eq:4.6}
  \left\{
  \begin{aligned}
    0&=\bm k_{\rm c}^{\rm{c.m.}}+\bm k_{\rm v}^{\rm{c.m.}} \\
    \bm k_{\rm c,\perp}&=\bm k_{\rm c,\perp}^{\rm{c.m.}} \\
    k_{\rm c,z}&=\gamma\cdot\beta c\cdot\gamma_{\rm c}^{\rm c.m.}m_{\rm c}+
    \gamma k_{\rm c,z}^{\rm{c.m.}} \\
    &\approx\beta c\gamma m_{\rm c}+\gamma k_{\rm c,z}^{\rm{c.m.}} \\
    &=\frac{m_{\rm c}}{m_{\rm P}}k_{\rm P}-\gamma k_{\rm v,z}^{\rm{c.m.}}
  \end{aligned}
  \right.
\end{equation}
The second and third equations in Eq.~\ref{eq:4.6} are obtained by applying a
Lorentz boost of $\beta$ to the four-momentum of the core in the projectile-rest frame,
$P_{\rm c}^{\rm c.m.}=(\bm k_{\rm c}^{\rm{c.m.}}\hbar,
\mathrm{i}\gamma_{\rm c}^{\rm c.m.}m_{\rm c}c)$.
The Lorentz factor $\gamma_{\rm c}^{\rm c.m.}$, associated with the intrinsic motion
of the core in the projectile c.m. frame, is ignored here; this is justified in the
next paragraph.
Usually theory gives $k_{\rm v,z}^{\rm{c.m.}}$, while experiment provides
$k_{\rm c,z}$; the last line of Eq.~\ref{eq:4.6} bridges theory with the
experimental output. The $\beta$ straggling of the projectile (usually resulting
from the production of secondary beams with a primary beam impinging on a primary
target) goes into $\gamma$ and $k_{\rm P}$, while that of the residue (i.e. the
core) is taken into account in $k_{\rm c,z}$.

Keep in mind that in principle $m_{\rm c}\gamma_{\rm c}^{\rm c.m.}$ takes into account
the intrinsic motion of the core in the projectile rest frame, where $m_{\rm c}$
is the proper mass of the core and $\gamma_{\rm c}^{\rm c.m.}$ is the Lorentz factor
associated with that motion. However, we mention this matter only to justify ignoring
it. Firstly, the effect is small. The intrinsic momentum of a nucleon in a nucleus is
usually below \SI{300}{MeV/c}, corresponding to typically
$\beta_{\rm c}^{\rm c.m.}<0.0214$ and $\gamma_{\rm c}^{\rm c.m.}<1.00023$
for one-neutron knockout from $^{16}$C, which is negligible compared with the
uncertainties brought about by experiment ($\sim$10\%) and theory ($\gtrsim$10\%).
Secondly, the calculation is more complicated than it seems. Since we do the integrals
in coordinate representation, one would have to go to momentum representation explicitly
to calculate $\gamma_{\rm c}^{\rm c.m.}$ before doing any coordinate integral, which
would increase the total computation time by several orders of magnitude. So
$\gamma_{\rm c}^{\rm c.m.}$ is ignored fair and square.
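The quoted magnitudes are easy to reproduce. The sketch below assumes a bare nucleon
mass of \SI{939}{MeV/c^2} and a mass-15 core:

```python
import math

# A core-nucleon momentum of 300 MeV/c for a mass-15 core
# (one-neutron knockout from 16C); masses in MeV/c^2.
p = 300.0
m_c = 15 * 939.0               # ~14085 MeV/c^2 (bare nucleon masses assumed)
E = math.sqrt(p**2 + m_c**2)   # total energy of the core
beta_c = p / E
gamma_c = E / m_c
print(beta_c, gamma_c)         # ~0.0213, ~1.00023
```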

If the relativistic effects of the incident projectile are ignored, $\gamma\approx1$,
and Eq.~\ref{eq:4.6} reduces to the familiar relation of classical kinematics
 \begin{equation}
  \label{eq:4.7}
  \bm k_{\rm c}\approx\frac{m_{\rm c}}{m_{\rm P}}\bm k_{\rm P}-\bm k_{\rm v}^{\rm{c.m.}}
\end{equation}
Now it is clear that even with relativistic effects, $\bm k_{\rm v}^{\rm{c.m.}}$ is
still linearly related to $\bm k_{\rm c}$. But the observed momentum of the core is
not necessarily equal to $\bm k_{\rm c}$, unless the removal of the valence nucleon
does not disturb the core too much. We now \emph{assume it to be so}. This assumption
is known as the ``sudden approximation'': the one-nucleon removal at intermediate
energies is so fast that the core does not have time to react before the valence
nucleon leaves, or, put another way, the core is a ``spectator'' to the knockout
process. One practical consequence of the sudden approximation is that the mass
$A_{\rm P}-1$ residue is left by the removal of the valence nucleon just as it was
in the projectile, so its momentum is considered unchanged until it is detected.
The momentum distribution of the residue is therefore indicative of the $l$ value,
i.e. the specific orbit, of the valence nucleon in the projectile.

The stripping cross section Eq.~\ref{eq:4.2} can be expanded in spectral decomposition as
\begin{equation}
  \label{eq:4.8}
  \begin{aligned}
    \sigma_{\rm str}&=\int{\mathrm{d}\bm b_{\rm v}
    \left(1-|S_{\rm v}(\bm b_{\rm v})|^2\right)
    \Braket{\phi_0|S_{\rm c}^\ast S_{\rm c}|\phi_0}} \\
    &=\int{\mathrm{d}\bm b_{\rm v}\left[1-|S_{\rm v}(\bm b_{\rm v})|^2\right]
    \int{\mathrm{d}\bm k_{\rm c}\,
    \Braket{\phi_0|S_{\rm c}^\ast|\bm k_{\rm c}}
    \Braket{\bm k_{\rm c}|S_{\rm c}|\phi_0}}} \\
    &=\frac{1}{(2\pi)^3}\frac{1}{2J+1}\sum_{M=-J}^J{
    \int{\mathrm{d}\bm b_{\rm v}\left[1-|S_{\rm v}(\bm b_{\rm v})|^2\right]
    \int{\mathrm{d}\bm k_{\rm c}
    \left|\int{\mathrm{d}\bm r\,\exp(-\mathrm{i}\bm k_{\rm c}\cdot\bm r)
    S_{\rm c}(\bm b_{\rm c})\Psi_{JM}(\bm r)}\right|^2}}}
  \end{aligned}
\end{equation}
$\Braket{\bm r|\bm k_{\rm c}}=(2\pi)^{-3/2}\exp(\mathrm{i}\bm k_{\rm c}\cdot\bm r)$
is a plane wave of the core. $\Psi_{JM}(\bm r)$ is the ground-state wavefunction
of the valence nucleon w.r.t. the core, with total angular momentum quantum number $J$
and projection $M$. Assuming the incident beam is unpolarized, the cross section
is averaged over $M$ with equal weight on each $M$ value. Unless otherwise specified,
the sum over all the angular momentum projections shall simply be notated as $\sum_M$.
Differentiating Eq.~\ref{eq:4.8} over $\bm k_{\rm c}$ gives the momentum distribution we want
\begin{equation}
  \label{eq:4.9}
  \frac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}\bm k_{\rm c}}=\frac{1}{(2\pi)^3}
  \frac{1}{2J+1}\sum_M{\int{\mathrm{d}\bm b_{\rm v}
  \left[1-|S_{\rm v}(\bm b_{\rm v})|^2\right]
  \left|\int{\mathrm{d}\bm r\,\exp(-\mathrm{i}\bm k_{\rm c}\cdot\bm r)
  S_{\rm c}(\bm b_{\rm c})\Psi_{JM}(\bm r)}\right|^2}}
\end{equation}

Similarly one can deduce the momentum distribution for diffractive dissociation channel as
\begin{equation}
  \label{eq:4.10}
  \begin{aligned}
    \frac{\mathrm{d}\sigma_{\rm diff}}{\mathrm{d}\bm k_{\rm c}}&=\int{\mathrm{d}\bm b_{\rm v}
    \left[\left|\Braket{\bm k_{\rm c}|S_{\rm c}S_{\rm v}|\phi_0}\right|^2-
    \Braket{\phi_0|S_{\rm c}^\ast S_{\rm v}^\ast|\phi_0}\Braket{\phi_0|\bm k_{\rm c}}
    \Braket{\bm k_{\rm c}|S_{\rm c}S_{\rm v}|\phi_0} \right]} \\
    &=\frac{1}{(2\pi)^3}\frac{1}{2J+1}\sum_M{
    \int{\mathrm{d}\bm b_{\rm v}|S_{\rm v}(\bm b_{\rm v})|^2
    \left[\left|\int{\mathrm{d}\bm r\,\exp(-\mathrm{i}\bm k_{\rm c}\cdot\bm r)
    S_{\rm c}(\bm b_{\rm c})\Psi_{JM}(\bm r)}\right|^2-I \right]}}
  \end{aligned}
\end{equation}
where
\begin{equation}
  \label{eq:4.11}
  I\equiv\int{\mathrm{d}\bm r S_{\rm c}^\ast(\bm b_{\rm c})\left|\Psi_{JM}(\bm r)\right|^2}
  \int{\mathrm{d}\bm r\exp(\mathrm{i}\bm k_{\rm c}\cdot\bm r)\Psi_{JM}^\ast(\bm r)}
  \int{\mathrm{d}\bm r\exp(-\mathrm{i}\bm k_{\rm c}\cdot\bm r)S_{\rm c}(\bm b_{\rm c})\Psi_{JM}(\bm r)}
\end{equation}

The specific form of the initial bound-state wavefunction $\Ket{\Psi_{JM}}\equiv\Ket{JM}$
of the projectile nucleus is needed to work out the integrals.
The intrinsic spins of $P$, $c$ and $v$ are denoted by $J$, $I_{\rm c}$ and $s$,
respectively, with the corresponding magnetic substates labeled by $M$, $M_{\rm c}$
and $m_s$. The orbital angular momentum of $v$ w.r.t. $c$ has quantum numbers $l$ and $m$.
Usually the angular momenta are coupled as $\bm l+\bm s=\bm j$ and $\bm j+\bm I_{\rm c}=\bm J$.
The projection of $\bm j$ has quantum number $m_j$. We have
\begin{equation}
  \label{eq:4.12}
  \left\{
  \begin{aligned}
    \Ket{JM}&=\sum_{m_j,M_{\rm c}}\Ket{jm_j I_{\rm c}M_{\rm c}}\Braket{jm_j I_{\rm c}M_{\rm c}|JM} \\
    \Ket{jm_j}&=\sum_{m,m_s}{\Ket{lm\frac{1}{2}m_s}\Braket{lm\frac{1}{2}m_s|jm_j}}
  \end{aligned}
  \right.
\end{equation}
A central potential between $c$ and $v$ is assumed:
\begin{equation}
  \label{eq:4.13}
  V(r)=V_0(r)+V_{\rm C}(r)+\bm s\cdot\bm l V_{\rm SO}(r)
\end{equation}
where $V_0(r)$, $V_{\rm C}(r)$ and $\bm s\cdot\bm l V_{\rm SO}(r)$ are the strong interaction,
the Coulomb interaction and the spin-orbit coupling interaction, respectively, with
$\bm s\cdot\bm l=[j(j+1)-l(l+1)-s(s+1)]\hbar^2/2$. $\Ket{jm_j}$ can be factorized as
\begin{equation}
  \label{eq:4.14}
  \Braket{\bm r|jm_j}\equiv\psi_{ljm_j}(\bm r)=R_{nlj}(r)\phi_{ljm_j}(\theta,\varphi)
\end{equation}
$R_{nlj}(r)\equiv\dfrac{u_{nlj}(r)}{r}$ satisfies the radial Schr\"odinger equation
\begin{equation}
  \label{eq:4.15}
  -\frac{\hbar^2}{2\mu}\left[\frac{\mathrm{d}^2}{\mathrm{d}r^2}-\frac{l(l+1)}{r^2}\right]u_{nlj}(r)
  +V(r)u_{nlj}(r)=E_{nlj}u_{nlj}(r)
\end{equation}
$n$ is the number of nodes in $R_{nlj}(r)$ (the origin excluded).
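The text does not fix a numerical scheme here; as an illustration, the sketch below
solves an equation of the form of Eq.~\ref{eq:4.15} for an $l=0$ square well (a
stand-in for the realistic potential of Eq.~\ref{eq:4.13}) using the Numerov method
with a bisection search on the energy. The well parameters and reduced mass are
hypothetical; the resulting energy can be checked against the square-well bound-state
condition $k\cot(kR)=-\kappa$.

```python
import math

HBARC = 197.327          # MeV fm
MU = 938.0               # reduced mass in MeV/c^2 (illustrative)
V0, R = 40.0, 3.0        # square-well depth (MeV) and radius (fm), hypothetical
RMAX, H = 15.0, 0.005    # integration range and step (fm)

def u_at_rmax(E):
    """Integrate u'' = f(r) u outward with the Numerov method (l = 0)."""
    def f(r):
        V = -V0 if r < R else 0.0
        return 2.0 * MU / HBARC**2 * (V - E)
    n = int(RMAX / H)
    u0, u1, r = 0.0, H, H          # u(0) = 0, arbitrary small initial slope
    for _ in range(1, n):
        f0, f1, f2 = f(r - H), f(r), f(r + H)
        u2 = (2 * (1 + 5 * H * H * f1 / 12) * u1
              - (1 - H * H * f0 / 12) * u0) / (1 - H * H * f2 / 12)
        u0, u1, r = u1, u2, r + H
    return u1

# Bisection on E: the bound state is where u(RMAX) changes sign.
lo, hi = -V0 + 0.5, -0.5
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if u_at_rmax(lo) * u_at_rmax(mid) <= 0:
        hi = mid
    else:
        lo = mid
E = 0.5 * (lo + hi)
print(E)   # bound-state energy in MeV
```

For a realistic Woods--Saxon plus spin-orbit potential only the function `f(r)`
changes; the depth is then usually readjusted so that $E_{nlj}$ matches the
experimental separation energy.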

The expectation values of the $S$-matrices are expected to be independent of the
intrinsic spins $\bm I_{\rm c}$ and $\bm s$. This is expected because the $S$-matrices
are independent of the spin coordinates, and intuitively the uniform average in
$\bm l\oplus\bm s\oplus \bm I_{\rm c}$ space over $M$ should not impair
the uniformity of the average in $\bm l$ space over $m$, i.e.
\begin{equation}
  \label{eq:4.16}
  \frac{1}{2J+1}\sum_M{\Braket{JM|f(S)|JM}}=
  \frac{1}{2l+1}\sum_m{\Braket{lm|f_{nlj}^R(S)|lm}}
\end{equation}
$f(S)$ is a function of the $S$-matrix, and
$f_{nlj}^R(S)\equiv\Braket{R_{nlj}|f(S)|R_{nlj}}$.
But Eq.~\ref{eq:4.16} is not immediately obvious.
It is worthwhile to present the proof here.

Without loss of generality, let $f$ be a quantity that is independent of
spin. Denote the state vector by $\Ket{\alpha_{nljm_j}}\equiv\Ket{R_{nlj}}\Ket{jm_j}$,
with $\Ket{R_{nlj}}$ the radial function and $\Ket{jm_j}$ the eigenfunction of the
angular momentum $\bm j$ and its projection $j_z$, whose corresponding quantum numbers
are $j$ and $m_j$, respectively. Denote $\Braket{R_{nlj}|f|R_{nlj}}\equiv
f_{nlj}^R$, and $m_s$ the quantum number of the projection $s_z$ of
the spin $\bm s$.
The average of $\Braket{\alpha_{nljm_j}|f|\alpha_{nljm_j}}$ over $m_j$ is
\begin{equation}
  \label{eq:A.1}
  \begin{aligned}
    \frac{1}{2j+1}&\sum_{m_j}{\Braket{\alpha_{nljm_j}|f|\alpha_{nljm_j}}} \\
    &=\frac{1}{2j+1}\sum_{m_j}
    {\Braket{jm_j|\Braket{R_{nlj}|f|R_{nlj}}|jm_j}} \\
    &=\frac{1}{2j+1}\sum_{\substack{m_j \\ m,m_s \\ m',m'_s}}
    {\Braket{jm_j|lmsm_s}\Braket{lmsm_s|f_{nlj}^R|lm'sm'_s}
    \Braket{lm'sm'_s|jm_j}} \\
    &=\frac{1}{2j+1}\sum_{\substack{m_j \\ m,m_s \\ m',m'_s}}
    \Braket{lm|f_{nlj}^R|lm'}\delta_{m_s,m'_s}
    {\Braket{jm_j|lmsm_s}\delta_{m_s,m_j-m}
    \Braket{lm'sm'_s|jm_j}}\delta_{m_s',m_j-m'} \\
    &=\frac{1}{2j+1}\sum_{m}\Braket{lm|f_{nlj}^R|lm}
    \sum_{m_j}\left|\Braket{lms(m_j-m)|jm_j}\right|^2,
  \end{aligned}
\end{equation}
where we have used in the above derivations the closures
\begin{equation}
  \label{eq:A.2}
  \begin{aligned}
    \sum_{m,m_s}\Ket{lmsm_s}\Bra{lmsm_s}&={\bm 1} \\
    \sum_{m',m'_s}\Ket{lm'sm'_s}\Bra{lm'sm'_s}&={\bm 1}.
  \end{aligned}
\end{equation}

For $j=l+\dfrac{1}{2}$, Eq.~\ref{eq:A.1} becomes
\begin{equation}
  \label{eq:A.3}
  \begin{aligned}
    \frac{1}{2j+1}&\sum_{m_j}
    {\Braket{\alpha_{nljm_j}|f|\alpha_{nljm_j}}} \\
    &=\frac{1}{2l+2}\sum_m\Braket{lm|f_{nlj}^R|lm}
    \left[
      \left|\Braket{lms\frac{1}{2}|l+\frac{1}{2},m+\frac{1}{2}}\right|^2+
      \left|\Braket{lms(-\frac{1}{2})|l+\frac{1}{2},m-\frac{1}{2}}\right|^2
    \right] \\
    &=\frac{1}{2l+2}\sum_m\Braket{lm|f_{nlj}^R|lm}
    \left(\frac{l+m+1}{2l+1}+\frac{l-m+1}{2l+1}\right) \\
    &=\frac{1}{2l+1}\sum_m\Braket{lm|f_{nlj}^R|lm}.
  \end{aligned}
\end{equation}

For $j=l-\dfrac{1}{2}$, Eq.~\ref{eq:A.1} becomes
\begin{equation}
  \label{eq:A.4}
  \begin{aligned}
    \frac{1}{2j+1}&\sum_{m_j}
    {\Braket{\alpha_{nljm_j}|f|\alpha_{nljm_j}}} \\
    &=\frac{1}{2l}\sum_m\Braket{lm|f_{nlj}^R|lm}
    \left[
      \left|\Braket{lms\frac{1}{2}|l-\frac{1}{2},m+\frac{1}{2}}\right|^2+
      \left|\Braket{lms(-\frac{1}{2})|l-\frac{1}{2},m-\frac{1}{2}}\right|^2
    \right] \\
    &=\frac{1}{2l}\sum_m\Braket{lm|f_{nlj}^R|lm}
    \left(\frac{l-m}{2l+1}+\frac{l+m}{2l+1}\right) \\
    &=\frac{1}{2l+1}\sum_m\Braket{lm|f_{nlj}^R|lm}.
  \end{aligned}
\end{equation}


Using the notation $\psi_{nljm}(\bm r)\equiv R_{nlj}(r)Y_{lm}(\theta,\varphi)$, Eqs.~\ref{eq:4.2},
\ref{eq:4.5}, \ref{eq:4.9} and \ref{eq:4.10} are recast into forms that facilitate
numerical evaluation. They are listed as follows. Note that the $S$-matrices are assumed
to be independent of the direction of the impact parameters.
\begin{equation}
  \label{eq:4.19}
  \sigma_{\rm str}=\frac{2\pi}{2l+1}
  \int_{0}^{\infty}{\mathrm{d}b_{\rm v}\,b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]
  \int{\mathrm{d}\bm r\left|S_{\rm c}(b_{\rm c})\right|^2
  \sum_m\left|\psi_{nljm}(\bm r)\right|^2}}
\end{equation}

\begin{equation}
  \label{eq:4.20}
  \sigma_{\rm diff}=\frac{2\pi}{2l+1}
  \int_{0}^{\infty}{\mathrm{d}b_{\rm v}\,b_{\rm v}\left|S_{\rm v}(b_{\rm v})\right|^2}
  \sum_m\left[
  \int{\mathrm{d}\bm r\left|S_{\rm c}(b_{\rm c})\right|^2\left|\psi_{nljm}(\bm r)\right|^2}-
  \left|\int{\mathrm{d}\bm r S_{\rm c}(b_{\rm c})\left|\psi_{nljm}(\bm r)\right|^2}\right|^2
  \right]
\end{equation}

\begin{equation}
  \label{eq:4.21}
  \frac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}\bm k_{\rm c}}=\frac{1}{(2\pi)^2}\frac{1}{2l+1}
  \int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]
  \sum_m\left|
  \int{\mathrm{d}\bm r\exp(-\mathrm{i}\bm k_{\rm c}\cdot\bm r)S_{\rm c}(b_{\rm c})\psi_{nljm}(\bm r)}
  \right|^2}
\end{equation}

\begin{equation}
  \label{eq:4.22}
  \frac{\mathrm{d}\sigma_{\rm diff}}{\mathrm{d}\bm k_{\rm c}}=\frac{1}{(2\pi)^2}\frac{1}{2l+1}
  \int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left|S_{\rm v}(b_{\rm v})\right|^2
  \sum_m\left[\left|
  \int{\mathrm{d}\bm r\exp(-\mathrm{i}\bm k_{\rm c}\cdot\bm r)S_{\rm c}(b_{\rm c})\psi_{nljm}(\bm r)}
  \right|^2-I\right]}
\end{equation}
where
\begin{equation}
  \label{eq:4.23}
  I\equiv\int{\mathrm{d}\bm r S_{\rm c}^\ast(b_{\rm c})\left|\psi_{nljm}(\bm r)\right|^2}
  \int{\mathrm{d}\bm r\exp(\mathrm{i}\bm k_{\rm c}\cdot\bm r)\psi_{nljm}^\ast(\bm r)}
  \int{\mathrm{d}\bm r\exp(-\mathrm{i}\bm k_{\rm c}\cdot\bm r)S_{\rm c}(b_{\rm c})\psi_{nljm}(\bm r)}
\end{equation}
Note that $\bm r\equiv(\bm\rho,z)=\bm r_{\rm v}-\bm r_{\rm c}$, and
\begin{equation}
  \label{eq:4.24}
  \begin{aligned}
    b_{\rm c}&=|\bm b_{\rm v}-\bm\rho| \\
    &=\sqrt{\rho^2+b_{\rm v}^2-2\rho b_{\rm v}\cos(\varphi-\varphi_{\rm v})} \\
    &=\sqrt{r^2\sin^2\theta+b_{\rm v}^2-2rb_{\rm v}\sin\theta\cos(\varphi-\varphi_{\rm v})}
  \end{aligned}
\end{equation}

Since the integral over $\varphi$ spans $(0,2\pi)$, the dependence on $\varphi_{\rm v}$
vanishes, as can easily be seen by a translation of the integration interval.
Integrating Eqs.~\ref{eq:4.21} and \ref{eq:4.22} over the transverse component of
$\bm k_{\rm c}$ ($\bm k_\perp$) gives the longitudinal momentum distribution.
Keep in mind that
\begin{equation}
  \label{eq:4.25}
  \int{\mathrm{d}\bm k_\perp
  \exp\left[-\mathrm{i}\bm k_\perp\cdot(\bm\rho-\bm\rho')\right]}
  =(2\pi)^2\delta(\bm\rho-\bm\rho')
\end{equation}
One gets
\begin{equation}
  \label{eq:4.26}
  \frac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}k_z}=\frac{1}{2l+1}
  \int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]
  \int{\mathrm{d}\bm\rho|S_{\rm c}(b_{\rm c})|^2
  \sum_m\left| \int_{-\infty}^\infty{\mathrm{d}z\exp(-\mathrm{i}k_z z)\psi_{nljm}(\bm r)} \right|^2}}
\end{equation}
and
\begin{equation}
  \label{eq:4.27}
  \frac{\mathrm{d}\sigma_{\rm diff}}{\mathrm{d}k_z}
  =\frac{1}{2l+1}\int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left|S_{\rm v}(b_{\rm v})\right|^2
  \sum_m{\left[\int{\mathrm{d}\bm\rho\left|S_{\rm c}(b_{\rm c})\right|^2
  \left|\int_{-\infty}^\infty{\mathrm{d}z\exp(-\mathrm{i}k_z z)\psi_{nljm}(\bm r)}\right|^2}-I\right]}}
\end{equation}
where
\begin{equation}
  \label{eq:4.28}
  I\equiv\int{\mathrm{d}\bm r S_{\rm c}^\ast(b_{\rm c})\left|\psi_{nljm}(\bm r)\right|^2}
  \int{\mathrm{d}\bm\rho S_{\rm c}(b_{\rm c})
  \left|\int_{-\infty}^\infty{\mathrm{d}z\exp(-\mathrm{i}k_z z)\psi_{nljm}(\bm r)}\right|^2}
\end{equation}
or
\begin{equation}
  \label{eq:4.29}
  \begin{aligned}
    \frac{\mathrm{d}\sigma_{\rm diff}}{\mathrm{d}k_z}
    =&\frac{1}{2l+1}\int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left|S_{\rm v}(b_{\rm v})\right|^2}\\
    &\times\sum_m{\int{\mathrm{d}\bm\rho
    \left|\int_{-\infty}^\infty{\mathrm{d}z\exp(-\mathrm{i}k_z z)\psi_{nljm}(\bm r)}\right|^2
    \left[\left|S_{\rm c}(b_{\rm c})\right|^2-S_{\rm c}(b_{\rm c})
    \int{\mathrm{d}\bm r S_{\rm c}^\ast(b_{\rm c})\left|\psi_{nljm}(\bm r)\right|^2}\right]
    }}
  \end{aligned}
\end{equation}
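As a toy check of Eq.~\ref{eq:4.26}, take an $l=0$ Gaussian wavefunction
$\psi\propto\exp(-r^2/2a^2)$ together with purely illustrative Gaussian profiles for
$|S_{\rm v}|^2$ and $|S_{\rm c}|^2$ (overall constants dropped). Because the
wavefunction factorizes in $\rho$ and $z$, the $k_z$ dependence separates out and the
distribution must be proportional to $\exp(-k_z^2a^2)$ whatever the $S$-matrices are;
a direct midpoint-rule evaluation reproduces this.

```python
import math

A = 1.5                                    # wavefunction size a (fm), illustrative

def psi(rho, z):                           # l = 0 Gaussian, normalization dropped
    return math.exp(-(rho**2 + z**2) / (2 * A**2))

def Sc2(b):                                # |S_c(b)|^2, illustrative profile
    return 1.0 - math.exp(-b**2 / 4.0)

def Sv2(b):                                # |S_v(b)|^2, illustrative profile
    return 1.0 - math.exp(-b**2 / 9.0)

def dsigma_dkz(kz):
    """Midpoint-rule evaluation of Eq. (4.26) for the toy model (constants dropped)."""
    dz, zmax = 0.1, 8.0
    drho, rhomax = 0.1, 8.0
    nphi = 48
    dphi = 2 * math.pi / nphi
    dbv, bvmax = 0.2, 10.0
    total = 0.0
    for i in range(int(rhomax / drho)):
        rho = (i + 0.5) * drho
        # z-Fourier transform of psi at fixed rho (psi is even in z)
        ft = sum(math.cos(kz * ((j + 0.5) * dz)) * psi(rho, (j + 0.5) * dz)
                 for j in range(int(zmax / dz))) * 2 * dz
        inner = ft**2 * rho * drho
        for nb in range(int(bvmax / dbv)):
            bv = (nb + 0.5) * dbv
            for nph in range(nphi):
                phi = (nph + 0.5) * dphi
                bc = math.sqrt(rho**2 + bv**2 - 2 * rho * bv * math.cos(phi))
                total += bv * dbv * (1 - Sv2(bv)) * Sc2(bc) * inner * dphi
    return total

r0, r1 = dsigma_dkz(0.0), dsigma_dkz(0.5)
print(r1 / r0, math.exp(-(0.5 * A)**2))    # the two should agree
```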

The transverse momentum distributions are obtained similarly by integrating
Eqs.~\ref{eq:4.21} and \ref{eq:4.22} over the longitudinal component of $\bm k_{\rm c}$,
i.e. $k_z$, which are rarely used in practice as they are susceptible to Coulomb
repulsion, and the corresponding numerical calculation is more time-consuming.
Their expressions are not so significant. We shall leave the derivations to the
readers for exercise.

\section{Coulomb Effect}
\label{sec:coul}

The Coulomb potential cannot be treated directly via the eikonal method: owing to
its long range, the integral for the eikonal phase shift diverges. We therefore
incorporate the Coulomb contribution by adding a specific phase
shift $\chi_{\rm C}(b)$ to the eikonal phase shift $\chi_{\rm N}(b)$, such that the total
phase shift $\chi(b)=\chi_{\rm C}+\chi_{\rm N}$ reproduces the Coulomb scattering amplitude
$f_{\rm C}(\theta)$ in the absence of the nuclear force $U_{\rm N}(r)$ via Eq.~\ref{eq:2.37}.

\subsection{Coulomb Amplitude}

The simple form of the Coulomb potential $V_{\rm C}=\dfrac{Q_{\rm P}Q_{\rm T}}{4\pi\varepsilon_0 r}$
allows an analytical solution for $f_{\rm C}(\theta)$, where $Q_{\rm P}$ and $Q_{\rm T}$ are
the charges of the projectile and the target, respectively. To get the Coulomb amplitude
$f_{\rm C}(\theta)$, one needs to solve the Schr\"odinger equation
\begin{equation}
  \label{eq:4.30}
  \left(-\frac{\hbar^2}{2\mu}\nabla^2+\frac{Q_{\rm P}Q_{\rm T}}{4\pi\varepsilon_0 r}\right)
  \phi_{\rm C}(\bm r)=E\phi_{\rm C}(\bm r)
\end{equation}
By introducing the dimensionless \emph{Sommerfeld parameter}
\begin{equation}
  \label{eq:4.31}
  \eta\equiv\frac{Q_{\rm P}Q_{\rm T}}{4\pi\varepsilon_0}\frac{1}{\hbar v}
\end{equation}
and the \emph{half distance of closest approach} in a head-on collision
\begin{equation}
  \label{eq:4.32}
  a\equiv\frac{Q_{\rm P}Q_{\rm T}}{4\pi\varepsilon_0}\frac{1}{2E}
\end{equation}
which satisfy
\begin{equation}
  \label{eq:4.33}
  \eta=ka
\end{equation}
the Schr\"odinger equation Eq.~\ref{eq:4.30} is simplified as
\begin{equation}
  \label{eq:4.34}
  \left[\nabla^2+k^2-\frac{2\eta k}{r}\right]\phi_{\rm C}(\bm r)=0
\end{equation}

The wavefunction $\phi_{\rm C}$ does not have the asymptotic form of Eq.~\ref{eq:2.24},
due to the long-range nature of the Coulomb potential; the scattered part of
$\phi_{\rm C}(\bm r)$ is expected to drop more slowly than $r^{-1}$. In fact, for
the Coulomb potential the approximation Eq.~\ref{eq:2.21} is no longer valid: the
assumption $r'\ll r$ relies on the fact that $U_{\rm N}(\bm r')$ is well
localized, which is not true for the Coulomb field. Following the prescription
in the book by Bertulani and Danielewicz\footnote{Introduction to Nuclear Reactions,
C.A. Bertulani, P. Danielewicz, Institute of Physics Publishing, Bristol, UK, 2004,
p.71}, $\phi_{\rm C}$ is assumed to have the form
\begin{equation}
  \label{eq:4.35}
  \phi_{\rm C}=C\exp(\mathrm{i}kz)g(r-z)
\end{equation}
With the notation $\xi\equiv r-z$, $g(\xi)$ is conveniently solved in parabolic coordinates
$(\xi,\zeta,\varphi)$, where $\varphi$ is the azimuthal angle, $\xi=r-z$ and $\zeta=r+z$.
The Laplacian takes a much simpler form than in spherical coordinates:
\begin{equation}
  \label{eq:4.36}
  \nabla^2=\frac{2}{r}\left[\frac{\partial}{\partial\xi}+\xi\frac{\partial^2}{\partial\xi^2}+
  \frac{\partial}{\partial\zeta}+\zeta\frac{\partial^2}{\partial\zeta^2}+
  \frac{r}{2\xi\zeta}\frac{\partial^2}{\partial\varphi^2}\right]
\end{equation}
Parabolic coordinates appear more often in the literature in the form
$(u=\sqrt{\xi},v=\sqrt{\zeta},\varphi)$ than in the one shown here.
It may take the reader some effort to verify Eq.~\ref{eq:4.36}
by inserting into the general expression for the Laplacian in orthogonal coordinate systems
\begin{equation}
  \label{eq:4.37}
  \nabla^2=\frac{1}{h_1h_2h_3}\left[
  \frac{\partial}{\partial u_1}\left(\frac{h_2h_3}{h_1}\frac{\partial}{\partial u_1}\right)+
  \frac{\partial}{\partial u_2}\left(\frac{h_3h_1}{h_2}\frac{\partial}{\partial u_2}\right)+
  \frac{\partial}{\partial u_3}\left(\frac{h_1h_2}{h_3}\frac{\partial}{\partial u_3}\right)
  \right]
\end{equation}
with the scale factors
\begin{equation}
  \label{eq:4.38}
  \left\{
  \begin{aligned}
      h_1 &=\frac{1}{2}\sqrt{\frac{\xi+\zeta}{\xi}} \\
      h_2 &=\frac{1}{2}\sqrt{\frac{\xi+\zeta}{\zeta}} \\
      h_3 &=\sqrt{\xi\zeta}
  \end{aligned}
  \right.
\end{equation}
Here, as an illustration, only the calculation of $h_1$ is outlined:
\begin{equation}
  \label{eq:4.39}
  h_1=\left|\frac{\partial\bm r}{\partial\xi}\right|
  =\sqrt{\left(\frac{\partial x}{\partial\xi}\right)^2+
  \left(\frac{\partial y}{\partial\xi}\right)^2+
  \left(\frac{\partial z}{\partial\xi}\right)^2}
  = \frac{1}{2}\sqrt{\frac{\xi+\zeta}{\xi}}
\end{equation}
by using the inverse transform
\begin{equation}
  \label{eq:4.40}
  \left\{
  \begin{aligned}
    z&=\frac{\zeta-\xi}{2} \\
    r&=\frac{\zeta+\xi}{2} \\
    x&=\sqrt{r^2-z^2}\cos\varphi \\
    y&=\sqrt{r^2-z^2}\sin\varphi \\
  \end{aligned}
  \right.
\end{equation}
Inserting Eq.~\ref{eq:4.35} into Eq.~\ref{eq:4.34} and using the Laplacian in
parabolic coordinates, Eq.~\ref{eq:4.36}, one obtains
\begin{equation}
  \label{eq:4.41}
  \xi g''(\xi)+(1-\mathrm{i}k\xi)g'(\xi)-\eta kg(\xi)=0
\end{equation}
where the primes stand for derivatives w.r.t. the argument. The original Schr\"odinger
equation has now been cast into a confluent hypergeometric equation. To see this, we
change variables
\begin{equation}
  \label{eq:4.42}
  s\equiv\mathrm{i}k\xi,\, \omega(s)\equiv g(\xi),\, \alpha\equiv-\mathrm{i}\eta
\end{equation}
and Eq.~\ref{eq:4.41} becomes
\begin{equation}
  \label{eq:4.43}
  s\omega''(s)+(1-s)\omega'(s)-\alpha\omega(s)=0
\end{equation}
which has the confluent hypergeometric function $F(\alpha;1;s)$ as its solution.
Now the wavefunction takes the analytical form
\begin{equation}
  \label{eq:4.44}
  \phi_{\rm C}(\bm r)=C\exp(\mathrm{i}kz)F(\alpha;1;s)
\end{equation}

Of course we are not yet satisfied. Usually the scattering wavefunction is expressed
as the sum of an incident wave and an outgoing scattered wave, with the latter
giving the scattering amplitude, and we only care about the incident flux at
$z=-\infty$ and the outgoing flux at $r=\infty$. This allows some simplifications.
Fortunately, the confluent hypergeometric function $F(\alpha;b;s)$ does have an
asymptotic expansion at large $|s|$. One can find the formula as Eq.~13.5.1 on p.~508 of
\emph{Handbook of Mathematical Functions, M. Abramowitz and I.A. Stegun, Dover
Publications, 1970}. To save the reader the effort of looking it up, it is copied here
\begin{equation}
  \label{eq:4.45}
  \begin{aligned}
    \frac{F(\alpha;b;z)}{\Gamma(b)}=
    &\frac{\exp(\pm\mathrm{i}\pi\alpha)z^{-\alpha}}{\Gamma(b-\alpha)}
    \left\{\sum_{n=0}^{R-1}{\frac{(\alpha)_n(1+\alpha-b)_n}{n!}(-z)^{-n}+O\left(|z|^{-R}\right)}\right\} \\
    &+\frac{\exp(z) z^{\alpha-b}}{\Gamma(\alpha)}
    \left\{\sum_{n=0}^{S-1}{\frac{(b-\alpha)_n(1-\alpha)_n}{n!}z^{-n}+O\left(|z|^{-S}\right)}\right\} \\
  \end{aligned}
\end{equation}
The upper sign (+) is taken for $-\dfrac{1}{2}\pi<\operatorname{arg}z<\dfrac{3}{2}\pi$,
the lower sign for $-\dfrac{3}{2}\pi<\operatorname{arg}z<-\dfrac{1}{2}\pi$. We use $+$ sign
for $\operatorname{arg}s=\dfrac{\pi}{2}$ as $s$ is a pure imaginary number.
$(\alpha)_n\equiv\alpha(\alpha+1)\cdots(\alpha+n-1)$, and $(\alpha)_0\equiv1$.
Substituting $\alpha=-\mathrm{i}\eta$, $b=1$ and $z=s=\mathrm{i}k(r-z)$ in Eq.~\ref{eq:4.45},
noting that $\mathrm{i}^{\mathrm{i}}=e^{-\frac{\pi}{2}}$, one gets
\begin{equation}
  \label{eq:4.46}
  \begin{aligned}
    F(-\mathrm{i}\eta;1;\mathrm{i}k(r-z))&=\exp(\pi\eta/2)
    \frac{\exp\left\{\mathrm{i}\eta\ln[k(r-z)]\right\}}{\Gamma(1+\mathrm{i}\eta)}
    \sum_{n=0}^{\infty}{\frac{[(-\mathrm{i}\eta)_n]^2}{n!}[-\mathrm{i}k(r-z)]^{-n}} \\
    &+\exp(\pi\eta/2)\frac{\exp[\mathrm{i}k(r-z)]}{\Gamma(-\mathrm{i}\eta)}
    \frac{\exp\left\{-\mathrm{i}\eta\ln[k(r-z)]\right\}}{\mathrm{i}k(r-z)} \\ &\times
    \sum_{n=0}^{\infty}{\frac{[(1+\mathrm{i}\eta)_n]^2}{n!}[\mathrm{i}k(r-z)]^{-n}} \\
  \end{aligned}
\end{equation}
Note that this is an asymptotic form of $F(-\mathrm{i}\eta;1;\mathrm{i}k(r-z))$ as
$|k(r-z)|\to\infty$, which applies for detectors placed far from the target.
Let's further simplify Eq.~\ref{eq:4.46} by \emph{ignoring terms containing $(r-z)^{-1}$
with orders no smaller than 1 in the series} (keeping only the lowest-order term,
namely 1), to get
\begin{equation}
  \label{eq:4.47}
  \begin{aligned}
    F(-\mathrm{i}\eta&;1;\mathrm{i}k(r-z))\approx
    \frac{\exp(\pi\eta/2)}{\Gamma(1+\mathrm{i}\eta)}\exp(-\mathrm{i}kz) \\
    &\times\left\{\exp(\mathrm{i}kz)\exp\left\{\mathrm{i}\eta\ln[k(r-z)]\right\}+
    \frac{\exp(\mathrm{i}kr)}{r}\frac{\Gamma(1+\mathrm{i}\eta)}{\Gamma(-\mathrm{i}\eta)}
    \frac{\exp\left\{-\mathrm{i}\eta\ln[k(r-z)]\right\}}{\mathrm{i}k(1-\cos\theta)}
    \right\}
  \end{aligned}
\end{equation}

It is worth noting that the approximation here is not purely mathematical.
It can clearly be seen from Eq.~\ref{eq:4.47} that the second term in the curly
brackets represents an outgoing spherical (scattered) wave, while the first is
roughly a plane wave. Keeping the $(r-z)^{-1}$ term in the first series would make
the retained orders of $(r-z)$ in Eq.~\ref{eq:4.47} consistent, yet it changes
neither the nature of the two waves nor the expression for the Coulomb scattering
amplitude, as will be shown below.

With Eq.~\ref{eq:4.47}, and noting that $r-z=2r\sin^2(\theta/2)$, the wavefunction
Eq.~\ref{eq:4.44} is expressed as the sum of an incident wave and an outgoing spherical wave
\begin{equation}
  \label{eq:4.48}
  \phi_{\rm C}(r,z)=\exp(\mathrm{i}kz)\exp\left\{\mathrm{i}\eta\ln[k(r-z)]\right\}+
  f_{\rm C}(\theta)\frac{\exp(\mathrm{i}kr)}{r}\exp[-\mathrm{i}\eta\ln(2kr)]
\end{equation}
where we have chosen the normalization constant $C=\Gamma(1+\mathrm{i}\eta)\exp(-\pi\eta/2)$,
and defined the Coulomb scattering amplitude as
\begin{equation}
  \label{eq:4.49}
  \begin{aligned}
    f_{\rm C}(\theta)&=\frac{\Gamma(1+\mathrm{i}\eta)}{\Gamma(-\mathrm{i}\eta)\mathrm{i}k(1-\cos\theta)}
    \exp\{-\mathrm{i}\eta\ln[\sin^2(\theta/2)]\} \\
    &=-\frac{\eta}{2k\sin^2(\theta/2)}\frac{\Gamma(1+\mathrm{i}\eta)}{-\mathrm{i}\eta\Gamma(-\mathrm{i}\eta)}
    \exp\{-\mathrm{i}\eta\ln[\sin^2(\theta/2)]\} \\
    &=-\frac{\eta}{2k\sin^2(\theta/2)}\frac{\Gamma(1+\mathrm{i}\eta)}{\Gamma(1-\mathrm{i}\eta)}
    \exp\{-\mathrm{i}\eta\ln[\sin^2(\theta/2)]\} \\
    &\equiv-\frac{\eta}{2k\sin^2(\theta/2)}\exp(2\mathrm{i}\sigma_0)
    \exp\{-\mathrm{i}\eta\ln[\sin^2(\theta/2)]\}
  \end{aligned}
\end{equation}
where
\begin{equation}
  \label{eq:4.50}
  \sigma_0=\operatorname{arg}\Gamma(1+\mathrm{i}\eta)
\end{equation}

Before we identify $f_{\rm C}(\theta)$ as the scattering amplitude for
the calculation of the elastic scattering cross section in a Coulomb field,
it is worthwhile to note that the waves carry logarithmic distortions compared
with those under a well-localized potential, brought about by the additional
factors $\exp\{\mathrm{i}\eta\ln[k(r-z)]\}$ and $\exp[-\mathrm{i}\eta\ln(2kr)]$.
It can easily be verified (using Eqs.~\ref{eq:2.28} and \ref{eq:2.30}) that the
distortions' influence boils down to additional terms in the incident and
scattered currents, which are proportional to $r^{-1}$ (i.e. $k\to k-\dfrac{\eta}{r}$)
and vanish asymptotically. So $f_{\rm C}(\theta)$ still gives the correct elastic
cross section for Coulomb scattering
\begin{equation}
  \label{eq:4.51}
  \dfrac{\mathrm{d}\sigma_{\rm C}}{\mathrm{d}\Omega}=|f_{\rm C}(\theta)|^2
  =\frac{a^2}{4\sin^4(\theta/2)}
\end{equation}

It is interesting that this quantum mechanical formula is identical to
the differential cross section of classical Rutherford scattering.
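The definitions above are easy to exercise numerically. The sketch below uses
illustrative numbers (a \SI{5}{\MeV} proton on a fixed $Z=79$ target,
non-relativistic kinematics) and verifies $\eta=ka$, Eq.~\ref{eq:4.33}, as well as
Eq.~\ref{eq:4.51} at $\theta=90^\circ$:

```python
import math

HBARC = 197.327     # MeV fm
E2 = 1.43996        # e^2/(4 pi eps0) in MeV fm
AMU = 931.494

# Illustrative system: a 5 MeV proton on a fixed (infinitely heavy) Z = 79 target.
Z1, Z2, m, E = 1, 79, 1.0073 * AMU, 5.0

k = math.sqrt(2 * m * E) / HBARC        # wavenumber (fm^-1)
v = math.sqrt(2 * E / m)                # velocity in units of c
eta = Z1 * Z2 * E2 / (HBARC * v)        # Sommerfeld parameter, Eq. (4.31)
a = Z1 * Z2 * E2 / (2 * E)              # half distance of closest approach, Eq. (4.32)
print(eta, k * a)                       # Eq. (4.33): the two agree

theta = math.pi / 2
dsdo = a**2 / (4 * math.sin(theta / 2)**4)   # Eq. (4.51), in fm^2/sr
print(dsdo)                                  # equals a^2 at 90 degrees
```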

\subsection{Coulomb Eikonal Phase}

An appropriate Coulomb eikonal phase $\chi_{\rm C}(b)$ should reproduce
Eq.~\ref{eq:4.49} for pure Coulomb scattering. In the literature\footnote{Introduction
to Nuclear Reactions, C.A. Bertulani, P. Danielewicz, Institute of Physics
Publishing, Bristol, UK, 2004, p.30} $\chi_{\rm C}(b)$ takes the form

\begin{equation}
  \label{eq:4.52}
  \chi_{\rm C}(b)=2\eta\ln(kb)
\end{equation}
We now show that this formula does the job.

Inserting Eq.~\ref{eq:4.52} into Eq.~\ref{eq:2.40} and using the substitution
$x\equiv qb$, we have

\begin{equation}
  \label{eq:4.53}
  \begin{aligned}
    f(\theta)&={\mathrm{i}k}
      \int_0^\infty{b\,\mathrm{d}b\,J_0(qb)\{1-\exp[\mathrm{i}\chi_{\rm C}(b)]\}} \\
    &=\frac{\mathrm{i}k}{q^2}\int_0^\infty{J_0(x)
      \left[1-\left(\frac{kx}{q}\right)^{\mathrm{i}2\eta}\right]x\mathrm{d}x} \\
    &=\frac{\mathrm{i}k}{q^2}\left[
      \int_0^\infty{J_0(x)x\mathrm{d}x}-
      \left(\frac{k}{q}\right)^{\mathrm{i}2\eta}
      \int_0^\infty{J_0(x)x^{\mathrm{i}2\eta+1}\mathrm{d}x}\right] \\
    &=\frac{\mathrm{i}k}{q^2}\left[
      2\mathrm{i}\eta\left(\frac{k}{q}\right)^{\mathrm{i}2\eta}
      \int_0^\infty{J_1(x)x^{\mathrm{i}2\eta}\mathrm{d}x}+
      \left.xJ_1(x)\right|_0^\infty-
      \left(\frac{k}{q}\right)^{\mathrm{i}2\eta}
      \left.J_1(x)x^{\mathrm{i}2\eta+1}\right|_0^\infty\right] \\
    &=\frac{\mathrm{i}k}{q^2}\left\{2\mathrm{i}\eta
      \left(\frac{k}{q}\right)^{\mathrm{i}2\eta}\cdot 2^{\mathrm{i}2\eta}
      \frac{\Gamma(1+\mathrm{i}\eta)}{\Gamma(1-\mathrm{i}\eta)}+
      xJ_1(x)\left[1-\exp\left(\mathrm{i}2\eta\ln{\frac{kx}{q}}\right)\right]_0^\infty
      \right\}
  \end{aligned}
\end{equation}
We have used $\int{x^\nu J_{\nu-1}(x)\mathrm{d}x}=x^\nu J_{\nu}(x)$ and integration
by parts in the penultimate step. The last step is obtained by using formula
6.561.14 from the book by Gradshteyn and Ryzhik\footnote{Table of Integrals, Series,
and Products, 7th edition, 2007, p.~676.}, copied below for reference:
\begin{equation}
  \label{eq:4.54}
  \int_0^\infty{x^\mu J_\nu(ax)\mathrm{d}x=2^\mu a^{-\mu-1}
  \frac{\Gamma(\frac{1}{2}+\frac{1}{2}\nu+\frac{1}{2}\mu)}
  {\Gamma(\frac{1}{2}+\frac{1}{2}\nu-\frac{1}{2}\mu)}},\,\,
  -\operatorname{Re}{\nu}-1<\operatorname{Re}{\mu}<\frac{1}{2},\ a>0
\end{equation}

The second term in Eq.~\ref{eq:4.53} behaves as $\lim\limits_{x\to\infty}{\sqrt{\frac{2x}{\pi}}
\cos(x-\frac{3}{4}\pi)\left(e^{\mathrm{i}2\eta\ln{\frac{kx}{q}}}-1\right)}$. Well,
the limit does not exist; it oscillates violently around 0 as $x\to\infty$.
It is customarily neglected in the literature\footnote{Introduction to Nuclear Reactions,
C.A. Bertulani, P. Danielewicz, Institute of Physics Publishing, Bristol, UK, 2004, p.424}.
This may be justified by noting that the contributions to this term from the various
intervals $(0,b)$ add up to insignificance in real experimental detection. Again, the
validity of this treatment is subject to the judgement of experiment.

We proceed to show that Eq.~\ref{eq:4.53} indeed gives Eq.~\ref{eq:4.49}. Note that
$q=|(\bm k-\bm k')_\perp|=k\sin\theta$ is the transverse component of the momentum
transfer; in eikonal theory $\theta\sim0$ and $|(\bm k-\bm k')_\parallel|$ is neglected,
so $q\sim|\bm k-\bm k'|=2k\sin\dfrac{\theta}{2}$. With this, Eq.~\ref{eq:4.53} becomes
\begin{equation}
  \label{eq:4.55}
  \begin{aligned}
    f_{\rm C}(\theta)&=-\frac{2k\eta}{q^2}\left(\frac{2k}{q}\right)^{\mathrm{i}2\eta}
      \frac{\Gamma(1+\mathrm{i}\eta)}{\Gamma(1-\mathrm{i}\eta)} \\
      &=-\frac{\eta}{2k\sin^2(\theta/2)}\exp(2\mathrm{i}\sigma_0)
      \exp\{-\mathrm{i}\eta\ln[\sin^2(\theta/2)]\}
  \end{aligned}
\end{equation}
which is exactly Eq.~\ref{eq:4.49}.
The phase $\sigma_0$ is evaluated by
\begin{equation}
  \sigma_0=-\eta\gamma+\sum_{k=0}^{\infty}{\left(\frac{\eta}{k+1}-
  \arctan{\frac{\eta}{k+1}}\right)}
\end{equation}
where $\gamma=0.577216\ldots$ is the Euler--Mascheroni constant.
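The series converges like $1/N^2$ in the number of retained terms, so a plain
summation suffices; a minimal sketch:

```python
import math

def sigma0(eta, nterms=200000):
    """Coulomb phase sigma_0 = arg Gamma(1 + i*eta) via the series above."""
    euler = 0.5772156649015329           # Euler-Mascheroni constant
    s = -eta * euler
    for k in range(nterms):
        x = eta / (k + 1)
        s += x - math.atan(x)            # each term is O(1/k^3)
    return s

print(sigma0(1.0))   # arg Gamma(1+i), ~ -0.3016
```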

The total scattering amplitude is usually rewritten to facilitate numerical evaluation
as
\begin{equation}
  \label{eq:4.56}
  \begin{aligned}
    f(\theta)&=\mathrm{i}k\int_0^\infty{b\mathrm{d}bJ_0(qb)
      \{1-\exp[\mathrm{i}\chi_{\rm N}(b)+\mathrm{i}\chi_{\rm C}(b)]\}} \\
      &=\mathrm{i}k\int_0^\infty{b\mathrm{d}bJ_0(qb)
      \{1-\exp[\mathrm{i}\chi_{\rm N}(b)+\mathrm{i}\chi_{\rm C}(b)]
      -1+\exp[\mathrm{i}\chi_{\rm C}(b)]\}}+f_{\rm C}(\theta) \\
      &=\mathrm{i}k\int_0^\infty{b\mathrm{d}bJ_0(qb)\exp[\mathrm{i}\chi_{\rm C}(b)]
      \{1-\exp[\mathrm{i}\chi_{\rm N}(b)]\}}+f_{\rm C}(\theta)
  \end{aligned}
\end{equation}
so that the integral over $b$ does not have to run very far: N-N potentials
are well localized, so $1-\exp[\mathrm{i}\chi_{\rm N}(b)]$ drops rapidly once $b$
exceeds $R_{\rm p}+R_{\rm t}$.

\chapter{Numerical Implementation}

To compare theory with experiment, we have to extract numbers from the formulas.
The observables in knockout reactions are the total cross sections and the differential
cross sections with respect to the momentum of the heavy residue, as expressed by the
formulas listed in Sec.~\ref{sec:momdis}. The foremost concerns when evaluating these
integrals are: 1. how to work out the integrals efficiently, and 2. how to break the
``vague'' integrands into numerically tractable parts. There are a myriad of methods
for numerical integration and copious documentation for them. The evaluation of the
integrand is another matter. Dedicated codes should be prepared for the implementation
of some special functions, e.g. the associated Legendre polynomials and Bessel functions.
Provided all of the above has been tackled, there is still one tricky point. The
radial function of the valence nucleon $R_l(r)$ is not given for free: it depends
on the quantum numbers and the separation energy of the nucleon, so a second-order
ordinary differential equation has to be solved for almost every combination of
target and projectile.

The tasks to be finished here may seem complicated, yet they are readily achievable,
since they are common problems in computational physics with standard practices,
detailed documentation and textbooks. For our problems here, one would find
\emph{Numerical Recipes in C: The Art of Scientific Computing, Cambridge U.
Press, 2002, 2nd Edition} (denoted as \emph{Numerical Recipes in C} below) satisfying.
The book is a one-stop toolkit for scientific computing, and it even provides
portable C code ready to be copied into your own program. It is also the main
reference of this chapter.

For a start, we describe a Gaussian expansion method\footnote{C.A. Bertulani and
P.G. Hansen, Phys. Rev. C \textbf{70}, 034609 (2004).} which fits the
$S$-matrix to a sum of Gaussians to facilitate the numerical evaluation of cross
sections. This chapter then introduces the relevant code MOMD: its structure,
modules and usage.


\section{Gaussian Expansion Fit for $S$-Matrix}
It should be noted that the Gaussian expansion method is just a technique for
numerical evaluation, which saves the integration over $\phi$; the integrals can
also be evaluated directly. The wavefunction $\psi_{nljm}(\bm r)$ has the explicit form
\begin{equation}
  \label{eq:5.1}
  \begin{aligned}
    \psi_{nljm}(\bm r)&=R_{nlj}(r)Y_{lm}(\theta,\phi) \\
      &=R_{nlj}(r)C_{lm}P_{lm}(\cos\theta)\exp(\mathrm{i}m\phi)
  \end{aligned}
\end{equation}
with
\begin{equation}
  \label{eq:5.2}
  C_{lm}=(-1)^m\sqrt{\frac{2l+1}{4\pi}}\sqrt{\frac{(l-m)!}{(l+m)!}}
\end{equation}
Eq.~\ref{eq:4.26} is recast into
\begin{equation}
  \label{eq:5.3}
  \begin{aligned}
    \frac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}k_z}&=\frac{1}{2l+1}
    \int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]} \\
    &\qquad\times
    \sum_m{\int_0^\infty{\mathrm{d}\rho\,\rho\int_0^{2\pi}{d\phi|S_{\rm c}(b_{\rm c})|^2
    \left|
      \int_{-\infty}^{\infty}{\mathrm{d}z\exp(-\mathrm{i}k_z z)\psi_{nljm}(\bm r)}
    \right|^2}}} \\
    &=\frac{1}{2l+1}
    \int_0^\infty{\mathrm{d}b_{\rm v}b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]
    \sum_m{C_{lm}^2\int_0^\infty{\mathrm{d}\rho\,\rho
    \left|\mathcal{Z}_{lm}(k_z,\rho)\right|^2 \mathcal{S}(b_{\rm v},\rho)}}}
  \end{aligned}
\end{equation}
where
\begin{equation}
  \label{eq:5.4}
  \begin{aligned}
    \mathcal{Z}_{lm}(k_z,\rho)&=\int_{-\infty}^{\infty}{\mathrm{d}z
      \exp(-\mathrm{i}k_z z)R_{nlj}(r)P_{lm}(\cos\theta)} \\
    \mathcal{S}(b_{\rm v},\rho)&=\int_0^{2\pi}{\mathrm{d}\phi
      \left|S_{\rm c}\left(\sqrt{\rho^2+b_{\rm v}^2-2\rho b_{\rm v}\cos\phi}\right)\right|^2}
  \end{aligned}
\end{equation}

It is worth noting that the direct prescription above suffices for the numerical
evaluation. Usually the intermediate integrals $\mathcal{Z}_{lm}(k_z,\rho)$
and $\mathcal{S}(b_{\rm v},\rho)$ are stored in arrays for further use. It is
convenient and memory-saving to use a lambda expression for $\mathcal{S}(b_{\rm v}, \rho)$,
which requires no array allocation at all and is easy to code. We do suggest
pre-calculating $\mathcal{Z}_{lm}(k_z,\rho)$ and storing it in an array, for this
$z$-integral is free of any $b_{\rm v}$ dependence and can thus be reused for
different $b_{\rm v}$. Besides, the integrand is symmetric about $z=0$, and the
integral is a real number for even $l+m$ or a pure imaginary number otherwise.
Given that only its norm is used, one does not need to store it as a complex
number. Nor is a 2-D array necessary, since the calculation at one $k_z$ is
literally independent of any other $k_z$. It suffices to treat the calculation of
$\dfrac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}k_z}$ as a black box, with $k_z$ just
an ordinary input.
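The on-demand evaluation of the $\phi$-integral can be sketched in C++; the absorption profile \texttt{Sc2} below is a hypothetical Gaussian stand-in for $|S_{\rm c}|^2$, not a real $S$-matrix:

```cpp
#include <cmath>

// Hypothetical stand-in for |S_c(b)|^2: full absorption at b = 0, transparent at large b.
double Sc2(double b) { return 1.0 - std::exp(-b * b / 25.0); }

// phi-integral S(b_v, rho) of Eq. (5.4), evaluated on demand by a lambda; the
// integrand is smooth and 2*pi-periodic, so the equally weighted sum converges fast.
auto S_phi = [](double bv, double rho) {
  const int n = 256;
  const double h = 2.0 * M_PI / n;
  double sum = 0.0;
  for (int i = 0; i < n; ++i) {
    double phi = i * h;
    double bc = std::sqrt(rho * rho + bv * bv - 2.0 * rho * bv * std::cos(phi));
    sum += Sc2(bc);
  }
  return sum * h;
};
```

Two quick checks: for $\rho=0$ we have $b_{\rm c}=b_{\rm v}$ at every $\phi$, so the integral must equal $2\pi|S_{\rm c}(b_{\rm v})|^2$; and since $b_{\rm c}$ is symmetric under $b_{\rm v}\leftrightarrow\rho$, so is $\mathcal{S}$.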

It is evident that the bottleneck here concerning the calculation speed is
$\mathcal{S}(b_{\rm v}, \rho)$: the $\phi$-integral has to be repeated for each pair
of $(b_{\rm v}, \rho)$. It is a tempting idea that $\mathcal{S}(b_{\rm v}, \rho)$
be approximated by some simpler form, e.g. by approximating $S_{\rm c}$ with a
Gaussian expansion, so that the $\phi$-integral can be worked out analytically.
Now we take a look at the Gaussian expansion method. To avoid misunderstanding,
$R_{nlj}(r)$ is denoted as $R_l(r)$. The $S$-matrices are fitted to the expansion
\begin{equation}
  \label{eq:5.5}
  S_{\rm c}(b_{\rm c})=\sum_{j=0}^N{\alpha_j\exp\left[-\frac{b_{\rm c}^2}{\beta_j^2}\right]}
\end{equation}
where $\beta_j=\dfrac{R_L}{j}$, with typical values $N=20$ and $R_L=10$--$20$
\si{\femto\meter}, depending on the size of the system. Since
$S_{\rm c}(b_{\rm c})\sim1$ for $b_{\rm c}\gg R$, we take $\alpha_0=1$ with $\beta_0=\infty$.
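Since the $\alpha_j$ enter Eq.~\ref{eq:5.5} linearly once the $\beta_j$ are fixed, the fit is a linear least-squares problem. A minimal C++ sketch follows; the target profile in the demo helper \texttt{gaussian\_fit\_residual} is made up to lie exactly in the span of the basis, so its residual should vanish to roundoff (a real $|S_{\rm c}|$ leaves a finite residual):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Solve the normal equations (A^T A) x = A^T y by Gaussian elimination with
// partial pivoting; adequate for the small systems arising here.
std::vector<double> lstsq(const std::vector<std::vector<double>>& A,
                          const std::vector<double>& y) {
  int m = static_cast<int>(A.size()), n = static_cast<int>(A[0].size());
  std::vector<std::vector<double>> M(n, std::vector<double>(n + 1, 0.0));
  for (int i = 0; i < n; ++i) {                  // build augmented normal equations
    for (int j = 0; j < n; ++j)
      for (int r = 0; r < m; ++r) M[i][j] += A[r][i] * A[r][j];
    for (int r = 0; r < m; ++r) M[i][n] += A[r][i] * y[r];
  }
  for (int c = 0; c < n; ++c) {                  // elimination with pivoting
    int p = c;
    for (int r = c + 1; r < n; ++r)
      if (std::fabs(M[r][c]) > std::fabs(M[p][c])) p = r;
    std::swap(M[c], M[p]);
    for (int r = c + 1; r < n; ++r) {
      double f = M[r][c] / M[c][c];
      for (int j = c; j <= n; ++j) M[r][j] -= f * M[c][j];
    }
  }
  std::vector<double> x(n);
  for (int c = n - 1; c >= 0; --c) {             // back substitution
    x[c] = M[c][n];
    for (int j = c + 1; j < n; ++j) x[c] -= M[c][j] * x[j];
    x[c] /= M[c][c];
  }
  return x;
}

// Demo: fit a made-up profile (an exact combination of the j = 1 and j = 4 basis
// Gaussians with beta_j = RL/j) and return the worst pointwise fit residual.
double gaussian_fit_residual() {
  const double RL = 8.0;
  std::vector<double> beta = {RL / 1.0, RL / 2.0, RL / 3.0, RL / 4.0};
  std::vector<std::vector<double>> A;
  std::vector<double> y;
  for (int i = 0; i <= 40; ++i) {
    double b = 0.25 * i;
    std::vector<double> row;
    for (double bj : beta) row.push_back(std::exp(-b * b / (bj * bj)));
    A.push_back(row);
    y.push_back(0.3 * row[0] + 0.7 * row[3]);    // made-up target profile
  }
  std::vector<double> alpha = lstsq(A, y);
  double worst = 0.0;
  for (std::size_t r = 0; r < A.size(); ++r) {
    double fit = 0.0;
    for (int j = 0; j < 4; ++j) fit += alpha[j] * A[r][j];
    worst = std::max(worst, std::fabs(fit - y[r]));
  }
  return worst;
}
```

For many Gaussians with nearby widths the normal equations become ill-conditioned, which is one practical limit on the fit quality.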

The Gaussian method gives an analytical result for $\mathcal{S}(b_{\rm v},\rho)$.
We start by inserting Eq.~\ref{eq:5.5} into Eq.~\ref{eq:4.21}, so that expressions
for the transverse momentum distribution are also obtained as byproducts at a
relatively low cost. From the generating function of the Bessel functions one
deduces the Jacobi--Anger identity
\begin{equation}
  \label{eq:5.6}
  \exp(\mathrm{i}z \cos\phi)=\sum_{p=-\infty}^{\infty}{\mathrm{i}^p J_p(z)e^{\mathrm{i}p\phi}}
\end{equation}
and insert it into Eq.~\ref{eq:4.21} to have
\begin{equation}
  \label{eq:5.7}
  \frac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}\bm k_{\rm c}}=\frac{1}{(2\pi)^2}\frac{1}{2l+1}
  \int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]
  \sum_{m,j}{\left|\mathcal{F}_{lm,j}(k_\perp,k_z,b_{\rm v})\right|^2}}
\end{equation}
in which
\begin{equation}
  \label{eq:5.8}
  \begin{aligned}
  \mathcal{F}_{lm,j}(k_\perp,k_z,b_{\rm v})&=C_{lm}\alpha_j\int{\mathrm{d}\rho\,\rho
    \mathrm{d}z\mathrm{d}\phi R_l(r)P_{lm}(\cos\theta)\exp[(-\rho^2-b_{\rm v}^2)/\beta_j^2]} \\
    &\quad\times\exp(2\rho b_{\rm v}\cos\phi/\beta_j^2)e^{\mathrm{i}m\phi}
    \exp[-\mathrm{i}k_\perp\rho\cos(\phi_k-\phi)]\exp(-\mathrm{i}k_z z) \\
    &=C_{lm}\alpha_j\exp(-b_{\rm v}^2/\beta_j^2)\int{\mathrm{d}\rho\,\rho\exp(-\rho^2/\beta_j^2)
      \mathrm{d}z\exp(-\mathrm{i}k_z z)R_l(r)P_{lm}(\cos\theta)} \\
      &\quad\times\int_0^{2\pi}{\mathrm{d}\phi e^{\mathrm{i}m\phi}
      \exp[-\mathrm{i}k_\perp\rho\cos(\phi_k-\phi)]
      \exp\left[\frac{2\rho b_{\rm v}\cos\phi}{\beta_j^2}\right]}
  \end{aligned}
\end{equation}
Using the expansion Eq.~\ref{eq:5.6}, we have
\begin{equation}
  \label{eq:5.9}
  \begin{aligned}
    &\int_0^{2\pi}{d\phi e^{\mathrm{i}m\phi}
      \exp[-\mathrm{i}k_\perp\rho\cos(\phi_k-\phi)]
      \exp\left[\frac{2\rho b_{\rm v}\cos\phi}{\beta_j^2}\right]} \\
    &=\sum_{p,p'}{\mathrm{i}^{p+p'}J_p(-k_\perp\rho)
      J_{p'}\left[-\mathrm{i}\frac{2\rho b_{\rm v}}{\beta_j^2}\right]}
      \int_0^{2\pi}{d\phi e^{\mathrm{i}m\phi}
      \exp[\mathrm{i}p(\phi_k-\phi)]\exp(\mathrm{i}p'\phi)} \\
    &=\sum_{p,p'}{\mathrm{i}^{p+p'}(-1)^{p}J_p(k_\perp\rho)
      (-1)^{p'}\mathrm{i}^{p'}I_{p'}\left[\frac{2\rho b_{\rm v}}{\beta_j^2}\right]}
      \cdot 2\pi\delta_{p-m,p'}e^{\mathrm{i}p\phi_k} \\
    &=2\pi(-\mathrm{i})^{m}\sum_{p}{\mathrm{i}^{m-p}J_p(k_\perp\rho)
      I_{m-p}\left[\frac{2\rho b_{\rm v}}{\beta_j^2}\right]}e^{\mathrm{i}p\phi_k} \\
  \end{aligned}
\end{equation}
where we have made use of the properties of Bessel functions
\begin{equation}
  \label{eq:5.10}
  \begin{aligned}
    J_\nu(-x)&=(-1)^\nu J_\nu(x) \\
    J_\nu(\mathrm{i}x)&=\mathrm{i}^\nu I_\nu(x) \\
    I_{-\nu}(x)&=I_\nu(x)
  \end{aligned}
\end{equation}
and the integral

\begin{equation}
  \label{eq:5.11}
  \int_0^{2\pi}{d\phi e^{\mathrm{i}m\phi}e^{-\mathrm{i}m'\phi} = 2\pi\delta_{m,m'}}
\end{equation}
So
\begin{equation}
  \label{eq:5.12}
  \begin{aligned}
    \mathcal{F}_{lm,j}(k_\perp,k_z,b_{\rm v})
      &=2\pi(-\mathrm{i})^{m} C_{lm}\alpha_j\exp(-b_{\rm v}^2/\beta_j^2) \\
      &\times\sum_p{\mathrm{i}^{m-p}e^{\mathrm{i}p\phi_k}}
      \int{\mathrm{d}\rho\,\rho J_p(k_\perp\rho)\exp(-\rho^2/\beta_j^2)
      I_{m-p}\left[\frac{2\rho b_{\rm v}}{\beta_j^2}\right]} \\
      &\times\int_{-\infty}^\infty{\mathrm{d}z\exp(-\mathrm{i}k_z z)R_l(r)P_{lm}(\cos\theta)}
  \end{aligned}
\end{equation}
Inserting it into Eq.~\ref{eq:5.7} and integrating over $\phi_k$ gives
\begin{equation}
  \label{eq:5.13}
  \frac{\mathrm{d}\sigma_{\rm str}}{k_\perp\mathrm{d}k_\perp\mathrm{d}k_z}=\frac{2\pi}{2l+1}
  \int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]}
  \sum_{m,p}{C_{lm}^2\left|\mathcal{A}_{lmp}(k_\perp,k_z,b_{\rm v})\right|^2}
\end{equation}
where
\begin{equation}
  \label{eq:5.14}
  \begin{aligned}
    \mathcal{A}_{lmp}(k_\perp,k_z,b_{\rm v})&=\sum_j{\alpha_j\exp[-b_{\rm v}^2/\beta_j^2]
      \int_0^\infty{\mathrm{d}\rho\,\rho J_p(k_\perp\rho)\exp(-\rho^2/\beta_j^2)
      I_{m-p}\left[\frac{2\rho b_{\rm v}}{\beta_j^2}\right]}} \\
      &\times\int_{-\infty}^\infty{\mathrm{d}z\exp(-\mathrm{i}k_z z)R_l(r)P_{lm}(\cos\theta)}
  \end{aligned}
\end{equation}
In reaching Eq.~\ref{eq:5.14} we have again used Eq.~\ref{eq:5.11} and dropped
the irrelevant phase factor $\mathrm{i}^{m-p}$.

To get the \emph{longitudinal momentum distribution} Eq.~\ref{eq:5.13} is integrated
over $k_\perp$, and we obtain
\begin{equation}
  \label{eq:5.15}
  \frac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}k_z}=\frac{2\pi}{2l+1}
    \int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]}
    \sum_{m,p}{C_{lm}^2
    \int_0^\infty{\mathrm{d}\rho\,\rho\left|\mathcal{B}_{lmp}(k_z,b_{\rm v},\rho)\right|^2}}
\end{equation}
where
\begin{equation}
  \label{eq:5.16}
  \begin{aligned}
    \mathcal{B}_{lmp}(k_z,b_{\rm v},\rho)&=\sum_j{\alpha_j\exp[-b_{\rm v}^2/\beta_j^2]
    \exp[-\rho^2/\beta_j^2]I_{m-p}\left[\frac{2\rho b_{\rm v}}{\beta_j^2}\right]} \\
    &\times\int_{-\infty}^\infty{\mathrm{d}z\exp(-\mathrm{i}k_z z)R_l(r)P_{lm}(\cos\theta)}
  \end{aligned}
\end{equation}
in which the orthogonality of Bessel functions is used
\begin{equation}
  \label{eq:5.17}
  \int_0^\infty{\mathrm{d}k_\perp\,k_\perp J_p(k_\perp\rho)J_{p}(k_\perp\rho')}
  =\frac{1}{\rho}\delta(\rho-\rho')
\end{equation}

And by the way, one also easily gets the \emph{transverse momentum distribution}
by integrating Eq.~\ref{eq:5.13} over $k_z$
\begin{equation}
  \label{eq:5.18}
  \begin{aligned}
    \frac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}\bm k_\perp}&\equiv
    \frac{\mathrm{d}\sigma_{\rm str}}{\mathrm{d}^2 k_\perp}
    =\frac{1}{2\pi}\frac{\mathrm{d}\sigma_{\rm str}}{k_\perp\mathrm{d}k_\perp} \\
    &=\frac{2\pi}{2l+1}
    \int_0^\infty{\mathrm{d}b_{\rm v}\,b_{\rm v}\left[1-\left|S_{\rm v}(b_{\rm v})\right|^2\right]}
    \sum_{m,p}{C_{lm}^2\int_{-\infty}^\infty{\mathrm{d}z\left|\mathcal{D}_{lmp}(k_\perp,b_{\rm v},z)\right|^2}}
  \end{aligned}
\end{equation}
where
\begin{equation}
  \label{eq:5.19}
  \begin{aligned}
    \mathcal{D}_{lmp}(k_\perp,b_{\rm v},z)&=\sum_j{\alpha_j\exp[-b_{\rm v}^2/\beta_j^2]
      \int_0^\infty{\mathrm{d}\rho\,\rho J_p(k_\perp\rho)\exp(-\rho^2/\beta_j^2)
      I_{m-p}\left[\frac{2\rho b_{\rm v}}{\beta_j^2}\right]}} \\
      &\times\exp(-\mathrm{i}k_z z)R_l(r)P_{lm}(\cos\theta)
  \end{aligned}
\end{equation}

Of course it is not practical to run the sum over $p$ through all integers. First
of all, $I_{-p}(x)=I_p(x)$, so the sum only has to run over $p\geqslant0$. Secondly,
since $I_p(x)\approx\dfrac{1}{p!}\left(\dfrac{x}{2}\right)^p$ for $x\ll p$, $I_p(x)$
drops factorially with $p$. For $x\gg p$, $I_p(x)\approx\dfrac{1}{\sqrt{2\pi x}}\exp(x)$,
but it is severely damped by the Gaussians of $b_{\rm v}$ and $\rho$, not to mention
that $R_l(r)$ drops to almost zero well before $2\rho b_{\rm v}$ grows comparable
to $\beta_j^2$. Given that $m$ is usually small (no more than 3 in practice),
the Fortran code MOMDIS\footnote{Comput. Phys. Commun. \textbf{175} (2006) 372--380.}
limits $p$ to the interval $[-5,5]$, corresponding to a span of 3 orders of
magnitude for $\mathcal{B}_{lmp}$.

One may encounter overflow when evaluating $I_p(x)$ for $x>700$, because a 64-bit
double can represent numbers no larger than about $1.7977\times10^{308}$. Fortunately
we have the asymptotic form of $I_p(x)$ as $x\to\infty$
\begin{equation}
  \label{eq:bsi}
  I_p(x)\approx\frac{e^x}{\sqrt{2\pi x}}
  \sum_{k=0}^\infty\frac{(-1)^k (4p^2-1)(4p^2-9)\cdots\left[4p^2-(2k+1)^2\right]}{(8x)^k k!}
\end{equation}
where the $e^x$ factor can be set aside and combined first with the Gaussians
of $b_{\rm v}$ and $\rho$ in Eq.~\ref{eq:5.16} to avert overflow.
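The scaling can be coded directly from Eq.~\ref{eq:bsi}. Below is a sketch of the exponentially scaled function $e^{-x}I_p(x)$; the truncation at 10 terms is our own choice and is adequate only in the large-$x$ regime where the asymptotic series applies:

```cpp
#include <cmath>

// Exponentially scaled modified Bessel function: returns exp(-x) * I_p(x) from
// the asymptotic series, valid for large x. The e^x factor is left out, to be
// combined with the Gaussians exp(-b_v^2/beta_j^2 - rho^2/beta_j^2) of Eq. (5.16).
double besseli_scaled(int p, double x) {
  const double mu = 4.0 * p * p;
  double term = 1.0, sum = 1.0;
  for (int k = 1; k <= 10; ++k) {                // a_k/a_{k-1} = -(mu-(2k-1)^2)/(8 x k)
    term *= -(mu - (2.0 * k - 1.0) * (2.0 * k - 1.0)) / (8.0 * x * k);
    sum += term;
  }
  return sum / std::sqrt(2.0 * M_PI * x);
}
```

At $x=700$, where the unscaled $I_0(700)$ would overflow a double, the scaled value is a perfectly tame number of order $10^{-2}$.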

Well, we regret to say that, despite all the efforts, the Gaussian expansion
approach failed to produce results consistent with Eq.~\ref{eq:5.3}. The reason is
so far not clear. The modified Bessel function routines have been thoroughly
tested, and expanding the range of $p$ does not address the considerable
discrepancy either. Further investigation reveals that
$\mathcal{B}_{lmp}(k_z,b_{\rm v},\rho)$ as a function of $\rho$ is roughly
correct for $b_{\rm v}=0$, yet deteriorates rapidly as $b_{\rm v}$ grows larger.
We have noticed that there is a certain limit to the quality of the Gaussian fit,
irrespective of the number of Gaussians one may use; nor does varying $R_{\rm L}$
ameliorate the situation. This might explain the disagreement. Temporarily there
is nothing to be done. In any case, the method does not show considerable
improvement in calculation time, if there is any improvement at all. We have
decided to leave it alone for a while.

Practically, the formulae above constitute the prescription for a numerical
implementation of the momentum distribution of one-nucleon knockout reactions.
The one-nucleon knockout cross section contributed by diffractive dissociation is
estimated to be $1/3\sim1/4$ of the total cross section. And since diffractive
dissociation can also be regarded as a direct removal channel of the valence
nucleon, the momentum distribution of the residue should closely resemble that
from stripping. It is usually ignored in momentum distribution calculations.
Still, it does not pose much trouble to calculate it here, and the relevant
formula (Eq.~\ref{eq:4.29}) is made more explicit below:
\begin{equation}
  \label{eq:5.20}
  \frac{\mathrm{d}\sigma_{\rm diff}}{\mathrm{d}k_z}=\frac{1}{2l+1}
    \int_0^\infty{\mathrm{d}b\,b
    \sum_m{C_{lm}^2\int_0^\infty{\mathrm{d}\rho\,\rho
    \left|\mathcal{Z}_{lm}(k_z,\rho)\right|^2 \mathcal{S'}(b,\rho)}}}
\end{equation}
where $\bm b$ is the transverse relative position of the center of mass (c.m.) of
the projectile w.r.t. the target, and
\begin{equation}
  \label{eq:5.21}
  \begin{aligned}
    \mathcal{S'}(b,\rho)=&\int_0^{2\pi}{\mathrm{d}\phi
      \left|S_{\rm c}(b_{\rm c})S_{\rm v}(b_{\rm v})\right|^2} \\
      &-\mathcal{I}_{lm}(b)\int_0^{2\pi}{\mathrm{d}\phi
      S_{\rm c}(b_{\rm c})S_{\rm v}(b_{\rm v})}
  \end{aligned}
\end{equation}
with
\begin{equation}
  \label{eq:5.22}
  \left\{
  \begin{aligned}
    \mathcal{I}_{lm}(b)&=\int\mathrm{d}{\bm r}S^\ast_{\rm c}(b_{\rm c})S^\ast_{\rm v}(b_{\rm v})
    |\psi_{nljm}(\bm r)|^2 \\
    &=C_{lm}^2 \int_0^\infty{\mathrm{d}\rho\,\rho}
    \int_0^{2\pi}{\mathrm{d}\phi S_{\rm c}^\ast(b_{\rm c})S_{\rm v}^\ast(b_{\rm v})}
    \int_{-\infty}^\infty{\mathrm{d}z\left|R_l(r)P_{lm}(\cos\theta)\right|^2} \\
    b_{\rm c}&=\sqrt{b^2+\rho_{\rm c}^2-2b\rho_{\rm c}\cos\phi} \\
    b_{\rm v}&=\sqrt{b^2+\rho_{\rm v}^2+2b\rho_{\rm v}\cos\phi}
  \end{aligned}
  \right.
\end{equation}
where $\rho_{\rm c}=\dfrac{\rho}{A_{\rm P}}$ and
$\rho_{\rm v}=\dfrac{\rho(A_{\rm P}-1)}{A_{\rm P}}$. $b_{\rm c}$ and $b_{\rm v}$
are the impact parameters of the core and the valence nucleon, respectively, w.r.t.
the target. One may have noticed that the last integration variable has shifted
from $\bm b_{\rm v}$ to $\bm b$. The two choices are mathematically equivalent,
yet numerically they mean the difference between rapid convergence and divergent
catastrophe, which has been verified to be true. Since the choice of $b$ leads to
quick convergence, we have reason to believe that the integrand is much more
sensitive to $b$ than to $b_{\rm v}$; we do find that the integrand drops to 0
more rapidly along $b$ than along $b_{\rm v}$. The same applies to the numerical
evaluation of the total diffractive dissociation cross section, i.e.
Eq.~\ref{eq:4.5}, where the integrand is nothing but the variance of
$S_{\rm c}S_{\rm v}$. It decreases more effectively along $b$, since
$S_{\rm c}S_{\rm v}$ is steadier when $b_{\rm c}$ and $b_{\rm v}$ grow large at
approximately the same pace, as happens along $b$ but not along $b_{\rm v}$. The
dynamics under the hood is not clear and, from a pragmatic point of view, not
worth the effort required to find out. So we just let it be.

One more thing: Eq.~\ref{eq:5.21} involves the comparison of two integrals, one of
which carries the factor $\mathcal{I}_{lm}(b)$ containing $|\psi_{nljm}|^2$ and is
calculated assuming that the single-particle wavefunction is normalized, which is
usually poorly guaranteed. This is not fair to the other integral, so it is
necessary to renormalize $\psi_{nljm}$ to assure decent accuracy.

It is noteworthy that the computer code MOMDIS ignores the contribution of
diffractive dissociation to the total momentum distribution. We deem it worthwhile
to add it back. Some minor complications arise from this proposal: due to roundoff,
there is no guarantee that Eq.~\ref{eq:5.21} yields a real number. This is not to
say that a non-zero imaginary part implies something nonphysical; it is just a
psychological preference. In fact, even Eq.~\ref{eq:5.20} comes out as a complex
number with a considerable imaginary part, let alone Eq.~\ref{eq:5.21} or the
intermediate results in between. Nevertheless, there is no need to panic, since we
have not yet found proof that a residual imaginary part violates any mathematical
or physical law, not to mention the roundoff error. It is a relief that the
integration of Eq.~\ref{eq:5.20} over $k_z$ matches the total diffractive
dissociation cross section, with an imaginary part that is perfectly negligible
(due to roundoff error), which lends credence to our method. What is also
interesting is that $\dfrac{\mathrm{d}\sigma_{\rm diff}}{\mathrm{d}k_z}$ takes
significantly negative values for some $k_z$'s, which so far only occurs for
$m=0$. Of course it does not make sense to have cross sections below 0. It is
head-scratching to imagine a nucleus with $m=0$ impinging on a target, with the
core and the valence nucleon scattering only elastically with the target, that not
only prohibits the sampling of its valence nucleon at certain $k_z$'s but absorbs
the corresponding cross sections at other $k_z$'s. A negative
$\dfrac{\mathrm{d}\sigma_{\rm diff}}{\mathrm{d}k_z}$ contradicts the notion that
it is an observable. Practically, the depletion represented by the negative pit
has to be compensated by denting the positive distribution elsewhere in the $k_z$
domain, for which we have no clue how it is done. Besides, what would the
imaginary part of $\dfrac{\mathrm{d}\sigma_{\rm diff}}{\mathrm{d}k_z}$ even mean?
Alright, let us just leave it alone.


\section{Evaluation of the Eikonal Phase}
As has been learned from previous discussions, the eikonal phase is composed of
several parts. It turns out that upon numerical evaluation, each of them may
entail specific treatment to facilitate a computer implementation.

For a start, the Fourier transforms of the nucleonic densities (Eqs.~\ref{eq:3.12}
and \ref{eq:3.19}) are carried out only in the transverse components, and we can
also introduce a Gaussian density for the matter distribution of a single nucleon,
to be folded with the point-nucleon density of the nucleus.

As mentioned at the end of Sec.~\ref{sec:3.1}, different parameterizations exist
for the nucleon-nucleon scattering amplitude $f_{\rm NN}(q)$. The relevant details
are listed below, together with the necessary explanations.

\subsection{Fourier Transform of the Nuclear Density}

We are talking about a Fourier transform of the following form

\begin{equation}
  \label{eq:5.23}
  \rho_{\rm N}(\bm q)=\int{\mathrm{d}\bm r\exp(\mathrm{i}\bm q\cdot\bm s)
  \rho_{\rm N}(\bm r)}
\end{equation}
where $\bm s\equiv\bm r_\perp$. Nuclear densities calculated from nuclear
structure theories such as Hartree-Fock theory and the shell model are usually
obtained with the nucleons treated as point-like particles, which is inappropriate
from a quantum-mechanical point of view. The probability $\rho(\bm r)$ of finding
nuclear matter at point $\bm r$ receives contributions from the point-nucleon
density $\rho_{\rm N}(\bm r')$ at every point $\bm r'$, propagated through the
single-nucleon profile $\rho_p(\bm r-\bm r')$. Thus we have the real nuclear
density as the convolution

\begin{equation}
  \label{eq:5.24}
  \rho(\bm r)=\int{\mathrm{d}{\bm r'}\rho_p(\bm r-\bm r')\rho_{\rm N}(\bm r')}
\end{equation}
A (normalized) Gaussian nucleon density distribution takes the form
\begin{equation}
  \label{eq:5.25}
  \rho_p(r)=(\pi a^2)^{-3/2}\exp(-r^2/a^2)
\end{equation}

With the above equations, the Fourier transform of the final density is deduced
\begin{equation}
  \label{eq:5.26}
  \begin{aligned}
    \rho(\bm q)&=\int{\mathrm{d}\bm r\exp(\mathrm{i}\bm q\cdot\bm s)
    \rho(\bm r)} \\
    &=\int{\mathrm{d}\bm r\exp(\mathrm{i}\bm q\cdot\bm s)}
    \int{\mathrm{d}{\bm r'}\rho_p(\bm r-\bm r')\rho_{\rm N}(\bm r')} \\
    &=\int{\mathrm{d}\bm r'\exp(\mathrm{i}\bm q\cdot\bm s')\rho_{\rm N}(\bm r')}
    \int{\mathrm{d}(\bm r-\bm r')\exp[\mathrm{i}\bm q\cdot(\bm s-\bm s')]
    \rho_p(\bm r-\bm r')} \\
    &\equiv\rho_{\rm N}(\bm q)\rho_p(\bm q) \\
    &=\rho_{\rm N}(\bm q)\exp(-a^2q^2/4)
  \end{aligned}
\end{equation}
where we have in the last step used the integral formula
\begin{equation}
  \label{eq:5.27}
  \int_{-\infty}^{\infty}{e^{-ax^2+bx}\mathrm{d}x}=\sqrt{\frac{\pi}{a}}e^\frac{b^2}{4a}
\end{equation}

Usually $\rho_{\rm N}(\bm r)$ is spherically symmetric, which enables further
simplification for $\rho_{\rm N}(\bm q)$
\begin{equation}
  \label{eq:5.28}
  \rho_{\rm N}(\bm q)=\int{r^2\rho_{\rm N}(r)\mathrm{d}r}
  \int{\mathrm{d}\Omega\exp(\mathrm{i}\bm q\cdot\bm r)}
\end{equation}
For the angular part of the integral
\begin{equation}
  \label{eq:5.29}
  \begin{aligned}
    &\int{\mathrm{d}\Omega\exp(\mathrm{i}\bm q\cdot\bm r)} \\
    =&\int_0^\pi{\sin\theta\mathrm{d}\theta}
    \int_0^{2\pi}{\mathrm{d}\phi\exp(\mathrm{i}qr\sin\theta\cos\phi)} \\
    &=\int_0^\pi{\sin\theta\mathrm{d}\theta}\cdot 2\pi J_0(qr\sin\theta) \\
    &=2\pi\int_0^\pi{\sin\theta\mathrm{d}\theta}
    \sum_{n=0}^\infty{\frac{(-q^2r^2/4)^n}{(n!)^2}\sin^{2n}\theta} \\
    &=2\pi\sum_{n=0}^\infty{\frac{(-q^2r^2/4)^n}{(n!)^2}}
    \int_0^\pi{\mathrm{d}\theta\sin^{2n+1}\theta} \\
    &=4\pi\sum_{n=0}^\infty{\frac{(-q^2r^2/4)^n}{(n!)^2}}
    \frac{(2n)!!}{(2n+1)!!} \\
    &=\frac{4\pi}{qr}\sum_{n=0}^\infty{(-1)^n\frac{(qr)^{2n+1}}{(2n+1)!}} \\
    &=\frac{4\pi}{qr}\sin(qr)
  \end{aligned}
\end{equation}
Now we have
\begin{equation}
  \begin{aligned}
    \label{eq:5.30}
    \rho_{\rm N}(q)=\frac{4\pi}{q}\int_0^\infty{r\rho_{\rm N}(r)\sin(qr)\mathrm{d}r}
  \end{aligned}
\end{equation}

Since nuclear densities are usually given on equidistant grids, the above integral
is evaluated by Simpson's rule, or more precisely by Filon's integration formula
(Eq.~25.4.54 on p.~890 of \emph{Handbook of Mathematical Functions}, M. Abramowitz
and I.A. Stegun, Dover Publications, 1970). We copy it here for reference as

\begin{equation}
  \label{eq:5.31}
  \begin{aligned}
    \int_{x_0}^{x_{2n}}{f(x)\sin(tx)\,\mathrm{d}x}&=
    h\Big[\alpha(th)\left(f_0\cos(tx_0)-f_{2n}\cos(tx_{2n})\right)+
    \beta(th)S_{2n}+\\
    &\gamma(th)S_{2n-1}+\frac{2}{45}th^4C'_{2n-1}\Big]-R_n
  \end{aligned}
\end{equation}
where
\begin{equation}
  \label{eq:5.32}
  \begin{aligned}
    S_{2n}&=\sum_{i=0}^n{f_{2i}\sin(tx_{2i})}-
    \frac{1}{2}\left[f_{2n}\sin(tx_{2n})+f_0\sin(tx_0)\right] \\
    S_{2n-1}&=\sum_{i=1}^n{f_{2i-1}\sin(tx_{2i-1})} \\
    C'_{2n-1}&=\sum_{i=1}^n{f^{(3)}_{2i-1}\cos(tx_{2i-1})} \\
    R_n&=\frac{1}{90}nh^5f^{(4)}(\xi)+O(th^7)
  \end{aligned}
\end{equation}
and
\begin{equation}
  \label{eq:5.33}
  \begin{aligned}
    \alpha(\theta)&=\frac{1}{\theta}+\frac{\sin(2\theta)}{2\theta^2}-
    \frac{2\sin^2\theta}{\theta^3} \\
    \beta(\theta)&=2\left[\frac{1+\cos^2\theta}{\theta^2}-
    \frac{\sin(2\theta)}{\theta^3}\right] \\
    \gamma(\theta)&=4\left(\frac{\sin\theta}{\theta^3}-
    \frac{\cos\theta}{\theta^2}\right) \\
  \end{aligned}
\end{equation}
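A direct C++ transcription of Eqs.~\ref{eq:5.31}--\ref{eq:5.33} follows, omitting the $\frac{2}{45}th^4C'_{2n-1}$ correction and the remainder $R_n$ (an omission that is harmless for smooth densities on a fine grid):

```cpp
#include <cmath>
#include <vector>

// Filon quadrature for the integral of f(x)*sin(t*x) over an equidistant grid
// x_i = x0 + i*h with an even number of panels (f.size() must be odd). The
// third-derivative correction term and the remainder R_n are dropped.
double filon_sin(const std::vector<double>& f, double x0, double h, double t) {
  int n2 = static_cast<int>(f.size()) - 1;       // n2 = 2n panels
  double th = t * h;
  // Weight functions of Eq. (5.33); beware of cancellation for very small t*h.
  double alpha = 1.0/th + std::sin(2.0*th)/(2.0*th*th)
               - 2.0*std::sin(th)*std::sin(th)/(th*th*th);
  double beta  = 2.0*((1.0 + std::cos(th)*std::cos(th))/(th*th)
               - std::sin(2.0*th)/(th*th*th));
  double gamma = 4.0*(std::sin(th)/(th*th*th) - std::cos(th)/(th*th));
  double s_even = 0.0, s_odd = 0.0;
  for (int i = 0; i <= n2; i += 2) s_even += f[i]*std::sin(t*(x0 + i*h));
  s_even -= 0.5*(f[0]*std::sin(t*x0) + f[n2]*std::sin(t*(x0 + n2*h)));
  for (int i = 1; i <= n2; i += 2) s_odd  += f[i]*std::sin(t*(x0 + i*h));
  return h*(alpha*(f[0]*std::cos(t*x0) - f[n2]*std::cos(t*(x0 + n2*h)))
            + beta*s_even + gamma*s_odd);
}
```

Since the rule is built on local quadratic fits to $f$, it is exact (up to roundoff) whenever $f$ is a polynomial of degree $\leqslant2$, whatever the oscillation frequency $t$.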

Note that since $\rho_{\rm N}(r)$ is a nucleonic density distribution, it has to
be normalized to the nucleon number, i.e. $\int{\mathrm{d}\bm r\,\rho_{\rm N}(r)}=A$.
A special case arises in the calculation of $S_{\rm v}$, where the density of the
incident nucleon is a delta function, $\rho_{\rm v}(\bm r)=\delta(\bm r)$. It is
easily verified that in this case Eq.~\ref{eq:5.23} reduces to the corresponding
nucleon number, $\rho_{\rm v}(q)\equiv A=1$.

\subsection{NN Scattering Amplitude $f_{\rm NN}(q)$}
Parameterization of $f_{\rm NN}(q)$ (Eq.~\ref{eq:3.15}) involves three parameters,
i.e. $\sigma_{\rm NN}$, $\alpha_{\rm NN}$ and $\beta_{\rm NN}$, all of which are
functions of the projectile's velocity $\beta$ w.r.t. the target. We hereby adopt
the prescription given by \href{https://dx.doi.org/10.1103/PhysRevC.41.1610}{S.K.
Charagi and S.K. Gupta}\footnote{https://dx.doi.org/10.1103/PhysRevC.41.1610} for
the NN scattering cross section $\sigma_{\rm NN}$, which is actually a polynomial
fit to NN scattering data at a variety of energies. For incident kinetic energy
per nucleon $E_{\rm k}>\SI{10}{MeV/nucleon}$,
\begin{equation}
  \label{eq:5.34}
  \begin{aligned}
    \sigma_{pp}&=13.73-\frac{15.04}{\beta}+\frac{8.76}{\beta^2}+68.67\beta^4 \\
    \sigma_{np}&=-70.67-\frac{18.18}{\beta}+\frac{25.26}{\beta^2}+113.85\beta
  \end{aligned}
\end{equation}
The cross sections are in \si{mb}. $\beta$ is calculated via
\begin{equation}
  \label{eq:5.35}
  \beta=\frac{\sqrt{2uE_{\rm k}+E_{\rm k}^2}}{u+E_{\rm k}}
\end{equation}
where $u$=\SI{931.494061}{MeV} is the atomic mass unit.

For $E_{\rm k}\leqslant\SI{10}{MeV/nucleon}$, $\sigma_{pp}$ in Eq.~\ref{eq:5.34} is still used, while $\sigma_{np}$ is substituted by
\begin{equation}
  \label{eq:5.36}
  \sigma_{np}=\left[\frac{2.73}{(1-0.0553E_{\rm k})^2+0.35E_{\rm k}}+
  \frac{17.63}{(1+0.344E_{\rm k})^2+6.8E_{\rm k}}\right]\times1000
\end{equation}
The factor of 1000 ensures that $\sigma_{np}$ is also in \si{mb}.
Here $\beta$ is calculated by
\begin{equation}
  \label{eq:5.37}
  \beta=\sqrt{\frac{2E_{\rm k}}{u}}
\end{equation}
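The two energy regimes translate directly into code; a sketch with cross sections in \si{mb} and energies in \si{MeV/nucleon}:

```cpp
#include <cmath>

const double u_amu = 931.494061;                 // atomic mass unit, MeV

// Relativistic velocity from kinetic energy per nucleon, Eq. (5.35).
double beta_rel(double Ek) {
  return std::sqrt(2.0 * u_amu * Ek + Ek * Ek) / (u_amu + Ek);
}

// Charagi-Gupta polynomial fits, Eqs. (5.34), (5.36), (5.37); results in mb.
double sigma_pp(double Ek) {
  double b = (Ek > 10.0) ? beta_rel(Ek) : std::sqrt(2.0 * Ek / u_amu);
  return 13.73 - 15.04 / b + 8.76 / (b * b) + 68.67 * b * b * b * b;
}

double sigma_np(double Ek) {
  if (Ek > 10.0) {
    double b = beta_rel(Ek);
    return -70.67 - 18.18 / b + 25.26 / (b * b) + 113.85 * b;
  }
  // Low-energy branch, Eq. (5.36); the factor 1000 converts to mb.
  return 1000.0 * (2.73 / (std::pow(1.0 - 0.0553 * Ek, 2) + 0.35 * Ek) +
                   17.63 / (std::pow(1.0 + 0.344 * Ek, 2) + 6.8 * Ek));
}
```

At \SI{100}{MeV/nucleon} these fits give $\beta\approx0.430$, $\sigma_{pp}\approx29$ \si{mb} and $\sigma_{np}\approx73$ \si{mb}.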

The states that the knocked-out nucleon can populate are limited compared with
free NN scattering, because some of the states are occupied by other nucleons in
the nucleus, which suppresses nucleon knockout cross sections. This is known as
Pauli blocking, as it originates from the Pauli exclusion principle. The Fortran
code MOMDIS\footnote{Comput. Phys. Commun. \textbf{175} (2006) 372--380.} provides
a simple prescription to accommodate Pauli blocking by altering $\sigma_{pp}$ and
$\sigma_{np}$ in Eq.~\ref{eq:5.34}:
\begin{equation}
  \label{eq:5.38}
  \begin{aligned}
    \sigma_{pp}^{\rm Pauli}
    &=\frac{(1+7.772E_{\rm k}^{0.06}\rho_{\rm eff}^{1.48})}
    {1+18.01\rho_{\rm eff}^{1.46}}\sigma_{pp} \\
    \sigma_{np}^{\rm Pauli}
    &=\frac{(1+20.88E_{\rm k}^{0.04}\rho_{\rm eff}^{2.02})}
    {1+35.86\rho_{\rm eff}^{1.9}}\sigma_{np}
  \end{aligned}
\end{equation}
where $\rho_{\rm eff}=0.17$ and $E_{\rm k}$ is in \si{MeV/nucleon}. $\beta$ is
calculated according to Eq.~\ref{eq:5.35}. Eq.~\ref{eq:5.38} is temporarily made
applicable for all $E_{\rm k}\in[10,2200]$ \si{MeV/nucleon}. More information
about this fit is found in Hussein, Rego and Bertulani, Phys. Rep. \textbf{201}
(1991) 279, and Bertulani, J. Phys. G \textbf{27} (2001) L67.

Since the isospins of the nucleons are neglected in the derivation of the eikonal
cross sections, $\sigma_{\rm NN}$ is an isospin average of $\sigma_{pp}$ and $\sigma_{np}$
\begin{equation}
  \label{eq:5.39}
  \sigma_{\rm NN}=\frac{(N_{\rm P}N_{\rm T}+Z_{\rm P}Z_{\rm T})\sigma_{pp}+
  (N_{\rm P}Z_{\rm T}+Z_{\rm P}N_{\rm T})\sigma_{np}}{A_{\rm P}A_{\rm T}}
\end{equation}
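In code the isospin average is a one-liner; for an $N=Z$ projectile on an $N=Z$ target it reduces to the plain mean of $\sigma_{pp}$ and $\sigma_{np}$:

```cpp
#include <cmath>

// Isospin-averaged NN cross section, Eq. (5.39), in the same units as the inputs.
double sigma_NN(int Np, int Zp, int Nt, int Zt, double s_pp, double s_np) {
  double Ap = Np + Zp, At = Nt + Zt;
  return ((Np * Nt + Zp * Zt) * s_pp + (Np * Zt + Zp * Nt) * s_np) / (Ap * At);
}
```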
This concept of isospin averaging applies to $\alpha_{\rm NN}$ and $\beta_{\rm NN}$
as well. Below we give the $\alpha_{\rm NN}$ and $\beta_{\rm NN}$ of Horiuchi,
\emph{et al.}, Phys. Rev. C \textbf{75}, 044607, in Tab.~\ref{tab:5.1}. Note that
the original table in Horiuchi's article uses $2\beta_{\rm NN}$, i.e. the
$\beta_{\rm NN}$ we are interested in is only half of the corresponding value in
Tab.~\ref{tab:5.1}.

\begin{table}
  \caption{$\alpha_{\rm NN}$ and $\beta_{\rm NN}$ used in Eq.~\ref{eq:3.15} from
  Horiuchi, \emph{et al}, Phys. Rev. C \textbf{75}, 044607. The parameters here
  are already isospin-averaged.}
  \label{tab:5.1}
  \begin{center}
    \begin{tabular}{ccc}
      \toprule
      E(MeV/nucleon) & $\alpha_{\rm NN}$ & $2\beta_{\rm NN}(\si{\femto\meter}^2)$ \\
      \hline
      30  & 0.87  & 0.685 \\
      38  & 0.89  & 0.521 \\
      40  & 0.9   & 0.486 \\
      49  & 0.94  & 0.39  \\
      85  & 1.37  & 0.349 \\
      94  & 1.409 & 0.327 \\
      100 & 1.435 & 0.322 \\
      120 & 1.359 & 0.255 \\
      150 & 1.245 & 0.195 \\
      200 & 0.953 & 0.131 \\
      325 & 0.305 & 0.075 \\
      425 & 0.36  & 0.078 \\
      550 & 0.04  & 0.125 \\
      650 & -0.095 & 0.16 \\
      800 & -0.07  & 0.21 \\
      1000 & -0.275 & 0.21 \\
      \bottomrule
    \end{tabular}
  \end{center}
\end{table}

Horiuchi's parameter set is consistent over the energy range [30,1000] \si{MeV/nucleon}.
In contrast, Ray's parameter set (L. Ray, Phys. Rev. C \textbf{20}, 1857) covers
[100,2200] \si{MeV/nucleon}, whereas Lenzi's (Lenzi, \emph{et al.}, Phys. Rev. C
\textbf{40}, 2114) is complementary, covering [10,100] \si{MeV/nucleon}. The two
are not consistent with each other, i.e. they do not join at the seam around
$E_{\rm k}=\SI{100}{MeV/nucleon}$. Usually Horiuchi's set is preferred. Ray's and
Lenzi's parameter sets are listed in Tabs.~\ref{tab:5.2} and \ref{tab:5.3}.

Note that the parameters are only given at discrete energy points. The values at
an arbitrary energy are interpolated, usually with polynomials. Although the
polynomial through a given set of points is unique, there is no standard practice
for the selection of the polynomial's order, and it turns out to be a potent
factor in the calculated value of the final cross sections. The Fortran code
MOMDIS uses 6 points, which might be too many.
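For the interpolation itself, Neville's algorithm (the scheme behind the \texttt{polint} routine of \emph{Numerical Recipes in C}) is convenient, since the polynomial order is simply set by how many table points are passed in:

```cpp
#include <cmath>
#include <vector>

// Neville's algorithm: value at xi of the unique polynomial through (x_i, y_i).
// The order is x.size() - 1, so passing only a few nearby table points keeps it low.
double neville(const std::vector<double>& x, const std::vector<double>& y, double xi) {
  std::vector<double> p = y;                     // p[i] holds P_{i..i+m}(xi)
  int n = static_cast<int>(x.size());
  for (int m = 1; m < n; ++m)
    for (int i = 0; i < n - m; ++i)
      p[i] = ((xi - x[i + m]) * p[i] + (x[i] - xi) * p[i + 1]) / (x[i] - x[i + m]);
  return p[0];
}
```

Passing in three or four of the tabulated energies closest to the desired $E_{\rm k}$ is a reasonable compromise between smoothness and the oscillations of high-order polynomials.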

\begin{table}
  \caption{$\alpha_{pp}$, $\beta_{pp}$, $\alpha_{np}$ and $\beta_{np}$ used
  in Eq.~\ref{eq:3.15} from L. Ray, Phys. Rev. C \textbf{20}, 1857.}
  \label{tab:5.2}
  \begin{center}
    \begin{tabular}{ccccc}
      \toprule
      E(MeV/nucleon) & $\alpha_{pp}$ & $\beta_{pp}(\si{\femto\meter}^2)$
               & $\alpha_{np}$ & $\beta_{np}(\si{\femto\meter}^2)$ \\
      \hline
       100 &  1.87 & 0.66 &  1.00 & 0.36 \\
       150 &  1.53 & 0.57 &  0.96 & 0.58 \\
       200 &  1.15 & 0.56 &  0.71 & 0.68 \\
       325 &  0.45 & 0.26 &  0.16 & 0.36 \\
       425 &  0.47 & 0.21 &  0.25 & 0.27 \\
       550 &  0.32 & 0.04 & -0.24 & 0.085 \\
       650 &  0.16 & 0.07 & -0.35 & 0.09 \\
       800 &  0.06 & 0.09 & -0.20 & 0.12 \\
      1000 & -0.09 & 0.09 & -0.46 & 0.12 \\
      2200 & -0.17 & 0.12 & -0.50 & 0.14 \\
      \bottomrule
    \end{tabular}
  \end{center}
\end{table}

\begin{table}
  \caption{$\alpha_{\rm NN}$ and $\beta_{\rm NN}$ used in Eq.~\ref{eq:3.15} from
  Lenzi, \emph{et al}, Phys. Rev. C \textbf{40}, 2114.}
  \label{tab:5.3}
  \begin{center}
    \begin{tabular}{ccc}
      \toprule
      E(MeV/nucleon) & $\alpha_{\rm NN}$ & $\beta_{\rm NN}(\si{\femto\meter}^2)$ \\
      \hline
      10 & 0.8  & 0 \\
      30 & 0.87 & 0 \\
      38 & 0.89 & 0 \\
      40 & 0.9  & 0 \\
      49 & 0.94 & 0 \\
      85 & 1.0  & 0 \\
      94 & 1.07 & 0 \\
      \bottomrule
    \end{tabular}
  \end{center}
\end{table}

\section{Solving the Bound State}
When talking about solving the bound state, we mean solving the radial wave function,
since it is the only part in a bound state that needs to be solved for a central
field. Eq.~\ref{eq:4.15} is recast to a simpler form ($u_l$ is used in $u_{nlj}$'s
stead for convenience)
\begin{equation}
  \label{eq:5.40}
  u_l''(r)=\left[\frac{l(l+1)}{r^2}+\frac{2\mu}{\hbar^2}V(r)-k^2\right]u_l(r)
\end{equation}
where
\begin{equation}
  \label{eq:5.41}
  V(r)=V_0(r)+\bm l\cdot\bm s V_{\rm S}(r)+V_{\rm C}(r)
\end{equation}
$V_0(r)$, $\bm l\cdot\bm s V_{\rm S}(r)$ and $V_{\rm C}(r)$ are the main central
field, the spin-orbit potential and the Coulomb potential respectively. So
\begin{equation}
  \label{eq:5.42}
    V_{\rm C}(r)=\left\{
    \begin{aligned}
      &\frac{Z_{\rm c}Z_{\rm v}e^2}{r} & \text{for } r>R_{\rm C} \\
      &\frac{Z_{\rm c}Z_{\rm v}e^2}{2R_{\rm C}}\left(3-\frac{r^2}{R_{\rm C}^2}\right)
      & \text{for } r\leqslant R_{\rm C}
    \end{aligned}
  \right.
\end{equation}
Since we are dealing with eigenfunctions of $\hat l^2$ and $\hat s^2$, and
$\bm j=\bm l+\bm s$,
\begin{equation}
  \label{eq:5.43}
  \bm l\cdot\bm s=(j^2-l^2-s^2)/2=\left[j(j+1)-l(l+1)-s(s+1)\right]/2
\end{equation}

We suppose the central potential $V_0(r)$ to have a Woods-Saxon form
\begin{equation}
  \label{eq:5.44}
  V_0(r)=\frac{V_0}{1+\exp\left(\frac{r-r_0}{a_0}\right)}
\end{equation}
and the spin-orbit potential takes the form
\begin{equation}
  \label{eq:5.45}
  \begin{aligned}
    V_{\rm S}(r)&=-V_{\rm S0}\left(\frac{\hbar}{m_\pi c}\right)^2\frac{1}{r}
    \frac{\mathrm{d}}{\mathrm{d}r}\frac{1}
    {1+\exp\left(\frac{r-r_{\rm S}}{a_{\rm S}}\right)} \\
    &=V_{\rm S0}\left(\frac{\hbar}{m_\pi c}\right)^2\frac{1}{ra_{\rm S}}
    \frac{\exp\left(\frac{r-r_{\rm S}}{a_{\rm S}}\right)}
    {\left[1+\exp\left(\frac{r-r_{\rm S}}{a_{\rm S}}\right)\right]^2}
  \end{aligned}
\end{equation}
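For illustration, the total potential of Eqs.~\ref{eq:5.41}--\ref{eq:5.45} can be evaluated as below. All parameter values here are placeholders, not fitted to any nucleus; units are MeV and fm, with $e^2\approx\SI{1.44}{MeV.fm}$ and $(\hbar/m_\pi c)^2\approx\SI{2}{\femto\meter^2}$:

```python
import numpy as np

# illustrative parameter values (MeV, fm); not fitted to any nucleus
V0,  r0, a0 = -61.0, 2.6, 0.7   # central Woods-Saxon depth, radius, diffuseness
VS0, rS, aS = -6.0,  2.6, 0.7   # spin-orbit strength and geometry
Zc, Zv, RC  = 6, 1, 2.6         # core/valence charges and Coulomb radius (fm)
HBAR_MPI2   = 2.0               # (hbar/(m_pi c))^2 in fm^2
E2          = 1.44              # e^2 in MeV fm

def ls_factor(l, j, s=0.5):
    """<l.s> = [j(j+1) - l(l+1) - s(s+1)]/2, Eq. (5.43)."""
    return 0.5*(j*(j + 1) - l*(l + 1) - s*(s + 1))

def V(r, l, j):
    """Total potential V0(r) + <l.s> V_S(r) + V_C(r)."""
    ws = V0/(1.0 + np.exp((r - r0)/a0))
    ex = np.exp((r - rS)/aS)
    vso = VS0*HBAR_MPI2/(r*aS)*ex/(1.0 + ex)**2   # -(1/r) d/dr of the WS shape
    coul = np.where(r > RC, Zc*Zv*E2/r,           # Coulomb of a uniform sphere
                    Zc*Zv*E2/(2.0*RC)*(3.0 - r**2/RC**2))
    return ws + ls_factor(l, j)*vso + coul
```

At large $r$ the nuclear terms die off and only the point-charge Coulomb tail survives, as it should.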

Finally Eq.~\ref{eq:5.40} is solved with boundary conditions
\begin{equation}
  \label{eq:5.46}
  u_l(0)=u_l(\infty)=0
\end{equation}
as $R_l(0)$ has a finite value and, on the other hand, $\int_0^{\infty}{r^2R^2_l(r)\mathrm{d}r}$
is finite, which requires that $u_l(\infty)=0$. These boundary conditions make the
energies of the solved bound states discrete.

Now for the values of the parameters in the potential. The depth $V_0$ of the central
Woods-Saxon is adjusted to reproduce the physical separation energy $S_N+E_{\rm x}(j^\pi)$
for each final state of interest, where $S_N$ is the one-nucleon separation energy
of the projectile, and $E_{\rm x}(j^\pi)$ is the excitation energy of the final state
of the residue. The calculated one-nucleon knockout cross section is found to be
especially sensitive to the rms radius $r_{\rm sp}$ of the bound-state wavefunction
solved from the potential specified above, which in turn is strongly sensitive to
the $r_0$ parameter of $V_0(r)$, less so to $a_0$ or to the parameters of the
spin-orbit interaction. The common practice is to adjust $r_0$ so as to give
$r_{\rm sp}=\sqrt{\dfrac{A}{A-1}}r^{\rm HF}_{\rm rms}$, with $r^{\rm HF}_{\rm rms}$
being the rms radius of the Hartree-Fock single-particle orbit, usually computed
with the SkX interaction. Note that $r_{\rm sp}$ thus obtained is sensitive to the
single-particle orbit. $\sqrt{\dfrac{A}{A-1}}$ is a center-of-mass (CM) correction,
since the HF calculation is done with the origin at the CM of the nucleus, while the
bound state we solve here describes the relative motion of the valence nucleon w.r.t.
the core. As the other potential parameters play minor roles, the same $(r_0,a_0)$
is used for $V_{\rm S}(r)$, with usually $V_{\rm S0}=\SI{-6}{MeV}$ or $\SI{-7.5}{MeV}$.
The $r_0$ value is also extended to the Coulomb potential: $R_{\rm C}=r_0A^{\frac{1}{3}}$.

\section{The ODE Set Solver}
It pays to know the numerical tools behind the physics in depth, since it is they
that actually do all the calculation work. Some of them are so basic and
straightforward that their description can be put aside for the time being.
Others, however, are both complicated in nature and closely tied to the user
interface, like the solver for a set of ordinary differential equations (ODEs).
Giving a full illustration of these is imperative.

We use the ODE set solver from \emph{Numerical Recipes in C} to solve the radial
equation Eq.~\ref{eq:5.40}. The problem falls into the category of \emph{two point
boundary value problems}, as the boundary condition Eq.~\ref{eq:5.46} is at the
two end points of the independent variable $x=0$ and $x=\infty$.

\subsection{Two Point Boundary Value Problem: Shooting method}

The shooting method is used. The overall logic behind the solving routine is quite
simple. To begin with, an initial value problem is easily solved by starting from
the initial values at the initial point $x_1$ and integrating step by step to the
end point $x_2$, with derivatives extracted from the differential equation itself,
using \emph{steppers}, e.g., the Runge-Kutta method. For a well-defined physical
problem, the number of conditions equals the order of the ODE, which means that for
every boundary condition imposed at $x_2$, we have one fewer at $x_1$. So one has
to make an arbitrary guess for the missing initial values to kick-start the
integration, in the hope that the solution may meet the boundary condition at the
other end $x_2$ when the integration is over. It is analogous to shooting in that
adjusting the guess to meet the boundary condition at $x_2$ is like adjusting a gun
to let the bullet hit the target. No wonder an iteration is needed to optimize the
initial guesses.

No words explain the underlying idea better than applying it to an example, and no
example sheds more light on the solution of our problem than the problem itself.
First of all, Eq.~\ref{eq:5.40} is transformed into a set of first-order differential
equations. $u_l$ is denoted as $u$ as a matter of convenience.
\begin{equation}
  \label{eq:5.47}
  \left\{
  \begin{aligned}
    \frac{\mathrm{d}u}{\mathrm{d}r}&=u_1 \\
    \frac{\mathrm{d}u_1}{\mathrm{d}r}&=\left[\frac{l(l+1)}{r^2}+
    \frac{2\mu}{\hbar^2}V(r)-u_2\right]u(r) \\
    \frac{\mathrm{d}u_2}{\mathrm{d}r}&=0
  \end{aligned}
  \right.
\end{equation}
where $u_2\equiv k^2=\dfrac{2\mu E_k}{\hbar^2}$ is the eigenvalue of the radial
equation. So to determine the solution, one has to provide three conditions
\begin{equation}
  \label{eq:5.48}
  \left\{
  \begin{aligned}
    u(0)&=0 \\
    u(\infty)&=0 \\
    u_1(\infty)&=0
  \end{aligned}
  \right.
\end{equation}
The first two of Eq.~\ref{eq:5.48} are the same as Eq.~\ref{eq:5.46}. The third
one has to be true otherwise the limiting value of $u(r)$ as $r\to\infty$ would
not remain a constant, i.e. 0.

Now for the stepper to start work, one has to give concrete values to all of $u$,
$u_1$ and $u_2$. Here a question of choice arises. One can start from $x=0$: then
both $u_1(0)$ and the eigenvalue $u_2$ have to be guessed so as to satisfy
$u(\infty)=0$ and $u_1(\infty)=0$, which boils down to a 2-D root-finding problem.
To reduce the complexity, one may instead start from the other end, $x=\infty$,
where two boundary conditions are available and only $u_2$ need be guessed, leaving
a 1-D root-finding problem. Of course this infinity is meant in the physical sense:
numerically, $u(r)$ is practically zero at $r$ larger than \SI{6}{\femto\meter} for
light nuclei with $A\leqslant 40$. But note that even if you start at
$r_\infty=\SI{200}{\femto\meter}$, $u(r_\infty)$ cannot be set to zero, or the
derivatives would remain 0 all the time and the solution would simply turn out to be
$u(r)\equiv 0$. $u(r_\infty)$ just has to be something small but nonzero; strictly
speaking, $u(r)$ is never 0 no matter how large $r$ is. Anyway, $u(r)$ has not been
normalized yet.

Speaking of normalization: it would be unfair if this normalization freedom could
only be enjoyed at $r=\infty$! With an appropriate normalization constant, we can
simply assign $u_1(0)$ a value of, say, 1. Now the starting point $x=0$ becomes
more attractive, since it is the natural start, with the following boundary conditions
\begin{equation}
  \label{eq:5.49}
  \left\{
  \begin{aligned}
    u(0)&=0 \\
    u_1(0)&=1 \\
    u(\infty)&=0
  \end{aligned}
  \right.
\end{equation}
It is assumed here that $u_1(0)$ has a sizable value so that we can set $u_1(0)=1$;
otherwise it has to be adjusted according to the numerical output.
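A minimal sketch of the shooting idea, applied to the simplest possible case, $V=0$ on $[0,1]$ with $u(0)=u(1)=0$ (so the exact eigenvalues are $E=(n\pi)^2$): guess $E$, integrate from $r=0$ with $u(0)=0$, $u'(0)=1$, and bisect on the miss distance $u(1;E)$ at the far boundary:

```python
import numpy as np

def u_at_end(E, n=1000):
    """Integrate u'' = -E u on [0, 1] with u(0) = 0, u'(0) = 1 by RK4; return u(1)."""
    h = 1.0/n
    x = 0.0
    y = np.array([0.0, 1.0])                       # y = (u, u')
    f = lambda x, y: np.array([y[1], -E*y[0]])
    for _ in range(n):
        k1 = h*f(x, y)
        k2 = h*f(x + h/2, y + k1/2)
        k3 = h*f(x + h/2, y + k2/2)
        k4 = h*f(x + h, y + k3)
        y = y + (k1 + 2*k2 + 2*k3 + k4)/6
        x += h
    return y[0]

def shoot(E_lo, E_hi, tol=1e-12):
    """Bisect on the miss distance u(1; E) until the far boundary condition is met."""
    f_lo = u_at_end(E_lo)
    while E_hi - E_lo > tol:
        E_mid = 0.5*(E_lo + E_hi)
        f_mid = u_at_end(E_mid)
        if f_mid*f_lo > 0.0:                       # same sign: root is above E_mid
            E_lo, f_lo = E_mid, f_mid
        else:
            E_hi = E_mid
    return 0.5*(E_lo + E_hi)

E_ground = shoot(8.0, 12.0)    # bracket contains the lowest eigenvalue pi^2
```

Bisection stands in here for the root finder; the Newton-Raphson routine of the next subsection plays the same role when more than one initial value must be guessed.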

\subsection{The Newton-Raphson Method for Multidimensional Root Finding}
Transcribed from \emph{Numerical Recipes in C}, we here introduce the gist of a
globally convergent nonlinear-equation-set solver, the Newton-Raphson method. The
method is easy to understand and implement, as well as universal and robust. Used
in two point boundary value problems to find appropriate initial guesses that meet
the conditions at the other end of the interval, this method is a key routine in
solving the radial wavefunction.

The overall principle behind the method is straightforward. For equation set
$\bm F(\bm x)=\bm 0$, one expands $\bm F(\bm x+\delta \bm x)$ at $\bm x$
\begin{equation}
  \label{eq:5.50}
  \bm F(\bm x+\delta \bm x)=\bm F(\bm x)+\bm J\cdot\delta \bm x+O(\delta \bm x^2)
\end{equation}
with the Jacobian
\begin{equation}
  \label{eq:5.51}
  J_{ij}\equiv\frac{\partial F_i}{\partial x_j}
\end{equation}
Now making $\bm x+\delta\bm x$ the root of Eq.~\ref{eq:5.50}, and supposing that
$\delta\bm x$ is very small, i.e. \emph{$\bm x$ is very close to the real root},
we drop $O(\delta \bm x^2)$ and get the correction to $\bm x$
\begin{equation}
  \label{eq:5.52}
  \delta \bm x=-\bm J^{-1}\cdot\bm F(\bm x)
\end{equation}

$\delta\bm x$ is solved using the LU decomposition routines found in chapter 2 of
\emph{Numerical Recipes in C}. The solved vector $\delta\bm x$ is the so-called
Newton step. At this point the problem is already cracked for a linear equation
set, or for an $\bm x$ that is extremely close to a real root. Usually, however, we
have little idea where the roots are at the beginning and start in the middle of
nowhere. Eq.~\ref{eq:5.52} then does not always take you towards the roots, and may
even take you backwards.

A key observation improves on this. Since zeroing $\bm F$ is equivalent to
minimizing $\bm F\cdot\bm F$, we try to minimize
\begin{equation}
  \label{eq:5.53}
  f=\frac{1}{2}\bm F\cdot\bm F
\end{equation}
and discover that the Newton step is a \emph{descent direction} for $f$
($0<\lambda\leqslant 1$):
\begin{equation}
  \label{eq:5.54}
  \begin{aligned}
    f(\bm x+\lambda\delta\bm x)-f(\bm x)&\approx\lambda\nabla f\cdot\delta\bm x \\
    &=\lambda(\bm F\cdot\bm J)\cdot(-\bm J^{-1}\cdot\bm F) \\
    &=-\lambda\bm F\cdot\bm F<0
  \end{aligned}
\end{equation}
which means that $f$ always goes downhill as long as $\lambda\delta\bm x$ is
sufficiently small. So, step by step, we are bound to hunt down a zero along the way,
unless the search is trapped in a local minimum of $f$, in which case a different
initial guess should be tried to initiate a new search path.

To avoid being either too conservative or too aggressive, and for the efficiency of
the algorithm, special attention is paid to the step length, i.e. $\lambda$, for
each Newton step. We denote the Newton step $\bm p\equiv\delta\bm x$ and define
\begin{equation}
  \label{eq:5.55}
  g(\lambda)\equiv f(\bm x+\lambda\bm p)
\end{equation}
so that
\begin{equation}
  \label{eq:5.56}
  g'(\lambda)=\nabla f\cdot\bm p
\end{equation}
The basic idea is that $\Delta f=f(\bm x)-f(\bm x+\lambda\bm p)$ should be neither
too small, or the program progresses slowly, nor unrealistically large, in case
there simply is no $\lambda$ to achieve it. Since the function decrease is
proportional to both the slope and the step taken, we set the following acceptance
criterion for $\lambda$
\begin{equation}
  \label{eq:5.57}
  \Delta f\geqslant-\alpha\nabla f\cdot\lambda\bm p
\end{equation}
where $0<\alpha<1$; note that $\nabla f\cdot\bm p<0$ by Eq.~\ref{eq:5.54}, so the
right-hand side is positive. $\alpha=10^{-4}$ is a good choice to start the algorithm.
First we try the whole Newton step ($\lambda=1$) to see if it is accepted according
to Eq.~\ref{eq:5.57}. If not, $g(1)$ is available. Together with $g(0)$ and $g'(0)$,
modeling $g(\lambda)$ with a quadratic is then possible
\begin{equation}
  \label{eq:5.58}
  g(\lambda)\approx[g(1)-g(0)-g'(0)]\lambda^2+g'(0)\lambda+g(0)
\end{equation}
and it is a minimum when
\begin{equation}
  \label{eq:5.59}
  \lambda=-\frac{g'(0)}{2[g(1)-g(0)-g'(0)]}
\end{equation}
Similarly, if this $\lambda$ fails to satisfy Eq.~\ref{eq:5.57} as well, the new
$g(\lambda)$ is available to enable modeling of $g(\lambda)$ with a cubic. Readers
can find a detailed description of this method in \emph{Numerical Recipes in C},
p.~385. Changes in $\bm x$ become very small near the roots, so too small a
$\Vert\delta\bm x\Vert$ also exits the iteration.

At a spurious local minimum of $f$, $\nabla f=\bm 0$ while $\bm F\neq\bm 0$; at a
genuine root, $\bm F$ itself vanishes. This is used to tell whether the algorithm
has returned a local minimum or a root.
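The whole scheme, Newton step plus backtracking line search with the quadratic model of Eqs.~\ref{eq:5.58} and \ref{eq:5.59}, can be condensed into a short sketch; the test system below ($x^2+y^2=4$ intersected with $xy=1$) is of course just an illustration:

```python
import numpy as np

def newton_global(F, J, x, alpha=1e-4, tol=1e-12, max_iter=100):
    """Newton-Raphson with backtracking line search on f = F.F/2."""
    x = np.array(x, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.max(np.abs(Fx)) < tol:
            break
        Jx = J(x)
        p = np.linalg.solve(Jx, -Fx)          # Newton step, dx = -J^{-1} F
        f0 = 0.5*Fx.dot(Fx)
        slope = (Fx @ Jx).dot(p)              # g'(0) = grad f . p  (< 0)
        lam = 1.0                             # always try the full step first
        while True:
            Fn = F(x + lam*p)
            f1 = 0.5*Fn.dot(Fn)
            if f1 <= f0 + alpha*lam*slope:    # sufficient-decrease criterion
                break
            # minimum of the quadratic model of g(lambda), clamped to [0.1, 0.5]*lam
            lam_q = -slope*lam**2/(2.0*(f1 - f0 - slope*lam))
            lam = min(0.5*lam, max(0.1*lam, lam_q))
        x = x + lam*p
    return x

# illustrative system: circle x^2 + y^2 = 4 intersected with hyperbola xy = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [v[1], v[0]]])
root = newton_global(F, J, [2.0, 0.3])
```

The cubic refinement of the step length mentioned above is omitted here for brevity; the quadratic model with clamping already gives global convergence in practice.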


\subsection{Shooting to a Fitting Point}
The shooting method used in combination with the Newton minimization is a
recommended prescription for the general solution of two point boundary value
problems. Elaborate as the approach is, however, it turns out ineffective in
solving radial Schr\"odinger equations. The method falls rapidly into local minima
of the eigenenergy that are highly sensitive to the initial values, rendering the
result unreliable. Moreover, wrong eigenenergies can even give smaller absolute
values of the wavefunction at infinity than the right ones do! We do not yet know
the reason. This method has been abandoned for now.

As the effort to seek a general numerical algorithm for solving two point boundary
value problems is stymied, we turn to methods specific to radial Schr\"odinger
equations. The search for such methods is fruitful, and we benefit the most from
the paper by G. Vanden Berghe, V. Fack and H.E. de Meyer\footnote{Journal of Computational
and Applied Mathematics 28 (1989) 391--401}.

Eq.~\ref{eq:5.40} is solved by integrating the equation from both sides of the
entire domain to some point in the middle, the so-called ``turning point'' $r_{\rm m}$.
$r_{\rm m}$ is defined as the first zero of the square bracket in the rhs of
Eq.~\ref{eq:5.40}
\begin{equation}
  f(E,r)\equiv\frac{l(l+1)}{r^2}+V(r)-E
\end{equation}
as one goes inwards from $r=\infty$.
The replacements $\dfrac{2\mu}{\hbar^2}V(r)\to V(r)$ and $\dfrac{2\mu}{\hbar^2}E\to E$
have been made as a matter of convenience. Usually $u(r)$ oscillates for
$r<r_{\rm m}$ (denoted as $u_{\rm i}(r)$) and decays exponentially for $r>r_{\rm m}$
(denoted as $u_{\rm e}(r)$). We require that
\begin{equation}
  \left\{
  \begin{aligned}
    u_{\rm i}(r_{\rm m})&=u_{\rm e}(r_{\rm m}) \\
    u'_{\rm i}(r_{\rm m})&=u'_{\rm e}(r_{\rm m})
  \end{aligned}
  \right.
\end{equation}
so that the two parts join ``smoothly'' at $r_{\rm m}$. To eliminate the
irrelevant normalization constant, the above condition simplifies to
\begin{equation}
  \label{eq:lile}
  \frac{u'_{\rm i}(r_{\rm m})}{u_{\rm i}(r_{\rm m})}=
  \frac{u'_{\rm e}(r_{\rm m})}{u_{\rm e}(r_{\rm m})}
\end{equation}
This way one does not need to assign \emph{a priori} normalization constants at
the beginning of the integration. Furthermore, since the eigenenergy guess remains
constant during the integration, instead of introducing another dependent variable,
we just make it a parameter of $f(E,r)$, just like $l$.

There are some tricks in assigning initial values to $u(r)$. As one integrates from
$r_0=0$ to $r=r_{\rm m}$, $r_0$ cannot be set to 0 directly, but a little larger,
to avoid 0, which is a (regular) singular point. It turns out that while $r_0$
cannot take 0, it should be as small as possible, say $10^{-20}$ or $10^{-10}$,
which gains some 3 or 4 more significant figures than moderate values such as
$0.001$.

$u'(r_0)$ cannot take arbitrary values since $u(r_0)\neq 0$. Even though a
normalization constant does not matter, it does matter to give $u'(r_0)$ the correct
factor w.r.t. $u(r_0)$. For small $r_0$, $l(l+1)/r^2$ dominates, and
Eq.~\ref{eq:5.40} reduces to
\begin{equation}
  u''(r)=\left[\frac{l(l+1)}{r^2}+V(r_0)-E\right]u(r)
\end{equation}
The regular solution reads $rj_l(Kr)$, where $j_l(z)$ is the spherical Bessel
function and $K\equiv\sqrt{E-V(r_0)}$. $E-V(r_0)>0$ should thus be ensured, and it
does hold physically near the origin, where the potential is deepest, so that the
local kinetic energy $T(r_0)=E-V(r_0)$ is positive for any bound state. Now, for
starting at $r_0$, we can expand $u(r)$ to any order of $r$, together with its
derivative with respect to $r$. Since $r_0$ is very small, it suffices to truncate
the polynomial expansion of $rj_l(Kr)$ to $r^4$ or $r^6$
\begin{equation}
  \begin{aligned}
    u(r)&=\sqrt{\frac{2}{\pi}}Krj_l(Kr)=\sqrt{Kr}J_{l+\frac{1}{2}}(Kr) \\
    &=x^{l+1}\sqrt{\frac{2}{\pi}}
    \frac{1}{(2l+1)!!}\times \\
    &\left[1-\frac{x^2}{2(2l+3)}+\frac{x^4}{8(2l+3)(2l+5)}
    -\frac{x^6}{48(2l+3)(2l+5)(2l+7)}+O(x^8)\right]
  \end{aligned}
\end{equation}
where we have defined $x\equiv Kr$ and used the relation
\begin{equation}
  j_l(x)=\sqrt{\frac{\pi}{2x}}J_{l+\frac{1}{2}}(x)
\end{equation}
The overall constant in front is immaterial here, since only the ratio $u'/u$
enters the matching condition. One can then easily derive the derivative with
respect to $r$
\begin{equation}
  \begin{aligned}
    u'(r)&=\frac{\mathrm{d}u}{\mathrm{d}r} \\
    &= x^l\sqrt{\frac{2}{\pi}}\frac{K}{(2l+1)!!}\times \\
    &\left[l+1-\frac{(l+3)x^2}{2(2l+3)}+\frac{(l+5)x^4}{8(2l+3)(2l+5)}
    -\frac{(l+7)x^6}{48(2l+3)(2l+5)(2l+7)}+O(x^8)\right]
  \end{aligned}
\end{equation}
Similarly, for the start at a large radius $r_\infty$ one takes
\begin{equation}
  u(r)=2\beta e^{-\beta r},\qquad u'(r)=-\beta u(r)
\end{equation}
with $\beta\equiv\sqrt{|E|}$, as $V(\infty)=0$.
It is found that, compared with the value of $r_0$, the relative value of
$u'(r_0)$ w.r.t. $u(r_0)$ is less critical to the final eigenenergies.
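As a quick numerical check of the truncated series (note the double factorial $(2l+1)!!$ in the normalization): for $l=0$ the expression must reduce to $\sqrt{2/\pi}\sin(Kr)$, since $j_0(x)=\sin x/x$. A sketch:

```python
import numpy as np

def u_series(K, r, l):
    """Truncated expansion of u(r) = sqrt(2/pi) K r j_l(K r), up to x^6 in the bracket."""
    x = K*r
    dfact = float(np.prod(np.arange(2*l + 1, 0, -2)))   # (2l+1)!! double factorial
    bracket = (1.0 - x**2/(2*(2*l + 3)) + x**4/(8*(2*l + 3)*(2*l + 5))
               - x**6/(48*(2*l + 3)*(2*l + 5)*(2*l + 7)))
    return np.sqrt(2.0/np.pi)*x**(l + 1)/dfact*bracket

# for l = 0: u(r) = sqrt(2/pi)*K*r*j_0(K*r) = sqrt(2/pi)*sin(K*r)
K, r0 = 1.3, 0.1
exact = np.sqrt(2.0/np.pi)*np.sin(K*r0)
```

At such small $x=Kr_0$ the neglected $O(x^8)$ term is far below double precision, so the truncated series and the closed form agree to machine accuracy.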

Instead of searching for $E$ to satisfy Eq.~\ref{eq:lile}, the paper by G. Vanden
Berghe \emph{et al} mentioned before gives a formula for a direct eigenenergy
correction in its Eq.~(2.14). Note that the minus sign in front of the rhs should
be removed, as verified both by derivation and by numerical practice.

We now give the derivation of the energy correction. Defining
\begin{equation}
  \label{eq:Ler}
  L(E,r)\equiv\frac{u'(E,r)}{u(E,r)}\equiv\frac{u'(r)}{u(r)}
\end{equation}
Eq.~\ref{eq:lile} is equivalent to
\begin{equation}
  \label{eq:lile1}
  L_{\rm i}(E, r_{\rm m})=L_{\rm e}(E, r_{\rm m})
\end{equation}
Suppose the energy guess for the real eigenenergy $E$ is $E+\Delta E$, then
\begin{equation}
  \begin{aligned}
    \label{eq:deltaE}
    L_{\rm e}(E+\Delta E, r_{\rm m})-L_{\rm i}(E+\Delta E, r_{\rm m})
    =&L_{\rm e}(E, r_{\rm m})-L_{\rm i}(E, r_{\rm m}) \\
    &+\Delta E\left[\frac{\partial{L_{\rm e}(E, r_{\rm m})}}{\partial{E}}
    -\frac{\partial{L_{\rm i}(E, r_{\rm m})}}{\partial{E}}
    \right] \\
    &+O(\Delta E^2) \\
    \approx&\Delta E\left[\frac{\partial{L_{\rm e}(E, r_{\rm m})}}{\partial{E}}
    -\frac{\partial{L_{\rm i}(E, r_{\rm m})}}{\partial{E}}
    \right]
  \end{aligned}
\end{equation}
where Eq.~\ref{eq:lile1} is used.

To further evaluate $\dfrac{\partial{L}}{\partial{E}}$ we need some preparations.
With Eq.~\ref{eq:Ler}, one can easily derive that
\begin{equation}
  \begin{aligned}
    u^2(r)\frac{\partial{L(E, r)}}{\partial{E}}-
    &u^2(r')\frac{\partial{L(E, r')}}{\partial{E}}
    =\left[u(t)\frac{\partial{u'(t)}}{\partial{E}}-
    u'(t)\frac{\partial{u(t)}}{\partial{E}}\right]_{t=r'}^{t=r} \\
    &=\int_{r'}^r{\frac{\partial}{\partial{t}}
    \left[u(t)\frac{\partial{u'(t)}}{\partial{E}}-
    u'(t)\frac{\partial{u(t)}}{\partial{E}}\right]\mathrm{d}t} \\
    &=\int_{r'}^r{\left[u(t)\frac{\partial{u''(t)}}{\partial{E}}
    -u''(t)\frac{\partial{u(t)}}{\partial{E}}\right]\mathrm{d}t} \\
    &=\int_{r'}^r{\left[u(t)\frac{\partial{(f(E,t)u(t))}}{\partial{E}}
    -f(E,t)u(t)\frac{\partial{u(t)}}{\partial{E}}\right]\mathrm{d}t} \\
    &=\int_{r'}^r{\left[u^2(t)\frac{\partial{f(E,t)}}{\partial{E}}
    +f(E,t)u(t)\frac{\partial{u(t)}}{\partial{E}}
    -f(E,t)u(t)\frac{\partial{u(t)}}{\partial{E}}\right]\mathrm{d}t} \\
    &=\int_{r'}^r{[-u^2(t)]\mathrm{d}t}
  \end{aligned}
\end{equation}
Setting $r'=0$ and $r=r_{\rm m}$ for the interior solution, and noting that the
term at $r'=0$ vanishes since $u_{\rm i}(0)=0$,
\begin{equation}
  \begin{aligned}
    u_{\rm i}^2(r_{\rm m})\frac{\partial{L_{\rm i}(E, r_{\rm m})}}{\partial{E}}
    -u_{\rm i}^2(0)\frac{\partial{L_{\rm i}(E, 0)}}{\partial{E}}
    &=u_{\rm i}^2(r_{\rm m})\frac{\partial{L_{\rm i}(E, r_{\rm m})}}{\partial{E}}
    =-\int_0^{r_{\rm m}}{u_{\rm i}^2(t)\mathrm{d}t}
  \end{aligned}
\end{equation}
we get
\begin{equation}
  \frac{\partial{L_{\rm i}(E, r_{\rm m})}}{\partial{E}}
  =-\frac{1}{u_{\rm i}^2(r_{\rm m})}\int_0^{r_{\rm m}}{u_{\rm i}^2(t)\mathrm{d}t}
\end{equation}
Similarly one can deduce
\begin{equation}
  \frac{\partial{L_{\rm e}(E, r_{\rm m})}}{\partial{E}}
  =\frac{1}{u_{\rm e}^2(r_{\rm m})}\int_{r_{\rm m}}^\infty{u_{\rm e}^2(t)\mathrm{d}t}
\end{equation}
With Eq.~\ref{eq:deltaE}, we get
\begin{equation}
  \label{eq:deltaE1}
  \Delta E=\left[L_{\rm e}(E+\Delta E, r_{\rm m})-L_{\rm i}(E+\Delta E, r_{\rm m})\right]
  \times\left[
    \frac{1}{u_{\rm i}^2(r_{\rm m})}\int_0^{r_{\rm m}}{u_{\rm i}^2(t)\mathrm{d}t}
    +\frac{1}{u_{\rm e}^2(r_{\rm m})}\int_{r_{\rm m}}^\infty{u_{\rm e}^2(t)\mathrm{d}t}
  \right]^{-1}
\end{equation}
The above formula can be further simplified. It is good practice to draw a graph of
the solutions to inspect the smoothness of the joint of $u_{\rm i}$ and
$u_{\rm e}$ at $r_{\rm m}$, so $u_{\rm i}$ and $u_{\rm e}$ are scaled to have the
same value $u(r_{\rm m})$ at $r_{\rm m}$. Furthermore, since the wavefunction has
to be normalized for further use, one has
\begin{equation}
  \frac{1}{u_{\rm i}^2(r_{\rm m})}\int_0^{r_{\rm m}}{u_{\rm i}^2(t)\mathrm{d}t}
  +\frac{1}{u_{\rm e}^2(r_{\rm m})}\int_{r_{\rm m}}^\infty{u_{\rm e}^2(t)\mathrm{d}t}
  =\frac{1}{u^2(r_{\rm m})}\int_0^\infty{u^2(r)\mathrm{d}r}=\frac{1}{u^2(r_{\rm m})}
\end{equation}
Inserting into Eq.~\ref{eq:deltaE1},
\begin{equation}
  \label{eq:deltaE2}
  \Delta E=u^2(r_{\rm m})\left[L_{\rm e}(E+\Delta E, r_{\rm m})
  -L_{\rm i}(E+\Delta E, r_{\rm m})\right]
  =u(r_{\rm m})[u_{\rm e}'(r_{\rm m})-u_{\rm i}'(r_{\rm m})]
\end{equation}
So starting with an initial guess $E_0$, $E$ is approximated by iterating
\begin{equation}
  \label{eq:deltaE3}
  E_{n+1}=E_n-\Delta E=E_n+u(r_{\rm m})[u_{\rm i}'(r_{\rm m})-u_{\rm e}'(r_{\rm m})]
\end{equation}
Indulge us a final reminder that $u(r)$ has to be normalized to apply Eq.~\ref{eq:deltaE2}
and the last step of Eq.~\ref{eq:deltaE3}.

So far so good. Yet one may wonder: how should the first guess $E_0$ be picked so
as to be close to $E$? First, note that the iteration Eq.~\ref{eq:deltaE3}
converges very fast, especially when $E_n$ is close to $E$, thanks to the nature of
the Taylor expansion, so it is not really necessary to put heavy effort into
finding a fantastic $E_0$. A common practice is $E_0=\frac{1}{2}V'_{\mathrm{min}}$, where
\begin{equation}
  V'_{\mathrm{min}}=\left[\frac{l(l+1)}{r^2}+V(r)\right]_{\mathrm{min}}
\end{equation}
One has to check that $u(r)$ has the correct number of nodes at each iteration.
Empirically, if there are too many nodes, set $E_{n+1}=1.1E_n$; if too few,
$E_{n+1}=0.9E_n$.

The energy correction may go wild when the energy guess is far from $E$. The
corrected energy $E_{n+1}$ should stay within $(V'_{\mathrm{min}},0)$: $E<0$
ensures a bound state, and $E>V'_{\mathrm{min}}$ follows from $E=V+T>V$, where $T$
is the kinetic energy. $\Delta E$ should be contracted whenever $E_{n+1}$ falls out
of these bounds; commonly $\Delta E$ is halved.
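Putting the pieces of this subsection together, here is a sketch of the full matching iteration of Eq.~\ref{eq:deltaE3} for $l=0$, in units $\hbar^2/2\mu=1$. The Woods-Saxon parameters, the grids, and the fixed matching radius are illustrative choices, not prescriptions from the text:

```python
import numpy as np

def rk4_step(f, r, y, h):
    k1 = f(r, y)
    k2 = f(r + 0.5*h, y + 0.5*h*k1)
    k3 = f(r + 0.5*h, y + 0.5*h*k2)
    k4 = f(r + h, y + h*k3)
    return y + (h/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

def sweep(f, rs, y0):
    """Integrate y' = f(r, y) along the grid rs; return u on the grid and final (u, u')."""
    u = np.empty(len(rs))
    y = np.array(y0, dtype=float)
    u[0] = y[0]
    for i in range(len(rs) - 1):
        y = rk4_step(f, rs[i], y, rs[i + 1] - rs[i])
        u[i + 1] = y[0]
    return u, y

def norm_sq(u, h):
    return h*(u.dot(u) - 0.5*(u[0]**2 + u[-1]**2))   # trapezoidal integral of u^2

def bound_state(V, E0, r_match, r_max, n=2000):
    """l = 0 eigenenergy via E_{n+1} = E_n + u(rm)[u_i'(rm) - u_e'(rm)], u normalized."""
    E = E0
    for _ in range(200):
        def f(r, y):
            return np.array([y[1], (V(r) - E)*y[0]])
        # outward sweep: u ~ r near the regular origin (l = 0)
        r_out = np.linspace(1e-6, r_match, n)
        u_out, (ui, dui) = sweep(f, r_out, [r_out[0], 1.0])
        # inward sweep: u ~ exp(-beta r) in the classically forbidden tail
        beta = np.sqrt(-E)
        r_in = np.linspace(r_max, r_match, n)
        u_in, (ue, due) = sweep(f, r_in, [np.exp(-beta*r_max), -beta*np.exp(-beta*r_max)])
        s = ui/ue                         # scale exterior piece to match at r_match
        u_in, due = s*u_in, s*due
        norm = (norm_sq(u_out, (r_match - 1e-6)/(n - 1))
                + norm_sq(u_in, (r_max - r_match)/(n - 1)))
        dE = ui*(dui - due)/norm          # energy correction of Eq. (deltaE3)
        E_new = E + dE
        while not (V(0.0) < E_new < 0.0): # halve dE until E stays in the window
            dE *= 0.5
            E_new = E + dE
        E = E_new
        if abs(dE) < 1e-10:
            break
    return E

# demo: Woods-Saxon well, depth 10, radius 1, diffuseness 0.2
V = lambda r: -10.0/(1.0 + np.exp((r - 1.0)/0.2))
E = bound_state(V, E0=-5.0, r_match=1.0, r_max=10.0)
```

The node-counting safeguard discussed above is omitted here, since this shallow well supports only one $l=0$ bound state.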


\subsection{The Runge-Kutta Integrator}

It is of special significance to be aware of what is going on under the hood, since
confidence is only pure when it comes from true understanding. The Runge-Kutta
method is one of the popular integrators for ODEs. We have implemented the
fifth-order embedded Cash-Karp Runge-Kutta algorithm with adaptive stepsize
control. It is of prime quality, with a local error of $O(h^6)$ ($h$ being the
stepsize). But what does this mean anyway? And what is the principle behind the
Runge-Kutta methods?

To describe the question, we list the fourth-order Runge-Kutta formula
\begin{equation}
  \label{eq:5.60}
  \begin{aligned}
  k_1&=hf(x_n,y_n) \\
  k_2&=hf(x_n+\frac{h}{2},y_n+\frac{k_1}{2}) \\
  k_3&=hf(x_n+\frac{h}{2},y_n+\frac{k_2}{2}) \\
  k_4&=hf(x_n+h,y_n+k_3) \\
  y_{n+1}&=y_n+\frac{k_1}{6}+\frac{k_2}{3}+\frac{k_3}{3}+\frac{k_4}{6}+O(h^5)
  \end{aligned}
\end{equation}
where $f(x,y)\equiv\dfrac{\mathrm{d}y}{\mathrm{d}x}$.

When we say that the error is $O(h^5)$, it signifies nothing more than that the
error vanishes exactly when the solution is a polynomial of order $\leqslant 4$.
This is obvious from the last formula of Eq.~\ref{eq:5.60}, since such a solution
has no terms of $h^5$ or higher and hence no error at all. When we say that a
higher order of the error usually means higher accuracy, it is in the sense that,
in a Taylor expansion in powers of $h$, terms of higher order are progressively
smaller.
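This scaling can be observed directly: integrating $y'=-y$ over $[0,1]$ with the formula of Eq.~\ref{eq:5.60} and halving the stepsize should shrink the global error by about $2^4=16$, since the $O(h^5)$ local errors accumulate over $\propto 1/h$ steps:

```python
import numpy as np

def rk4(f, x0, y0, x1, n):
    """Integrate y' = f(x, y) from x0 to x1 in n fourth-order Runge-Kutta steps."""
    h = (x1 - x0)/n
    x, y = x0, y0
    for _ in range(n):
        k1 = h*f(x, y)
        k2 = h*f(x + h/2, y + k1/2)
        k3 = h*f(x + h/2, y + k2/2)
        k4 = h*f(x + h, y + k3)      # note: k4 is evaluated at y + k3
        y += k1/6 + k2/3 + k3/3 + k4/6
        x += h
    return y

f = lambda x, y: -y                  # y' = -y, y(0) = 1  =>  y(1) = exp(-1)
err1 = abs(rk4(f, 0.0, 1.0, 1.0, 100) - np.exp(-1.0))
err2 = abs(rk4(f, 0.0, 1.0, 1.0, 200) - np.exp(-1.0))
print(err1/err2)                     # close to 2**4 = 16
```

The ratio sits very close to 16 as long as the errors stay well above roundoff, which they do at these stepsizes.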

Now let's address the second question: what is the principle behind the Runge-Kutta
method? How does it achieve such high accuracy without resorting to high-order
derivatives of $y(x)$? Or, put another way, how was it derived in the first place?
It may be well known that the derivation of fourth-order Runge-Kutta involves so
much laborious calculation that it is not included in ordinary textbooks. Yet that
does not discourage us from getting to know how it is done in general.
The prescription for the $n$-th order Runge-Kutta formula is
\begin{equation}
  \label{eq:5.61}
  \left\{
  \begin{aligned}
    K_1&=hf(x_i,y_i) \\
    K_2&=hf(x_i+a_2h,y_i+b_{21}K_1) \\
    K_3&=hf(x_i+a_3h,y_i+b_{31}K_1+b_{32}K_2) \\
    &\cdots \\
    K_n&=hf(x_i+a_nh,y_i+\sum_{j=1}^{n-1}{b_{nj}K_j}) \\
    y_{i+1}&=y_i+\sum_{j=1}^{n}{c_jK_j}+O(h^{n+1})
  \end{aligned}
  \right.
\end{equation}
where $y_i\equiv y(x_i)$, and $a_j$, $b_{jk}$ and $c_j$ are coefficients to be
solved for. While the derivation is lengthy and tedious, the overall idea is
simple: expand all the $K_j$'s in Taylor series in powers of $h$ up to
$O(h^{n+1})$, insert them into the last formula of Eq.~\ref{eq:5.61}, and compare
coefficients with the expansion of $y_{i+1}=y(x_i+h)$ to the same order of $h$.
Usually the number of equations is smaller than the number of unknowns, and some of
the coefficients are then assigned conventional or popular values. One can catch a
glimpse of the enormity of the mental labor by noting that the expansion of $K_j$
needs that of $K_{j-1}$, which in turn requires the expansion of $K_{j-2}$, and so
on. Besides, as a function of two variables, $f(x,y)$ has complicated total
derivatives, e.g. $y''=\mathrm{d}f/\mathrm{d}x=f'_x+f'_yf$ and
$y'''=\mathrm{d}^2f/\mathrm{d}x^2=f''_{xx}+2f''_{xy}f+f''_{yy}f^2+f'_y(f'_x+f'_yf)$.
The derivation gets unwieldy quickly beyond, say, $n=2$.
The author suggests using software such as \emph{Mathematica} to do the combination
of terms.

For error estimate and adaptive stepsize control, please refer to \emph{Numerical
Recipes in C}, ch.~16.2 at p.714.

\section{Romberg Integration}
As with the numerical integration of ODEs, it is excruciating to treat the routine
for numerical evaluation of definite integrals blindly as a black box. Since the
point of scientific activity is to understand, it just does not make sense if one
cannot even explain the results of one's own research. Things get complicated, even
superstitious, when they are locked in a box; once understood, they immediately
become trivial.

So let's be straightforward. The general prescription of numerical integration is
to find a set of numerical coefficients $A_i$ to satisfy for any function $f(x)$ that
\begin{equation}
  \label{eq:5.62}
  \int_{x_0}^{x_N}{f(x)\mathrm{d}x}=h\sum_{i=0}^{N}{A_i f_i}+O(h^{N+2})
\end{equation}
where $f_i\equiv f(x_i)$, with $x_i=x_0+ih$, and the constant integration step
$h=\dfrac{x_N-x_0}{N}$. As usual, the error term $O(h^{N+2})$ means that
Eq.~\ref{eq:5.62} is exact for $f(x)$ being polynomials with order no more than
$N$ (so that the integral of $f(x)$ has order no more than $N+1$). For the same
reason, it is enough to have Eq.~\ref{eq:5.62} apply to powers of $x$ of order
from 0 to $N$ (i.e. 1, $x$, $x^2$, $\cdots$, $x^N$) without the error term, so as
to let this equation be valid for any function $f(x)$. This interpretation of
$O(h^{N+2})$ also in turn presents us with an $(N+1)$-dimension linear equation
set to solve all the $A_i$'s. As a result, we get for $N=1$ the trapezoidal rule
\begin{equation}
  \label{eq:5.63}
  \int_{x_0}^{x_1}{f(x)\mathrm{d}x}=h\left[\frac{1}{2}f_0+\frac{1}{2}f_1\right]
  +O(h^3)
\end{equation}
and for $N=2$ Simpson's rule
\begin{equation}
  \label{eq:5.64}
  \int_{x_0}^{x_2}{f(x)\mathrm{d}x}=h\left[\frac{1}{3}f_0+\frac{4}{3}f_1
  +\frac{1}{3}f_2\right] +O(h^5)
\end{equation}
Note that, luckily, Simpson's rule is also exact for $f(x)=x^3$, as can easily be
verified (by symmetry about the midpoint). So its error term scales with $h^5$,
instead of $h^4$.

As this trick goes to higher orders $N$, complicated fractions arise in the
$A_i$'s, making the formulas unwieldy. Along with them come negative values of
$A_i$ (starting at $N=8$), which make the integration unstable (errors in the
$f_i$ can be inflated by the coefficients). Also, there is no guarantee that a
higher order of the error brings higher accuracy. So usually only formulas with
$N\leqslant4$ are used in practice.

So, to acquire high accuracy, one has to use a small $h$ together with a moderately
high-order integration formula. The number of intervals in real numerical problems
is of the order of 100, so it is common practice to apply the trapezoidal rule or
Simpson's rule repeatedly over successive, nonoverlapping intervals, i.e.
$(x_0,x_1),(x_1,x_2),\cdots,(x_{N-1},x_N)$, to get the \emph{composite} or
\emph{extended} integration rule.
One can easily derive the extended trapezoidal rule
\begin{equation}
  \label{eq:5.65}
  \begin{aligned}
    \int_{x_0}^{x_N}{f(x)\mathrm{d}x}&=h\left[\frac{1}{2}f_0+f_1+\cdots
    +f_{N-1}+\frac{1}{2}f_N\right]+O\left(\frac{(x_N-x_0)^3}{N^2}\right) \\
    &\equiv T_N+O\left(\frac{(x_N-x_0)^3}{N^2}\right)
  \end{aligned}
\end{equation}
where the $N^2$ in the denominator results from summing $N$ single-interval error
terms, each of which is $O(h^3)=O\left((x_N-x_0)^3/N^3\right)$.
Likewise, we also have the extended Simpson's rule
\begin{equation}
  \label{eq:5.66}
  \begin{aligned}
    \int_{x_0}^{x_N}{f(x)\mathrm{d}x}&=h\Big[\frac{1}{3}f_0+ \\
    &\frac{4}{3}f_1+\frac{2}{3}f_2+\frac{4}{3}f_3+\frac{2}{3}f_4+\cdots
    +\frac{4}{3}f_{N-3}+\frac{2}{3}f_{N-2}+ \\
    &\frac{4}{3}f_{N-1} +\frac{1}{3}f_N\Big]+O\left(\frac{(x_N-x_0)^5}{N^4}\right) \\
    &\equiv S_N+O\left(\frac{(x_N-x_0)^5}{N^4}\right)
  \end{aligned}
\end{equation}
where the coefficients of terms from $f_2$ to $f_{N-2}$ alternate $\dfrac{4}{3}$
and $\dfrac{2}{3}$. It is interesting to find that
\begin{equation}
  \label{eq:5.67}
  S_{2N}=\frac{4}{3}T_{2N}-\frac{1}{3}T_N
\end{equation}
as can easily be verified by substituting Eqs.~\ref{eq:5.65} and \ref{eq:5.66}.
Eq.~\ref{eq:5.67} shows that the leading $O(h^2)$ error of the trapezoidal rule is
canceled, promoting the error of the combination from $O(h^2)$ to $O(h^4)$.

To eliminate error terms of higher orders, we need the relevant information. A fact
about the extended trapezoidal rule is that its error only contains even powers
of the stepsize $h$, which follows directly from the \emph{Euler-Maclaurin Summation
Formula}

\begin{equation}
  \label{eq:em}
  \begin{aligned}
    \int_{x_0}^{x_N}{f(x)\mathrm{d}x}&=h\left[\frac{1}{2}f_0+f_1+f_2+\cdots+f_{N-1}+
    \frac{1}{2}f_{N}\right] \\
    &-\sum_{k=1}^{n-1}{\frac{h^{2k}}{(2k)!}B_{2k}\left[f^{(2k-1)}_N-f^{(2k-1)}_0\right]}
    -\frac{h^{2n+1}}{(2n)!}B_{2n}\sum_{k=0}^{N-1}{f^{(2n)}(a+kh+\theta h)} \\
    &=T_N-\sum_{k=1}^{\infty}{\frac{h^{2k}}{(2k)!}B_{2k}\left[f^{(2k-1)}_N-f^{(2k-1)}_0\right]}
  \end{aligned}
\end{equation}
with $0<\theta<1$, $x_0=a$ and $x_N=b$. $B_n$ is a Bernoulli number, defined by
\begin{equation}
  \label{eq:bernoulli}
  \frac{t}{e^t-1}=\sum_{n=0}^{\infty}{B_n\frac{t^n}{n!}}
\end{equation}
So we can tell from Eq.~\ref{eq:em} that $T_N$ is a function of $h^2$ alone,
$T_N=T_N(h^2)$, and of course the limit $h\to 0$ gives the exact integral. With
$T_N(h^2)$ evaluated at several values of $h$, it can be extrapolated to $h=0$;
polynomial extrapolation is used most frequently. This is the so-called Romberg
integration. It is easy to tell that the method is exact for $f(x)$ being a
polynomial of degree no more than $2n-1$; otherwise the error is $\sim O(h^{2n+1})$.
Usually $n\leqslant 4$ is recommended, which brings the error down to $O(h^9)$.
This is more than enough for most applications; pushing further is much more
time-consuming, and roundoff error may start to accumulate and spoil the final
result. As \emph{Numerical Recipes in C} (p.~138) puts it:
``it is achievable when the convergence is moderately rapid, but not otherwise.''

\subsection{About Lambda Expression}

Finally we want to say something about the usage of lambda expressions in numerical
integration, especially nested ones. Lambda expressions are one of the handiest
ways to feed a function to an integrator.

Since lambda expressions are not intrinsic types in C++, they are best passed as
parameters by reference. It is good practice to give names to the inherently
anonymous lambdas, so as to render them conveniently portable.

Well, since we have gone this far, one is tempted to declare the lambdas as
\texttt{static} objects, since their definitions won't change over the running time.
Here is where an elusive problem arises. A \texttt{static} lambda capturing by
value copies the parameters of its enclosing function only once, upon its first
initialization. Thereafter the lambda keeps returning values based on that first
snapshot, no matter how the enclosing function's parameters change, e.g. the
parameters that carry the intermediate results of a nested integral.

The way out? Remove the \texttt{static} attribute of the lambdas. Then everything
turns out fine.
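
The pitfall can be reproduced in a few lines (a minimal sketch; the function
names are purely illustrative, with the lambda standing for an inner integrand
that depends on a parameter of its enclosing function):

```cpp
// BUG: a static lambda is initialized only once, on the first call,
// so its by-value capture of z is frozen at the first value of z.
double inner_value_static(double z) {
    static auto f = [z](double x) { return z * x; };
    return f(2.0); // keeps using the first z forever
}

// Correct: a non-static lambda re-captures z on every call.
double inner_value(double z) {
    auto f = [z](double x) { return z * x; };
    return f(2.0);
}

// inner_value_static(1.0) == 2, but inner_value_static(5.0) == 2 as well,
// whereas inner_value(5.0) == 10 as expected.
```

This is simply the C++ rule that a function-local \texttt{static} object is
initialized on the first pass through its declaration and never again.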

Another important thing: be careful with nested lambda expressions; they can
be very dangerous. For reasons we have not tracked down, nested lambdas exhibited
strange behaviors in our multiple integrations and gave very wrong outcomes. The
common remedy is to pass intermediate results through arrays. It is all right to
use lambdas in numerical integration, but avoid nesting them.

\section{Polynomial Interpolation}
Interpolation is widely used to calculate a function $f(x)$ at an arbitrary point
$x$ while we only have function values at $N$ points $y_1=f(x_1), y_2=f(x_2),\cdots,
y_N=f(x_N)$. For convenience, let's make $x_1<x_2<\cdots<x_N$.
Polynomial interpolation is one of the most popular interpolation methods. And the
math within it is also straightforward and easy to understand. The interpolating
polynomial of degree $N-1$ through the $N$ points above is given explicitly by
Lagrange's classical formula
\begin{equation}
  \label{eq:5.68}
  \begin{aligned}
    P(x)&=\frac{(x-x_2)(x-x_3)\cdots(x-x_N)}{(x_1-x_2)(x_1-x_3)\cdots(x_1-x_N)}y_1+
    \frac{(x-x_1)(x-x_3)\cdots(x-x_N)}{(x_2-x_1)(x_2-x_3)\cdots(x_2-x_N)}y_2 \\
    &+\cdots+\frac{(x-x_1)(x-x_2)\cdots(x-x_{N-1})}{(x_N-x_1)(x_N-x_2)
    \cdots(x_N-x_{N-1})}y_N
  \end{aligned}
\end{equation}
Instead of directly coding Lagrange's formula, we here introduce \emph{Neville's
algorithm}. The latter gives mathematically the same result as Lagrange's
classical formula, yet is more elegant to program. The final result is produced
through iterations; each iteration increments the order of the interpolating
polynomial by one. The difference between two successive iterations is calculated
directly by a recurrence formula, and serves as the error estimate upon exiting
the final iteration.

The basic idea of \emph{Neville's algorithm} is as follows. Denote by $P_i$ the
value at $x$ of the zeroth-order interpolating polynomial through $(x_i,y_i)$,
i.e. $P_i=y_i$. Likewise, let $P_{ij}$ ($i<j$) be the value at $x$ of the polynomial
through $(x_i,y_i)$ and $(x_j,y_j)$, so we have $P_{12},P_{23},\cdots,P_{(N-1)N}$,
and so on all the way up to $P_{123\cdots N}$, which is the final result we want.

First we have this nice recurrence
\begin{equation}
  \label{eq:5.69}
  P_{i(i+1)\cdots(i+m)}=
  \frac{(x-x_{i+m})P_{i(i+1)\cdots(i+m-1)}+(x_i-x)P_{(i+1)(i+2)\cdots(i+m)}}
  {x_i-x_{i+m}}
\end{equation}
This equation holds at $x=x_{i+1},\cdots,x_{i+m-1}$, for which $P_{i(i+1)\cdots(i+m-1)}
=P_{(i+1)(i+2)\cdots(i+m)}=y_{i+1},\cdots,y_{i+m-1}=P_{i(i+1)\cdots(i+m)}$, and
one can easily verify the recurrence at $x_i$ and $x_{i+m}$ as well.
Define the corrections that increment the interpolation order by one
\begin{equation}
  \label{eq:5.70}
  \begin{aligned}
    C_{m,i}&\equiv P_{i\cdots(i+m)}-P_{i\cdots(i+m-1)} \\
    D_{m,i}&\equiv P_{i\cdots(i+m)}-P_{(i+1)\cdots(i+m)}
  \end{aligned}
\end{equation}
Inserting Eq.~\ref{eq:5.69}, the following recurrence for the iterative corrections
is easily derived
\begin{equation}
  \label{eq:5.71}
  \begin{aligned}
    C_{m+1,i}&=\frac{(x_i-x)(C_{m,i+1}-D_{m,i})}{x_i-x_{i+m+1}} \\
    D_{m+1,i}&=\frac{(x_{i+m+1}-x)(C_{m,i+1}-D_{m,i})}{x_i-x_{i+m+1}}
  \end{aligned}
\end{equation}
Now we have almost all the ingredients to kickstart the program. The remaining
question is: how do we get the first of the C's and D's to initiate the recurrence?
Well, isn't it great if we just set $C_{0,i}=D_{0,i}=P_i$? Then the zeroth-order
corrections become the interpolation values themselves. This is equivalent to
defining $P_{i\cdots(i+m)}\equiv 0$ for all $m<0$. From this follows
\begin{equation}
  \label{eq:5.72}
  \begin{aligned}
    C_{1,i}&=\frac{(x_i-x)(C_{0,i+1}-D_{0,i})}{x_i-x_{i+1}}
    =\frac{(x_i-x)(P_{i+1}-P_i)}{x_i-x_{i+1}} \\
    D_{1,i}&=\frac{(x_{i+1}-x)(C_{0,i+1}-D_{0,i})}{x_i-x_{i+1}}
    =\frac{(x_{i+1}-x)(P_{i+1}-P_i)}{x_i-x_{i+1}}
  \end{aligned}
\end{equation}
With Eq.~\ref{eq:5.70}, we reproduce Eq.~\ref{eq:5.69}
\begin{equation}
  \label{eq:5.73}
  \begin{aligned}
    P_i+C_{1,i}=\frac{(x_i-x)P_{i+1}+(x-x_{i+1})P_i}{x_i-x_{i+1}}=P_{i,i+1} \\
    P_{i+1}+D_{1,i}=\frac{(x_i-x)P_{i+1}+(x-x_{i+1})P_i}{x_i-x_{i+1}}=P_{i,i+1}
  \end{aligned}
\end{equation}
i.e. recurrence Eq.~\ref{eq:5.71} is satisfied for $m=1$, and thus also works for
the $m$ that follow.
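
The whole scheme fits in a short routine. The sketch below follows the C, D
bookkeeping of Eqs.~\ref{eq:5.70} and \ref{eq:5.71} (essentially NR's
\texttt{polint} recast in our own C++ style; the name \texttt{neville} is ours):

```cpp
#include <cmath>
#include <vector>

// Neville's algorithm: returns P_{1...N}(x); dy receives the last applied
// correction, which serves as an error estimate.
double neville(const std::vector<double> &xa, const std::vector<double> &ya,
               double x, double &dy) {
    int n = xa.size();
    std::vector<double> C(ya), D(ya); // C_{0,i} = D_{0,i} = P_i = y_i
    int ns = 0;                       // index of the grid point closest to x
    for (int i = 1; i < n; ++i)
        if (std::fabs(x - xa[i]) < std::fabs(x - xa[ns])) ns = i;
    double y = ya[ns--];              // zeroth-order estimate
    for (int m = 1; m < n; ++m) {
        for (int i = 0; i < n - m; ++i) {
            double ho = xa[i] - x, hp = xa[i + m] - x;
            double w = C[i + 1] - D[i];
            double den = ho - hp;     // = x_i - x_{i+m}
            C[i] = ho * w / den;      // Eq. 5.71
            D[i] = hp * w / den;
        }
        // Walk through the tableau, adding one correction per order.
        y += (dy = (2 * (ns + 1) < n - m ? C[ns + 1] : D[ns--]));
    }
    return y;
}
```

Since the interpolating polynomial through $N$ points has degree $N-1$, feeding
four points of a cubic reproduces the cubic exactly.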


\section{LSM Fit}

LSM fit is an abbreviation for the ``Least Squares Method'' fit. It is one of the
methods to fit $N$ data points $(x_i, y_i), i=1,\cdots,N$ to a model $y(x;a_1\cdots a_M)$
that has $M$ adjustable parameters $\bm a=\{a_j\}, j=1,\cdots,M$. The method originates
from maximum likelihood estimation for data with normally distributed errors. The
``squares'' to be minimized are the squared discrepancies between the model
predictions and the measurements, summed after weighting by $\dfrac{1}{\sigma_i}$,
where $\sigma_i$ is the standard deviation of $y_i$, i.e.
\begin{equation}
  \label{eq:5.74}
  \chi^2\equiv\sum_{i=1}^N\left[\frac{y_i-y(x_i;\bm a)}{\sigma_i}\right]^2
\end{equation}

We arrange this section because the $S$-matrix is fitted by an expansion of Gaussians,
and it is hateful to wrap something in a black box. Let's take a look at what is inside.
Obviously the closer the model fits the data, the smaller $\chi^2$ is. The minimum
of Eq.~\ref{eq:5.74} occurs where the derivative of $\chi^2$ with respect to all
$M$ parameters $a_k$ vanishes
\begin{equation}
  \label{eq:5.75}
  0=\sum_{i=1}^N{\frac{y_i-y(x_i;\bm a)}{\sigma_i^2}
  \frac{\partial y(x_i;\bm a)}{\partial a_k}}
\end{equation}
where $k=1,\cdots,M$. The $M$ parameters $\bm a$ are solved from the above set of $M$
(usually nonlinear) equations. For simplicity we will use $y(x)$ in $y(x;\bm a)$'s
stead in the following text.

In most cases the model's dependence on the parameters is linear
\begin{equation}
  \label{eq:5.76}
  y(x)=\sum_{j=1}^{M}{a_j X_j(x)}
\end{equation}
where $X_1(x),\cdots,X_M(x)$ are known functions of $x$, e.g. polynomials,
Gaussians or harmonics. Then Eqs.~\ref{eq:5.74} and \ref{eq:5.75} are cast into
\begin{equation}
  \label{eq:5.77}
  \chi^2\equiv\sum_{i=1}^N\left[\frac{y_i-\sum_{j=1}^{M}{a_j X_j(x_i)}}{\sigma_i}\right]^2
\end{equation}
and
\begin{equation}
  \label{eq:5.78}
  0=\sum_{i=1}^N{\frac{y_i-\sum_{j=1}^{M}{a_j X_j(x_i)}}{\sigma_i^2}X_k(x_i)}
\end{equation}
Eqs.~\ref{eq:5.75} and \ref{eq:5.78} are called \emph{normal equations}. To better
express Eq.~\ref{eq:5.78}, we need some notations. First let's define $N\times M$
matrix
\begin{equation}
  \label{eq:5.79}
  A_{ij}=\frac{X_j(x_i)}{\sigma_i}
\end{equation}
and vector
\begin{equation}
  \label{eq:5.80}
  b_i=\frac{y_i}{\sigma_i}
\end{equation}
then Eq.~\ref{eq:5.78}
\begin{equation}
  \label{eq:5.81}
  \sum_{j=1}^{M}\sum_{i=1}^{N}{\frac{X_k(x_i)X_j(x_i)}{\sigma_i^2}}\,a_j
  =\sum_{i=1}^{N}{\frac{X_k(x_i)}{\sigma_i}\frac{y_i}{\sigma_i}}
\end{equation}
is transformed to a compact form
\begin{equation}
  \label{eq:5.82}
  A^{\rm T}A\bm a=A^{\rm T}\bm b
\end{equation}
Note that since the number of data points $N$ is larger than the number of parameters
$M$, the matrix $A$ is not square (it has more rows than columns), and hence cannot
be inverted directly. Now we can give the LSM estimate of $\bm a$
\begin{equation}
  \label{eq:5.83}
  \hat{\bm{a}}=(A^{\rm T}A)^{-1}A^{\rm T}\bm b
\end{equation}
With the matrix of partial derivatives of the estimates with respect to the data
\begin{equation}
  \label{eq:5.84}
  S_{ij}=\frac{\partial\hat{a}_i}{\partial b_j}=[(A^{\rm T}A)^{-1}A^{\rm T}]_{ij}
\end{equation}
the covariance matrix of the estimate $\hat{\bm{a}}$ is calculated according to
error propagation
\begin{equation}
  \label{eq:5.85}
    V(\hat{\bm{a}})=SV(\bm b)S^{\rm T}
    =(A^{\rm T}A)^{-1}A^{\rm T}A(A^{\rm T}A)^{-1}
    =(A^{\rm T}A)^{-1}
\end{equation}
Note that $V(\bm b)=E$, the identity matrix, for $N$ independent measurements
$(x_i,y_i)$. $A^{\rm T}A$ is a symmetric matrix, so its inverse is symmetric as
well. Specifically we have
\begin{equation}
  \label{eq:5.86}
  V(\hat{a}_i) = [(A^{\rm T}A)^{-1}]_{ii}
\end{equation}
with
\begin{equation}
  \label{eq:5.87}
  (A^{\rm T}A)_{ij}=\sum_{k=1}^{N}{\frac{X_i(x_k)X_j(x_k)}{\sigma_k^2}}
\end{equation}
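
To make the above concrete, here is a bare-bones sketch that builds and solves
the normal equations Eq.~\ref{eq:5.82} by Gauss-Jordan elimination (the name
\texttt{lsm\_fit} and the interface are our own; a production code would rather
solve the system by SVD or another well-conditioned method):

```cpp
#include <cmath>
#include <functional>
#include <utility>
#include <vector>

// Linear least squares for y(x) = sum_j a_j X_j(x): assemble A^T A (Eq. 5.87)
// and A^T b, then solve (A^T A) a = A^T b (Eq. 5.82) by Gauss-Jordan
// elimination with partial pivoting on the augmented matrix.
std::vector<double> lsm_fit(const std::vector<double> &x,
                            const std::vector<double> &y,
                            const std::vector<double> &sigma,
                            const std::vector<std::function<double(double)>> &X) {
    int N = x.size(), M = X.size();
    std::vector<std::vector<double>> alpha(M, std::vector<double>(M + 1, 0.0));
    for (int i = 0; i < N; ++i) {
        double w = 1.0 / (sigma[i] * sigma[i]);
        for (int j = 0; j < M; ++j) {
            for (int k = 0; k < M; ++k)
                alpha[j][k] += w * X[j](x[i]) * X[k](x[i]); // (A^T A)_{jk}
            alpha[j][M] += w * X[j](x[i]) * y[i];           // (A^T b)_j
        }
    }
    for (int p = 0; p < M; ++p) {
        int piv = p; // partial pivoting for numerical safety
        for (int r = p + 1; r < M; ++r)
            if (std::fabs(alpha[r][p]) > std::fabs(alpha[piv][p])) piv = r;
        std::swap(alpha[p], alpha[piv]);
        for (int r = 0; r < M; ++r) { // eliminate column p from all other rows
            if (r == p) continue;
            double f = alpha[r][p] / alpha[p][p];
            for (int c = p; c <= M; ++c) alpha[r][c] -= f * alpha[p][c];
        }
    }
    std::vector<double> a(M);
    for (int j = 0; j < M; ++j) a[j] = alpha[j][M] / alpha[j][j];
    return a;
}
```

With the basis $\{1,x\}$ this reduces to the familiar straight-line fit.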


\section{Function Minimization}

We do not intend to include as many topics as the field of numerical analysis
may concern. Function minimization is introduced because it is indeed needed,
though only slightly. However, as long as it enters our code, it deserves proper
attention and decent treatment.

The effective potential in the radial equation (Eq.~\ref{eq:5.40}) is minimized to
provide a first guess to the eigenvalue when solving the radial equation. The potential
is the sum of the Coulomb potential, the spin-orbit potential, the nuclear potential
and the centrifugal potential. As a function of the radius $r$, it is usually smooth,
yet can take various shapes. Since the minimization is conducted only once each
time the wavefunction is solved, and no strict requirements are imposed on the
minimum found (it only serves as a first guess to the eigenenergy), it suffices to
do the job with a simple yet robust \emph{golden section search}. Still, function
minimization is an interesting field in itself, and it may serve future use to prepare
a potent minimization routine in advance. In light of this, we have transcribed
\emph{Brent's method} from NR\footnote{Numerical Recipes in C - The Art of Scientific
Computing, Cambridge U. Press, 2002, 2nd Edition, p.~403\label{NR}}. It is worthwhile
to mention a few general principles of the algorithm here.

The minimizing procedures are straightforward. Suppose we want to minimize $f(x)$
within an interval $[a,c]$. Starting with three points $(a,f(a))$, $(b,f(b))$ and
$(c,f(c))$ with $f(b)$ no larger than $f(a)$ or $f(c)$, a new point $x$ is calculated
at which a minimum occurs for the parabola that goes through the three points
\begin{equation}
  \label{eq:5.88}
  x=b-\frac{1}{2}
  \frac{(b-a)^2[f(b)-f(c)]-(b-c)^2[f(b)-f(a)]}{(b-a)[f(b)-f(c)]-(b-c)[f(b)-f(a)]}
\end{equation}
as you can easily derive.
This \emph{inverse parabolic interpolation} (will be referred to as ``parabolic''
for convenience in this section) converges much faster than golden section (referred
to as ``golden'') near a function's minimum, whose shape usually resembles a parabola.

More often than not, $f(x)$ can be very uncooperative, especially at the start
of the search, where golden section beats parabolic interpolation. Then we turn
to golden section. Brent gives the criterion for selecting between golden and
parabolic. The general principle is: whenever parabolic fails, we have no choice
but to turn to golden. Inverse parabolic interpolation fails when (i) Eq.~\ref{eq:5.88}
points outside of $[a,c]$, or (ii) the step suggested by parabolic is not smaller
than half of the step of the iteration \emph{before last}, which hints that the
iteration may not be converging to anything. The original reference NR$^{\ref{NR}}$
believes that it is not good practice to ``punish'' an algorithm for a single
bad step if it can make it up on the next one.

Well then, here is something we want to take note of. Fast and effective as
Brent may be, we should be well aware that it is meant for unconstrained
optimization. Besides, it strangely requires an additional point $b$ apart from
the interval borders $a$ and $c$, such that $b\in[a,c]$, $f(b)<f(a)$ and $f(b)<f(c)$.
As far as our problem here is concerned, both of these traits are nontrivial.
First, there is a natural constraint on the independent variable, $r\geqslant 0$.
Secondly, there simply does not exist such a $b$ for, e.g., a pure Woods-Saxon
potential, which is monotonic throughout the $r$ domain.

Fortunately, it does not take long after reading Brent's code to see that Brent
does take $[a,c]$ as a border that cannot be crossed. As has been pointed out
previously, if a parabolic step points outside the interval, golden enters in its
stead, and we know that golden only digs in between, and never explores outside.
So the first concern is a bluff. As for the second, it turns out that the middle
point $b$ is only used to give an emotional sureness to the programmer that there
indeed exists a minimum within $[a,c]$, and for the first contraction of the
bracketing interval. We can just ignore it by setting $b$ to any point inside $[a,c]$.
Yes, $b=a$ is a pretty good choice!

\section{Extraction of the Experimental Core Momentum Distribution}

It is widely known that a theoretical distribution is easily transformed into an
experimental one just by convolving it with the experimental resolutions. Yet
the practical implementation of the procedure is more complex than it may appear.
The first question is which resolutions need to enter the convolution of the
theoretical momentum distribution. The second is the technique of the convolution,
which involves the generation of random numbers obeying an arbitrarily given
distribution. We shall discuss these two topics in the text to follow.

We shall start with the last equation of Eq.~\ref{eq:4.6} in Sec.~\ref{sec:momdis},
which is copied below

\begin{equation}
  \label{eq:5.89}
  k_{\rm c,z}=\frac{m_{\rm c}}{m_{\rm P}}k_{\rm P}-\gamma k_{\rm v,z}^{\rm c.m.}
\end{equation}

We know that intermediate-energy primary beams accelerated by synchrotrons are
usually highly monochromatic, meaning that their momentum straggling is rather
small. This ceases to be the case for secondary beams, which are the products of
the fragmentation of primary beams on a primary target.

The resulting momentum width of a given nuclide species in the secondary cocktail
beam can in no way be ignored. It is calculated by quadratically subtracting the
momentum resolution of the detecting system from the measured momentum width of
the secondary beam of interest. The former is taken as the measured momentum
width of the primary beam, since the primary beam has trivial momentum straggling
and only receives momentum widths from the detecting system. That's it.

Usually the $B\rho$-$\Delta E$-$TOF$ particle identification (PID) technique is
employed in intermediate-energy fixed-target nuclear experiments, where a particle
leaves its $aoz\equiv A/Z$ information by getting bent by a big dipole magnet with
magnetic field intensity $B$ along a curve with curvature radius $\rho$, deposits
energy $\Delta E$ through electromagnetic processes in $Z$ detectors, and flies
through a certain path with time-of-flight $TOF$. If this is the case, one small
thing is worth mentioning: the measured momentum distributions are never used in
the above calculation, since they are unnecessarily contaminated by the resolution
of $\rho$. The trick is that, since the objective reaction product has already
been identified, it is better to utilize the exact $aoz$ value, or the $A$ value,
instead of taking it from experiment. So the momenta are calculated following

\begin{equation}
  \label{eq:5.90}
  p_{\rm c}=\beta\gamma u aoz_{\rm c}Z=\beta\gamma m_{\rm c}
\end{equation}
where $u$ is the atomic mass unit. So in fact the resolution of $\beta\gamma$
($\sigma(\beta\gamma)$) is the handier quantity to use.

Similarly, $\sigma(\beta\gamma)$ for the reaction products is extracted using
primary beams in target-out runs. There is no need to expand on this.

Then let's go to the second topic, i.e. how to generate random numbers obeying an
arbitrarily given distribution $f(x)$, specifically, one given as a tabulated
function. For distribution functions with good analytic properties, the random
numbers $x$ are easily generated from a uniformly distributed $\xi$ just through
a function $x=F^{-1}(\xi)$, where $F^{-1}$ is the inverse of the cumulative
distribution function of $x$, $F(x)=\int_{x_{\rm min}}^{x}{f(t)\mathrm{d}t}$.
This is properly dubbed the \emph{direct sampling} method. A more general and
more-often-encountered-in-practice case is a distribution function $f(x)$ given by
a tabulated function, where direct sampling is of little use since we do not even
have a formula for $f(x)$.

A very simple sampling method is ready for this scenario, plainly called the
\emph{hit-and-miss} method. Different from the direct sampling method, as the
name indicates, not every sample hits: some miss, and are dropped. For a uniformly
generated random number $x$, a uniformly distributed random number $y$ in the
range $(0, f(x)_{\rm max})$ is also generated, to represent the acceptability of
this $x$: $x$ misses if $y\geqslant f(x)$, otherwise it is accepted, so that the
probability of accepting $x$ is proportional to $f(x)$. It can be proved that $x$
thus generated has exactly the probability distribution $f(x)$. For an $x$ off
the grid, $f(x)$ is obtained by interpolation.
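
The procedure can be sketched as follows (our own illustrative implementation:
\texttt{fmax} must be an upper bound of the tabulated values, and linear
interpolation is used off the grid):

```cpp
#include <algorithm>
#include <random>
#include <vector>

// Hit-and-miss sampling from a density f tabulated as (xs[i], fs[i]).
// A candidate x is drawn uniformly in [xs.front(), xs.back()] together with a
// uniform y in (0, fmax); x is accepted only if y falls below f(x).
double hit_and_miss(const std::vector<double> &xs, const std::vector<double> &fs,
                    double fmax, std::mt19937 &rng) {
    std::uniform_real_distribution<double> ux(xs.front(), xs.back());
    std::uniform_real_distribution<double> uy(0.0, fmax);
    while (true) {
        double x = ux(rng), y = uy(rng);
        // locate the grid interval containing x and interpolate linearly
        int i = std::upper_bound(xs.begin(), xs.end(), x) - xs.begin() - 1;
        if (i >= (int)xs.size() - 1) i = xs.size() - 2;
        double t = (x - xs[i]) / (xs[i + 1] - xs[i]);
        double fx = (1 - t) * fs[i] + t * fs[i + 1];
        if (y < fx) return x; // hit: accept; otherwise miss and retry
    }
}
```

The efficiency is the ratio of the area under $f$ to the bounding rectangle, so a
tight \texttt{fmax} keeps the miss rate low.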







\end{document}
