\documentclass[11pt,a4paper]{book}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage{siunitx}
\usepackage[version=4]{mhchem}
\usepackage{amssymb}
\usepackage{authblk}
\usepackage{bm}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage[colorlinks=true,linkcolor=blue,
anchorcolor=blue,citecolor=blue]{hyperref}
\usepackage{yhmath}
\usepackage{braket}
\usepackage{mathrsfs}
\usepackage{booktabs}


\title{Full Configuration Interaction}
\author{Yazhou Sun}
\affil{Anyang Normal University}
\affil{Institute of Modern Physics}

\begin{document}
\maketitle
\tableofcontents

\chapter{Introduction}
Theoretical models tend to simplify physical problems, and in most cases the
simplifications grasp the essence of the problem: the practitioners get what
they want. We are good at thinking about one thing at a time, e.g., one particle
in an external potential, which can always be solved analytically or numerically
to a satisfying precision. Since a two-body system can always be factorized into
a center-of-mass part and a relative-motion part, it also falls into the category
of one-body problems. Things become intractable, however, once one deals with a
three-body system, let alone systems with more degrees of freedom. One of the
most typical examples is the quantum many-body system in a nucleus.

Thanks to second quantization, the wavefunction and the dynamics of the relative
motion of the constituent nucleons can be formulated in an elegant way, and the
solution of the problem boils down to a purely mathematical one, i.e., the
diagonalization of large sparse matrices. This is the most enticing and beautiful
part, that we can quickly turn physics into mathematics, because the governing
physical laws are succinct and thus natural, which lends the result more
reliability.

This is a note on the theory and the implementation of full configuration
interaction (FCI) in the shell model, and on the diagonalization algorithm.
The work was started when COVID-19 broke out at the beginning of 2020.
It was a sunny winter morning, and I was coding for FCI, so the project got its
name {\scriptsize SUNNY}, which has been used for both the C++ library and the
coding project. The name is to be kept, as it is reminiscent of the very moment
this project started. And one should not easily forget when and why one started
something, even after a long period of time, for it takes time for things to
evolve, especially the great ones that give life meaning. Or one just lives
like a consuming machine. That's it.

We are about to present the basic theory of the modern shell model, including the
construction of the Hamiltonian matrix in a truncated model space of many-body Slater
determinants, and its diagonalization. The main reference is the lecture notes
for the Nuclear Talent course \emph{Many-body Methods for Nuclear Physics, from
Structure to Reactions}, held at Henan Normal University, P.R. China, July
16--August 5, 2018, hosted on GitHub at
\url{https://github.com/NuclearTalent/ManyBody2018}.

Readers are assumed to have a basic knowledge of linear algebra and quantum mechanics.

\chapter{Second Quantization}
Second quantization is a very useful and succinct tool to handle many-body
quantum-mechanical problems. It wraps the Pauli exclusion principle and the
antisymmetrization of many-body Slater determinants (MBSDs) in the anti-commutation
relation of the creation and annihilation operators, which greatly facilitates
the evaluation of operators on state vectors or the expectation value of an operator.

\section{Many-body Problems}

Let's first define the physical problem here. We want to get the wavefunction of
a system of fermions by solving the Schr\"odinger equation
\begin{equation}
  \label{eq:2.1}
  \mathrm{i}\hbar\frac{\partial{\Psi}}{\partial{t}}=H\Psi.
\end{equation}
As we are dealing with a many-body system with a time-independent Hamiltonian,
the time variable can be separated from the Schr\"odinger equation to reduce it
to a stationary Schr\"odinger equation, which essentially is the eigenvalue equation
of the Hamiltonian
\begin{equation}
  \label{eq:2.2}
  H\Psi=E\Psi.
\end{equation}
The solution of Eq.~\ref{eq:2.2} is not trivial. Usually, the many-body Hamiltonian
can be written as the sum of a one-body part $H_0$ and a two-body (interacting)
part $H_I$
\begin{equation}
  \label{eq:2.3}
  H=H_0+H_I=\sum_{i=1}^{A}h_0(x_i)+\sum_{i<j}^{A}v(r_{ij}),
\end{equation}
where $A$ is the number of particles in the many-body system, typically the mass
number of a nucleus.

In order not to overcomplicate the problem at the beginning, we first ignore
the two-body part $H_I$, and write the eigenvalue equation of $H_0$ as
\begin{equation}
  \label{eq:2.4}
  H_0\Phi=E\Phi.
\end{equation}
The many-body wavefunction is antisymmetric under the
permutation of two fermions, and obeys the Pauli exclusion principle, which
forbids two particles from occupying the same state. $\Phi$ can then be
antisymmetrized as a Slater determinant
\begin{equation}
  \label{eq:2.5}
  \Phi({x_1,x_2,\cdots,x_A})=\frac{1}{\sqrt{A!}}
  \begin{vmatrix}
    \psi_\alpha({x_1}) & \psi_\alpha({x_2}) & \cdots & \psi_\alpha({x_A}) \\
    \psi_\beta({x_1})  & \psi_\beta({x_2})  & \cdots & \psi_\beta({x_A})  \\
    \vdots             & \vdots             & \ddots & \vdots             \\
    \psi_\sigma({x_1}) & \psi_\sigma({x_2}) & \cdots & \psi_\sigma({x_A}) \\
  \end{vmatrix}.
\end{equation}
$x_i$ represents the spatial coordinates of particle $i$, and the Greek letters
$\alpha,\beta,\cdots,\sigma$ indicate the quantum numbers of the single-particle
(sp) states. It is easily seen that the permutation of particles $i$ and $j$ is
equivalent to swapping columns $i$ and $j$, which produces a minus sign in front
of the determinant; in this way antisymmetrization is achieved. With antisymmetry
comes the Pauli exclusion principle, since a determinant vanishes if two rows are
the same, which corresponds to two particles occupying the same state.
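These two properties are easy to verify numerically. The following Python sketch
(our own toy illustration, with polynomial ``orbitals'' $\psi_k(x)=x^k$ chosen only
so that the determinant comes out as an integer) evaluates a small unnormalized
Slater determinant, flips its sign under a particle exchange, and shows that a
doubly used orbital makes it vanish.

```python
from itertools import permutations

def det(m):
    """Determinant via the Leibniz permutation expansion (fine for tiny matrices)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        # count inversions to get the parity of the permutation
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for row in range(n):
            prod *= m[row][perm[row]]
        total += sign * prod
    return total

def slater(orbitals, coords):
    """Unnormalized Slater determinant: row = orbital, column = particle."""
    return det([[phi(x) for x in coords] for phi in orbitals])

orbitals = [lambda x, k=k: x ** k for k in range(3)]  # toy psi_k(x) = x^k
print(slater(orbitals, (1, 2, 3)))                    # → 2 (a Vandermonde determinant)
print(slater(orbitals, (2, 1, 3)))                    # → -2: exchanging two particles flips the sign
print(slater([orbitals[0], orbitals[1], orbitals[1]], (1, 2, 3)))  # → 0: Pauli exclusion
```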

The situation immediately becomes hopeless when inter-particle correlation is
introduced, i.e., after we add back the two-body part $H_I$, as the coordinates
of all the particles become intertwined in Eq.~\ref{eq:2.2}. It is then in
general impossible to split the Schr\"odinger equation into a set of independent
ordinary differential equations (ODEs) by separation of variables whenever the
number of particles in the system exceeds two.

Note that from the point of view of mathematics, besides solving partial differential
equations directly, eigenvalue problems can also be tackled using linear algebra,
i.e., via matrix diagonalization. Differential equations are the energy eigenvalue
equation in coordinate representation. The base vectors are the eigenfunctions of
the coordinate operator $x$
\begin{equation}
  \label{eq:2.6}
  x\delta(x-x^\prime)=x^\prime\delta(x-x^\prime)
\end{equation}
where the eigenvalue $x^\prime$ is continuous. To be tractable by linear algebra,
we need the energy matrix (Hamiltonian matrix) to be discrete and finite. Recall
that if $H_I$ is treated as a perturbation, $\Phi$ is the zeroth-order
approximation of the exact energy eigenfunction $\Psi$. This makes $\{\Phi\}$
a very good basis with which to span a space for the representation of the energy
matrix: the solved $i$-th eigenvector $\Psi_i$ has its dominant component along
$\Phi_i$ and only minor corrections along the other basis vectors. In this sense
we want $H_I$ to be as small as possible, which requires that $h_0$ in $H_0$
include as much of the effective mean field as it can. Common choices for $h_0$
are the harmonic oscillator (HO) potential, the Woods-Saxon (WS) potential, and
the effective mean field generated by the self-consistent Hartree-Fock (HF)
method. Although the latter two fit reality better, the HO potential is still
vastly used, as its eigenfunctions are analytical and thoroughly studied, with
many good properties that can easily be found in undergraduate quantum mechanics
textbooks. Modern shell model codes usually prefer bases from HF calculations,
as they currently provide the best starting point.

Theory tells us that the occupancies of the sp states near the Fermi level
(defined with $H_I$ ignored) are smeared: the occupancy drops below 1 under and
near the Fermi level, reaches 1/2 at it, and then falls quickly and monotonically
to 0 as the sp states go up. This justifies the practice of truncating the sp
states, as the higher the sp state, the less likely it is to be occupied. The
truncation leaves a finite set of many-body Slater determinants $\{\Phi\}$; it
is also common to directly cut off high-energy $\{\Phi\}$. The Hamiltonian matrix
is then discrete and finite, with a dimensionality small enough to be feasibly
diagonalized by modern computing power. Our tasks are now straightforward and
clear:
\begin{enumerate}
  \item Choose the sp state wavefunction, which is the eigenfunction of $h_0$, and
  truncate them to total number of, say, $N$;
  \item Distribute the $n(\leq A)$ valence particles to the $N$ sp states to build the
  many-body basis;
  \item Calculate the Hamiltonian matrix elements;
  \item Diagonalize the Hamiltonian.
\end{enumerate}
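Step 2 above is straightforward to sketch in code. Assuming (as a bookkeeping
convention of this note, not necessarily that of the {\scriptsize SUNNY} library)
that each MBSD is stored as a bitmask with bit $i$ set when sp state $i$ is
occupied, the many-body basis is generated as:

```python
from itertools import combinations

def build_basis(n_particles, n_states):
    """All ways to put n identical fermions in N sp states; each MBSD is
    stored as a bitmask (bit i set = sp state i occupied)."""
    basis = []
    for occ in combinations(range(n_states), n_particles):
        mask = 0
        for i in occ:
            mask |= 1 << i
        basis.append(mask)
    return basis

basis = build_basis(2, 4)            # 2 particles in 4 sp states
print(len(basis))                    # → 6, i.e. C(4,2)
print([format(m, "04b") for m in basis])
```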

The most challenging are steps 3 and 4. Usually the two-body matrix elements
(TBMEs) $\Braket{\alpha\beta|\hat v|\gamma\delta}$ are known a priori, mostly
from fits to experimental data: even once the sp states have been specified,
the two-body interaction $v(r_{ij})$ still carries so many degrees of freedom
that resorting to experiment is justified. In contrast, the one-body matrix
elements (OBMEs) are read directly from the sp state energies, as the sp states
are actually the eigenstates of $h_0$. So only the sp state energies
$\epsilon_\alpha$ are needed as
\begin{equation}
  \label{eq:2.7}
  \Braket{\alpha|h_0|\beta}=\epsilon_\alpha\Braket{\alpha|\beta}
  =\epsilon_\alpha\delta_{\alpha\beta}.
\end{equation}

As for step 3, an expression for $H$ in terms of OBMEs and TBMEs is needed to
evaluate the matrix element $H_{ij}\equiv\Braket{\Phi_i|H|\Phi_j}$. Note that
both sides are Slater determinants of sp states. It is easy to see that
$\Ket{\Phi_i}$ and $\Ket{\Phi_j}$ can differ by at most two sp states, or
$H_{ij}$ vanishes, due to the orthogonality of the sp states. Of course we could
directly expand the determinants and calculate the integrals one by one, but
that would be rather laborious, with roughly $A^3$ integrals for each matrix
element of $H$. By encoding antisymmetry in the anti-commutation relations of
the fermionic annihilation and creation operators, second quantization greatly
facilitates the calculation of $H_{ij}$ through contractions of operators.
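When MBSDs are stored as bitmasks, the selection rule just stated (determinants
differing by more than two sp states give a vanishing $H_{ij}$) is cheap to
check; a minimal sketch, with the bitmask convention assumed:

```python
def n_differ(mask_i, mask_j):
    """Number of sp states by which two MBSDs (same particle number) differ."""
    return bin(mask_i ^ mask_j).count("1") // 2

# H_ij can be nonzero only when the two determinants differ
# by at most two sp states
print(n_differ(0b0011, 0b0011))      # → 0 (diagonal element)
print(n_differ(0b0011, 0b0101))      # → 1 (one sp state replaced)
print(n_differ(0b0011, 0b1100))      # → 2 (two sp states replaced)
print(n_differ(0b000111, 0b111000))  # → 3, so this H_ij vanishes
```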

\section{Second Quantization}

We shall not dwell on the terminology of ``second quantization''. The term was
first coined to refer to the quantization of physical fields in quantum field
theory. As far as the problem here is concerned, second quantization means
expressing the Hamiltonian in terms of fermionic annihilation and creation
operators, so as to tackle many-body problems. We now explain why this is
possible and how it helps.

\subsection{The Creation Operator}

When a particle is in quantum state $\alpha$, usually it is written as $\Ket{\alpha}$.
Well in second quantization we put it another way, that a particle is created in
quantum state $\alpha$
\begin{equation}
  \label{eq:2.8}
  a_\alpha^\dagger\Ket{0}\equiv\Ket{\alpha}
\end{equation}
where $\Ket{0}$ is the vacuum. This rephrasing turns the particle from the
subject into the object, and indeed grasps the essence of the principle that
identical particles are indistinguishable. An MBSD is likewise represented with
creation operators
\begin{equation}
  \label{eq:2.9}
  \Ket{\alpha_1\alpha_2\cdots\alpha_n}\equiv a^\dagger_{\alpha_1}
  a^\dagger_{\alpha_2}\cdots a^\dagger_{\alpha_n}\Ket{0}.
\end{equation}
As the permutation of any two particles produces a minus sign, we have
\begin{equation}
  \label{eq:2.10}
  \Ket{\alpha_1\cdots\alpha_i\cdots\alpha_j\cdots\alpha_n}=
  -\Ket{\alpha_1\cdots\alpha_j\cdots\alpha_i\cdots\alpha_n}.
\end{equation}
Since this relation applies to any pair of particles, no matter whether they are
adjacent to each other or not, it follows
\begin{equation}
  \label{eq:2.11}
  a^\dagger_{\alpha_i}a^\dagger_{\alpha_j}=-a^\dagger_{\alpha_j}a^\dagger_{\alpha_i}.
\end{equation}
Conversely, Eq.~\ref{eq:2.11} can serve as the basic relation from which
Eq.~\ref{eq:2.10} follows. Supposing that there are $k$ particles between
$\alpha_i$ and $\alpha_j$, it is easy to see that $2k+1$ permutations of
adjacent particles are needed to go from the left-hand side (lhs) to the
right-hand side (rhs) of Eq.~\ref{eq:2.10}, which according to
Eq.~\ref{eq:2.11} contributes a phase $(-1)^{2k+1}=-1,\forall k\in\mathbb{N}$.
So Eq.~\ref{eq:2.11} suffices to ensure the antisymmetry of the many-body
wavefunction defined in Eq.~\ref{eq:2.9}. Moreover, and naturally, from
antisymmetry comes the Pauli exclusion principle, as
\begin{equation}
  \label{eq:2.12}
  a^\dagger_{\alpha_i}a^\dagger_{\alpha_i}=-a^\dagger_{\alpha_i}a^\dagger_{\alpha_i}=0,
\end{equation}
and in particular
\begin{equation}
  \label{eq:2.13}
  a^\dagger_\alpha
  \underbrace{\Ket{\alpha_1\alpha_2\cdots\alpha_n}}_{{\rm contains}\,\alpha}=0.
\end{equation}
Altogether, we have the anti-commutation rule
\begin{equation}
  \label{eq:2.14}
  \{a^\dagger_\alpha,a^\dagger_\beta\}\equiv
  a^\dagger_{\alpha}a^\dagger_{\beta}+a^\dagger_{\beta}a^\dagger_{\alpha}=0,
\end{equation}
which is just equivalent to Eq.~\ref{eq:2.11}.

\subsection{The Annihilation Operator}

We denote
\begin{equation}
  \label{eq:2.15}
  (a^\dagger_\alpha)^\dagger\equiv a_\alpha
\end{equation}
and take the inner product of Eq.~\ref{eq:2.8} with its hermitian conjugate to get
\begin{equation}
  \label{eq:2.16}
  \Braket{\alpha|\alpha}=\Braket{0|a_\alpha a_\alpha^\dagger|0}=
  \Braket{0|a_\alpha|\alpha}=\Braket{0|0}=1
\end{equation}
or
\begin{equation}
  \label{eq:2.17}
  a_\alpha\Ket{\alpha}=\Ket{0},
\end{equation}
hence the name ``annihilation operator'', that $a_\alpha$ annihilates a particle
from sp state $\alpha$.

We do not bother to deduce the anti-commutation rule of the annihilation
operators all over again from the antisymmetry of the many-body wavefunctions;
instead we directly take the hermitian conjugate of Eq.~\ref{eq:2.14} to reach
\begin{equation}
  \label{eq:2.18}
  \{a_\alpha,a_\beta\}=a_{\alpha}a_{\beta}+a_{\beta}a_{\alpha}=0.
\end{equation}
Similar to Eq.~\ref{eq:2.13} we have for the annihilation operators
\begin{equation}
  \label{eq:2.19}
  a_\alpha\underbrace{\Ket{\alpha_1\alpha_2\cdots\alpha_n}}_{\neq\alpha}=0.
\end{equation}
A noteworthy special case is
\begin{equation}
  \label{eq:2.20}
  a_\alpha\Ket{0}=0,
\end{equation}
together with its hermitian conjugate
\begin{equation}
  \label{eq:2.21}
  \Bra{0}a_\alpha^\dagger=0.
\end{equation}

\subsection{The Anti-commutation Algebra}

Since the creation and annihilation operators are introduced in the first place
to evaluate the expectation values of physical quantities like the Hamiltonian,
one inevitably and often encounters the commutation of $a_\alpha$ and
$a^\dagger_\beta$. This will be made clear in the sections to follow, but we can
already get a clue here by dissecting the expansion of the matrix element
\begin{equation}
  \label{eq:2.22}
  H_{ij}=\Braket{\Phi_i|H|\Phi_j}=
  \Braket{0|a_{\beta_n}\cdots a_{\beta_2}a_{\beta_1}H
  a^\dagger_{\alpha_1} a^\dagger_{\alpha_2}\cdots a^\dagger_{\alpha_n}|0}
\end{equation}
The evaluation of $H_{ij}$ follows a simple strategy: move all the creation
operators to the left of all the annihilation operators using the
anti-commutation rules; then, with Eqs.~\ref{eq:2.20} and \ref{eq:2.21},
$H_{ij}$ is evaluated. To implement this scheme we still need two ingredients,
namely the anti-commutation algebra of $a^\dagger_\alpha$ and $a_\beta$, and the
expression of $H$ in terms of the creation and annihilation operators.

Let's directly examine the application of the anti-commutator on an arbitrary
many-body basis $\Ket{\Phi}\equiv\Ket{\alpha_1\alpha_2\cdots\alpha_n}$
\begin{equation}
  \label{eq:2.23}
  \{a^\dagger_\alpha,a_\beta\}\Ket{\alpha_1\alpha_2\cdots\alpha_n}
  =(a^\dagger_\alpha a_\beta+a_\beta a^\dagger_\alpha)
  \Ket{\alpha_1\alpha_2\cdots\alpha_n}.
\end{equation}
The effect is obvious: the anti-commutator annihilates a particle in sp state
$\beta$ and creates another in $\alpha$, with the two operations in either
order. We now treat the different cases separately.

\begin{itemize}
  \item $\alpha=\beta$. Whether $\Ket{\alpha_1\alpha_2\cdots\alpha_n}$ contains
  $\alpha$ is irrelevant. If it does not,
  \begin{equation}
    \label{eq:2.24}
    \left\{
    \begin{aligned}
      &a^\dagger_\alpha a_\alpha
      \underbrace{\Ket{\alpha_1\alpha_2\cdots\alpha_n}}_{\neq\alpha}=0, \\
      &a_\alpha a^\dagger_\alpha
      \underbrace{\Ket{\alpha_1\alpha_2\cdots\alpha_n}}_{\neq\alpha}=
      a_\alpha\Ket{\alpha{\alpha_1\alpha_2\cdots\alpha_n}}
      =\underbrace{\Ket{\alpha_1\alpha_2\cdots\alpha_n}}_{\neq\alpha}.
    \end{aligned}
    \right.
  \end{equation}
  If it does,
  \begin{equation}
    \label{eq:2.25}
    \left\{
    \begin{aligned}
      &a_\alpha a^\dagger_\alpha
      \Ket{\alpha_1\alpha_2\cdots\alpha_k\alpha\alpha_{k+1}
      \cdots\alpha_{n-1}}=0, \\
      &a^\dagger_\alpha a_\alpha
      \Ket{\alpha_1\alpha_2\cdots\alpha_k\alpha\alpha_{k+1}
      \cdots\alpha_{n-1}} \\
      &\quad\quad=\Ket{\alpha\alpha^{-1}
      \alpha_1\alpha_2\cdots\alpha_k\alpha\alpha_{k+1}\cdots\alpha_{n-1}} \\
      &\quad\quad=(-1)^{2k}\Ket{ \alpha_1\alpha_2\cdots
      \alpha_k\alpha\alpha^{-1}\alpha\alpha_{k+1}\cdots\alpha_{n-1}} \\
      &\quad\quad=\Ket{\alpha_1\alpha_2\cdots\alpha_k\alpha\alpha_{k+1}
      \cdots\alpha_{n-1}}.
    \end{aligned}
    \right.
  \end{equation}
  where $\alpha^{-1}$ denotes a hole state of $\alpha$, $\Ket{\alpha^{-1}\alpha}=\Ket{0}$.

  Either way, we have $\{a^\dagger_\alpha,a_\alpha\}\Ket{\Phi}=\Ket{\Phi}$, hence
  \begin{equation}
    \label{eq:2.26}
    \{a^\dagger_\alpha,a_\alpha\}=1.
  \end{equation}

  \item $\alpha\neq\beta$. For $a^\dagger_\alpha a_\beta\Ket{\Phi}$ or
  $a_\beta a^\dagger_\alpha\Ket{\Phi}$ to be nonzero, quantum state $\alpha$
  must be vacant and $\beta$ occupied in $\Ket{\Phi}$, as is readily verified
  by direct evaluation; in every other situation each term, and hence the
  anti-commutator, gives zero. For this remaining case, suppose that
  \begin{equation}
    \label{eq:2.27}
    \Ket{\Phi}=\underbrace{\Ket{\alpha_1\alpha_2\cdots\alpha_k\beta\alpha_{k+1}
    \cdots\alpha_{n-1}}}_{\neq\alpha}
    =(-1)^k\underbrace{\Ket{\beta\alpha_1\alpha_2\cdots\alpha_{n-1}}}_{\neq\alpha},
  \end{equation}
  then
  \begin{equation}
    \label{eq:2.28}
    a^\dagger_\alpha a_\beta\Ket{\Phi}
    =(-1)^k\underbrace{\Ket{\alpha\alpha_1\alpha_2\cdots\alpha_{n-1}}}
    _{\alpha\notin\{\alpha_i\}}
    =\underbrace{\Ket{\alpha_1\alpha_2\cdots\alpha_k\alpha\alpha_{k+1}
    \cdots\alpha_{n-1}}}_{\alpha\notin\{\alpha_i\}},
  \end{equation}
  and similarly
  \begin{equation}
    \label{eq:2.29}
    \begin{aligned}
      a_\beta a^\dagger_\alpha\Ket{\Phi}
      &=(-1)^k a_\beta\underbrace{\Ket{\alpha\beta\alpha_1\alpha_2\cdots
      \alpha_{n-1}}}_{\alpha\notin\{\alpha_i\}} \\
      &=(-1)^{k+1} a_\beta\underbrace{\Ket{\beta\alpha\alpha_1\alpha_2\cdots
      \alpha_{n-1}}}_{\alpha\notin\{\alpha_i\}} \\
      &=-\underbrace{\Ket{\alpha_1\alpha_2\cdots\alpha_k\alpha\alpha_{k+1}
      \cdots\alpha_{n-1}}}_{\alpha\notin\{\alpha_i\}}.
    \end{aligned}
  \end{equation}
  So for this last case, still $\{a^\dagger_\alpha,a_\beta\}\Ket{\Phi}=0$.

  In summary, we have the anti-commutation rule for $a^\dagger_{\alpha}$ and
  $a_{\beta}$
  \begin{equation}
    \label{eq:2.30}
    \{a^\dagger_\alpha,a_\beta\}=\delta_{\alpha\beta}.
  \end{equation}
\end{itemize}

Altogether, Eqs.~\ref{eq:2.14}, \ref{eq:2.18} and \ref{eq:2.30} constitute the
complete set of anti-commutation rules needed to perform operator
commutations. The formulation of the Hamiltonian in terms of annihilation and
creation operators is what remains to be done.
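These rules can be exercised numerically. In a bitmask representation of MBSDs
(a convention assumed here: bit $i$ set means sp state $\alpha_i$ occupied, and
an operator acting at position $p$ picks up the phase $(-1)^k$, with $k$ the
number of occupied states preceding $p$), the following sketch verifies
Eq.~\ref{eq:2.30} on a sample determinant.

```python
def cre(mask, p):
    """a_p^dagger on an MBSD bitmask -> (phase, mask); phase 0 means the zero vector."""
    if mask & (1 << p):
        return 0, mask                 # Pauli: state p already occupied
    k = bin(mask & ((1 << p) - 1)).count("1")
    return (-1) ** k, mask | (1 << p)

def ann(mask, p):
    """a_p on an MBSD bitmask."""
    if not mask & (1 << p):
        return 0, mask                 # state p empty: result vanishes
    k = bin(mask & ((1 << p) - 1)).count("1")
    return (-1) ** k, mask & ~(1 << p)

def anticomm(mask, p, q):
    """Amplitude of {a_p^dagger, a_q}|mask> along |mask>; should equal delta_pq."""
    amp = 0
    s1, m1 = ann(mask, q)              # a_p^dagger a_q term
    if s1:
        s2, m2 = cre(m1, p)
        if s2 and m2 == mask:
            amp += s1 * s2
    s1, m1 = cre(mask, p)              # a_q a_p^dagger term
    if s1:
        s2, m2 = ann(m1, q)
        if s2 and m2 == mask:
            amp += s1 * s2
    return amp

mask = 0b0101                          # sp states 0 and 2 occupied
print(anticomm(mask, 1, 1))            # → 1  ({a_1^dagger, a_1} = 1)
print(anticomm(mask, 2, 2))            # → 1
print(anticomm(mask, 1, 2))            # → 0  (alpha != beta)
```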

\subsection{Many-Body Observables in Second Quantization}
Roughly speaking, the application of a physical observable $f$ (an operator
in quantum mechanics) to an arbitrary quantum state $\Ket{\beta}$ changes
it into something else (say, $\Ket{\alpha}$). An equivalent effect can be
achieved with creation and annihilation operators
\begin{equation}
  \label{eq:2.31}
  f\Ket{\beta}=\sum_\alpha\Ket{\alpha}\Braket{\alpha|f|\beta}
  =\sum_\alpha\Braket{\alpha|f|\beta}a^\dagger_\alpha a_\beta\Ket{\beta}
\end{equation}
where we have inserted the closure relation $\sum_\alpha\Ket{\alpha}\Bra{\alpha}=\bm{1}$
and used the substitution $\Ket{\alpha}=a^\dagger_\alpha a_\beta\Ket{\beta}$. Note
that the sp states $\alpha,\beta,\cdots,\delta$ are not required to be eigenstates
of the observable $f$, so generality is preserved.

We now wish to generalize Eq.~\ref{eq:2.31} to many-body systems. Without loss
of generality, consider the one-body part $H_0=\sum_i{h_0(x_i)}$ of the
Hamiltonian in Eq.~\ref{eq:2.3}. We denote the number of particles by $n$
instead of $A$ merely for convenience. Since each occupied sp state accommodates
exactly one particle (by the Pauli principle) and the particles are
indistinguishable, the sum of $h_0$ over the $n$ particles is equivalent to the
sum over the $n$ sp states that the particles occupy
\begin{equation}
  \label{eq:2.32}
  \begin{aligned}
    \sum_i h_0(x_i)\Ket{\alpha_1\alpha_2\cdots\alpha_n}
    &=\sum_{\alpha_i}h_0(\alpha_i)\Ket{\alpha_1\alpha_2\cdots\alpha_n} \\
    &=\sum_{\alpha\alpha_i}\Braket{\alpha|h_0|\alpha_i}
    a^\dagger_\alpha a_{\alpha_i}\Ket{\alpha_1\alpha_2\cdots\alpha_n}
  \end{aligned}
\end{equation}
where $h_0(\alpha_i)$ indicates that the operator acts only on sp state
$\Ket{\alpha_i}$ in the MBSD; Eq.~\ref{eq:2.31} is applied in the last step.
Wary readers can easily convince themselves by expanding the MBSD, applying
Eq.~\ref{eq:2.31}, and recombining terms. Basically, the linearity of the
operators of physical observables in quantum mechanics ensures that if an
operator has a certain effect on individual states, it has the same effect on
any linear combination of those states.

We are not satisfied yet: the sum over $\alpha_i$ is confined to the occupied sp
states and is thus MBSD-dependent. We want to remove this restriction, and it is
ready to be removed, since $a_\beta\underbrace{\Ket{\alpha_1\alpha_2
\cdots\alpha_n}}_{\neq\beta}=0$.
So
\begin{equation}
  \label{eq:2.33}
  \sum_i h_0(x_i)\Ket{\alpha_1\alpha_2\cdots\alpha_n}
  =\sum_{\alpha\beta}\Braket{\alpha|h_0|\beta}
  a^\dagger_\alpha a_{\beta}\Ket{\alpha_1\alpha_2\cdots\alpha_n}.
\end{equation}
Now $\alpha$ and $\beta$ run freely over all the sp states, and $H_0$ is
independent of the MBSD it acts on. We are content for now with
\begin{equation}
  \label{eq:2.34}
  H_0=\sum_{\alpha\beta}\Braket{\alpha|h_0|\beta}a^\dagger_\alpha a_{\beta}.
\end{equation}

Of course, the above deduction and Eq.~\ref{eq:2.34} are not limited to $H_0$;
they apply to general one-body operators in many-body systems.
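For the special case of a diagonal $h_0$ (Eq.~\ref{eq:2.7}), Eq.~\ref{eq:2.34}
acting on an MBSD reduces to the sum of the occupied sp energies. A tiny sketch
with made-up energies, using the bitmask convention for MBSDs:

```python
def h0_diagonal(mask, sp_energies):
    """<Phi|H0|Phi> for diagonal h0: Eq. 2.34 with <a|h0|b> = eps_a * delta_ab
    reduces to the sum of the energies of the occupied sp states."""
    return sum(e for i, e in enumerate(sp_energies) if mask & (1 << i))

eps = [1.0, 1.5, 3.0, 3.5]        # made-up sp energies
print(h0_diagonal(0b0011, eps))   # states 0,1 occupied → 2.5
print(h0_diagonal(0b1100, eps))   # states 2,3 occupied → 6.5
```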

Now we are ready to express the two-body interaction in second quantization. It
is most convenient to follow the example of the derivation for the one-body part,
starting with the application of the two-body interaction $v(r_{ij})$ to a
two-body state vector $\Ket{\gamma\delta}$. The situation is a little different
here: since the equation to be derived is to be inserted into a many-body
wavefunction, antisymmetry has to be taken into account, i.e., a two-body ket
such as $\Ket{\alpha\beta}$ is an antisymmetrized wavefunction. Following the
prescription of Eq.~\ref{eq:2.31} gives
\begin{equation}
  \label{eq:2.35}
  v\Ket{\gamma\delta}=\sum_{\alpha<\beta}\Ket{\alpha\beta}
  \Braket{\alpha\beta|v|\gamma\delta}
  =\sum_{\alpha<\beta}\Braket{\alpha\beta|v|\gamma\delta}
  a^\dagger_\alpha a^\dagger_\beta a_\delta a_\gamma\Ket{\gamma\delta}.
\end{equation}
We reiterate that the bra and ket in $\Braket{\alpha\beta|v|\gamma\delta}$
are both antisymmetrized two-body wavefunctions. This is best seen by expanding
them explicitly
\begin{equation}
  \label{eq:2.36}
  \begin{aligned}
    \Braket{\alpha\beta|v|\gamma\delta}
    &=\frac{1}{\sqrt{2}}(\Bra{\alpha}\Bra{\beta}-\Bra{\beta}\Bra{\alpha}) v
     \frac{1}{\sqrt{2}}(\Ket{\gamma}\Ket{\delta}-\Ket{\delta}\Ket{\gamma}) \\
    &=\frac{1}{2}(\Braket{\alpha\beta|v|\gamma\delta}^\prime
    -\Braket{\alpha\beta|v|\delta\gamma}^\prime
    -\Braket{\beta\alpha|v|\gamma\delta}^\prime
    +\Braket{\beta\alpha|v|\delta\gamma}^\prime) \\
    &=\Braket{\alpha\beta|v|\gamma\delta}^\prime
    -\Braket{\alpha\beta|v|\delta\gamma}^\prime,
  \end{aligned}
\end{equation}
where the prime indicates that the two-body wavefunction in the bra and ket is not
antisymmetrized, i.e., $\Braket{\alpha\beta|v|\gamma\delta}^\prime\equiv\Bra{\alpha}
\Bra{\beta}v\Ket{\gamma}\Ket{\delta}$. The particle exchange symmetry of $v$
[$v(r_{ij})=v(r_{ji})$] is used in the last step of Eq.~\ref{eq:2.36}
\begin{equation}
  \label{eq:2.37}
  \Braket{\alpha\beta|v|\gamma\delta}^\prime
  =\Braket{\beta\alpha|v|\delta\gamma}^\prime.
\end{equation}
We make this explicit by casting Eq.~\ref{eq:2.37} into the coordinate
representation
\begin{equation}
  \label{eq:2.38}
  \begin{aligned}
    \Braket{\alpha\beta|v|\gamma\delta}^\prime&=\int\mathrm{d}x_1\mathrm{d}x_2
    \psi_\alpha(x_1)\psi_\beta(x_2)v(r_{12})\psi_\gamma(x_1)\psi_\delta(x_2) \\
    &=\int\mathrm{d}x_1\mathrm{d}x_2
    \psi_\alpha(x_2)\psi_\beta(x_1)v(r_{12})\psi_\gamma(x_2)\psi_\delta(x_1) \\
    &=\Braket{\beta\alpha|v|\delta\gamma}^\prime.
  \end{aligned}
\end{equation}
As it is the $v$ matrix elements integrated in the coordinate representation
that are commonly seen, $\Braket{\alpha\beta|v|\gamma\delta}$ as expressed in
Eq.~\ref{eq:2.36} is specially referred to in the literature as the
\emph{antisymmetrized} matrix element.
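As a quick numerical check of Eq.~\ref{eq:2.36}, the sketch below builds
antisymmetrized elements from a made-up primed element $v^\prime$ that respects
the exchange symmetry of Eq.~\ref{eq:2.37}; the result changes sign when the bra
pair is swapped and vanishes for identical ket states.

```python
def v_prime(a, b, c, d):
    """Toy non-antisymmetrized <ab|v|cd>' obeying <ab|v|cd>' = <ba|v|dc>'."""
    f = lambda p, q: p * q + 1
    return f(a, c) * f(b, d)

def tbme(a, b, c, d):
    """Antisymmetrized <ab|v|cd> = <ab|v|cd>' - <ab|v|dc>' (direct minus exchange)."""
    return v_prime(a, b, c, d) - v_prime(a, b, d, c)

print(tbme(1, 2, 3, 4))   # → 1   (direct minus exchange)
print(tbme(2, 1, 3, 4))   # → -1  (swapping the bra pair flips the sign)
print(tbme(1, 2, 3, 3))   # → 0   (identical ket states)
```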

Now let's generalize Eq.~\ref{eq:2.35} to a general MBSD with no fewer than two
particles. Under premises similar to those underlying the derivation in
Eq.~\ref{eq:2.32}, it follows that
\begin{equation}
  \label{eq:2.39}
  \begin{aligned}
    \sum_{i<j}v(r_{ij})\Ket{\alpha_1\alpha_2\cdots\alpha_n}
    &=\sum_{\alpha_i<\alpha_j}v(\alpha_i,\alpha_j)
    \Ket{\alpha_1\alpha_2\cdots\alpha_n} \\
    &=\sum_{\substack{\alpha<\beta \\ \alpha_i<\alpha_j}}
    \Braket{\alpha\beta|v|\alpha_i\alpha_j}
    a^\dagger_\alpha a^\dagger_\beta a_{\alpha_j} a_{\alpha_i}
    \Ket{\alpha_1\alpha_2\cdots\alpha_n}
  \end{aligned}
\end{equation}
where Eq.~\ref{eq:2.35} is inserted.

Furthermore, we want $\alpha_i$ and $\alpha_j$ to run over all possible sp
states. And instead of running over pairs $(\alpha_i,\alpha_j)$ and
$(\alpha,\beta)$, it is more convenient to let the four indices run freely and
independently of each other. Since the result vanishes for annihilation
operators whose subscripts are not in the basis $\{\alpha_i\}$, the restriction
on the domain of $\alpha_i$ and $\alpha_j$ can be removed in place; they will
thus be denoted by $\gamma$ and $\delta$ instead. As for the second point, while
$\Braket{\alpha\beta|v|\gamma\delta}$ is antisymmetric under the permutation of
$(\alpha,\beta)$ or $(\gamma,\delta)$, so is $a^\dagger_\alpha
a^\dagger_\beta a_{\delta} a_{\gamma}$; their product is therefore symmetric
under these permutations. By letting the four indices run freely, each side of
$v$ [i.e., $(\alpha,\beta)$ or $(\gamma,\delta)$] is cycled through $2!=2$
orderings, so in total the sum is augmented by a factor of $(2!)^2=4$, and
\begin{equation}
  \label{eq:2.40}
  \begin{aligned}
    \sum_{i<j}v(r_{ij})\Ket{\alpha_1\alpha_2\cdots\alpha_n}
    &=\sum_{\substack{\alpha<\beta \\ \gamma<\delta}}
    \Braket{\alpha\beta|v|\gamma\delta}
    a^\dagger_\alpha a^\dagger_\beta a_{\delta} a_{\gamma}
    \Ket{\alpha_1\alpha_2\cdots\alpha_n} \\
    &=\frac{1}{4}\sum_{\alpha\beta\gamma\delta}\Braket{\alpha\beta|v|\gamma\delta}
    a^\dagger_\alpha a^\dagger_\beta a_{\delta} a_{\gamma}
    \Ket{\alpha_1\alpha_2\cdots\alpha_n}.
  \end{aligned}
\end{equation}
Now it appears that
\begin{equation}
  \label{eq:2.41}
  H_I=\frac{1}{4}\sum_{\alpha\beta\gamma\delta}\Braket{\alpha\beta|v|\gamma\delta}
  a^\dagger_\alpha a^\dagger_\beta a_{\delta} a_{\gamma}.
\end{equation}

From the two-body interaction one can easily obtain the $m$-body interaction in
second-quantized form by a similar derivation,
\begin{equation}
  \label{eq:2.42}
  V_m=\frac{1}{(m!)^2}\sum_{\substack{\beta_1\beta_2\cdots\beta_m \\
  \gamma_1\gamma_2\cdots\gamma_m}}
  \Braket{\beta_1\beta_2\cdots\beta_m|v_m|\gamma_1\gamma_2\cdots\gamma_m}
  a^\dagger_{\beta_1}a^\dagger_{\beta_2}\cdots a^\dagger_{\beta_m}
  a_{\gamma_m}a_{\gamma_{m-1}}\cdots a_{\gamma_1}
\end{equation}
where $\Braket{\beta_1\beta_2\cdots\beta_m|v_m|\gamma_1\gamma_2\cdots\gamma_m}$
is an antisymmetrized matrix element, e.g., for the three-body force,
\begin{equation}
  \label{eq:2.43}
  \begin{aligned}
    \Braket{pqr|v_3|stu}&=\Braket{pqr|v_3|stu}^\prime+\Braket{pqr|v_3|ust}^\prime
    +\Braket{pqr|v_3|tus}^\prime \\ &-\Braket{pqr|v_3|sut}^\prime
    -\Braket{pqr|v_3|uts}^\prime-\Braket{pqr|v_3|tsu}^\prime
  \end{aligned}
\end{equation}

Eventually, a Hamiltonian that contains only one- and two-body interactions is
expressed in second quantization by
\begin{equation}
  \label{eq:2.44}
  H=\sum_{\alpha\beta}{\Braket{\alpha|h_0|\beta}a_\alpha^\dagger a_\beta}
    +\frac{1}{4}
    \sum_{\alpha\beta\gamma\delta}{\Braket{\alpha\beta|v|\gamma\delta}
    a_\alpha^\dagger a_\beta^\dagger a_\delta a_\gamma}.
\end{equation}
From this, the Hamiltonian matrix elements are easily calculated and the matrix
conveniently constructed. The solution of the stationary Schr\"odinger equation
then amounts to the diagonalization of the Hamiltonian matrix.
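To make the construction concrete, here is a self-contained Python sketch for a
toy pairing-type model (made-up sp energies and antisymmetrized TBMEs; this is
our own illustration, not the {\scriptsize SUNNY} code). It applies the
second-quantized operator strings of Eq.~\ref{eq:2.44} to bitmask MBSDs to fill
the Hamiltonian matrix; the diagonalization itself would be handed to a library
routine.

```python
from itertools import combinations

def ann(mask, p):
    """a_p on an MBSD bitmask; phase 0 encodes the zero vector."""
    if not mask & (1 << p):
        return 0, mask
    k = bin(mask & ((1 << p) - 1)).count("1")
    return (-1) ** k, mask & ~(1 << p)

def cre(mask, p):
    """a_p^dagger on an MBSD bitmask."""
    if mask & (1 << p):
        return 0, mask
    k = bin(mask & ((1 << p) - 1)).count("1")
    return (-1) ** k, mask | (1 << p)

# toy model space: 4 sp states, 2 particles (made-up numbers)
eps = [0.0, 0.0, 1.0, 1.0]                     # sp energies
g = 0.5                                        # pairing strength
V = {(0, 1, 0, 1): -g, (2, 3, 2, 3): -g,       # antisymmetrized <ab|v|cd>,
     (0, 1, 2, 3): -g, (2, 3, 0, 1): -g}       # stored for a<b, c<d only

basis = []
for occ in combinations(range(4), 2):
    m = 0
    for i in occ:
        m |= 1 << i
    basis.append(m)
index = {m: i for i, m in enumerate(basis)}

dim = len(basis)
H = [[0.0] * dim for _ in range(dim)]
for j, ket in enumerate(basis):
    # one-body part: diagonal for a diagonal h0
    H[j][j] += sum(e for i, e in enumerate(eps) if ket & (1 << i))
    # two-body part: summing ordered pairs a<b, c<d needs no 1/4 factor
    for (a, b, c, d), v in V.items():
        s1, m1 = ann(ket, c)                   # rightmost operator acts first
        s2, m2 = ann(m1, d)
        s3, m3 = cre(m2, b)
        s4, m4 = cre(m3, a)
        phase = s1 * s2 * s3 * s4
        if phase:
            H[index[m4]][j] += phase * v

for row in H:
    print(["%5.2f" % x for x in row])
```

The paired determinants $\Ket{\alpha_1\alpha_2}$ and $\Ket{\alpha_3\alpha_4}$ end
up coupled by $-g$, while the broken-pair determinants stay purely diagonal, as
expected for a pairing-type interaction.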

The many-body wavefunctions are built by distributing particles among the
single-particle states designated by the model space. By distributing all the
constituent nucleons (without a frozen core) with no truncation of configurations,
we arrive at \emph{full configuration interaction} (FCI).

In order to make calculations feasible, truncations on the MBSDs are used in
various shell models. Among the most common are the particle-hole truncation
and the energy truncation.
\begin{itemize}
  \item One can start from the ground-state configuration, in which the sp
  states below the Fermi surface are filled, and then allow excitations of one,
  two, three, \ldots{} particles to sp states above the Fermi surface; hence the
  names ``$1p1h$'', ``$2p2h$'', ``$3p3h$'', \ldots{} excitations.
  \item One can set a limit on the total excitation energy, measured in the main
  oscillator quanta of the sp orbits of the excited particles with respect to
  (w.r.t.) the ground state; hence the names $N\hbar\omega$ or $N_{\rm max}$
  truncation.
\end{itemize}
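Given a full FCI basis, a truncation is then just a filter. A sketch of the
particle-hole truncation in the bitmask convention (our own illustration):

```python
from itertools import combinations

def ph_rank(mask, ref_mask):
    """Number of particles excited out of the reference determinant,
    i.e. the 'npnh' rank of the configuration."""
    return bin(ref_mask & ~mask).count("1")

ref = 0b0011                             # 2 particles filling the lowest 2 sp states
fci = []
for occ in combinations(range(4), 2):    # FCI basis: 2 particles in 4 sp states
    m = 0
    for i in occ:
        m |= 1 << i
    fci.append(m)

truncated = [m for m in fci if ph_rank(m, ref) <= 1]  # keep up to 1p1h
print(len(fci), len(truncated))          # → 6 5
```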

Technically speaking, the choice of truncation scheme is a minor matter: it
only affects the dimensionality of the MBSD space. We can always generate MBSDs
according to the criterion of full configuration interaction, and then truncate
them as the truncation scheme dictates. The larger the MBSD space, the more
realistic the model. Yet even with severe truncation, the resulting MBSD space
can explode well beyond the RAM and CPU capacity of modern PCs. What really
makes the difference, for the time being, is the efficiency of the
diagonalization algorithm for large sparse matrices.

\chapter{Lanczos Algorithm with Thick Restarts}

The Lanczos algorithm is one of the most popular algorithms for the
diagonalization of the large sparse matrices encountered in FCI calculations.
It allows extracting only the first few eigenpairs, which is enough, as usually
only the ground state and the first few excited states are of interest in many
fields, say, nuclear physics and quantum chemistry. Since it was first
published\footnote{C. Lanczos, An iteration method for the
solution of the eigenvalue problem of linear differential and integral operators,
J. Res. Nat. Bureau Standards, Sec. B, \textbf{45} (1950), p.~255--282.}, it has
gone through several rounds of upgrades, mainly to fix the loss of orthogonality
during the Gram-Schmidt orthogonalization of the Lanczos or Arnoldi vectors,
caused by the roundoff error of limited machine precision. For detailed learning
material on large-scale matrix eigenvalue problems, we recommend the lecture
notes of Prof. Dr. Peter Arbenz from ETH Z\"urich, available at
\url{https://people.inf.ethz.ch/arbenz/ewp/Lnotes/lsevp.pdf}.

\section{Eigenvalue Problems}

Matrix diagonalization is most often encountered in eigenvalue problems. Suppose
we have an $N\times N$ matrix $A$ with eigenpairs $\{\lambda_i,\bm x_i\}$
\begin{equation}
  \label{eq:3.1}
  A\bm x_i=\lambda_i \bm x_i
\end{equation}
where $\lambda_i$ is called an eigenvalue and $\bm x_i$ its eigenvector.
Eq.~\ref{eq:3.1} is a homogeneous set of linear equations, which admits a
nontrivial solution only if its coefficient matrix is singular
\begin{equation}
  \label{eq:3.2}
  \lvert A-\lambda_i E\rvert=0.
\end{equation}

Eq.~\ref{eq:3.2} is an $N$-th order polynomial equation in $\lambda_i$, which has
$N$ (complex) roots according to the fundamental theorem of algebra. Each root
$\lambda_i$ corresponds to an eigenpair $(\lambda_i,\bm x_i)$, and Eq.~\ref{eq:3.2}
ensures that a non-zero $\bm x_i$ exists. Moreover, $k\bm x_i$ with $k$ a non-zero
constant is also an eigenvector corresponding to $\lambda_i$, so $\bm x_i$ can be
scaled or normalized at will.

Inserting all the eigenpairs of $A$ in Eq.~\ref{eq:3.1}, we reach the matrix equation
\begin{equation}
  \label{eq:3.3}
  AP=P\Lambda
\end{equation}
or
\begin{equation}
  \label{eq:3.4}
  P^{-1}AP=\Lambda
\end{equation}
with $P\equiv(\bm x_1,\bm x_2,\cdots,\bm x_N)$, and
\begin{equation}
  \label{eq:3.5}
  \Lambda\equiv
  \begin{bmatrix}
    \lambda_1 & & & \\
     & \lambda_2 & & \\
     & & \ddots & \\
     & & & \lambda_N
  \end{bmatrix}
\end{equation}

So it is now clear that solving for all the eigenpairs of matrix $A$ is equivalent
to diagonalizing $A$. The $N\times N$ matrix $P$ that diagonalizes $A$ has $\bm x_i$
as its $i$-th column, and the diagonalized $A$ has $\lambda_i$ as its $i$-th
diagonal element. We concentrate on the diagonalization algorithm in the text to
follow.
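The relations Eqs.~\ref{eq:3.3} and \ref{eq:3.4} are easy to check numerically.
A minimal sketch, using numpy's \texttt{eigh} as the small-matrix diagonalizer
and an arbitrary random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2          # real symmetric, hence diagonalizable

# eigh returns the eigenvalues and the matrix P whose i-th column
# is the eigenvector x_i belonging to the i-th eigenvalue
eigvals, P = np.linalg.eigh(A)
Lam = np.diag(eigvals)

ap_pl_err = np.linalg.norm(A @ P - P @ Lam)     # A P = P Lambda  (Eq. 3.3)
diag_err = np.linalg.norm(P.T @ A @ P - Lam)    # P^{-1} A P = Lambda (Eq. 3.4)
```

For a symmetric $A$ the eigenvectors are orthonormal, so $P^{-1}=P^T$, which is
why $P^T A P$ suffices above.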

\section{Lanczos Algorithm}

One has to know whether a matrix can be diagonalized at all before seeking to
diagonalize it. It is a complicated mathematical problem to determine which
properties are necessary for a matrix to be diagonalizable; we content ourselves
with a sufficient condition. It can be
proved that a matrix is diagonalizable if it is a \emph{normal} matrix, i.e.,
proved that a matrix is diagonalizable if it is a \emph{normal} matrix, i.e.,
\begin{equation}
  \label{eq:3.6}
  A^\dagger A=AA^\dagger,
\end{equation}
where $A^\dagger$ is the conjugate transpose of $A$.
It is immediately clear that both real symmetric matrices and hermitian matrices
are diagonalizable. Now we are reassured that our matrices to be diagonalized in
quantum mechanics are all diagonalizable.
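The normality condition Eq.~\ref{eq:3.6} can be probed in a few lines. The
matrices below are arbitrary illustrative examples: a symmetric one, which is
trivially normal, and a $2\times 2$ Jordan block, which is neither normal nor
diagonalizable:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # real symmetric => A^T A = A A^T holds trivially
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # Jordan block: not normal

sym_normal = np.allclose(A.T @ A, A @ A.T)
jordan_normal = np.allclose(C.T @ C, C @ C.T)
```

Indeed $C^TC=\bigl[\begin{smallmatrix}1&1\\1&2\end{smallmatrix}\bigr]$ while
$CC^T=\bigl[\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\bigr]$, so the Jordan
block fails the test.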

With diagonalizability comes a significant property: all the eigenvectors of the
matrix are linearly independent of each other. For hermitian matrices it is easily
found in common undergraduate quantum mechanics textbooks that their eigenvalues
are all real, and that eigenvectors belonging to different eigenvalues are
orthogonal to each other. Since real symmetric matrices are also hermitian, the
same applies to them. Pertinent to our purposes here, we only discuss the
diagonalization of real symmetric matrices.

There are many methods for the diagonalization of small dense real symmetric
matrices, like the power method, the Jacobi rotation method, and Givens or
Householder reduction to tridiagonal form followed by $QR$ (or $QL$)
factorization\footnote{W.H. Press et al., Numerical Recipes in C: The Art of
Scientific Computing, 2nd ed. (Cambridge University Press, Cambridge, 1992),
chap.~11}. Copious documentation of these methods is easily found in textbooks
and in the references given in the footnote above.

These traditional methods take computer memory of $O(N^2)$ words and typically
$O(N^3)$ operations to diagonalize matrices of dimensionality $N\times N$.
They are not suitable for large matrices with dimensionality of the order of
$10^{5}$ and higher, and they are not the topic of this chapter. The Lanczos
algorithm we are about to introduce is tailored for large sparse real symmetric
matrices.

\subsection{Gram-Schmidt Orthogonalization}

We define the space spanned by $\{\bm x,A\bm x,\cdots,A^{j-1}\bm x\}$ as a
\emph{Krylov} subspace
\begin{equation}
  \label{eq:3.7}
  \mathcal{K}^j(\bm x)\equiv\mathcal{K}^j(\bm x,A)\equiv
  {\rm span}\{\bm x,A\bm x,\cdots,A^{j-1}\bm x\}.
\end{equation}
It will later be shown that the Krylov subspace has very good properties, enabling
the extraction of the first few eigenpairs without knowledge of the rest of
the eigenpairs.
Yet $\{\bm x,A\bm x,\cdots,A^{j-1}\bm x\}$ is a badly conditioned basis, as
$A^j\bm x$ with increasing $j$ converges quickly to the direction of the
eigenvector of $A$ corresponding to the largest (\emph{pivotal}) eigenvalue in
modulus,
\begin{equation}
  \label{eq:3.8}
  \begin{aligned}
    A^j\bm x=\sum_i{a_i A^j\bm x_i}=\sum_i{a_i\lambda_i^j\bm x_i}.
  \end{aligned}
\end{equation}
It is easily seen that the component along the direction $\bm x_i$ scales as
$\lambda_i^j$, so the pivotal eigenvector $\bm x_m$ with the largest $|\lambda_m|$
will soon dominate $A^j\bm x$. We want to orthogonalize
$\{\bm x,A\bm x,\cdots,A^{j-1}\bm x\}$ so that each vector in the set is orthogonal
to all the others. One of the most direct strategies is to remove all the existing
$j$ directions ($\{\bm x,A\bm x,\cdots,A^{j-1}\bm x\}$) from $A^j\bm x$
\begin{equation}
  \begin{aligned}
    \label{eq:3.9}
    \bm r_j&\equiv A^j\bm x-\sum_{i=1}^{j}{(\bm q_i^\dagger A^j\bm x)\bm q_i}, \\
    \bm q_{j+1}&\equiv\bm r_j/\lVert\bm r_j\rVert.
  \end{aligned}
\end{equation}
For every $i\leq j$, the $\bm q_i$ component has been removed from $\bm q_{j+1}$
according to Eq.~\ref{eq:3.9}, so $\bm q_i\perp\bm q_{j+1}$. The set $\{\bm q_i\}$
thus constructed is called the \emph{Arnoldi basis} for general matrices, or the
\emph{Lanczos basis} for real symmetric or hermitian matrices, with $\bm q_1$
defined as the normalized starting vector of the Krylov subspace,
$\bm q_1\equiv\bm x/\lVert\bm x\rVert$.
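The orthogonalization of Eq.~\ref{eq:3.9} can be sketched directly; this toy
example builds the basis from the explicit powers $A^j\bm x$ (in practice one
uses the recurrence derived in the next subsection), with an arbitrary random
symmetric matrix standing in for $A$:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2
x = rng.standard_normal(8)

# Eq. 3.9: subtract from A^j x its projections on all previous q_i
Q = [x / np.linalg.norm(x)]
v = x.copy()
for j in range(3):
    v = A @ v                              # A^{j+1} x, computed via powers of A
    r = v - sum((q @ v) * q for q in Q)
    Q.append(r / np.linalg.norm(r))
Q = np.column_stack(Q)

# the resulting columns are orthonormal
orth_err = np.abs(Q.T @ Q - np.eye(4)).max()
```

Only a few powers are taken here on purpose: pushing $j$ much higher makes the
raw Krylov vectors nearly parallel and the one-pass orthogonalization unreliable,
which is exactly the bad conditioning discussed above.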

\subsection{Krylov Subspace}

The above is basically what the Gram-Schmidt orthogonalization does. One drawback
compared with the original basis is that $A^j\bm x$ can no longer be obtained
progressively as $A\cdot A^{j-1}\bm x$; we want to avoid computing powers of $A$.
If only $\bm q_j$ could be used in $A^{j-1}\bm x$'s stead.
It obviously can, because $\bm q_j$ differs from $A^{j-1}\bm x$ only in the
absence of the $\{\bm x,A\bm x,\cdots,A^{j-2}\bm x\}$ components, which is
irrelevant: no matter how much of those components is present in $\bm q_j$ or in
$A^{j-1}\bm x$, they are all deducted anyway while forming $\bm r_j$, and the
resulting direction of $\bm r_j$ is the same. So we update Eq.~\ref{eq:3.9} to
\begin{equation}
  \begin{aligned}
    \label{eq:3.10}
    \bm r_j&\equiv A\bm q_j-\sum_{i=1}^{j}{(\bm q_i^\dagger A\bm q_j)\bm q_i}, \\
    \bm q_{j+1}&\equiv\bm r_j/\lVert\bm r_j\rVert.
  \end{aligned}
\end{equation}
This is the so-called Arnoldi relation, or Lanczos relation for hermitian matrices.
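Eq.~\ref{eq:3.10} replaces the explicit powers by a single matrix-vector product
per step. A minimal sketch on an arbitrary random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((30, 30))
A = (B + B.T) / 2
x = rng.standard_normal(30)

m = 10
Q = np.zeros((30, m + 1))
Q[:, 0] = x / np.linalg.norm(x)
for j in range(m):
    r = A @ Q[:, j]                               # one mat-vec per step
    r -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ r)      # Eq. 3.10: remove q_1..q_{j+1}
    Q[:, j + 1] = r / np.linalg.norm(r)

orth_err = np.abs(Q.T @ Q - np.eye(m + 1)).max()
```

Because each new direction is extracted from $A\bm q_j$ rather than from the
nearly-degenerate $A^j\bm x$, the basis stays orthonormal to machine precision
for many more steps than in the previous sketch.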

Eq.~\ref{eq:3.10} tells us that $\bm r_j$ must not be a zero vector, or all
subsequent basis vectors vanish. Encountering a zero vector while expanding the
Arnoldi basis carries a deeper meaning than ordinary linear dependence. It
means that for a certain $j_{\rm max}$, $A\bm q_{j_{\rm max}}$ can be linearly
represented by the previous basis vectors, i.e., $A\bm q_{j_{\rm max}}\in{\rm span}\{\bm q_1,
\bm q_2, \cdots, \bm q_{j_{\rm max}}\}$, so the Krylov subspace does not change
under further multiplication by $A$. We get an \emph{invariant} Krylov subspace,
spanned by $j_{\rm max}$ eigenvectors of $A$. This happens if and only if the
starting vector $\bm x$ contains directions of exactly $j_{\rm max}$
eigenvectors of $A$, whose eigenvalues are all different from each other.

First, let's suppose
\begin{equation}
  \label{eq:3.11}
  \bm x=\sum_{i=1}^{j_{\rm max}}{a_i\bm x_i}\quad(\forall i\in[1,j_{\rm max}], a_i\neq 0),
\end{equation}
then we have
\begin{equation}
  \label{eq:3.12}
  A\bm x=\sum_{i=1}^{j_{\rm max}}{a_i\lambda_i\bm x_i}.
\end{equation}
So $\bm x\in{\rm span}\{\bm x_1,\bm x_2,\cdots,\bm x_{j_{\rm max}}\}\Rightarrow
A\bm x\in{\rm span}\{\bm x_1,\bm x_2,\cdots,\bm x_{j_{\rm max}}\}\Rightarrow
\forall j\in\mathbb{N},A^j\bm x\in{\rm span}\{\bm x_1,\bm x_2,\cdots,
\bm x_{j_{\rm max}}\}$. The last step is obtained by induction. Multiplying $\bm x$
by $A$ from the left does not produce any new eigenvector components that $\bm x$
does not already have; it only affects the coefficients of the expansion of
$\bm x$ in terms of $\{\bm x_1,\bm x_2,\cdots,\bm x_{j_{\rm max}}\}$. So we are
bound to arrive at $\lVert\bm r_{j_{\rm max}}\rVert=0$. Consequently, the dimensionality
of $\mathcal{K}(\bm x,A)$ can be no larger than $j_{\rm max}$.
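The breakdown can be observed numerically: start from a vector built out of
exactly three eigenvectors and $\lVert\bm r_j\rVert$ vanishes at the third
orthogonalization step. A sketch with an arbitrary random symmetric matrix (the
eigenvector indices 0, 3, 7 and the coefficients are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((10, 10))
A = (B + B.T) / 2
w, X = np.linalg.eigh(A)

# starting vector spanning exactly 3 eigenvectors (distinct eigenvalues)
x = X[:, 0] + 2.0 * X[:, 3] + 0.5 * X[:, 7]

Q = [x / np.linalg.norm(x)]
rnorms = []
for j in range(3):
    r = A @ Q[-1]
    for q in Q:
        r -= (q @ r) * q           # Eq. 3.10
    rnorms.append(np.linalg.norm(r))
    if rnorms[-1] > 1e-10:
        Q.append(r / np.linalg.norm(r))
```

The first two residual norms are of order one, while the third is zero up to
roundoff: the invariant subspace has been exhausted.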

Conversely, suppose we meet $\lVert\bm r_{j_{\rm max}}\rVert=0$ while constructing
$\mathcal{K}(\bm x,A)$ with the recurrence Eq.~\ref{eq:3.10}; then
$\forall j\geq j_{\rm max},\lVert\bm r_j\rVert=0$, and $\bm r_j$ contains no new
directions that ${\rm span}\{\bm q_1,\bm q_2,\cdots,\bm q_{j_{\rm max}}\}$
does not include. Let's now see whether this is sufficient to deduce that the
starting vector $\bm x$ contains exactly $j_{\rm max}$ eigenvectors of $A$, whose
corresponding eigenvalues are all different.

As $\bm r_j$ differs from $A^j\bm x$ only by components lying inside
$\mathcal{K}^j(\bm x,A)$, and essentially
\begin{equation}
  \label{eq:3.13}
  {\rm span}\{\bm q_1,\bm q_2,\cdots,\bm q_j\}
  ={\rm span}\{\bm x,A\bm x,\cdots,A^{j-1}\bm x\}
  =\mathcal{K}^j(\bm x,A),
\end{equation}
it follows that $\forall j\in\mathbb{N},A^j\bm x=\sum_{i=1}^{j_{\rm m}}
{a_i\lambda_i^j\bm x_i}\in \mathcal{K}^{j_{\rm max}}(\bm x,A)$, where $j_{\rm m}$
denotes the number of (non-zero) eigenvector components of $\bm x$, and the
eigenvalues in the summation are assumed to be different from each other.
It can be proved that $\{\sum_{i=1}^{j_{\rm m}}{a_i\bm x_i},
\sum_{i=1}^{j_{\rm m}}{a_i\lambda_i\bm x_i},\cdots,
\sum_{i=1}^{j_{\rm m}}{a_i\lambda_i^{j_{\rm m}-1}\bm x_i}\}$ alone spans a linear
space of dimensionality $j_{\rm m}$, let alone the inclusion of the higher powers.
Then $j_{\rm m}$ can be no larger than $j_{\rm max}$.

Define $X\equiv (a_1\bm x_1,a_2\bm x_2,\cdots,a_{j_{\rm m}}\bm x_{j_{\rm m}})$
and
\begin{equation}
  \label{eq:3.14}
  V\equiv
  \begin{bmatrix}
    1         & 1         & \cdots & 1                   \\
    \lambda_1 & \lambda_2 & \cdots & \lambda_{j_{\rm m}} \\
    \lambda^2_1 & \lambda^2_2 & \cdots & \lambda^2_{j_{\rm m}} \\
    \vdots & \vdots & \ddots & \vdots \\
    \lambda^{j_{\rm m}-1}_1 & \lambda^{j_{\rm m}-1}_2 & \cdots &
    \lambda^{j_{\rm m}-1}_{j_{\rm m}}
  \end{bmatrix}
\end{equation}
The matrix whose columns are the $j_{\rm m}$ vectors above is $XV^T$, so it boils
down to proving that $XV^T$ has full column rank, i.e., that $V$ is nonsingular;
$X$ already has full column rank, as the $\bm x_i$ are eigenvectors of $A$ and
hence linearly independent of each other. As $\lvert V\rvert$ is a Vandermonde
determinant, we have
\begin{equation}
  \label{eq:3.15}
  \lvert V\rvert=\prod_{1\leq i<j\leq j_{\rm m}}{(\lambda_i-\lambda_j)},
\end{equation}
which is non-zero as the eigenvalues are different from each other by assumption.

On the other hand, $j_{\rm m}$ cannot be less than $j_{\rm max}$. This is clear
as the $A^j\bm x$ for different $j$ are but different linear combinations of the
$j_{\rm m}$ eigenvectors of $A$, so the dimensionality of $\mathcal{K}(\bm x,A)$
cannot exceed $j_{\rm m}$ -- there are simply not that many $\bm q_i$'s. If
$j_{\rm m}<j_{\rm max}$, we would meet $\lVert\bm r_j\rVert=0$ before $j$ reaches
$j_{\rm max}$, contradicting the assumption we started from. So finally
$j_{\rm m}=j_{\rm max}$, and with Eq.~\ref{eq:3.13} comes
\begin{equation}
  \label{eq:3.16}
  \begin{aligned}
    \mathcal{K}^{j_{\rm m}}(\bm x, A)&=
    {\rm span}\left\{\sum_{i=1}^{j_{\rm m}}{a_i\bm x_i},
    \sum_{i=1}^{j_{\rm m}}{a_i\lambda_i\bm x_i},\cdots,
    \sum_{i=1}^{j_{\rm m}}{a_i\lambda_i^{j_{\rm m}-1}\bm x_i}\right\} \\
    &={\rm span}\{\bm x_1,\bm x_2,\cdots,\bm x_{j_{\rm m}}\}
  \end{aligned}
\end{equation}
as the two sets of basis vectors are connected by the linear and invertible
transformation matrix $V$, so that they can linearly represent each other.
This is most convenient: whatever basis of $\mathcal{K}^{j_{\rm m}}(\bm x, A)$
we choose, it is one similarity transformation away from $\{\bm x_1,\bm x_2,
\cdots,\bm x_{j_{\rm m}}\}$. As soon as this transformation is found, the first
$j_{\rm m}$ eigenvectors are liberated from $\bm x$, together with the
corresponding eigenvalues.

\subsection{Lanczos Algorithm}

We now set out to see how to separate the $\bm x_i$'s from $\bm x$.
For convenience, $m$ will be used instead of $j_{\rm m}$ or $j_{\rm max}$ to
denote the maximum dimensionality of $\mathcal{K}(\bm x, A)$.
Without loss of generality, we choose Arnoldi basis $\{\bm q_1,\bm q_2,\cdots,
\bm q_m\}$ and denote
\begin{equation}
  \label{eq:3.17}
  Q_m\equiv(\bm q_1,\bm q_2,\cdots,\bm q_m)
\end{equation}
Note that $Q_m$ is an $N\times m$ matrix and its columns are orthonormal to each
other
\begin{equation}
  \label{eq:3.18}
  \begin{aligned}
    \bm q_i^\dagger\bm q_j&=\delta_{ij}, \\
    Q_m^\dagger Q_m&=E.
  \end{aligned}
\end{equation}
The desired $m\times m$ similarity-transformation matrix $S=\{s_{ij}\}$ satisfies
\begin{equation}
  \label{eq:3.19}
  A\sum_{i=1}^m{\bm q_i s_{ij}}=A\bm x_j=\lambda_j\sum_{i=1}^m{\bm q_i s_{ij}},
\end{equation}
or written in matrix form
\begin{equation}
  \label{eq:3.20}
  AQ_mS=Q_mS\Lambda_m,
\end{equation}
where $\Lambda_m={\rm diag}\{\lambda_1,\lambda_2,\cdots,\lambda_m\}$.
It becomes immediately enlightening if we move $Q_mS$ from the rhs to the lhs
of Eq.~\ref{eq:3.20} by multiplying with $S^{-1}Q_m^\dagger$ from the left
\begin{equation}
  \label{eq:3.21}
  S^{-1}Q_m^\dagger AQ_mS\equiv S^{-1}H_mS=\Lambda_m.
\end{equation}
where $H_m=\{h_{ij}\}$ and
\begin{equation}
  \label{eq:3.22}
  \begin{aligned}
    H_m&\equiv Q_m^\dagger AQ_m, \\
    h_{ij}&\equiv\bm q_i^\dagger A\bm q_j.
  \end{aligned}
\end{equation}
So basically the main task is to represent $A$ in the basis $\{\bm q_i\}$ and
diagonalize the resulting $m\times m$ matrix $H_m$.

The above treatment is the essence of the method: it projects the eigenvalue
problem of a huge matrix $A$ onto a (Krylov) subspace of tractably small dimension
$m$. Then the well-developed toolkit for the diagonalization of small dense
matrices can be amicably employed.
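The projection can be sketched in its cleanest setting: if the starting vector
spans exactly four eigenvectors, the $4\times 4$ projected matrix
$H_m=Q_m^\dagger AQ_m$ of Eq.~\ref{eq:3.22} reproduces exactly those four
eigenvalues. The random matrix and the picked eigenvector indices are arbitrary
illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((12, 12))
A = (B + B.T) / 2
w, X = np.linalg.eigh(A)

picked = [1, 4, 8, 11]
x = X[:, picked].sum(axis=1)       # spans exactly 4 eigenvectors

m = 4
Q = np.zeros((12, m))
Q[:, 0] = x / np.linalg.norm(x)
for j in range(m - 1):
    r = A @ Q[:, j]
    r -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ r)   # Eq. 3.10
    Q[:, j + 1] = r / np.linalg.norm(r)

H = Q.T @ A @ Q                    # Eq. 3.22 in the Lanczos basis
ritz = np.sort(np.linalg.eigvalsh(H))
err = np.abs(ritz - np.sort(w[picked])).max()
```

Diagonalizing the small $H$ recovers the four eigenvalues to machine precision,
because the Krylov subspace here is invariant.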

It is worth mentioning that $H_m$ is a matrix of (upper) Hessenberg form, namely
its elements below the subdiagonal (the subdiagonal not included) are all zero,
hence its notation `$H$' in the literature. $H_m$ becomes tridiagonal if $A$ is a
hermitian matrix, in which case it is written as $T_m$. These facts can be easily
deduced from the Arnoldi or Lanczos relation Eq.~\ref{eq:3.10}. The definition of
$H$ (Eq.~\ref{eq:3.22}) tells us that the element $h_{ij}$ gives the projection of
$A\bm q_j$ on $\bm q_i$, which is clearly stated in Eq.~\ref{eq:3.10}
\begin{equation}
  \label{eq:3.23}
  \begin{aligned}
    A\bm q_j&=\sum_{i=1}^{j}{(\bm q_i^\dagger A\bm q_j)\bm q_i}+
    \lVert\bm r_j\rVert\bm q_{j+1} \\
    &=\sum_{i=1}^{j}{h_{ij}\bm q_i}+h_{j+1,j}\bm q_{j+1}
  \end{aligned}
\end{equation}
So it is clearly seen that there is no $\bm q_i$ component for $i>j+1$
in $A\bm q_j$. $H_m$ takes the form of
\begin{equation}
  \label{eq:3.24}
  H_m=
  \begin{bmatrix}
    h_{11} & h_{12} & \cdots & h_{1m} \\
    h_{21} & h_{22} & \cdots & h_{2m} \\
           & \ddots & \ddots & \vdots \\
           &        & h_{m,m-1} & h_{mm}
  \end{bmatrix}
\end{equation}
Specifically for a hermitian matrix $A$, $H_m$ is also hermitian, as
\begin{equation}
  \label{eq:3.25}
  H_m^\dagger=(Q_m^\dagger AQ_m)^\dagger=Q_m^\dagger A^\dagger Q_m
  =Q_m^\dagger AQ_m=H_m.
\end{equation}
A hermitian Hessenberg matrix is tridiagonal, so $H_m$ becomes tridiagonal. As
this is overwhelmingly the case in real applications, the following notation is
widely used in the literature
\begin{equation}
  \label{eq:3.26}
  \begin{aligned}
    \beta_j&\equiv h_{j+1,j}=\lVert\bm r_j\rVert=\bm q_{j+1}^\dagger A\bm q_j, \\
    \alpha_j&\equiv h_{jj}=\bm q_j^\dagger A\bm q_j.
  \end{aligned}
\end{equation}
Note that $\alpha_j$ and $\beta_j$ are all real. Then $H_m$ is reduced to tridiagonal
$T_m$ as
\begin{equation}
  \label{eq:3.27}
  T_m=
  \begin{bmatrix}
    \alpha_1 & \beta_1  &          &             &             \\
    \beta_1  & \alpha_2 & \beta_2  &             &             \\
             & \beta_2  & \alpha_3 & \ddots      &             \\
             &          & \ddots   & \ddots      & \beta_{m-1} \\
             &          &          & \beta_{m-1} & \alpha_m
  \end{bmatrix}
\end{equation}
And this simplifies the problem even further, as diagonalization algorithms exist
specifically for tridiagonal matrices that are more efficient than generic ones.
With Eq.~\ref{eq:3.27} comes the Lanczos relation in the form much more commonly
seen in the literature
\begin{equation}
  \label{eq:3.28}
  A\bm q_j=\beta_{j-1}\bm q_{j-1}+\alpha_j\bm q_j+\beta_j\bm q_{j+1}
\end{equation}
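Putting the three-term recurrence to work, here is a sketch of a full Lanczos
run. The test matrix is an assumption of ours, constructed with two well-separated
extreme eigenvalues so that a handful of steps suffices, and a full
reorthogonalization pass is included to counter roundoff (the loss of
orthogonality is discussed later in this chapter):

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 100, 15
G, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
exact = np.concatenate(([-8.0], np.linspace(-1, 1, n - 2), [10.0]))
A = G @ np.diag(exact) @ G.T       # known spectrum, separated extremes

alpha, beta = np.zeros(m), np.zeros(m)
Q = np.zeros((n, m + 1))
Q[:, 0] = rng.standard_normal(n)
Q[:, 0] /= np.linalg.norm(Q[:, 0])
for j in range(m):
    r = A @ Q[:, j]
    alpha[j] = Q[:, j] @ r                       # alpha_j = q_j^T A q_j
    r -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ r)     # removes alpha, beta terms + reorth
    beta[j] = np.linalg.norm(r)                  # beta_j = ||r_j||
    Q[:, j + 1] = r / beta[j]

T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
ritz = np.linalg.eigvalsh(T)

err_low = abs(ritz[0] - exact.min())
err_high = abs(ritz[-1] - exact.max())
```

With only $m=15$ steps on a $100\times100$ matrix, the extreme Ritz values
already match the extreme eigenvalues to machine precision -- the interior ones
converge far more slowly.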


In formulating the theory above, we deliberately averted our attention from two
significant problems, both concerning the starting vector $\bm x$. One is that
$\bm x$ is usually chosen arbitrarily, and thus contains components of far more
eigenvectors than the few, typically less than a dozen, that we may need. The
other is that it is the pivotal eigenvectors we are after, yet there is no
guarantee they are present: if there is not a trace of those vectors in $\bm x$,
none of them will ever show up in the Arnoldi or Lanczos iteration to follow.

But for practical use, and with the roundoff errors of numerical calculation
considered, there is no need for $\bm x$ to be a linear combination of absolutely
pure eigenvectors of interest -- a little contamination is pleasantly tolerated.
The contamination, or ``residue'' term, as will be shown below, will be used to
estimate the error of this approximation.

As for the second concern, it is customarily ignored as it is extremely difficult
for the arbitrarily chosen starting vector $\bm x$ to be just absolutely short of
the pivotal eigenvectors. Tiny traces of them will quickly thrive and dominate
along with the growth of the Arnoldi basis, as in the case of the power method.

Sometimes the starting vector happens to be so poor that the required precision
is not met until the Arnoldi basis proliferates to some uncomfortably large
dimension. Then it is possible to give $\bm x$ some intervention. The common
practice is to remove (``purge'') the last few \emph{Ritz vectors} (smallest in
\emph{Ritz value}) from the Krylov space. The Ritz vectors and Ritz values of $A$
are the counterparts of its eigenvectors and eigenvalues, defined by the
eigenpairs of $H_m$ or $T_m$ in the representation of the Arnoldi or Lanczos
basis. It can be proved that after the ``purge'', the remaining Ritz vectors
still span a Krylov space, so that the Arnoldi (Eq.~\ref{eq:3.10}) or Lanczos
(Eq.~\ref{eq:3.28}) algorithm can proceed. This is called the Lanczos algorithm
with thick restarts, for real symmetric matrices. Each restart enriches the
pivotal eigenvectors and eliminates the insignificant ones in the current Krylov
subspace while keeping its dimension from growing, so the pivotal eigenpairs are
bound to expose themselves. For the very rare cases where the weights of the
pivotal ones in $\bm x$ are too low, we can even restart the whole process with a
$\bm x$ drastically different from the current one.

\section{Krylov-Bogoliubov Eigenvalue Inclusion}

As has been explained before, the maximum dimension $m$ of the Krylov subspace
$\mathcal{K}(\bm x, A)$ is usually quite large. We'd like to stop at $k\ll m$
in the Lanczos algorithm. Let's write Eq.~\ref{eq:3.28} in matrix form
\begin{equation}
  \label{eq:3.29}
  \begin{aligned}
    AQ_k&=Q_kT_k+\beta_k\bm q_{k+1}\bm e_k^\dagger \\
    &=Q_k
    \begin{bmatrix}
      \alpha_1 & \beta_1  &          &             &             \\
      \beta_1  & \alpha_2 & \beta_2  &             &             \\
               & \beta_2  & \alpha_3 & \ddots      &             \\
               &          & \ddots   & \ddots      & \beta_{k-1} \\
               &          &          & \beta_{k-1} & \alpha_k
    \end{bmatrix}
    +\beta_k[\underbrace{\bm 0,\cdots,\bm 0}_{k-1\,{\rm times}},\bm q_{k+1}],
  \end{aligned}
\end{equation}
where $Q_k=[\bm q_1,\bm q_2,\cdots,\bm q_k]$ and $\bm e_k$ is a unit vector with
only its $k$-th element nonzero ($=1$). This equation is also called the
Lanczos relation, in a more general matrix form compared with Eq.~\ref{eq:3.28}.
Similarly one easily obtains the more general Arnoldi relation by replacing
the tridiagonal matrix $T_k$ with the Hessenberg-form matrix $H_k$.

The Ritz pairs become eigenpairs if the last term of Eq.~\ref{eq:3.29} vanishes.
Since $\bm q_{k+1}$ and $\bm e_k$ are both unit vectors, this is equivalent to
$\beta_k=0$. So $\beta_k$ is expected to be a measure of the deviation of the
Ritz pairs from eigenpairs. To verify this, we present the lemma of
\emph{Krylov-Bogoliubov eigenvalue inclusion}\footnote{N. Krylov and N. Bogoliubov,
Sur le calcul des racines de la transcendante de Fredholm les plus voisines d'une
nombre donn\'e par les m\'ethodes des moindres carres et de l'algorithme variationel,
Izv. Akad. Naik SSSR, Leningrad, (1929), p.~471--488.}:
\begin{quote}
  Suppose $A$ is an $N\times N$ hermitian matrix,
  $\vartheta\in\mathbb{R}$ and $\bm x$ a non-zero $N$-dimensional vector. Denote
  $\tau\equiv\lVert(A-\vartheta I)\bm x\rVert/\lVert\bm x\rVert$. Then there is
  an eigenvalue of $A$ in the interval $[\vartheta-\tau,\vartheta+\tau]$.
\end{quote}
\emph{Proof.} Matrix $A$ has the following spectral decomposition
\begin{equation}
  \label{eq:3.30}
  A=X\Lambda X^\dagger=\sum_{i=1}^{N}{\lambda_i\bm x_i\bm x_i^\dagger}.
\end{equation}
Then
\begin{equation}
  \label{eq:3.31}
  (A-\vartheta I)\bm x=\sum_{i=1}^{N}(\lambda_i\bm x_i\bm x_i^\dagger
  -\vartheta\bm x_i\bm x_i^\dagger)\bm x=\sum_{i=1}^{N}(\lambda_i-\vartheta)
  (\bm x_i^\dagger\bm x)\bm x_i.
\end{equation}
where we've used
\begin{equation}
  \label{eq:3.32}
  [(\bm a\bm b^\dagger)\bm c]_{i}=(\bm a\bm b^\dagger)_{ij}c_j
  =(a_i b^\dagger_j)c_j=(b^\dagger_jc_j)a_i=[(\bm b^\dagger\bm c)\bm a]_i.
\end{equation}
Taking norms of Eq.~\ref{eq:3.31}, noting that $\{\bm x_i\}$ is an orthonormal
basis
\begin{equation}
  \label{eq:3.33}
  \begin{aligned}
    \lVert(A-\vartheta I)\bm x\rVert^2
    &=\sum_{i=1}^{N}|\lambda_i-\vartheta|^2|\bm x_i^\dagger\bm x|^2 \\
    &\geq|\lambda_k-\vartheta|^2\sum_{i=1}^N|\bm x_i^\dagger\bm x|^2
    =|\lambda_k-\vartheta|^2\lVert\bm x\rVert^2
  \end{aligned}
\end{equation}
where $\lambda_k$ is the eigenvalue closest to $\vartheta$, namely, $\forall i,
|\lambda_k-\vartheta|\leq|\lambda_i-\vartheta|$. Dividing both sides by
$\lVert\bm x\rVert^2$ and taking the square root yields
$|\lambda_k-\vartheta|\leq\tau$, which completes the proof.
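The lemma is easy to test numerically. A sketch with an arbitrary symmetric
matrix and an arbitrary pair $(\vartheta,\bm x)$ that is nowhere near an
eigenpair:

```python
import numpy as np

rng = np.random.default_rng(7)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2
w = np.linalg.eigvalsh(A)

theta = 1.0
x = rng.standard_normal(6)
tau = np.linalg.norm((A - theta * np.eye(6)) @ x) / np.linalg.norm(x)

# distance from theta to the closest eigenvalue of A
closest = np.abs(w - theta).min()
```

The lemma guarantees \texttt{closest <= tau}, whatever $\vartheta$ and $\bm x$
one picks; the bound is of course only tight when $(\vartheta,\bm x)$ resembles
an eigenpair.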

We can acquire a direct understanding of this lemma: $\lVert(A-\vartheta I)
\bm x\rVert$ is small if $(\vartheta,\bm x)$ is close to an eigenpair. Let's
try applying the lemma to the case where $(\vartheta,\bm x)$ is a Ritz pair,
denoted by $(\vartheta_i,\bm y_i)$, so that
\begin{equation}
  \label{eq:3.34}
  \begin{aligned}
    \bm y_i&=\sum_{j=1}^k\bm q_j s_{ji}=Q_k\bm s_i, \\
    T_k\bm s_i&=\vartheta_i\bm s_i.
  \end{aligned}
\end{equation}
Then
\begin{equation}
  \label{eq:3.35}
  \begin{aligned}
    \lVert(A-\vartheta_i I)\bm y_i\rVert^2
    &=\lVert AQ_k\bm s_i-\vartheta_i Q_k\bm s_i\rVert^2 \\
    &=\lVert AQ_k\bm s_i-Q_kT_k\bm s_i\rVert^2 \\
    &=\lVert \beta_{k}\bm q_{k+1}\bm e_k^\dagger\bm s_i\rVert^2 \\
    &=|\beta_k s_{ki}|^2,
  \end{aligned}
\end{equation}
where we have used Eqs.~\ref{eq:3.29} and \ref{eq:3.32}, and $s_{ki}$ is the
$k$-th (last) element of the $i$-th eigenvector $\bm s_i$ of $T_k$.

Finally, with normalized $\{\bm s_i\}$, from Eq.~\ref{eq:3.33} follows
\begin{equation}
  \label{eq:3.36}
  |\lambda_i-\vartheta_i|\leq|\beta_k s_{ki}|.
\end{equation}
It says that for a Ritz value $\vartheta_i$, there is an eigenvalue $\lambda_i$
that is no more than $|\beta_k s_{ki}|$ away from $\vartheta_i$. It is only the
last row of the eigenvector matrix
\begin{equation}
  \label{eq:3.37}
  S_k\equiv[\bm s_1,\bm s_2,\cdots,\bm s_k]
\end{equation}
that matters in the error bound on Ritz values used as eigenvalues. Further,
it is possible to get good approximations of the eigenvalues even if $\beta_k$ is
not small. As for the error of the eigenvectors, we know from the
literature\footnote{B.N. Parlett, The Symmetric Eigenvalue Problem, Prentice
Hall, Englewood Cliffs, NJ, 1980. (Republished by SIAM, Philadelphia, 1998.),
chap.~11.7} that
\begin{equation}
  \label{eq:3.38}
  \sin\langle\bm y_i,\bm x_i\rangle\leq\frac{|\beta_k s_{ki}|}{\gamma},
\end{equation}
where
\begin{equation}
  \label{eq:3.39}
  \gamma\equiv\min_{j\neq i}|\lambda_j-\vartheta_i|,
\end{equation}
the gap between $\vartheta_i$ and the rest of the spectrum, which is not known
a priori. It has been suggested to replace the eigenvalues $\lambda_j$ in
$\gamma$ with the corresponding Ritz values $\vartheta_j$, which becomes exact
in the limit $k\to m$.
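The bound Eq.~\ref{eq:3.36} can be verified directly: run $k$ Lanczos steps,
diagonalize $T_k$, and check that every Ritz value lies within $|\beta_k s_{ki}|$
of some exact eigenvalue. A sketch on an arbitrary random symmetric matrix, with
a reorthogonalization pass against roundoff:

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 60, 12
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
exact = np.linalg.eigvalsh(A)

alpha, beta = np.zeros(k), np.zeros(k)
Q = np.zeros((n, k + 1))
Q[:, 0] = rng.standard_normal(n)
Q[:, 0] /= np.linalg.norm(Q[:, 0])
for j in range(k):
    r = A @ Q[:, j]
    alpha[j] = Q[:, j] @ r
    r -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ r)   # Lanczos step + reorthogonalization
    beta[j] = np.linalg.norm(r)
    Q[:, j + 1] = r / beta[j]

T = np.diag(alpha) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
theta, S = np.linalg.eigh(T)

bound = np.abs(beta[k - 1] * S[-1, :])                 # |beta_k s_ki|, Eq. 3.36
dist = np.array([np.abs(exact - t).min() for t in theta])
all_within = bool(np.all(dist <= bound + 1e-8))
```

Note that the bound uses only $\beta_k$ and the last row of the small eigenvector
matrix, so it costs essentially nothing on top of the iteration itself.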

It's worth noting that the Lanczos basis loses orthogonality in the Gram-Schmidt
orthogonalization process due to roundoff errors in numerical calculation. To
remedy this, in addition to Eq.~\ref{eq:3.10}, a reorthogonalization step is
necessary for each new basis vector $\bm r_j$
\begin{equation}
  \label{eq:3.40}
  \bm r_j^\prime\equiv\bm r_j-\sum_{i=1}^j\bm q_i(\bm q_i^\dagger\bm r_j)
  =\bm r_j-Q_j(Q_j^\dagger\bm r_j),
\end{equation}
where $\bm q_i(\bm q_i^\dagger\bm r_j)$ stands for the bit of $\bm q_i$
contamination left in $\bm r_j$. $\bm r_j^\prime$ is then used in $\bm r_j$'s
stead.

\section{Lanczos Algorithm with Thick Restart}

Now that we have the error estimator for an arbitrary Lanczos step $k<m$, it is
possible to terminate a Lanczos iteration once it meets the error criterion. As
we are dealing with huge matrices, a moderate error tolerance may entail a Krylov
space so large as to overwhelm the memory and computing power of a modern PC. It
is almost impossible to choose a starting vector $\bm x$ purely comprised of the
first few pivotal eigenvectors without additional analysis of matrix $A$. However,
we can remove or suppress the unwanted minor eigenvector components from the
starting vector $\bm x$ with the Lanczos algorithm with thick restarts, to be
presented below.

With the Ritz pairs as approximations of the eigenpairs, sorted by the modulus
of the Ritz values in descending order, we intend to remove the last few (say,
$l$) Ritz pairs from the Krylov space $\mathcal{K}^{k}(\bm x,A)$ to reduce its
dimension to $k-l$. But there is a complication: the original Krylov space
spanned by the $k$ $\bm q_i$'s is destroyed by the removal of the $l$ Ritz
vectors. Consequently this also invalidates the error estimates
Eqs.~\ref{eq:3.36} and \ref{eq:3.38}. The purge of trivial Ritz pairs would not
seem worth it if the resulting space lost the structure of a Krylov space.

Fortunately it can be proved that the Krylov-space structure is preserved under
the shrinkage described above: there exists a Lanczos basis for the purged space.
We start with a theorem\footnote{M. Genseberger and G.L.G. Sleijpen,
Alternative correction equations in the Jacobi-Davidson method, Numer. Linear
Algebra Appl., \textbf{6} (1999), p.~235--253.} which tells what it takes for a
space to be a Krylov space.
\begin{quote}
  Denote $V\equiv[\bm v_1,\bm v_2,\cdots,\bm v_k]$ and $\mathcal{V}=
  {\rm span}\{\bm v_1,\bm v_2,\cdots,\bm v_k\}$. $\mathcal{V}$ is a Krylov space
  if and only if there is a $k\times k$ matrix $M$ such that for
  \begin{equation}
    \label{eq:3.41}
    R\equiv AV-VM
  \end{equation}
$R$ has rank 1 and $\mathcal{W}={\rm span}\{\bm v_1,\bm v_2,\cdots,\bm v_k,
\mathcal{R}(R)\}$ has dimension $k+1$.
\end{quote}
\emph{Proof.} Let's prove necessity first. Let $\mathcal{V}=\mathcal{K}^k(\bm x,A)$
for some $\bm x\in\mathcal{V}$ and let $Q_k=[\bm q_1,\bm q_2,\cdots,\bm q_k]$ be
the Arnoldi basis of $\mathcal{K}^k(\bm x,A)$. Since $Q_k$ and $V$ both have full
column rank and span the same space $\mathcal{V}$, there is a nonsingular
$k\times k$ matrix $S$ with $Q_k=VS$. Next we multiply the Arnoldi relation
\begin{equation}
  \label{eq:3.42}
  AQ_k=Q_kH_k+\beta_k\bm q_{k+1}\bm e_k^\dagger
\end{equation}
by $S^{-1}$ from the right to reach
\begin{equation}
  \label{eq:3.43}
  AV=VSH_kS^{-1}+\beta_k\bm q_{k+1}\bm e_k^\dagger S^{-1},
\end{equation}
which is Eq.~\ref{eq:3.41} with $M=SH_kS^{-1}$. Let's denote $\bm t_k$ as the last
row of $S^{-1}$. Then
\begin{equation}
  \label{eq:3.44}
  \begin{aligned}
    \beta_k\bm q_{k+1}\bm e_k^\dagger S^{-1}&=\beta_k\bm q_{k+1}\bm t_k^\dagger \\
    &=\beta_k[t_{k1}\bm q_{k+1},t_{k2}\bm q_{k+1},\cdots,t_{kk}\bm q_{k+1}].
  \end{aligned}
\end{equation}
So $R=\beta_k\bm q_{k+1}\bm e_k^\dagger S^{-1}$ has rank 1. This equation also
shows that for nonzero vectors $\bm a$ and $\bm b$, the matrix $\bm a\bm b^\dagger$
has rank 1; conversely, any rank-1 matrix can be written as $\bm a\bm b^\dagger$,
as all its columns are parallel to each other. Unless otherwise specified, vectors
appearing henceforth are all nonzero. Finally, $\mathcal{W}$ has dimension $k+1$
because $Q_k^\dagger\bm q_{k+1}=\bm 0$.
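The rank-1 structure of $R=AV-VM$ in Eq.~\ref{eq:3.41} shows up immediately if
we take $V=Q_k$ and $M=T_k$ from a Lanczos run. A numerical sketch on an
arbitrary random symmetric matrix; the second singular value of $R$ is zero up
to roundoff:

```python
import numpy as np

rng = np.random.default_rng(9)
n, k = 40, 8
M0 = rng.standard_normal((n, n))
A = (M0 + M0.T) / 2

Q = np.zeros((n, k + 1))
Q[:, 0] = rng.standard_normal(n)
Q[:, 0] /= np.linalg.norm(Q[:, 0])
for j in range(k):
    r = A @ Q[:, j]
    r -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ r)
    Q[:, j + 1] = r / np.linalg.norm(r)

V = Q[:, :k]
Tk = V.T @ A @ V
R = A @ V - V @ Tk            # Eq. 3.41 with M = T_k

# singular values in descending order: one large, the rest ~ 0
svals = np.linalg.svd(R, compute_uv=False)
```

By Eq.~\ref{eq:3.29} the surviving singular value is just $\beta_k$, the norm of
the residual term $\beta_k\bm q_{k+1}\bm e_k^\dagger$.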

Now let's turn to the proof of sufficiency. Let the rank-1 matrix be
$R=\bm v\bm w^\dagger$, so that
\begin{equation}
  \label{eq:3.45}
  AV=VM+R=VM+\bm v\bm w^\dagger.
\end{equation}
where the lengths of $\bm v$ and $\bm w$ are $n$ and $k$, respectively.
As a similarity transformation does not change the space $\mathcal{V}$, and
$\mathcal{K}(\bm x,A)=\mathcal{K}(\bm x,SAS^{-1})$, to simplify the situation
without loss of generality let's assume that $V=[\bm v_1,\bm v_2,\cdots,\bm v_k]$
has been orthonormalized. Note that the part of $\bm v\bm w^\dagger$ that lies
in the linear space $\mathcal{V}$ can easily be absorbed into the $VM$ term, so
we also remove those components from $\bm v\bm w^\dagger$, leaving $\bm v$
orthogonal to $\mathcal{V}$, namely $V^\dagger \bm v=\bm 0$.

To proceed with the proof, it is necessary to introduce the Householder reflector
matrix, which maps any vector $\bm x$ to a multiple of $\bm e_k$. Let's see how
it is done\footnote{W.H. Press et al., Numerical Recipes in C: The Art of
Scientific Computing, 2nd ed. (Cambridge University Press, Cambridge, 1992),
p.~470}.

The reflector is composed of the product of a series of basic reflectors $P$, which
takes the form
\begin{equation}
  \label{eq:3.46}
  P=E-2\bm a\bm a^\dagger
\end{equation}
where $|\bm a|=1$.
Matrix $P$ is hermitian, involutory and thus unitary, as
\begin{equation}
  \label{eq:3.47}
  \begin{aligned}
    P^2&=(E-2\bm a\bm a^\dagger)\cdot(E-2\bm a\bm a^\dagger) \\
    &=E-4\bm a\bm a^\dagger+4\bm a(\bm a^\dagger\bm a)\bm a^\dagger \\
    &=E.
  \end{aligned}
\end{equation}
so $P^\dagger=P=P^{-1}$.
$P$ becomes a Householder reflector when it is written as
\begin{equation}
  \label{eq:3.48}
  P=E-\frac{\bm u\bm u^\dagger}{h}
\end{equation}
where $h$ is a scalar
\begin{equation}
  \label{eq:3.49}
  h\equiv\frac{1}{2}|\bm u|^2.
\end{equation}
Let vector $\bm x$ be the object of $P$. Set
\begin{equation}
  \label{eq:3.50}
  \bm u=\bm x-|\bm x|\bm e_k.
\end{equation}
Apply $P$ to $\bm x$
\begin{equation}
  \label{eq:3.51}
  \begin{aligned}
    P\bm x&=\bm x-\frac{\bm u}{h}(\bm x-|\bm x|\bm e_k)^\dagger\bm x \\
    &=\bm x-\frac{2\bm u}{2(|\bm x|^2-|\bm x|x_k)}(|\bm x|^2-|\bm x|x_k) \\
    &=|\bm x|\bm e_k.
  \end{aligned}
\end{equation}
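As a quick numerical sanity check of Eqs.~\ref{eq:3.48}--\ref{eq:3.51}, the following Python sketch (using numpy; the dimension $n$, the target index $k$ and the test vector are arbitrary choices of ours) builds $P$ from $\bm u$ and verifies that $P\bm x=|\bm x|\bm e_k$ and $P^2=E$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2                                 # arbitrary dimension and target index
x = rng.standard_normal(n)

u = x - np.linalg.norm(x) * np.eye(n)[k]    # Eq. (3.50)
h = 0.5 * (u @ u)                           # Eq. (3.49)
P = np.eye(n) - np.outer(u, u) / h          # Eq. (3.48)

assert np.allclose(P @ x, np.linalg.norm(x) * np.eye(n)[k])   # Eq. (3.51)
assert np.allclose(P @ P, np.eye(n))                          # P is involutory
```

In production code one usually flips the sign of $|\bm x|$ to avoid cancellation in $\bm u$ when $\bm x$ is nearly parallel to $\bm e_k$; the sketch ignores this detail.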
Now we are ready to show that a general $n\times n$ matrix $A$ can be reduced to
Hessenberg form by a series of Householder reflectors. Let $\bm x_1$ be the lower
$n-1$ elements of the first column of $A$, and $\bm y_1^\dagger$ the rightmost
$n-1$ elements of the first row of $A$. Applying Eq.~\ref{eq:3.51} we get
\begin{equation}
  \label{eq:3.52}
  \begin{aligned}
    P_1A&=
    \left[
    \begin{array}{c|cccc}
      1 & 0 & 0 & \cdots & 0 \\
      \hline
      0 &  &  &  &  \\
      0 &  &  &  &  \\
      \vdots &  &  & ^{(n-1)}P_1 &  \\
      0 &  &  &  &
    \end{array}
    \right]
    \left[
    \begin{array}{c|cccc}
      a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
      \hline
      a_{21} &  &  &  &  \\
      a_{31} &  &  &  &  \\
      \vdots &  &  & ^{(n-1)}A &  \\
      a_{n1} &  &  &  &
    \end{array}
    \right] \\
    &=
    \left[
    \begin{array}{c|cccc}
      a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
      \hline
      |\bm x_1| &  &  &  &  \\
      0 &  &  &  &  \\
      \vdots &  &  & ^{(n-1)}A_1 &  \\
      0 &  &  &  &
    \end{array}
    \right]=
    \begin{bmatrix}
      a_{11} & \bm y_1^\dagger \\
      |\bm x_1|\bm e_1 & ^{(n-1)}A_1
    \end{bmatrix},
  \end{aligned}
\end{equation}
where $^{(n-1)}P_1$ is an $(n-1)\times(n-1)$ Householder reflector to map $\bm x_1$
to $|\bm x_1|\bm e_1$, i.e., $k=1$ in Eqs.~\ref{eq:3.50} and \ref{eq:3.51}.

In order to have a similarity transform, $P_1^\dagger=P_1$ is multiplied to the
right of Eq.~\ref{eq:3.52}. Denoting $A_1\equiv P_1A$, we have
\begin{equation}
  \label{eq:3.53}
  \begin{aligned}
    A_1P_1&=
    \begin{bmatrix}
      a_{11} & \bm y_1^\dagger \\
      |\bm x_1|\bm e_1 & ^{(n-1)}A_1
    \end{bmatrix}
    \begin{bmatrix}
      1 & \bm 0^\dagger \\
      \bm 0 & ^{(n-1)}P_1
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      a_{11} & [^{(n-1)}P_1\bm y_1]^\dagger \\
      |\bm x_1|\bm e_1 & {^{(n-1)}A_1}{^{(n-1)}P_1}
    \end{bmatrix}
    \equiv A_1^\prime.
  \end{aligned}
\end{equation}
Such a step is called a Householder reduction, which is a similarity transform
that reduces a column of $A$ to Hessenberg form. For hermitian matrices,
$\bm x_1=\bm y_1$, so the corresponding column and row are reduced to tridiagonal
form.
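The full reduction can be sketched in a few lines of numpy. The helper name `hessenberg_reduce` is ours, and the sketch forms each reflector explicitly for clarity (production codes apply $\bm u$ directly instead of building $P$):

```python
import numpy as np

def hessenberg_reduce(A):
    """Reduce a real matrix to upper Hessenberg form by Householder similarity
    transforms, one column at a time, cf. Eqs. (3.52)-(3.54)."""
    n = A.shape[0]
    H = A.astype(float).copy()
    for j in range(n - 2):
        x = H[j + 1:, j]                    # lower part of column j
        u = x - np.linalg.norm(x) * np.eye(len(x))[0]
        if np.linalg.norm(u) < 1e-14:       # column already reduced
            continue
        P = np.eye(len(x)) - 2.0 * np.outer(u, u) / (u @ u)
        Pfull = np.eye(n)
        Pfull[j + 1:, j + 1:] = P
        H = Pfull @ H @ Pfull               # P is symmetric and involutory
    return H

A = np.random.default_rng(2).standard_normal((5, 5))
H = hessenberg_reduce(A)
assert np.allclose(np.tril(H, -2), 0)       # zero below the first subdiagonal
# similarity transforms preserve the spectrum
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(H)))
```

For a symmetric input the same routine returns a tridiagonal matrix, as noted above.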

Denote $\bm x_2$ as the lower $n-2$ elements of the second column of $A_1^\prime$,
and ${\bm y_1^\prime}^\dagger$ ($\bm y_2^\dagger$) as the rightmost $n-2$ elements
of the first (second) row of $A_1^\prime$. Similarly we can proceed with the
reduction as
\begin{equation}
  \label{eq:3.54}
  \begin{aligned}
    P_2A_1^\prime P_2&=
    \begin{bmatrix}
      1 & 0 & \bm 0^\dagger \\
      0 & 1 & \bm 0^\dagger \\
      \bm 0 & \bm 0 & ^{(n-2)}P_2 \\
    \end{bmatrix}
    \begin{bmatrix}
      a_{11} & a_{12}^\prime & {\bm y_1^\prime}^\dagger \\
      |\bm x_1| & a_{22}^\prime & \bm y_2^\dagger \\
      \bm 0 & \bm x_2 & ^{(n-2)}A_1^\prime \\
    \end{bmatrix}
    \begin{bmatrix}
      1 & 0 & \bm 0^\dagger \\
      0 & 1 & \bm 0^\dagger \\
      \bm 0 & \bm 0 & ^{(n-2)}P_2 \\
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      a_{11} & a_{12}^\prime & {\bm y_1^\prime}^\dagger \\
      |\bm x_1| & a_{22}^\prime & \bm y_2^\dagger \\
      \bm 0 & |\bm x_2|\bm e_1 & {^{(n-2)}P_2}{^{(n-2)}A_1^\prime} \\
    \end{bmatrix}
    \begin{bmatrix}
      1 & 0 & \bm 0^\dagger \\
      0 & 1 & \bm 0^\dagger \\
      \bm 0 & \bm 0 & ^{(n-2)}P_2 \\
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      a_{11} & a_{12}^\prime & [{^{(n-2)}P_2}\bm y_1^\prime]^\dagger \\
      |\bm x_1| & a_{22}^\prime & [{^{(n-2)}P_2}\bm y_2]^\dagger \\
      \bm 0 & |\bm x_2|\bm e_1 & {^{(n-2)}P_2}{^{(n-2)}A_1^\prime}{^{(n-2)}P_2} \\
    \end{bmatrix}
  \end{aligned}
\end{equation}
where $^{(n-2)}P_2$ is an $(n-2)\times(n-2)$ Householder reflector to map $\bm x_2$
to $|\bm x_2|\bm e_1$. We can see that the first two columns are now reduced to
Hessenberg form by two Householder reductions. Similarly, if $A$ is
hermitian, $\bm y_1^\prime=\bm 0$ and ${^{(n-2)}P_2}\bm y_2={^{(n-2)}P_2}\bm x_2
=|\bm x_2|\bm e_1$. We again get tridiagonal form.

It is clear that, in total, a sequence of $n-1$ such reductions reduces $A$ to
Hessenberg form. Namely we have $PAP^\dagger=H_n$, with the \emph{Householder matrix}
\begin{equation}
  \label{eq:3.55}
  P\equiv P_{n-1}P_{n-2}\cdots P_1.
\end{equation}
Matrix $P$ is also unitary because
\begin{equation}
  \label{eq:3.56}
  P^\dagger P=P_1^\dagger P_2^\dagger \cdots P_{n-1}^\dagger P_{n-1}\cdots P_2P_1=E.
\end{equation}

Let's return to the suspended proof. We now know that there exists a Householder
reflector
\begin{equation}
  \label{eq:3.57}
  \begin{aligned}
    S_1&=E-\frac{2\bm u\bm u^\dagger}{|\bm u|^2} \\
    \bm u&=\bm w-|\bm w|\bm e_k
  \end{aligned}
\end{equation}
that maps $\bm w$ to $|\bm w|\bm e_k$, i.e., $S_1\bm w=|\bm w|\bm e_k$. Multiplying
$S_1^\dagger$ to the right of Eq.~\ref{eq:3.45} gives
\begin{equation}
  \label{eq:3.58}
  AVS_1^\dagger=VS_1^\dagger S_1MS_1^\dagger+|\bm w|\bm v\bm e_k^\dagger,
\end{equation}
where $S_1^\dagger=S_1^{-1}$ is used.

We intend to turn Eq.~\ref{eq:3.58} into an Arnoldi relation. Let $S_2$ be the
Householder matrix that reduces $S_1MS_1^\dagger$ to Hessenberg form $H_k$
\begin{equation}
  \label{eq:3.59}
  S_2S_1MS_1^\dagger S_2^\dagger\equiv SMS^\dagger=H_k
\end{equation}
where $S\equiv S_2S_1$. Multiplying $S_2^\dagger$ to the right of Eq.~\ref{eq:3.58},
we have
\begin{equation}
  \label{eq:3.60}
  \begin{aligned}
    AVS^\dagger&=VS_1^\dagger S_2^\dagger S_2S_1MS_1^\dagger S_2^\dagger
    +|\bm w|\bm v\bm e_k^\dagger S_2^\dagger \\
    &=VS^\dagger H_k+|\bm w|\bm v(S_2\bm e_k)^\dagger.
  \end{aligned}
\end{equation}

If $S_2\bm e_k=\bm e_k$, Eq.~\ref{eq:3.60} is exactly the Arnoldi relation:
since $S$ is unitary, $Q=VS^\dagger$ is orthogonal as $V$ is orthogonal, and then
$\mathcal{V}=\mathcal{K}^k(\bm q,A)$ with $\bm q=Q\bm e_1$ and $\bm q_{k+1}=\bm v$
with $Q^\dagger\bm q_{k+1}=\bm 0$.

We know that for a Householder matrix $S_2$ constructed by the prescription
above, which is the one most commonly seen in the literature, $S_2\bm e_1=\bm e_1$.
Well, it is also possible to build an $S_2$ that achieves Eq.~\ref{eq:3.59} and, at
the same time, leaves $\bm e_k$ alone, i.e., $S_2\bm e_k=\bm e_k$. Let's explore
this possibility.

Note that $a_{11}$ is unchanged by the Householder reduction of
Eq.~\ref{eq:3.54} and those that follow; by the same token, $\bm e_k$ can be left
intact if the reduction starts from the last column of the matrix instead.
We start from Eq.~\ref{eq:3.52} but reduce the last column of $A$
\begin{equation}
  \label{eq:3.61}
  \begin{aligned}
    P_1AP_1&=
    \begin{bmatrix}
      ^{(n-1)}P_1 & \bm 0 \\
      \bm 0^\dagger & 1
    \end{bmatrix}
    \begin{bmatrix}
      ^{(n-1)}A & \bm x_n \\
      \bm y_n^\dagger & a_{nn}
    \end{bmatrix}
    \begin{bmatrix}
      ^{(n-1)}P_1 & \bm 0 \\
      \bm 0^\dagger & 1
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      {^{(n-1)}P_1}{^{(n-1)}A} & |\bm x_n|\bm e_{n-1} \\
      \bm y_n^\dagger & a_{nn}
    \end{bmatrix}
    \begin{bmatrix}
      ^{(n-1)}P_1 & \bm 0 \\
      \bm 0^\dagger & 1
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      {^{(n-1)}P_1}{^{(n-1)}A}{^{(n-1)}P_1} & |\bm x_n|\bm e_{n-1} \\
      [{^{(n-1)}P_1}\bm y_n]^\dagger & a_{nn}
    \end{bmatrix},
  \end{aligned}
\end{equation}
where the notation is defined within the equation itself, following a fashion
similar to Eqs.~\ref{eq:3.52}$\sim$\ref{eq:3.56}. ${^{(n-1)}P_1}$ is an $(n-1)\times(n-1)$
Householder reflector such that ${^{(n-1)}P_1}\bm x_n=|\bm x_n|\bm e_{n-1}$.

Let $A_1^\prime\equiv P_1AP_1$. Similar to Eq.~\ref{eq:3.54}, we have
\begin{equation}
  \label{eq:3.62}
  \begin{aligned}
    P_2A_1^\prime P_2&=
    \begin{bmatrix}
      ^{(n-2)}P_2 & \bm 0 & \bm 0 \\
      \bm 0^\dagger & 1 & 0 \\
      \bm 0^\dagger & 0 & 1
    \end{bmatrix}
    \begin{bmatrix}
      ^{(n-2)}A_1^\prime & \bm x_{n-1} & \bm 0 \\
      \bm y_{n-1}^\dagger & a_{n-1,n-1}^\prime & |\bm x_n| \\
      {\bm y_n^\prime}^\dagger & a_{n,n-1}^\prime & a_{nn}
    \end{bmatrix}
    \begin{bmatrix}
      ^{(n-2)}P_2 & \bm 0 & \bm 0 \\
      \bm 0^\dagger & 1 & 0 \\
      \bm 0^\dagger & 0 & 1
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      {^{(n-2)}P_2}{^{(n-2)}A_1^\prime} & |\bm x_{n-1}|\bm e_{n-2} & \bm 0 \\
      \bm y_{n-1}^\dagger & a_{n-1,n-1}^\prime & |\bm x_n| \\
      {\bm y_n^\prime}^\dagger & a_{n,n-1}^\prime & a_{nn}
    \end{bmatrix}
    \begin{bmatrix}
      ^{(n-2)}P_2 & \bm 0 & \bm 0 \\
      \bm 0^\dagger & 1 & 0 \\
      \bm 0^\dagger & 0 & 1
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      {^{(n-2)}P_2}{^{(n-2)}A_1^\prime}{^{(n-2)}P_2} & |\bm x_{n-1}|\bm e_{n-2} & \bm 0 \\
      [{^{(n-2)}P_2}\bm y_{n-1}]^\dagger & a_{n-1,n-1}^\prime & |\bm x_n| \\
      [{^{(n-2)}P_2}{\bm y_n^\prime}]^\dagger & a_{n,n-1}^\prime & a_{nn}
    \end{bmatrix}.
  \end{aligned}
\end{equation}
One can see that eventually matrix $A$ will be reduced to lower Hessenberg form,
where all the elements above the first superdiagonal are zero. Still using the
definition Eq.~\ref{eq:3.55}, since the transpose of a lower Hessenberg matrix is
an upper one, we have
\begin{equation}
  \label{eq:3.63}
  (PAP^\dagger)^\dagger=PA^\dagger P^\dagger=H_n.
\end{equation}
It also applies to each intermediate step of the reduction. Now it is clear that
matrix $A$ can be reduced to lower Hessenberg form row by row from the last row,
if the Householder reflectors are constructed against the rows of $A$.
And since
\begin{equation}
  \label{eq:3.64}
  \begin{bmatrix}
    B & \bm 0 \\
    \bm 0^\dagger & 1
  \end{bmatrix}
  \begin{bmatrix}
    \bm 0 \\
    d
  \end{bmatrix}
  =
  \begin{bmatrix}
    \bm 0 \\
    d
  \end{bmatrix},
\end{equation}
we know that $\forall i\in[1,n-1]$, $P_i\bm e_n=\bm e_n$, and thus $P\bm e_n=\bm e_n$.
So $S_2$ can be formed similarly to reduce $S_1MS_1^\dagger$ to (upper) Hessenberg
form while keeping $S_2\bm e_k=\bm e_k$, as long as $S_2$ reduces the object matrix
from the last row, or equivalently the last column for hermitian object matrices.
This completes the proof of the theorem.

It is worth the effort to prove this theorem thoroughly, as it is vital in telling
whether a Krylov space that has been rid of some Ritz vectors is still a Krylov
space, so as to decide whether we can proceed to expand it using the Arnoldi or
Lanczos relation. In this way the starting vector $\bm x$ is cleansed of the
unwanted (trivial) directions.

Here we only discuss hermitian matrices. Let $A=A^\dagger$. Multiplying $\bm s_i$
on the right of the Lanczos relation Eq.~\ref{eq:3.29}, and noting Eq.~\ref{eq:3.34}, gives
\begin{equation}
  \label{eq:3.65}
  A\bm y_i-\vartheta_i\bm y_i=\beta_k\bm q_{k+1}\bm e_k^\dagger\bm s_i
  =\beta_ks_{ki}\bm q_{k+1}.
\end{equation}
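Eq.~\ref{eq:3.65} says the residual norm of a Ritz pair is available for free as $|\beta_ks_{ki}|$, without ever forming $\bm y_i$. A minimal numpy sketch, assuming a random symmetric test matrix and Lanczos with full reorthogonalization (dimensions are arbitrary choices of ours), confirms this:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 60, 12
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric test matrix

Q = np.zeros((n, k + 1)); alpha = np.zeros(k); beta = np.zeros(k)
q = rng.standard_normal(n); Q[:, 0] = q / np.linalg.norm(q)
for i in range(k):
    r = A @ Q[:, i]
    alpha[i] = Q[:, i] @ r
    for _ in range(2):                               # full reorthogonalization
        r -= Q[:, :i + 1] @ (Q[:, :i + 1].T @ r)
    beta[i] = np.linalg.norm(r)
    Q[:, i + 1] = r / beta[i]

T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
theta, S = np.linalg.eigh(T)          # Ritz values theta_i and eigenvectors s_i
Y = Q[:, :k] @ S                      # Ritz vectors y_i
res = np.linalg.norm(A @ Y - Y * theta, axis=0)   # true residual norms
pred = np.abs(beta[k - 1] * S[k - 1, :])          # Eq. (3.65): |beta_k s_{ki}|
assert np.allclose(res, pred)
```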

Note that $\forall i\in[1,k], \bm y_i\in\mathcal{K}^k(\bm x,A)$, and $\bm q_{k+1}\in
\mathcal{K}^{k+1}(\bm x,A)\ominus\mathcal{K}^k(\bm x,A)$. Therefore, for any set
of indices $1\leq i_1<i_2<\cdots<i_j\leq k$,
\begin{equation}
  \label{eq:3.66}
  \begin{aligned}
    A[\bm y_{i_1},\bm y_{i_2},\cdots,\bm y_{i_j}]&-[\bm y_{i_1},\bm y_{i_2},\cdots,
    \bm y_{i_j}]{\rm diag}[\vartheta_{i_1},\vartheta_{i_2},\cdots,\vartheta_{i_j}]\\
    &=\beta_k\bm q_{k+1}[s_{ki_1},s_{ki_2},\cdots,s_{ki_j}].
  \end{aligned}
\end{equation}
So the linear space ${\rm span}\{\bm y_{i_1},\bm y_{i_2},\cdots,\bm y_{i_j}\}$
spanned by any set of Ritz vectors is a Krylov space. It is expected from the proof
of the theorem that the generating vectors vary for each set.

Suppose $\{\bm y_1,\bm y_2,\cdots,\bm y_j\}$ are the `good' Ritz vectors chosen
to be kept, and they are orthonormalized. As they are not Lanczos basis vectors,
each of them may carry a bit of $\bm q_k$, and thus contribute a bit to $A\bm q_{k+1}$.
This can be expressed explicitly by
\begin{equation}
  \label{eq:3.67}
  \begin{aligned}
    A\bm q_{k+1}&=\alpha_{k+1}\bm q_{k+1}+\beta_{k+1}\bm q_{k+2}+\beta_k\bm q_k \\
    &=\alpha_{k+1}\bm q_{k+1}+\beta_{k+1}\bm q_{k+2}+\sum_{i=1}^j\bm y_i(\bm
    y_i^\dagger A\bm q_{k+1}).
  \end{aligned}
\end{equation}

Denoting $\sigma_i\equiv\bm y_i^\dagger A\bm q_{k+1}=\beta_k s_{ki}$,
we obtain $\bm q_{k+2}$
from Eq.~\ref{eq:3.67}
\begin{equation}
  \label{eq:3.68}
  \beta_{k+1}\bm q_{k+2}=\bm r_{k+1}
  =A\bm q_{k+1}-\alpha_{k+1}\bm q_{k+1}-\sum_{i=1}^j\sigma_i\bm y_i.
\end{equation}
Now that we have $\bm q_{k+1}$ and $\bm q_{k+2}$, the three-term recurrence Eq.~\ref{eq:3.28} can proceed. A matrix equation similar to Eq.~\ref{eq:3.29}
also holds
\begin{equation}
  \label{eq:3.69}
  AQ_l=Q_lT_l+\beta_{k+l-j}\bm q_{k+l-j+1}\bm e_l^\dagger,
\end{equation}
yet with
\begin{equation}
  \label{eq:3.70}
  Q_l=[\bm y_1,\cdots,\bm y_j,\bm q_{k+1},\cdots,\bm q_{k+l-j}]
\end{equation}
and
\begin{equation}
  \label{eq:3.71}
  T_l=
  \begin{bmatrix}
    \vartheta_1 & & & \sigma_1 & & \\
     & \ddots & & \vdots & & \\
     & & \vartheta_j & \sigma_j & & \\
    \sigma_1 & \cdots & \sigma_j & \alpha_{k+1} & \ddots & \\
     & & & \ddots & \ddots & \beta_{k+l-j-1} \\
     & & & & \beta_{k+l-j-1} & \alpha_{k+l-j}
  \end{bmatrix}.
\end{equation}

Still $T_l$ is hermitian, as can be easily seen by multiplying $Q_l^\dagger$ to
the left of Eq.~\ref{eq:3.69}, and using $Q_l^\dagger Q_l=E$ and $Q_l^\dagger
\bm q_{k+l-j+1}=\bm 0$.

The above technique of purging `bad' Ritz vectors is the so-called Lanczos algorithm
with thick restart. It achieves the goal we set at the beginning of this
section. The tridiagonalization of $T_l$ is not mandatory for the restart, yet the
calculation of Ritz pairs requires the diagonalization of $T_l$, which in turn may
need its tridiagonalization.

Fortunately it is not necessary to tridiagonalize $T_l$ as a whole, but only the
arrow matrix in the upper left corner. Since $T_l$ is composed of an arrow matrix
$B$ and a tridiagonal matrix $T$
\begin{equation}
  \label{eq:3.72}
  T_l=
  \begin{bmatrix}
    B & M \\
    M^\dagger & T
  \end{bmatrix}
\end{equation}
with
\begin{equation}
  \label{eq:3.73}
  B\equiv
  \begin{bmatrix}
    \vartheta_1 & & & \sigma_1 \\
     & \ddots & & \vdots \\
     & & \vartheta_j & \sigma_j \\
    \sigma_1 & \cdots & \sigma_j & \alpha_{k+1}
  \end{bmatrix},
  M\equiv
  \begin{bmatrix}
    \bm 0 & O \\
    \beta_{k+1} & \bm 0^\dagger
  \end{bmatrix}
\end{equation}
and
\begin{equation}
  \label{eq:3.74}
  T\equiv
  \begin{bmatrix}
    \alpha_{k+2} & \beta_{k+2} & & \\
    \beta_{k+2} & \alpha_{k+3} & \ddots & \\
     & \ddots & \ddots & \beta_{k+l-j-1} \\
     & & \beta_{k+l-j-1} & \alpha_{k+l-j}
  \end{bmatrix}.
\end{equation}
We only need to tridiagonalize the arrow matrix $B$. In fact, suppose $P$ is the
Householder matrix that tridiagonalizes $B$
\begin{equation}
  \label{eq:3.75}
  PBP^\dagger=T_B.
\end{equation}
Construct matrix
\begin{equation}
  \label{eq:3.76}
  S\equiv
  \begin{bmatrix}
    P & O \\
    O & E
  \end{bmatrix}.
\end{equation}
Then
\begin{equation}
  \label{eq:3.77}
  \begin{aligned}
    ST_lS^\dagger&=
    \begin{bmatrix}
      P & O \\
      O & E
    \end{bmatrix}
    \begin{bmatrix}
      B & M \\
      M^\dagger & T
    \end{bmatrix}
    \begin{bmatrix}
      P^\dagger & O \\
      O & E
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      PB & PM \\
      M^\dagger & T
    \end{bmatrix}
    \begin{bmatrix}
      P^\dagger & O \\
      O & E
    \end{bmatrix} \\
    &=
    \begin{bmatrix}
      PBP^\dagger & PM \\
      (PM)^\dagger & T
    \end{bmatrix}
    =
    \begin{bmatrix}
      T_B & PM \\
      (PM)^\dagger & T
    \end{bmatrix}.
  \end{aligned}
\end{equation}

It appears that $ST_lS^\dagger$ is tridiagonal as long as $PM$ does not disturb
the zero elements of $M$. Since $M=[\beta_{k+1}\bm e_{j+1},\bm 0,\cdots,\bm 0]$,
this amounts to requiring $PM=[a\bm e_{j+1},\bm 0,\cdots,\bm 0]$ with $a\neq 0$, i.e.,
$P\bm e_{j+1}=\bm e_{j+1}$, as $PP^\dagger=E$. From our discussion around
Eqs.~\ref{eq:3.61}$\sim$\ref{eq:3.64}, we know that this is easily satisfied as
long as $P$ reduces $B$ from its last column, instead of the first. This requirement
has also already appeared in the proof of the theorem on the sufficiency of being
a Krylov space.

Indulge us in a reminder: please tridiagonalize $B$ from the last column in
your relevant computer routine.
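A minimal numpy sketch of this reminder, with a made-up arrow matrix of Eq.~\ref{eq:3.73} (the function names and test values are ours): reducing from the last column backwards yields a tridiagonal matrix while the accumulated reflector leaves the last basis vector $\bm e_{j+1}$ untouched.

```python
import numpy as np

def house(x, k):
    """Reflector P with P @ x = |x| e_k, cf. Eqs. (3.48)-(3.51)."""
    u = x - np.linalg.norm(x) * np.eye(len(x))[k]
    if np.linalg.norm(u) < 1e-14:
        return np.eye(len(x))
    return np.eye(len(x)) - 2.0 * np.outer(u, u) / (u @ u)

def tridiag_from_last_column(B):
    """Tridiagonalize symmetric B, reducing from the LAST column backwards."""
    n = B.shape[0]
    T = B.astype(float).copy()
    P_total = np.eye(n)
    for m in range(n - 1, 1, -1):
        Pm = np.eye(n)
        Pm[:m, :m] = house(T[:m, m], m - 1)   # zero T[0:m-1, m], keep T[m-1, m]
        T = Pm @ T @ Pm
        P_total = Pm @ P_total                # every Pm leaves e_{n-1} alone
    return T, P_total

# arrow matrix of Eq. (3.73) with made-up thetas, sigmas and alpha (j = 3)
B = np.diag([1.0, 2.0, 3.0, 4.0])
B[-1, :-1] = B[:-1, -1] = [0.3, -0.5, 0.7]

T, P = tridiag_from_last_column(B)
assert np.allclose(np.tril(T, -2), 0) and np.allclose(np.triu(T, 2), 0)
assert np.allclose(P @ np.eye(4)[-1], np.eye(4)[-1])   # P e_{j+1} = e_{j+1}
```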


\chapter{Effective Interactions}

Now let's check what we've got so far. We have an elegant path to constructing the
Hamiltonian from one- and two-body interactions, namely the second quantization
formalism, and a potent mathematical tool to diagonalize it in spite of its huge
dimensionality -- the Lanczos algorithm with thick restart. What it takes to crack
the nuclei is then only appropriate interactions that describe the many-body system
to an accuracy good enough for our satisfaction.

It may startle many that, apart from the mathematical difficulties in solving
many-body problems, the very basic inter-nucleon interactions that define the
many-body problem are not fully understood or well represented. The nuclear force is
the effective interaction between two bundles of three quarks. A good understanding
of quark-gluon dynamics does not essentially help to pin down the inter-nucleon
interactions. Even though there are some tentative forms of them, they are generally
too crude, and some of them are difficult to use for the evaluation of the corresponding
one- and two-body matrix elements (OBMEs and TBMEs). Practically, to solve the
many-body system, only the Hamiltonian, and thus the OBMEs and TBMEs, are needed.

Theoretically the inter-nucleon interaction originates from the interaction among
the constituent quarks, which should be consistent across different nuclei, and so
should be the relevant OBMEs and TBMEs. This has initiated a quest among physicists to
fit the OBMEs and TBMEs to experimental data involving various nuclei, and apply
them to new nuclei for predictions of observables of interest. These are the so-called
\emph{effective interactions}. Nowadays many popular effective interactions exist.
Among the noteworthy ones are the WBP interaction\footnote{E.K. Warburton and B.A.
Brown, Phys. Rev. C \textbf{46}, 923--944 (1992)} and the YSOX interaction\footnote{
Cenxi Yuan, et al., Phys. Rev. C \textbf{85}, 064324 (2012)}.

\section{ZBM Interaction}
\label{sec:4.1}

These interactions are mature ones to work with in real applications, but too
complex for beginners. To start with, we take up an early one, the ZBM
interaction\footnote{A.P. Zuker, B. Buck and J.B. McGrory, Phys. Rev. Lett.
\textbf{21}, 39 (1968)}, to exemplify the usage of effective interactions.

The ZBM interaction was initially devised for the description of the energy spectra of
\ce{^{16}O}. It uses a model space consisting of three valence sp orbitals,
$0p_{1/2}$, $1s_{1/2}$ and $0d_{5/2}$, accommodating $(2+2+6)=10$ neutrons or
protons. The three sp orbitals are adjacent in the traditional independent particle
model (IPM) for nuclei near doubly closed shells. They will be denoted by $p$, $s$
and $d$, as they have distinctive orbital angular momenta $l$.

The eight protons (neutrons) of \ce{^{16}O} fill up the $0s_{1/2}$, $0p_{3/2}$
and $0p_{1/2}$ shells. The outermost four nucleons (two protons and two neutrons)
are regarded as the valence nucleons. Distribution of the two valence nucleons
in the three valence sp orbitals for protons and neutrons respectively gives all
the possible valence configurations, i.e., the many-body basis in which the
Hamiltonian is represented. The resulting dimensionality of the Hamiltonian matrix
is then $\binom{10}{2}^2=2025$. This provides a good benchmark for the Lanczos
algorithm as matrices of this size are eligible for both the traditional
diagonalization routines and the iterative algorithms for huge matrices, like the
Lanczos method. Actually the final dimensionality is slightly smaller than that,
because the Hamiltonian conserves the total angular momentum $(J,M)$, the total
isospin $(T,z)$ and the total parity $P$.
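The quoted dimensionality is a one-liner to check in Python:

```python
from math import comb

# 2 valence protons and 2 valence neutrons, each pair distributed over
# the 10 sp states of the 0p1/2, 1s1/2 and 0d5/2 orbitals
dim = comb(10, 2) ** 2
assert dim == 2025
```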

\subsection{The Matrix Elements}
\label{sec:4.1.1}

From a practical perspective, the sp state wavefunctions and the specific form of
the one- and two-body interactions are irrelevant. What matters is the matrix
elements. So according to the ZBM interaction, only the OBMEs and the TBMEs are
given. And as the sp states are chosen to be the eigenstates of the one-body
interaction, the one-body matrix is diagonal, with its diagonal matrix elements
being the sp state energies.

Due to the symmetries of the two-body interaction, $\Ket{\alpha\beta}$ and $\Ket
{\gamma\delta}$ must have the same $(J,M)$, $(T,z)$ and $P$, or $\Braket{\alpha
\beta|v|\gamma\delta}$ vanishes. The eigenstates of $H$ will have good $J,M,T,z$
and $P$. So it will greatly reduce the dimension of the Hamiltonian matrix if we
choose the non-interacting many-body basis as the eigenstates of as many these
operators as possible. It is very easy to construct a many-body basis with fixed
$P$, $M$ and $z$ ($P$ is directly read from the $l$ values of the sp states, and
$M$ and $z$ are both additive), but a little difficult for $J$ and $T$. Although
the Hamiltonian matrix has a smaller dimension (by a factor of 10 or more) in the
many-body basis with fixed $J$ ($J$-scheme) than with fixed $M$ ($M$-scheme), we
will use the $M$-scheme for a start. The two bases are connected by a linear transform
of Clebsch-Gordan (C-G) coefficients. The same goes for the $T$-scheme. The $JT$-scheme
is referred to in the following as the coupling scheme.

As ZBM uses the coupling scheme, we discuss the TBMEs of the coupling scheme first,
and then the transformation from the coupling scheme to the non-coupling scheme.
Possible combinations of $l$ for the two-body state $\Ket{\alpha\beta}$ are $dd$,
$ds$, $dp$, $ss$, $sp$ and $pp$. The $J$ ranges for them are $[0,5]$, $[2,3]$,
$[2,3]$, $[0,1]$, $[0,1]$ and $[0,1]$, respectively, adding up to 16 $\Ket{l_1l_2;J}$
and 32 $\Ket{l_1l_2;JT}$ with $T$ being 0 or 1, which may give rise to $32^2=1024$ TBMEs.

But $\Ket{\alpha\beta}$'s with different parities do not interact. As the parity of an
sp state is $(-1)^l$, and that of a many-body state is $\prod_i (-1)^{l_i}$, we have
positive parity for $dd$, $ds$, $ss$ and $pp$ (24 $\Ket{l_1l_2;JT}$ states), and
negative parity for $dp$ and $sp$ (8 $\Ket{l_1l_2;JT}$ states). The number of
potential TBMEs drops to $24^2+8^2=640$. This is still a large number of TBMEs,
and each one of them is meant to be different from all the others, as different
couplings of $J$ and $T$ make a difference in the energy.

The number of TBMEs can still be reduced greatly if the antisymmetry of $\Ket{\alpha
\beta}$ under particle permutation is given due respect, which requires that
$J+T={\rm odd}$ if $\alpha$ and $\beta$ are in the same sp orbit. This is because
the spatial part of the wavefunction of $\Ket{\alpha\beta}$ is symmetric under
permutation of the two particles (their spatial wavefunctions being the same), so
the angular and isospin parts together must be antisymmetric. Since
the C-G coefficients possess the following symmetry
\begin{equation}
  \label{eq:4.1}
  \Braket{j_1m_1j_2m_2|JM}=(-1)^{j_1+j_2-J}\Braket{j_2m_2j_1m_1|JM}
\end{equation}
with shorthand
\begin{equation}
  \label{eq:4.2}
  \begin{aligned}
    \Braket{j_1m_1j_2m_2|JM}\equiv\Braket{j_1m_1j_2m_2|j_1j_2;JM} \\
    \Braket{j_2m_2j_1m_1|JM}\equiv\Braket{j_2m_2j_1m_1|j_2j_1;JM},
  \end{aligned}
\end{equation}
we have
\begin{equation}
  \label{eq:4.3}
  P_{12}\Ket{\alpha\beta}=(-1)^{j_1+j_2+t_1+t_2-J-T}\Ket{\alpha\beta}
\end{equation}
With $t_1=t_2=1/2$ and $j_1=j_2$ being half integers, $j_1+j_2+t_1+t_2$ is even.
So $J+T$ must be odd to impose antisymmetry. Since $J,T$ are conserved, we denote
$\Braket{l_1l_2|v|l_3l_4;JT}\equiv\Braket{l_1l_2;JT|v|l_3l_4;JT}$. The numbers of $(J,T)$
assignments allowed for $\Braket{dd|v|dd;JT}$, $\Braket{ss|v|ss;JT}$ and $\Braket
{pp|v|pp;JT}$ are then 6, 2 and 2, and those for $\Braket{dd|v|ss;JT}$, $\Braket
{dd|v|pp;JT}$ and $\Braket{ss|v|pp;JT}$ are 2, 2 and 2, totaling 16 TBMEs.

Then let's exhaust the couplings of identical-orbit pairs $ll$ with distinct-orbit
pairs $l_1l_2$. The $(J,T)$'s for $\Braket{dd|v|ds;JT}$ are (2,1) and (3,0), where
$J\in[2,3]$ is restricted by $\Ket{ds}$ and $J+T={\rm odd}$ by $\Ket{dd}$. Coupling of $dd$
with $dp$ or $sp$ is forbidden by parity mismatch. Coupling of $ss$ with $ds$ is
forbidden due to $J$ range mismatch ($[0,1]\cap[2,3]=\varnothing$). Here we only
collect 2 TBMEs. These examples illustrate how the game is played.

The couplings $\Braket{\alpha\beta|v|\gamma\delta}$ where $\alpha\neq\beta$ and
$\gamma\neq\delta$ have no limitation on $J+T$, as the many-body wavefunction
can be antisymmetrized either way. They give 12 TBMEs.

Eventually we have only 30 TBMEs in total that are physically allowed. One can see
from this simple interaction that while the two-body states may be many, the final
TBMEs can be few, given all the symmetries and physical restrictions. The sp state
energies and the TBMEs are tabulated in Tab.~\ref{tab:4.1}\footnote{A.P. Zuker,
B. Buck and J.B. McGrory, Phys. Rev. Lett. \textbf{21}, 39 (1968), Table I.}.
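The counting above is mechanical enough to be scripted. The following Python sketch (our own bookkeeping, not part of the ZBM paper) enumerates the pair states, applies the parity, $J$-range and $J+T$-odd restrictions, and recovers the 30 allowed TBMEs:

```python
from itertools import combinations_with_replacement

orb = {'p': (0.5, 1), 's': (0.5, 0), 'd': (2.5, 2)}   # orbital: (j, l)

def jt_set(a, b):
    """Allowed (J, T) for pair |ab>; J+T odd is enforced for identical orbits."""
    ja, jb = orb[a][0], orb[b][0]
    return {(J, T)
            for J in range(int(abs(ja - jb)), int(ja + jb) + 1)
            for T in (0, 1)
            if not (a == b and (J + T) % 2 == 0)}

def parity(a, b):
    return (orb[a][1] + orb[b][1]) % 2

pairs = list(combinations_with_replacement('psd', 2))
count = sum(len(jt_set(*bra) & jt_set(*ket))
            for bra, ket in combinations_with_replacement(pairs, 2)
            if parity(*bra) == parity(*ket))
assert count == 30
```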

\begin{table}
  \caption{Two-body matrix elements (TBMEs) $\Braket{l_1l_2|v|l_3l_4;JT}$ and sp
  state energies $\epsilon$ (in MeV) of the ZBM interaction.}
  \label{tab:4.1}
  \begin{center}
    \begin{tabular}{cccccc}
      \toprule
      $(l_1l_2)$ & $(l_3l_4)$ & $J$ & $T$ &   I   &    II  \\
      \midrule
         $dd$    &    $dd$    &  0  &  1  & -2.81 &  -3.41 \\
         $dd$    &    $dd$    &  1  &  0  & -1.30 &  +0.01 \\
         $dd$    &    $dd$    &  2  &  1  & -0.98 &  -1.21 \\
         $dd$    &    $dd$    &  3  &  0  & -1.02 &  +0.38 \\
         $dd$    &    $dd$    &  4  &  1  & +0.12 &  -0.08 \\
         $dd$    &    $dd$    &  5  &  0  & -3.86 &  -4.26 \\
         $ss$    &    $ss$    &  0  &  1  & -2.28 &  -2.17 \\
         $ss$    &    $ss$    &  1  &  0  & -4.04 &  -3.67 \\
         $pp$    &    $pp$    &  0  &  1  & -2.37 &  -0.26 \\
         $pp$    &    $pp$    &  1  &  0  & -4.55 &  -4.15 \\
         $dd$    &    $ss$    &  0  &  1  & -1.20 &  -1.04 \\
         $dd$    &    $ss$    &  1  &  0  & -0.93 &  -4.27 \\
         $dd$    &    $pp$    &  0  &  1  & +3.37 &  +3.37 \\
         $dd$    &    $pp$    &  1  &  0  & -1.50 &  -1.50 \\
         $ss$    &    $pp$    &  0  &  1  & +0.73 &  +0.73 \\
         $ss$    &    $pp$    &  1  &  0  & -0.50 &  -0.50 \\
         $dd$    &    $ds$    &  2  &  1  & -0.84 &  -0.88 \\
         $dd$    &    $ds$    &  3  &  0  & -1.69 &  -3.53 \\
         $ds$    &    $ds$    &  2  &  0  & -0.80 &  -3.70 \\
         $ds$    &    $ds$    &  2  &  1  & -1.15 &  -1.17 \\
         $ds$    &    $ds$    &  3  &  0  & -3.90 &  -2.60 \\
         $ds$    &    $ds$    &  3  &  1  & +0.24 &  +1.16 \\
         $dp$    &    $dp$    &  2  &  0  & -4.65 &  -4.74 \\
         $dp$    &    $dp$    &  2  &  1  & +0.67 &  +1.25 \\
         $dp$    &    $dp$    &  3  &  0  & -2.71 &  -4.14 \\
         $dp$    &    $dp$    &  3  &  1  & -0.95 &  +0.50 \\
         $sp$    &    $sp$    &  0  &  0  & -3.17 &  -3.57 \\
         $sp$    &    $sp$    &  0  &  1  & +0.35 &  +1.55 \\
         $sp$    &    $sp$    &  1  &  0  & -3.01 &  -3.00 \\
         $sp$    &    $sp$    &  1  &  1  & +0.47 &  +0.95 \\
                 &      & $\epsilon_d$ &  &  0.80 &   3.50 \\
                 &      & $\epsilon_s$ &  &  0.30 &   2.75 \\
                 &      & $\epsilon_p$ &  &  0.00 &   0.00 \\
    \bottomrule
    \end{tabular}
  \end{center}
\end{table}

We don't dwell on the algorithm for distributing $N$ particles over $n$ sp orbits
in this document. It is basically the problem of putting $N$ balls in $n$ boxes.
The number of possible combinations is $\binom{n}{N}$. The problem is not trivial,
yet completely tractable for high-school students with basic knowledge of
combinatorics, plus a few coding skills.
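For completeness, the balls-in-boxes enumeration is a one-liner with the Python standard library; for large model spaces one would generate the combinations as bit patterns instead:

```python
from itertools import combinations
from math import comb

n, N = 10, 2                                # n sp states, N particles
configs = list(combinations(range(n), N))   # each config = occupied indices
assert len(configs) == comb(n, N) == 45
```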

\subsection{$M$-Scheme TBMEs}

The derivation of $M$-scheme TBMEs in terms of coupling-scheme TBMEs is quite
straightforward. To begin with, let's first explore how the two-body wavefunction in
the $M$-scheme relates to that in the coupling scheme. We note that MBSDs
(Eq.~\ref{eq:2.5}) are practically $M$-scheme base vectors. Denote the sp state vectors
\begin{equation}
  \label{eq:4.4}
  \begin{aligned}
    \Ket{\alpha}&\equiv\Ket{n_1l_1j_1m_1T_1t_1}\equiv\Ket{n_1l_1j_1m_1t_1} \\
    \Ket{\beta}&\equiv\Ket{n_2l_2j_2m_2t_2}
  \end{aligned}
\end{equation}
where we have omitted the constant $T_1=T_2=1/2$. Then we have the antisymmetrized two-body
wavefunction
\begin{equation}
  \label{eq:4.5}
  \Ket{\alpha\beta}=\frac{1}{\sqrt{2}}(\Ket{\alpha}\Ket{\beta}-
  \Ket{\beta}\Ket{\alpha}).
\end{equation}
For two-body wavefunctions in coupling scheme the following shorthand is used
\begin{equation}
  \label{eq:4.6}
  \Ket{j_1j_2JMT_1T_2Tt}\equiv\Ket{JMTt}.
\end{equation}

Then
\begin{equation}
  \label{eq:4.7}
  \begin{aligned}
    \Ket{\alpha\beta}=&\frac{1}{\sqrt{2}}\sum_{JMTt}\Ket{JMTt}\Bra{JMTt}(\Ket{\alpha}
    \Ket{\beta}-\Ket{\beta}\Ket{\alpha}) \\
    =&\frac{1}{\sqrt{2}}\sum_{JMTt}\Ket{JMTt}\Braket{JM|j_1m_1j_2m_2}
    \Braket{Tt|t_1t_2}\Ket{n_1l_1n_2l_2} \\
    &-\frac{1}{\sqrt{2}}\sum_{JMTt}\Ket{JMTt}\Braket{JM|j_2m_2j_1m_1}
    \Braket{Tt|t_2t_1}\Ket{n_2l_2n_1l_1} \\
    =&\frac{1}{\sqrt{2}}\sum_{JMTt}\Braket{JM|j_1m_1j_2m_2}
    \Braket{Tt|t_1t_2}[\Ket{n_1l_1n_2l_2;j_1j_2JM;t_1t_2Tt} \\
    &-\Ket{n_2l_2n_1l_1;j_2j_1JM;t_2t_1Tt}\cdot(-1)^{j_1+j_2+1-J-T}] \\
    \equiv&\frac{1}{\sqrt{2}}\sum_{JMTt}\Braket{JM|j_1m_1j_2m_2}
    \Braket{Tt|t_1t_2}[\Ket{12}-\Ket{21}(-1)^{j_1+j_2+1-J-T}].
  \end{aligned}
\end{equation}
The symmetry of the C-G coefficients, Eq.~\ref{eq:4.1}, is used. It is worth noting
that for two particles in the same sp orbit ($j_1=j_2$ being half integers), the
bracket in the last line vanishes when $J+T={\rm even}$, since the phase factor
$(-1)^{j_1+j_2+1-J-T}$ then equals $+1$ while $\Ket{12}=\Ket{21}$. No antisymmetrized
two-body wavefunctions exist in this circumstance, consistent with our analysis before.

Now we are prepared to talk about the TBMEs. Denoting $\Ket{\gamma}$ and $\Ket{\delta}$
as No.~3 and No.~4, we have
\begin{equation}
  \label{eq:4.8}
  \begin{aligned}
    \Braket{\alpha\beta|v|\gamma\delta}=&
    \frac{1}{2}\sum_{JMTt}\Braket{JM|j_1m_1j_2m_2}\Braket{JM|j_3m_3j_4m_4}
    \Braket{Tt|t_1t_2}\Braket{Tt|t_3t_4}\cdot \\
    &[\Braket{12|v|34}-\Braket{12|v|43}(-1)^{j_3+j_4+1-J-T} \\
    &-\Braket{21|v|34}(-1)^{j_1+j_2+1-J-T}+\Braket{21|v|43}(-1)^{j_1+j_2+j_3+j_4}] \\
    =&\frac{1}{2}\sum_{JMTt}\Braket{JM|j_1m_1j_2m_2}\Braket{JM|j_3m_3j_4m_4}
    \Braket{Tt|t_1t_2}\Braket{Tt|t_3t_4}\cdot \\
    &\{\Braket{12|v|34}[1+(-1)^{\sum_{i=1}^4j_i}] \\
    &\quad\quad-\Braket{12|v|43}[(-1)^{j_1+j_2}+(-1)^{j_3+j_4}](-1)^{J+T+1}\} \\
    =&\frac{1}{2}\sum_{JMTt}\Braket{JM|j_1m_1j_2m_2}\Braket{JM|j_3m_3j_4m_4}
    \Braket{Tt|t_1t_2}\Braket{Tt|t_3t_4}\cdot \\
    &[1+(-1)^{\sum_{i=1}^4j_i}][\Braket{12|v|34}-\Braket{12|v|43}
    (-1)^{J+T+1+j_1+j_2}],
  \end{aligned}
\end{equation}
where $v$'s symmetry $\Braket{12|v|34}=\Braket{21|v|43}$ has been used multiple
times.

We see from Eq.~\ref{eq:4.8} that $\sum_{i=1}^4j_i$ has to be even, otherwise
$\Braket{\alpha\beta|v|\gamma\delta}$ vanishes. And this conclusion also applies
to TBMEs in coupling scheme, as $\sum_{i=1}^4j_i$ has nothing to do with the
projection of the angular momenta. Well, parity conservation ensures that
$(-1)^{l_1+l_2}=(-1)^{l_3+l_4}$, so
\begin{equation}
  \label{eq:4.9}
  \sum_{i=1}^4j_i=\sum_{i=1}^4l_i\pm\frac{1}{2}\pm\frac{1}{2}\pm\frac{1}{2}\pm\frac{1}{2}
\end{equation}
depends on the choices of signs of the four $\pm\frac{1}{2}$'s, which are essentially
the ways of $\bm l+\bm s$ coupling. Note that they can be plus or minus independently
of each other. A single odd-one-out coupling, where one $\bm l$-$\bm s$ coupling differs
from the other three, namely three `$+$' and one `$-$', or three `$-$' and one
`$+$', leads to odd $\sum_{i=1}^4j_i$ and thus is not allowed.

In the case of the ZBM interaction, the $\bm l+\bm s$ couplings for $0p_{1/2}$, $1s_{1/2}$
and $0d_{5/2}$ are `$-$', `$+$' and `$+$', respectively, which happen to coincide
with their parity values. So parity conservation already rules out illegal $\sum_{i=1}^4j_i$
assignments, and we do not have to worry about it in this particular case.

Anyway, with $\sum_{i=1}^4j_i$ even, and
\begin{equation}
  \label{eq:4.10}
  \Braket{\alpha\beta|v|\gamma\delta;JM}\equiv
  [\Braket{12|v|34}-\Braket{12|v|43}(-1)^{J+T+1+j_1+j_2}],
\end{equation}
Eq.~\ref{eq:4.8} is reduced to a simpler form
\begin{equation}
  \label{eq:4.11}
  \begin{aligned}
    \Braket{\alpha\beta|v|\gamma\delta}=&
    \sum_{JMTt}\Braket{JM|j_1m_1j_2m_2}\Braket{JM|j_3m_3j_4m_4}
    \Braket{Tt|t_1t_2}\Braket{Tt|t_3t_4}\cdot \\
    &\Braket{\alpha\beta|v|\gamma\delta;JM}.
  \end{aligned}
\end{equation}

$\Braket{\alpha\beta|v|\gamma\delta;JM}$ obeys a definite symmetry under particle
exchange. To exchange particles 1 and 2 is equivalent to exchanging 3 and 4, due
to the symmetry of $v$. Without loss of generality, we exchange 1 and 2, and use
$\Braket{21|v|34}=\Braket{12|v|43}$ and $\Braket{21|v|43}=\Braket{12|v|34}$,
which follow from the symmetry of $v$, to obtain
\begin{equation}
  \label{eq:4.12}
  \begin{aligned}
    \Braket{\beta\alpha|v|\gamma\delta;JM}&=
    [\Braket{21|v|34}-\Braket{21|v|43}(-1)^{J+T+1+j_1+j_2}] \\
    &=[\Braket{12|v|43}-\Braket{12|v|34}(-1)^{J+T+1+j_1+j_2}] \\
    &=-(-1)^{J+T+1+j_1+j_2}[\Braket{12|v|34}-\Braket{12|v|43}(-1)^{J+T+1+j_1+j_2}] \\
    &=(-1)^{J+T+j_1+j_2}\Braket{\alpha\beta|v|\gamma\delta;JM}.
  \end{aligned}
\end{equation}
In particular, $\Braket{\alpha\beta|v|\gamma\delta;JM}$ is antisymmetric, i.e.\ it
changes sign under the exchange, whenever $j_1+j_2+J+T$ is odd. For two nucleons
in the same orbit ($j_1=j_2$), the Pauli principle only allows states with odd
$J+T$, so the exchange phase there is $+1$, as it must be for an exchange that
maps the configuration onto itself.

As the original literature of the ZBM interaction states that the TBMEs tabulated in
Tab.~\ref{tab:4.1} are ``antisymmetrized'' matrix elements, and there is only one
value for each configuration, we tend to believe that the tabulated TBMEs in the
coupling scheme are just $\Braket{\alpha\beta|v|\gamma\delta;JM}$.
This will be verified in our code.

\chapter{Numerical Implementation}

We hereby keep note of some of the algorithmic techniques used in the computer
implementation of the FCI code. They will save much valuable time when reviewing
the code while debugging.

\section{Bit Representation of MBSDs}
\label{sec:5.1}

In the bit representation, the MBSDs are constructed by distributing ``$N$ balls in
$n$ boxes ($N\leq n$)''. We use an $n$-bit variable for the $n$ slots (sp states);
a bit value of 1 or 0 indicates that the corresponding sp state is occupied or
empty. This representation is not only efficient in memory usage, but also
convenient for implementing the application of operators to MBSDs.

Let's illustrate this with an example. Suppose that we have an MBSD where there are
10 sp states in the model space, labeled from 0 to 9, and 4 particles occupy sp
states, say, 0, 3, 5, 8. Then this MBSD is $\Ket{\Phi}\equiv\Ket{0100101001}$,
which can be easily realized with the C++ container \texttt{bitset}. Accordingly,
we use the least significant bit (LSB) for sp state 0.

Let's see how one calculates $a_{\alpha}\Ket{\Phi}$. To be specific and without
loss of generality, let $\alpha=3$. To begin with, let's define a \texttt{bitset}
object to represent $\Ket{\Phi}$
\begin{quote}
  $\Ket{\Phi}\to\,$\texttt{bitset<10> m(0b0100101001);}
\end{quote}

First test whether sp state $\alpha$ is occupied, by \texttt{m.test($\alpha$)}. If
the return value is \texttt{false}, then $a_3\Ket{\Phi}=\Ket{0}$. Otherwise, we
count the occupied states below $\alpha$ (their number denoted by $j$) and assign
an additional phase factor $(-1)^j$ to $\Ket{\Phi}$, because $j$ is just the number
of transpositions needed to bring $a_\alpha$ next to $a^\dagger_\alpha$ in
$\Ket{\Phi}$. When several annihilators are applied in sequence, these phases
simply accumulate, so the total phase is $(-1)^{\sum j}$, with the sum running
over the annihilated sp states. For $a_3\Ket{\Phi}$ we obviously have $j=1$
(only sp state 0 lies below 3 and is occupied), so we get an additional $-1$,
and eventually
\begin{quote}
  $a_3\Ket{\Phi}=-\Ket{0100100001}$
\end{quote}

The above steps can be wrapped in a member method (e.g., \texttt{Annihilate})
of a class, say, \texttt{TABit} (following the naming convention of \emph{this}
code). We now want to calculate $\Braket{\Psi|a^\dagger_p a^\dagger_q a_s
a_r|\Phi}$, and use \texttt{TABit} objects \texttt{rr} and \texttt{cc} for
$\Ket{\Phi}$ and $\Ket{\Psi}$, respectively. Then the evaluation of
$\Braket{\Psi|a^\dagger_p a^\dagger_q a_s a_r|\Phi}$ is done in computer code via
\begin{quote}
  $a_s a_r\Ket{\Phi}\to\,$\texttt{rr.Annihilate(r).Annihilate(s);}
\end{quote}
Note that this member function returns the object itself, which enables convenient
consecutive calls to complete the sequential annihilation. $\Bra{\Psi}a^\dagger_p
a^\dagger_q$ is calculated similarly: one just starts from its Hermitian conjugate.
Since the result is always real ($0$ or $\pm 1$), we have
\begin{quote}
  $\Bra{\Psi}a^\dagger_p a^\dagger_q\to a_q a_p\Ket{\Psi}
  \to\,$\texttt{cc.Annihilate(p).Annihilate(q);}
\end{quote}

Finally, it is worth noting that one may define a `normal' order for the sp states
to save memory when storing the TBMEs. It can be decided by a certain convention
(descending or ascending order), adopted directly from the literature from which
the effective TBMEs are quoted, or chosen otherwise. No matter what the specific
order is, one can always shuffle and reorder the sequence of the sp states in the
MBSD on either side of the TBME, as long as the corresponding creators and
annihilators move along with them. No particular order of the sp states is
superior to any other; all orders are equivalent, their effects being balanced by
the accompanying sequence of operators (creators and annihilators).

Let's illustrate this by an example. Suppose that we have a given TBME $\Braket{
\alpha_3\alpha_4|v|\alpha_2\alpha_5}=c$. In constructing the Hamiltonian matrix
such a matrix element is encountered as $\Braket{\alpha_1\alpha_3\alpha_4|H_{I}|
\alpha_1\alpha_5 \alpha_2}$, where Eq.~\ref{eq:2.41} is used for $H_{I}$. We have
\begin{equation}
  \label{eq:5.1}
  \begin{aligned}
    \Braket{\alpha_1\alpha_3\alpha_4|H_{I}|\alpha_1\alpha_5\alpha_2}
  =&\frac{1}{4}(\Braket{\alpha_3\alpha_4|v|\alpha_5\alpha_2}
  -\Braket{\alpha_4\alpha_3|v|\alpha_5\alpha_2} \\
  &-\Braket{\alpha_3\alpha_4|v|\alpha_2\alpha_5}
  +\Braket{\alpha_4\alpha_3|v|\alpha_2\alpha_5}) \\
  =&\Braket{\alpha_3\alpha_4|v|\alpha_5\alpha_2}=-c.
  \end{aligned}
\end{equation}

We wish not to bother with adding the minus sign by hand. One just does the following swap
\begin{quote}
  $\Braket{\alpha_3\alpha_4|v|\alpha_5\alpha_2}a^\dagger_3 a^\dagger_4 a_2 a_5\to$
  $\Braket{\alpha_3\alpha_4|v|\alpha_2\alpha_5}a^\dagger_3 a^\dagger_4 a_5 a_2$.
\end{quote}
No minus sign needs to be added explicitly; it is implicit in the ordering of the
annihilators.

We have just exemplified how the Hamiltonian matrix element $\Braket{\Phi|H|\Psi}$
is calculated when $\Ket{\Phi}$ and $\Ket{\Psi}$ differ in two pairs of sp states.
This is useful when writing the corresponding computer code, as it simplifies the
calculation: the sum over the four indices results in a single TBME. Now we turn
to the case where $\Ket{\Phi}$ and $\Ket{\Psi}$ differ in one pair of sp states,
for which we have
\begin{equation}
  \label{eq:5.2}
  \begin{aligned}
    \Braket{\alpha_1\cdots\alpha_n\beta|H_{I}|\alpha_1\cdots\alpha_n\gamma}
    &=\frac{1}{4}\sum_{pqrs}\Braket{pq|v|rs}
    \cdot \\
    &\Braket{\alpha_1\cdots\alpha_n\beta|a^\dagger_p a^\dagger_q a_{s} a_{r}|\alpha_1\cdots\alpha_n\gamma} \\
    &=\sum_{\alpha_i}\Braket{\alpha_i\beta|v|\alpha_i\gamma},
  \end{aligned}
\end{equation}
where it is assumed that the phase factor for each $\alpha_i$ in the last step
is $+1$. The only difference from Eq.~\ref{eq:5.1} is that, besides $\beta$ and
$\gamma$, we can choose to annihilate any one of $\alpha_1,\cdots,\alpha_n$ from
both sides, so that the resulting $\Ket{\Phi^\prime}$ and $\Ket{\Psi^\prime}$ are
essentially the same physical MBSD, differing at most in a phase factor;
specifically, $\Braket{\Phi^\prime|\Psi^\prime}=\pm 1$.

Similarly we have for the situation where $\Ket{\Phi}=\Ket{\Psi}$,
\begin{equation}
  \label{eq:5.3}
  \begin{aligned}
    \Braket{\alpha_1\cdots\alpha_n|H_{I}|\alpha_1\cdots\alpha_n}
    =&\sum_{\alpha_i<\alpha_j}\Braket{\alpha_i\alpha_j|v|\alpha_i\alpha_j}
  \end{aligned}
\end{equation}
Also, please be reminded that the sign of each term in the summation should be
determined by the expectation value of the operator string.

There is one minor point about the order of the two MBSDs in a TBME. Apart from
the normal order of the sp states within an MBSD, we may define
$\Braket{\Phi|v|\Psi}$ rather than $\Braket{\Psi|v|\Phi}$ as the normal order,
or the contrary. Usually the TBMEs are real, so the two are actually equal, and
one may freely choose either convention for what is stored in the code.

\section{Constructing the Hamiltonian Matrix}

It is very useful to illustrate the construction process of a Hamiltonian matrix
for debugging the FCI code. During the process the dimensionality of the problem
is revealed. The ZBM interaction is still a very good starting point, as it is
not too complicated for beginners to digest or for this document to show, yet
still exquisite enough to possess the basic components of a mature and modern
effective interaction.

\subsection{The Model Space}

The sp orbits of the model space are $0p_{1/2}$, $1s_{1/2}$ and $0d_{5/2}$, as
mentioned in Sec.~\ref{sec:4.1}. For convenient reference, they are numbered as
tabulated in Tab.~\ref{tab:5.1}, where $t_z$ is the projection of the nucleon's
isospin, with $t_z=1/2$ for protons. The ids start from 0 to be compatible with
the \texttt{bitset} coding convention.

The quantum number $l$ is deliberately chosen to dominate over $t_z$ so as to
avoid generating inconsistent $l$-orderings in the TBMEs. Let me explain. Usually
(and most easily and safely) the generated MBSDs have their constituent sp states
ordered either in descending or in ascending order w.r.t.\ the id of the sp
states. Without loss of generality, let it be descending order. Making $l$ the
foremost pivotal key in numbering the sp states then ensures that the sp states
are in descending order in all the MBSDs, which enables us to avert most of the
situations where two sp states would have to be swapped so as to avoid adding an
explicit minus sign to the TBMEs given in Tab.~\ref{tab:4.1}, as discussed in
Sec.~\ref{sec:5.1}. That said, one still has to do the swap in the case where
$\Ket{\Phi}$ and $\Ket{\Psi}$ differ only by a pair of sp states, as
Eq.~\ref{eq:5.2} describes: when $\alpha_i$ loops over its allowed domain, there
is no guarantee that $\alpha_i\beta$ and $\alpha_i\gamma$ are always in the right
order (descending order in our example here). More often than not, a swap of
$\alpha_i$ and $\beta$ and/or of $\alpha_i$ and $\gamma$ is needed.


\begin{table}
  \caption{Single-particle orbitals of the ZBM interaction.}
  \label{tab:5.1}
  \begin{center}
    \begin{tabular}{cccccc}
      \toprule
       id & $n$ & $l$ & $j$ & $m_j$ & $t_z$ \\
      \hline
       0  &  0  &  1  & 1/2 &  1/2 &  1/2  \\
       1  &  0  &  1  & 1/2 & -1/2 &  1/2  \\
       2  &  0  &  1  & 1/2 &  1/2 & -1/2  \\
       3  &  0  &  1  & 1/2 & -1/2 & -1/2  \\
       4  &  1  &  0  & 1/2 &  1/2 &  1/2  \\
       5  &  1  &  0  & 1/2 & -1/2 &  1/2  \\
       6  &  1  &  0  & 1/2 &  1/2 & -1/2  \\
       7  &  1  &  0  & 1/2 & -1/2 & -1/2  \\
       8  &  0  &  2  & 5/2 &  1/2 &  1/2  \\
       9  &  0  &  2  & 5/2 & -1/2 &  1/2  \\
       10 &  0  &  2  & 5/2 &  3/2 &  1/2  \\
       11 &  0  &  2  & 5/2 & -3/2 &  1/2  \\
       12 &  0  &  2  & 5/2 &  5/2 &  1/2  \\
       13 &  0  &  2  & 5/2 & -5/2 &  1/2  \\
       14 &  0  &  2  & 5/2 &  1/2 & -1/2  \\
       15 &  0  &  2  & 5/2 & -1/2 & -1/2  \\
       16 &  0  &  2  & 5/2 &  3/2 & -1/2  \\
       17 &  0  &  2  & 5/2 & -3/2 & -1/2  \\
       18 &  0  &  2  & 5/2 &  5/2 & -1/2  \\
       19 &  0  &  2  & 5/2 & -5/2 & -1/2  \\
    \bottomrule
    \end{tabular}
  \end{center}
\end{table}

\subsection{The MBSDs and the Hamiltonian Matrix}
Since the $M$-scheme and $T$-scheme are adopted, we first choose the values of
the total $M$ and $t$, where $t$ is the projection of the total $T$. $t$ is fixed
for a given nucleus, as
\begin{equation}
  \label{eq:5.4}
  t=\frac{1}{2}(Z-N).
\end{equation}
In contrast, $M$ is not fixed. Yet as $[H,M]=0$, the total Hamiltonian matrix is
block-diagonal w.r.t. $M$, such that to diagonalize $H$ is to diagonalize each
$M$ block of $H$. So we only concentrate on the diagonalization of each of such
blocks, and still name them as $H$ as a matter of convenience. They are collected
for the eventual energy spectrum of the nucleus.

We like such symmetries, as they split the Hamiltonian matrix into much smaller
diagonal blocks, so that the diagonalization only has to be done within those
blocks. We know that each symmetry of a system is connected with a conserved
quantity, and each conserved quantity has a symmetry corresponding to it. Besides
$M$ and $t$, we also have $J$, $T$ and $\pi$, where $\pi$ is the parity. As
mentioned in Sec.~\ref{sec:4.1.1}, the eigenstates of $J$ are somewhat complicated
to construct, and the same goes for $T$; we hereby use the $M$-scheme for a start.
The good news is that the parity $\pi$ is also a conserved quantity, for which the
system has mirror symmetry. Any two MBSDs that have different parities do not
interact. So the parity should also be specified as input at the beginning, just
as $M$ is.

Specifically, one just generates all the MBSDs and picks out those that have the
specified $M$, $t$ and $\pi$. Suppose that there are $k$ selected MBSDs. The
calculation of all the $k\times k$ Hamiltonian matrix elements then follows
Eqs.~\ref{eq:5.1}$\sim$\ref{eq:5.3}.

Let's take a look at how the MBSDs are generated and selected in several situations.
To exhaust all the possible MBSDs, the odometer method is among the popular ones.
It works like counting numbers on a mechanical clock: the MBSD is treated as a
number and is incremented by one step at a time, with the sp states as its digits.
The only difference is that all the digits in an MBSD must be distinct, due to
Pauli exclusion. Assume that we have four particles in six sp states. The sequence
of MBSDs worked out by the odometer method appears in Tab.~\ref{tab:5.2}.

\begin{table}
  \caption{The sequence of MBSDs generated by odometer method.}
  \label{tab:5.2}
  \begin{center}
    \begin{tabular}{cccc}
      \toprule
      \multicolumn{4}{c}{MBSD} \\
      \hline
      0 & 1 & 2 & 3 \\
      0 & 1 & 2 & 4 \\
      0 & 1 & 2 & 5 \\
      0 & 1 & 3 & 4 \\
      0 & 1 & 3 & 5 \\
      0 & 1 & 4 & 5 \\
      0 & 2 & 3 & 4 \\
      0 & 2 & 3 & 5 \\
      0 & 2 & 4 & 5 \\
      0 & 3 & 4 & 5 \\
      1 & 2 & 3 & 4 \\
      1 & 2 & 3 & 5 \\
      1 & 2 & 4 & 5 \\
      1 & 3 & 4 & 5 \\
      2 & 3 & 4 & 5 \\
      \bottomrule
    \end{tabular}
  \end{center}
\end{table}

We can see that there are $\binom{6}{4}=15$ MBSDs in total.

In a similar way we get the MBSDs for the ZBM interaction. One can check that there
are in total 311 MBSDs for total $M=0$ and $t=0$, of which 144 have $\pi=-$ and
the remaining 167 have $\pi=+$. The Hamiltonian matrix is of quite moderate scale,
yet still too large to be shown in a terminal. To catch a glimpse of the structure
of the matrix, it is drawn as a picture, where each pixel is a matrix element,
with white pixels representing zeros and colored ones the nonzero matrix elements.
The pictures for the Hamiltonian matrix with $M=0$, $t=0$, and $\pi=$ $+$, $-$
and ($+$,$-$) combined are shown in Fig.~\ref{fig:5.1}.

\begin{figure}
  \centering
  \subfigure[]{
    \includegraphics[width=3.5cm]{zbm+.pdf}
    \label{fig:5.1.1}
  }
  \quad
  \subfigure[]{
    \includegraphics[width=3.5cm]{zbm-.pdf}
    \label{fig:5.1.2}
  }
  \quad
  \subfigure[]{
    \includegraphics[width=3.5cm]{zbm.pdf}
    \label{fig:5.1.3}
  }
  \caption{The structure of the ZBM Hamiltonian matrix with $M=0$, $t=0$ and
  $\pi=+$ (a), $-$ (b) and ($+$,$-$) combined (c). The red pixels are the nonzero
  matrix elements, while the white pixels are the zeros.}
  \label{fig:5.1}
\end{figure}

Visualization is a very pleasant way to perceive a system in a bird's-eye view.
Several traits of the Hamiltonian matrices can be read off from Fig.~\ref{fig:5.1}.
\begin{itemize}
  \item What appears to be a non-block-diagonal matrix is actually reducible,
  namely it can be reduced to a block-diagonal matrix once the MBSDs are sorted
  by their parities.
  \item The Hamiltonian matrix is rather sparse. The matrix elements of
  Fig.~\ref{fig:5.1.1} and \ref{fig:5.1.2} constitute the matrix elements of
  Fig.~\ref{fig:5.1.3}, and the former two are denser than the latter. The
  numbers of total matrix elements, nonzero elements and the corresponding
  percentages are 20736, 3476 and 16.76\% for Fig.~\ref{fig:5.1.1}; 27889, 4775
  and 17.12\% for Fig.~\ref{fig:5.1.2}; and 96721, 8251 and 8.53\% for
  Fig.~\ref{fig:5.1.3}.
  \item By splitting the MBSDs according to their parities, we have cut the
  dimensionality of the individual matrices almost in half and thus excluded
  about half of the zero elements.
  \item The non-zero matrix elements are concentrated mainly on the diagonal.
\end{itemize}



\end{document}
