% Manual for the SIESTA program
%
% To generate the printed version:
%
% pdflatex siesta
% splitidx siesta.idx (optional if you want a split index)
% makeindex siesta    (Optional if you have a current siesta.ind)
% pdflatex siesta
%
%

% NOTE:
% If you want to reference a fdf flag, use:
%  \fdf{#1}
% This command will automatically create the necessary link
% to the created entry of the flag. Thus every flag will
% eventually become a link to the correct entry.
% If you want to create an fdf flag which is not referenced,
% for instance a sub-block segment, use \fdf*{#1}.
% The \fdf command will automatically convert ! to . but retains
% the ! for indexing of the flag. Thus \fdf{FDF!Flag} will print
% as FDF.Flag, but indexed as \index{FDF!Flag}. If you reference
% a flag you must use the exclamation mark as used in the fdfentry
% environment.
% If you want to reference a fdf flag with an argument do it like
% this:
%   \fdf{Label:Argument}
% in case the index exists for this.
%
% Two frequently used fdf commands are \fdftrue and \fdffalse, which
% are short-hands for \fdf*{true} and \fdf*{false}.
% 
% To document a new flag, do this:
%
%  \begin{fdfentry}{FDF.Flag}
%
%    Description
%
%  \end{fdfentry}
%
% Remark that it is not necessary to create any index commands
% as that is handled by the fdfentry environment.
% Optionally one may specify the type of variable it corresponds
% to:
%
%  \begin{fdfentry}{FDF.Flag}[integer]<0>
%
%    Description
%
%  \end{fdfentry}
%
% where [#1] may be any of:
%    block, integer, real, energy, length, logical
% and <#1> is the default value of the flag.
%
% There are two shorthands for the logical input with T/F default:
%
%  \begin{fdflogicalT}{FlagDefaultTrue}
%
%    This flag is default \fdftrue.
%
%  \end{fdflogicalT}
%  \begin{fdflogicalF}{FlagDefaultFalse}
%
%    This flag is default \fdffalse.
%
%  \end{fdflogicalF}
%
% Sometimes it is useful to create an index value without
% having to create the entry (say if a flag has been
% superseded by a new flag), in this case you should do:
%
%  \begin{fdfentry}{FDF.Flag}[integer]<0>
%    \fdfindex*{Old.Flag}
%
%    Description
%
%  \end{fdfentry}
% 
% Additionally one may create dependency flags to make it
% easy to see dependencies between flags
%  \begin{fdfentry}{FDF.Flag}[integer]<0>
%    \fdfdepend{FDF.First, FDF.Second}
%
%    Description
%
%  \end{fdfentry}
% will create additional information in the PDF with 
% proper dependencies and hyperlinks to the fdf-flags.
% To document a flag has replaced another flag, use
%  \fdfdeprecates{...} in the same manner as \fdfdepend{...}.
%
%
% !!!IMPORTANT!!!
% Do NOT use \bf, \it, \tt, \rm, \em etc!
% 
% If you want to emphasize a word in a sentence, prefer to
% use \emph{#1}.
% If you want bold, use \textbf{#1}
% If you want italics, use \textit{#1}.

% In addition to the above specifications one may add developer
% notes by encapsulating sections in:
%  \ifdeveloper
%    content only shown when compiling the developer version
%  \fi
% This may be handy for doing documentation in-line together
% with the regular documentation.

% Include settings appropriate for siesta
\input{tex/init.tex}

% Local command for the software version for printing
% \unskip helps when space/newline chars are present in the
% version.info file.
\providecommand\softwareversion{\input{../version.info}\unskip}

% Title (note that \input above will fail in the title, we however
% don't care since it is only meaningful for final publications where
% direct text is used.)
\title{SIESTA manual \softwareversion}

% Set the date
\date{\today}

% Specify Authors
\author{%
    Emilio Artacho, %
    Jose Maria Cela, %
    Julian D. Gale, %
    Alberto Garcia, %
    Javier Junquera, %
    Richard M. Martin, %
    Pablo Ordejon, %
    Daniel Sanchez-Portal, %
    Jose M. Soler, %
    Nick R. Papior%
}

% Ensure the information is written in the PDF (see tex/setup.tex)
\setpdfmetadata

\begin{document}

% TITLE PAGE --------------------------------------------------------------

\begin{titlepage}

\begin{center}

\vspace{1cm}
\ifdeveloper
 {\Huge \textsc{D e v e l o p e r' s \, \, G u i d e}}
\else
 {\Huge \textsc{U s e r' s \, \, G u i d e}}
\fi

\vspace{1cm}
\hrulefill
\vspace{1cm}

{\Huge \textbf{S I E S T A \, \, \softwareversion}}

\vspace{1cm}
\hrulefill
\vspace{0.5cm}

{\Large \printdate}

\vspace{1.5cm}
{\Large \url{http://www.uam.es/siesta}}

\vspace{2.5cm}
\siesta\ Steering Committee:
\vspace{1.0cm}

\begin{tabular}{ll}
  
  Emilio Artacho &
  \textit{CIC-Nanogune and University of Cambridge} \\
  
  Jos\'e Mar\'{\i}a Cela &
  \textit{Barcelona Supercomputing Center} \\
  
  Julian D. Gale &
  \textit{Curtin University of Technology, Perth} \\
  
  Alberto Garc\'{\i}a &
  \textit{Institut de Ci\`encia de Materials, CSIC, Barcelona} \\

  Javier Junquera &
  \textit{Universidad de Cantabria, Santander} \\

  Richard M. Martin &
  \textit{University of Illinois at Urbana-Champaign} \\

  Pablo Ordej\'on &
  \textit{Centre de Investigaci\'o en Nanoci\`encia} \\
  &
  \textit{  i Nanotecnologia, (CSIC-ICN), Barcelona} \\
  
  Nick R\"ubner Papior &
  \textit{Technical University of Denmark} \\
  
  Daniel S\'anchez-Portal &
  \textit{Unidad de F\'{\i}sica de Materiales,} \\
  &
  \textit{Centro Mixto CSIC-UPV/EHU, San Sebasti\'an} \\
  
  Jos\'e M. Soler &
  \textit{Universidad Aut\'onoma de Madrid} \\
   
\end{tabular}
 
\vspace{0.5cm}

\siesta\ is Copyright \copyright\ 1996-2019 by The Siesta Group

\end{center}

\end{titlepage}

% END TITLE PAGE --------------------------------------------------------------

\newpage

\section*{Contributors to \siesta}
\addcontentsline{toc}{section}{Contributors to \texorpdfstring{\siesta}{Siesta}}

The SIESTA project was initiated by Pablo Ordejon (then at the Univ. de
Oviedo), and Jose M. Soler and Emilio Artacho (Univ. Autonoma de Madrid,
UAM).  The development team was then joined by Alberto Garcia (then at
Univ. del Pais Vasco, Bilbao), Daniel Sanchez-Portal (UAM), and
Javier Junquera (Univ. de Oviedo and later UAM), and sometime later by
Julian Gale (then at Imperial College, London). In 2007 Jose M. Cela
(Barcelona Supercomputing Center, BSC) became a core developer and
member of the Steering Committee.

The original \tsiesta\ module was developed by
Pablo Ordejon and Jose L. Mozos (then at ICMAB-CSIC), and Mads Brandbyge,
Kurt Stokbro, and Jeremy Taylor (Technical Univ. of Denmark).

The current \tsiesta\ module within SIESTA is developed by 
Nick R. Papior and Mads Brandbyge. Nick R. Papior became a core
developer and member of the Steering Committee in 2015.

Other contributors (we apologize for any omissions):

Eduardo Anglada,
Thomas Archer,
Luis C. Balbas,
Xavier Blase,
Jorge I. Cerd\'a,
Ram\'on Cuadrado,
Michele Ceriotti,
Fabiano Corsetti,
Raul de la Cruz,
Gabriel Fabricius,
Marivi Fernandez-Serra,
Jaime Ferrer,
Chu-Chun Fu,
Sandra Garcia,
Victor M. Garcia-Suarez,
Rogeli Grima,
Rainer Hoft,
Georg Huhs,
Jorge Kohanoff,
Richard Korytar,
In-Ho Lee,
Lin Lin,
Nicolas Lorente,
Miquel Llunell,
Eduardo Machado,
Maider Machado,
Jose Luis Martins,
Volodymyr Maslyuk,
Juana Moreno,
Frederico Dutilh Novaes, 
Micael Oliveira,
Magnus Paulsson,
Oscar Paz,
Andrei Postnikov,
Roberto Robles,
Tristana Sondon,
Rafi Ullah,
Andrew Walker,
Andrew Walkingshaw,
Toby White,
Francois Willaime,
Chao Yang.

O.F. Sankey, D.J. Niklewski and D.A. Drabold made the FIREBALL code
available to P. Ordejon.  Although we no longer use the routines in
that code, it was essential in the initial development of SIESTA,
which still uses many of the algorithms developed by them.

\newpage
\tableofcontents
\newpage

\section{INTRODUCTION}

\textit{This Reference Manual contains descriptions of all the input,
  output and execution features of \siesta, but is not really a
  tutorial introduction to the program. Interested users can find
  tutorial material prepared for \siesta\ schools and workshops at
  the project's web page} \url{http://www.uam.es/siesta}.


\textbf{NOTE: See the description of changes in the logic of the SCF loop}

\siesta\index{Siesta@\siesta} (Spanish Initiative for
Electronic Simulations with
Thousands of Atoms) is both a method and its computer program implementation,
to perform electronic structure calculations and \textit{ab initio} molecular
dynamics simulations of molecules and solids. Its main characteristics are:
\begin{itemize}
\item
It uses the standard Kohn-Sham self-consistent density functional
method in the local density (LDA-LSD) and generalized gradient (GGA)
approximations, as well as in a non-local functional that includes
van der Waals interactions (VDW-DF).
\item
It uses norm-conserving pseudopotentials in their fully nonlocal
(Kleinman-Bylander) form.
\item
It uses atomic orbitals as a basis set, allowing unlimited multiple-zeta
and angular momenta, polarization and off-site orbitals. The radial
shape of every orbital is numerical, and any shape can be provided
by the user, with the only condition that it has to be of finite support,
i.e., it has to be strictly zero beyond a user-provided distance from the
corresponding nucleus.
Finite-support basis sets are the key for calculating the Hamiltonian
and overlap matrices in $O(N)$ operations.
\item
It projects the electron wavefunctions and density onto a real-space
grid in order to calculate the Hartree and exchange-correlation
potentials and their matrix elements.
\item
Besides the standard Rayleigh-Ritz eigenstate method, it allows
the use of localized linear combinations of the occupied orbitals
(valence-bond or Wannier-like functions), making the computer
time and memory scale linearly with the number of atoms.
Simulations with several hundred atoms are feasible with
modest workstations.
\item
It is written in Fortran 95 and memory is allocated dynamically.
\item
It may be compiled for serial or parallel execution (under MPI).

\end{itemize}

It routinely provides:
\begin{itemize}
  \item Total and partial energies.
  \item Atomic forces.
  \item Stress tensor.
  \item Electric dipole moment.
  \item Atomic, orbital and bond populations (Mulliken).
  \item Electron density.
\end{itemize}

And also (though not all options are compatible):
\begin{itemize}
  \item Geometry relaxation, fixed or variable cell.
  \item Constant-temperature molecular dynamics (Nos\'e thermostat).
  \item Variable cell dynamics (Parrinello-Rahman).
  \item Spin-polarized calculations (collinear or not).
  \item k-sampling of the Brillouin zone.
  \item Local and orbital-projected density of states.
  \item COOP and COHP curves for chemical bonding analysis.
  \item Dielectric polarization.
  \item Vibrations (phonons).
  \item Band structure.
  \item Ballistic electron transport under non-equilibrium (through \tsiesta).
\end{itemize}


Starting from version 3.0, \siesta\ includes the \tsiesta\index{TranSIESTA@\tsiesta}
module. \tsiesta\ provides the ability to model open-boundary systems where ballistic
electron transport is taking place.  Using \tsiesta\ one can compute electronic
transport properties, such as the zero bias conductance and the I-V characteristic, of a
nanoscale system in contact with two electrodes at different electrochemical potentials.
The method is based on non-equilibrium Green's functions (NEGF), which are
constructed using the density functional theory Hamiltonian obtained from a given electron
density. A new density is computed using the NEGF formalism, which closes the DFT-NEGF
self-consistent cycle.

Starting from version 4.1, \tsiesta\ is an intrinsic part of the
\siesta\ code; a separate executable is no longer necessary.
See Sec.~\ref{sec:transiesta} for details.

For more details on the formalism, see the main \tsiesta\
reference cited below. A section of this User's Guide describes
the necessary steps involved in doing transport calculations,
together with the currently implemented input options.

\vspace{0.5cm}
{\large \textbf{References:} }

\begin{itemize}

\item
``Unconstrained minimization approach for electronic computations
that scales linearly with system size''
P. Ordej\'on, D. A. Drabold, M. P. Grumbach and R. M. Martin,
Phys. Rev. B \textbf{48}, 14646 (1993);
``Linear system-size methods for electronic-structure calculations''
Phys. Rev. B \textbf{51} 1456 (1995), and references therein.

Description of the order-\textit{N} eigensolvers
implemented in this code.

\item
``Self-consistent order-$N$ density-functional
calculations for very large systems''
P. Ordej\'on, E. Artacho and J. M. Soler,
Phys. Rev. B \textbf{53}, 10441, (1996).

Description of a previous version of this methodology.

\item
``Density functional method for very large systems with LCAO basis sets''
D. S\'anchez-Portal, P. Ordej\'on, E. Artacho and J. M. Soler,
Int. J. Quantum Chem., \textbf{65}, 453 (1997).

Description of the present method and code.

\item
``Linear-scaling ab-initio calculations for large and complex systems''
E. Artacho, D. S\'anchez-Portal, P. Ordej\'on, A. Garc\'{\i}a and
J. M. Soler, Phys. Stat. Sol. (b) \textbf{215}, 809 (1999).

Description of the numerical atomic orbitals (NAOs) most commonly
used in the code, and brief review of applications as of March 1999.

\item
``Numerical atomic orbitals for linear-scaling calculations''
J. Junquera, O. Paz, D. S\'anchez-Portal, and E. Artacho, Phys. Rev. B
 \textbf{64}, 235111, (2001).

Improved, soft-confined NAOs.

\item
``The \siesta\ method for ab initio order-$N$ materials simulation''
J. M. Soler, E. Artacho, J.D. Gale, A. Garc\'{\i}a, J. Junquera, P. Ordej\'on,
and D. S\'anchez-Portal, J. Phys.: Condens. Matter \textbf{14}, 2745-2779 (2002).

Extensive description of the \siesta\ method.

\item
``Computing the properties of materials from first principles
with  \siesta'', D. S\'anchez-Portal, P. Ordej\'on,
and E. Canadell, Structure and Bonding \textbf{113},
103-170 (2004).

Extensive review of applications as of summer 2003.

\item
 ``Improvements on non-equilibrium and transport Green function techniques: The next-generation TranSIESTA'',
 Nick Papior, Nicolas Lorente, Thomas Frederiksen, Alberto García and
 Mads Brandbyge, Computer Physics Communications, \textbf{212}, 8--24 (2017).

 Description of the \tsiesta\ method.

\item
 ``Density-functional method for nonequilibrium electron transport'',
 Mads Brandbyge, Jose-Luis Mozos, Pablo Ordej\'on, Jeremy Taylor,
 and Kurt Stokbro, Phys. Rev. B \textbf{65}, 165401 (2002).

 Description of the original \tsiesta\ method (prior to 4.1).

\end{itemize}

For more information you can visit the web page
\url{http://www.uam.es/siesta}.

\section{COMPILATION}
\label{sec:compilation}

\subsection{The build directory}

Rather than using the top-level Src directory as the build directory,
the user has to use a dedicated build directory (by default the
top-level \shell{Obj} directory, but it can be any (new) directory at
the top level).  The build directory will hold the object files,
module files, and libraries resulting from the compilation of the
sources in \shell{Src}.  The \shell{VPATH} mechanism of modern \shell{make}
programs is used. This scheme has many advantages. Among them:

\begin{itemize}
\item The Src directory is kept pristine.
\item Many different object directories can be used concurrently to
  compile the program with different compilers or optimization levels.
\end{itemize}

If you just want to compile the program, go to \shell{Obj} and issue the
command:
\begin{shellexample}
  sh ../Src/obj_setup.sh
\end{shellexample}
to populate this directory with the minimal scaffolding of makefiles,
and then make sure that you create or generate an appropriate \file{arch.make}
file (see below, in Sec.~\ref{sec:arch-make}). Then, type
\begin{shellexample}
  make
\end{shellexample}
The executable should work for any job. (This is not exactly true,
since some of the parameters in the atomic routines are still
hardwired (see \shell{Src/atmparams.f}), but those would seldom need to
be changed.)

To compile utility programs (those living in \shell{Util}), you can
simply use the provided makefiles, typing \shell{make} as appropriate.

\subsubsection{Multiple-target compilation}

The mechanism described here can be repeated in other directories at
the same level as Obj, with different names. In this way one can
compile as many different versions of the \siesta\ executable as
needed (for example, with different levels of optimization, serial,
parallel, debug, etc), by working in separate building directories.

Simply provide the appropriate arch.make, and issue the setup command
above. To compile utility programs, you need to use the form:
\begin{shellexample}
   make OBJDIR=ObjName
\end{shellexample}
where \shell{ObjName} is the name of the object directory of your
choice. Be sure to type \shell{make clean} before attempting to
re-compile a utility program.

(The pristine Src directory should be kept ``clean'', without objects,
or else the compilation in the build directories will get confused.)
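As a sketch (the directory name used here is arbitrary), setting up a
second build directory alongside the default one could look like:
\begin{shellexample}
  # from the top level of the distribution
  mkdir ObjParallel
  cd ObjParallel
  # populate with the makefile scaffolding
  sh ../Src/obj_setup.sh
  # provide a suitable arch.make, then build
  cp /path/to/your/parallel/arch.make .
  make
\end{shellexample}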


\subsection{The arch.make file}
\label{sec:arch-make}

The compilation of the program is done using a \shell{Makefile} that is
provided with the code.\index{Makefile} This \shell{Makefile} will
generate the executable for any of several architectures, with a
minimum of tuning required from the user and encapsulated in a
separate file called \file{arch.make}.

You are strongly encouraged to look at
\shell{Obj/DOCUMENTED-TEMPLATE.make} for information about the
fine points of the \file{arch.make} file. There are two sample make
files for compilation of \siesta\ with \shell{gfortran} and
\shell{ifort} named \shell{Obj/gfortran.make} and
\shell{Obj/intel.make}, respectively. Please use those as guidelines
for creating the final \file{arch.make}.

Note that Intel compilers default to high optimization levels, which
tend to break \siesta. We advise using the \shell{-fp-model source}
flag and not compiling with optimizations higher than \shell{-O2}.
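As an illustration (the flags below are a conservative sketch, not a
vetted configuration), the relevant lines of an Intel-based
\file{arch.make} might read:
\begin{shellexample}
  FC = ifort
  # moderate optimization; -fp-model source for reproducible
  # floating-point behavior
  FFLAGS = -O2 -fp-model source
\end{shellexample}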

% You can also get
% inspiration by looking at the actual \file{arch.make} examples in
% the \shell{Src/Sys} subdirectory. If you intend to create a parallel
% version of \siesta, make sure you have all the extra support libraries
% (\program{MPI}, \program{ScaLAPACK}, \program{blacs}\dots) (see Sec.~\ref{sec:parallel}).
  
% Optionally, the command \shell{../Src/configure} will start an
% automatic scan of your system and try to build an \file{arch.make}
% for you. Please note that the configure script might need some help in
% order to find your Fortran compiler, and that the created
% \file{arch.make} may not be optimal, mostly in regard to compiler
% switches and preprocessor definitions, but the process should provide
% a reasonable first version. Type \shell{../Src/configure --help} to
% see the flags understood by the script, and take a look at the
% \shell{Src/Confs} subdirectory for some examples of their explicit
% use. 



\subsection{Parallel}
\label{sec:parallel}

To achieve a parallel build of \siesta\ one should first determine
which type of parallelism one requires. It is advised to use MPI for
calculations with a moderate number of cores. If one requires
exa-scale parallelism, \siesta\ provides hybrid parallelism using both
MPI and OpenMP.


\subsubsection{MPI}
\index{External library!MPI}
\index{compile!MPI}

MPI is a message-passing interface which enables communication between
concurrently executing instances of the same binary. Each MPI process
thus duplicates all non-distributed data, such as local variables.

To enable MPI in \siesta\ the compilation options must be changed
accordingly. Here are the most basic changes to the \file{arch.make},
assuming standard binary names:
\begin{shellexample}
  CC = mpicc
  FC = mpifort # or mpif90
  MPI_INTERFACE = libmpi_f90.a
  MPI_INCLUDE = .
  FPPFLAGS += -DMPI
\end{shellexample}
\index{compile!pre-processor!-DMPI}


Subsequently one may run \siesta\ using the
\shell{mpirun}/\shell{mpiexec} commands:
\begin{shellexample}
  mpirun -np <> siesta RUN.fdf
\end{shellexample}
where \shell{<>} is the number of cores used.


\subsubsection{OpenMP}
\index{External library!OpenMP}
\index{compile!OpenMP}

OpenMP provides shared-memory parallelism. It typically does not incur
any memory overhead and may be used if memory is scarce and the
regular MPI compilation is crashing due to insufficient memory.

To enable OpenMP, simply add this to your \file{arch.make}:
\begin{shellexample}
  # For GNU compiler
  FFLAGS += -fopenmp
  LIBS += -fopenmp
  # or, for Intel compiler < 16
  FFLAGS += -openmp
  LIBS += -openmp
  # or, for Intel compiler >= 16
  FFLAGS += -qopenmp
  LIBS += -qopenmp
\end{shellexample}
The above will yield the most basic parallelism using OpenMP. However,
the BLAS/LAPACK libraries, which account for the most time-consuming
part of \siesta, must also be threaded; please see Sec.~\ref{sec:libs}
for correct linking.

Subsequently one may run \siesta\ using OpenMP through the environment
variable \shell{OMP\_NUM\_THREADS}, which determines the number of
threads/cores used in the execution:
\begin{shellexample}
  OMP_NUM_THREADS=<> siesta RUN.fdf
  # or (bash)
  export OMP_NUM_THREADS=<>
  siesta RUN.fdf
  # or (csh)
  setenv OMP_NUM_THREADS <>
  siesta RUN.fdf
\end{shellexample}
where \shell{<>} is the number of threads/cores used.

If \siesta\ is also compiled with MPI it is more difficult to obtain
good performance. Please refer to your local cluster documentation for
how to correctly launch MPI with hybrid parallelism.
%
As an example, to run \siesta\ with good performance using OpenMPI >
1.8.2 \emph{and} OpenMP on a machine with 2 sockets and 8 cores per
socket, one may do:
\begin{shellexample}
  # 2 MPI processes, 8 OpenMP threads per process (total=16)
  mpirun --map-by ppr:1:socket:pe=8 \
     -x OMP_NUM_THREADS=8 \
     -x OMP_PROC_BIND=true siesta RUN.fdf

  # 4 MPI processes, 4 OpenMP threads per process (total=16)
  mpirun --map-by ppr:2:socket:pe=4 \
     -x OMP_NUM_THREADS=4 \
     -x OMP_PROC_BIND=true siesta RUN.fdf

  # 8 MPI processes, 2 OpenMP threads per process (total=16)
  mpirun --map-by ppr:4:socket:pe=2 \
     -x OMP_NUM_THREADS=2 \
     -x OMP_PROC_BIND=true siesta RUN.fdf
\end{shellexample}
If using only 1 thread per MPI process it is advised to compile \siesta\
without OpenMP. As such it may be advantageous to compile \siesta\ in
3 variants: OpenMP-only (small systems), MPI-only (medium to large
systems) and MPI$+$OpenMP (very large systems).

The variable \shell{OMP\_PROC\_BIND} may heavily influence the
performance of the executable! Please perform tests for the
architecture used.


\subsection{Library dependencies}
\label{sec:libs}
\index{compile!libraries}

\siesta\ makes use of several libraries. Here we list a set of
libraries and how each of them may be added to the compilation step
(\file{arch.make}).

\siesta\ is distributed with scripts that install the most useful
libraries. These installation scripts are located in the
\shell{Docs/} folder with names \shell{install\_*.bash}.
Currently \siesta\ ships with these installation scripts:
\begin{itemize}
  \item \shell{install\_netcdf4.bash}; installs NetCDF with full CDF4
  support. Thus it installs zlib, hdf5 \emph{and} NetCDF C and
  Fortran.

  \item \shell{install\_flook.bash}; installs \program{flook} which
  enables interaction with Lua and \siesta.
  
\end{itemize}
Note that these scripts are only intended as guidance, and users are
encouraged to check the mailing list, or seek help there, in
non-standard situations. Each installation script finishes by telling
you \emph{what} to add to the \file{arch.make} file to correctly link
the just-installed libraries.
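For example, assuming the scripts can be run directly from the top
level of the distribution (check each script's header for its exact
requirements), one might do:
\begin{shellexample}
  bash Docs/install_netcdf4.bash
  # then copy the lines printed at the end into your arch.make
\end{shellexample}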

\begin{description}

  \item[BLAS] %
  \index{External library!BLAS}%
  it is recommended to use a high-performance library
  (\href{https://github.com/xianyi/OpenBLAS}{OpenBLAS} or MKL
  library from Intel)
  
  \begin{itemize}
    \item If you use your *nix distribution's package manager to install
    BLAS you are bound to have poor performance. Please use
    performance libraries whenever possible!

    \item If you do not have the BLAS library you may use the BLAS
    library shipped with \siesta. To do so simply add
    \shell{libsiestaBLAS.a} to the \shell{COMP\_LIBS} variable.
  \end{itemize}

  To add BLAS to the \file{arch.make} file you need to add the
  required linker flags to the \shell{LIBS} variable in the
  \file{arch.make} file.

  Example variables
\begin{shellexample}
  # OpenBLAS:
  LIBS += -L/opt/openblas/lib -lopenblas
  # or for MKL
  LIBS += -L/opt/intel/.../mkl/lib/intel64 -lmkl_blas95_lp64
  -lmkl_<>_lp64 ...
\end{shellexample}
  where \shell{<>} is the compiler used (\shell{intel} or \shell{gf}
  for gnu).

  To use the threaded (OpenMP) libraries
\begin{shellexample}
  # OpenBLAS, change the above to:
  LIBS += -L/opt/openblas/lib -lopenblasp
  # or for MKL, add a single flag:
  LIBS += -lmkl_<>_thread
\end{shellexample}
  where \shell{<>} is the compiler used (\shell{intel} or \shell{gnu}).

  \item[LAPACK]%
  \index{External library!LAPACK}%
  it is recommended to use a high-performance library
  (\href{https://github.com/xianyi/OpenBLAS}{OpenBLAS}\footnote{OpenBLAS
      enables the inclusion of the LAPACK routines. This is advised.}
  or MKL library from Intel)

  If you do not have the LAPACK library you may use the LAPACK
  library shipped with \siesta. To do so simply add
  \shell{libsiestaLAPACK.a} to the \shell{COMP\_LIBS} variable.

  Example variables
\begin{shellexample}
  # OpenBLAS (OpenBLAS will default to build in LAPACK 3.6)
  LIBS += -L/opt/openblas/lib -lopenblas
  # or for MKL
  LIBS += -L/opt/intel/.../mkl/lib/intel64 -lmkl_lapack95_lp64 ...
\end{shellexample}

  To use the threaded (OpenMP) libraries
\begin{shellexample}
  # OpenBLAS, change the above to:
  LIBS += -L/opt/openblas/lib -lopenblasp
  # or for MKL, add a single flag:
  LIBS += -lmkl_<>_thread ...
\end{shellexample}
  where \shell{<>} is the compiler used (\shell{intel} or \shell{gnu}).


  \item[ScaLAPACK]%
  \index{External library!ScaLAPACK}%
  \emph{Only required for MPI compilation.}

  Here it may be sufficient to rely on the NetLIB\footnote{ScaLAPACK's
      performance is mainly governed by BLAS and LAPACK.} version of
  ScaLAPACK.

  Example variables
\begin{shellexample}
  # ScaLAPACK
  LIBS += -L/opt/scalapack/lib -lscalapack
  # or for MKL
  LIBS += -L/opt/intel/.../mkl/lib/intel64 -lmkl_scalapack_lp64 
           -lmkl_blacs_<>_lp64 ...
\end{shellexample}
where \shell{<>} refers to the MPI version used, (\shell{intelmpi},
\shell{openmpi}, \shell{sgimpt}).

\end{description}

Additionally, \siesta\ may be compiled with support for several other
libraries:
\begin{description}

  \item[\href{https://github.com/zerothi/fdict}{fdict}] %
  \index{External library!fdict}%
  This library is shipped with \siesta\ and its linking may be enabled
  by 
  \begin{shellexample}
    COMP_LIBS += libfdict.a
  \end{shellexample}


  \item[\href{https://www.unidata.ucar.edu/software/netcdf}{NetCDF}] %
  \index{NetCDF format}%
  \index{External library!NetCDF}%
  \index{compile!pre-processor!-DCDF}
  It is advised to compile NetCDF in CDF4 compliant mode (thus
  also linking with HDF5) as this enables more advanced IO. If you
  only link against a CDF3 compliant library you will not get the
  complete feature set of \siesta.

  \begin{description}
    \item[3]%
    \index{NetCDF format!3}%
    If the CDF3 compliant library is present one may add this to
    your \file{arch.make}:
\begin{shellexample}
  LIBS += -L/opt/netcdf/lib -lnetcdff -lnetcdf
  FPPFLAGS += -DCDF
\end{shellexample}

    \item[4]%
    \index{NetCDF format!4}%
    If the CDF4 compliant library is present the HDF5 libraries
    are also required at link time:
\begin{shellexample}
  LIBS += -L/opt/netcdf/lib -lnetcdff -lnetcdf \
            -lhdf5_fortran -lhdf5 -lz
\end{shellexample}

  \end{description}

  
  \item[\href{https://github.com/zerothi/ncdf}{ncdf}] %
  \index{External library!ncdf}%
  This library is shipped with \siesta\ and its linking is required to
  take advantage of the CDF4 library functionalities. To use this
  library, ensure that you can compile \siesta\ with CDF4
  support. Then proceed by adding the following to your \file{arch.make}
  \begin{shellexample}
    COMP_LIBS += libncdf.a libfdict.a
    FPPFLAGS += -DNCDF -DNCDF_4
  \end{shellexample}
  \index{compile!pre-processor!-DNCDF}
  \index{compile!pre-processor!-DNCDF\_4}

  
  If the NetCDF library is compiled with parallel support one may
  take advantage of parallel IO by adding this to the \file{arch.make} 
\begin{shellexample}
  FPPFLAGS += -DNCDF_PARALLEL
\end{shellexample}
  \index{compile!pre-processor!-DNCDF\_PARALLEL}

  To easily install \program{NetCDF} please see the installation file:
  \shell{Docs/install\_netcdf4.bash}.


  \item[\href{http://glaros.dtc.umn.edu/gkhome/metis/metis/overview}{Metis}]%
  \index{Metis}%
  \index{External library!Metis}%
  The Metis library may be used in the order-$N$ code.

  Add these flags to your \file{arch.make} file to enable Metis
\begin{shellexample}
  LIBS += -L/opt/metis/lib -lmetis
  FPPFLAGS += -DSIESTA__METIS
\end{shellexample}
  \index{compile!pre-processor!-DSIESTA\_\_METIS}

  
  \item[\href{http://elpa.mpcdf.mpg.de}{ELPA}]%
  \index{ELPA}%
  \index{External library!ELPA}%
  The ELPA\cite{ELPA,ELPA-1} library provides faster diagonalization routines.

  The version of ELPA \emph{must} be 2017.05.003 or later, since the
  new ELPA API is used. 
  
  Add these flags to your \file{arch.make} file to enable ELPA
\begin{shellexample}
  LIBS += -L/opt/elpa/lib -lelpa <>
  FPPFLAGS += -DSIESTA__ELPA -I/opt/elpa/include/elpa-<>/modules
\end{shellexample}
  \index{compile!pre-processor!-DSIESTA\_\_ELPA}
  where \shell{<>} are any libraries that ELPA depends on.

  \note ELPA can only be used in the parallel version of \siesta.

  
  \item[\href{http://mumps.enseeiht.fr}{MUMPS}]%
  \index{MUMPS}%
  \index{External library!MUMPS}%
  The MUMPS library may currently be used with \tsiesta.
  
  Add these flags to your \file{arch.make} file to enable MUMPS
\begin{shellexample}
  LIBS += -L/opt/mumps/lib -lzmumps -lmumps_common <>
  FPPFLAGS += -DSIESTA__MUMPS
\end{shellexample}
  \index{compile!pre-processor!-DSIESTA\_\_MUMPS}
  where \shell{<>} are any libraries that MUMPS depends on.


  \item[\href{https://math.berkeley.edu/~linlin/pexsi}{PEXSI}]%
  \index{PEXSI}%
  \index{External library!PEXSI}%
  The PEXSI library may be used with \siesta\ for exa-scale
  calculations, see Sec.~\ref{SolverPEXSI}. Currently the interface is
  implemented (and tested) for PEXSI versions 0.8.0, 0.9.0 and 0.9.2.
  If newer versions retain the same interface they may also be used.

  To successfully compile \siesta\ with PEXSI support one requires the
  PEXSI Fortran interface. When installing PEXSI, copy the
  \shell{f\_interface.f90} file to the include directory of
  PEXSI such that the module may be found\footnote{Optionally the file
      may be copied to the \shell{Obj} directory where the compilation
      takes place.} when compiling \siesta.
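  As a sketch (the location of \shell{f\_interface.f90} within the
  PEXSI source tree and the install prefix are assumptions; adapt to
  your installation), the copy step might look like:
\begin{shellexample}
  # make the Fortran module interface visible when compiling siesta
  cp /path/to/pexsi-src/fortran/f_interface.f90 /opt/pexsi/include/
\end{shellexample}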

  Add these flags to your \file{arch.make} file to enable PEXSI
\begin{shellexample}
  INCFLAGS += -I/opt/pexsi/include
  LIBS += -L/opt/pexsi/lib -lpexsi_linux <>
  FPPFLAGS += -DSIESTA__PEXSI
\end{shellexample}
  \index{compile!pre-processor!-DSIESTA\_\_PEXSI}
  where \shell{<>} are any libraries that PEXSI depends on.
  %
  If one experiences linker failures, one possible solution that may
  help is
\begin{shellexample}
  LIBS += -lmpi_cxx -lstdc++
\end{shellexample}
  which is needed because PEXSI is a C++ library while the Fortran
  compiler is used as the linker. The exact library name may vary with
  your MPI vendor.

  Additionally the PEXSI linking step may fail due to duplicate
  objects, which can be circumvented by prefixing the PEXSI libraries
  with
\begin{shellexample}
  LIBS += -Wl,--allow-multiple-definition -lpexsi_linux <>
\end{shellexample}


  \item[CheSS]%
  \index{CheSS}%
  \index{External library!CheSS}%
  \siesta\ allows calculation of the electronic structure through the
  use of the Order-N method \program{CheSS}\footnote{See
      \url{https://launchpad.net/chess}.}. To enable this solver (see
  \fdf{SolutionMethod}) one needs to first compile the
  \program{CheSS} suite and subsequently add the following to the
  \file{arch.make}. Here \program{<build-dir>} is the build directory
  of the \program{CheSS} suite:

\begin{shellexample}
  LIBS += -L<build-dir> -lCheSS-1 -lfutile-1 -lyaml
  INCFLAGS += -I<build-dir>/install/include
  FPPFLAGS += -DSIESTA__CHESS
\end{shellexample}
  \index{compile!pre-processor!-DSIESTA\_\_CHESS}

  
  \item[\href{https://github.com/electronicstructurelibrary/flook}{flook}]%
  \index{flook}%
  \index{External library!flook}%
  \siesta\ allows external control via the LUA scripting language.
  Using this library one may do advanced MD simulations and much more
  \emph{without} changing any code in \siesta.
  
  Add these flags to your \file{arch.make} file to enable \program{flook}
\begin{shellexample}
  LIBS += -L/opt/flook/lib -lflookall -ldl
  COMP_LIBS += libfdict.a
  FPPFLAGS += -DSIESTA__FLOOK
\end{shellexample}
  \index{compile!pre-processor!-DSIESTA\_\_FLOOK}
  
  See \program{Tests/h2o\_lua}\index{Tests} for an example on the LUA interface.

  To easily install \program{flook} please see the installation file:
  \shell{Docs/install\_flook.bash}.

\end{description}


% start of developer section
\ifdeveloper
\section{Data Structures in \texorpdfstring{\siesta}{siesta}}

\siesta\ comprises more than 200,000 lines of code. To interact with
and develop \siesta, one needs an understanding of the intrinsic data
structures that govern the majority of the \siesta\ modules.

% end of developer
\fi


\section{EXECUTION OF THE PROGRAM}

A fast way to test your installation of \siesta\ and get a feeling for
the workings of the program is implemented in directory
\shell{Tests}\index{Tests}. In it you can find several subdirectories
with pre-packaged \fdflib\ files and pseudopotential references. Everything
is automated: after compiling \siesta\ you can just go into any
subdirectory and type \shell{make}. The program does its work in
subdirectory \shell{work}, and there you can find all the resulting
files. For convenience, the output file is copied to the parent
directory. A collection of reference output files can be found in
\shell{Tests/Reference}. Please note that small numerical and
formatting differences are to be expected, depending on the compiler.
(For non-standard execution environments, including queuing systems,
have a look at the scripts in \shell{Tests/Scripts}, and see also
Sec.~\ref{sec:parallel}.)
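
The workflow described above amounts to, e.g. (assuming a test
subdirectory named \shell{h2o} exists; adjust the name as needed):
\begin{shellexample}
  $ cd Tests/h2o
  $ make
  $ ls work
\end{shellexample}
%$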

Other examples are provided in the \shell{Examples} directory. This
directory contains basically \shell{.fdf} files and the appropriate
pseudopotential generation input files. Since at some point you will
have to generate your own pseudopotentials and run your own jobs, we
describe here the whole process by means of the simple example of the
water-molecule. It is advisable to create independent directories for
each job, so that everything is clean and neat, and outside the
\siesta\ directory, so that one can easily update the version by
replacing the whole \siesta\ tree. Go to your favorite working
directory and:

\begin{fdfexample}
  $ mkdir h2o
  $ cd h2o
  $ cp path-to-package/Examples/H2O/h2o.fdf .
\end{fdfexample}
%$

You need to make the siesta executable visible in your path. 
You can do it in many ways, but a simple one is

\begin{fdfexample}
  $ ln -s path-to-package/Obj/siesta
\end{fdfexample}
%$

\noindent
We need to generate the required pseudopotentials.
\index{pseudopotential!example generation}
(We are going to streamline this process for this time, but
you must realize that this is a tricky business that you
must master before using \siesta\ responsibly. Every
pseudopotential must be thoroughly checked before use. Please refer to
the \textsc{ATOM} program manual for details regarding what follows.)

NOTE: The \textsc{ATOM} program is no longer bundled with \siesta,
but academic users can download it from the \siesta\ webpage at
\url{www.icmab.es/siesta}.

\shell{\$ cd path/to/atom/package/}

(Compile the program following the instructions)

\shell{\$ cd Tutorial/PS\_Generation/O}

\shell{\$ cat O.tm2.inp}

\noindent
This is the input file, for the oxygen pseudopotential,
that we have prepared for you.
It is in a standard (but ancient and obscure) format that
you will need to understand in the future:
\begin{verbatim}
------------------------------------------------------------
   pg      Oxygen
        tm2  2.0
 n=O  c=ca
       0.0       0.0       0.0       0.0       0.0       0.0
    1    4
    2    0     2.00      0.00
    2    1     4.00      0.00
    3    2     0.00      0.00
    4    3     0.00      0.00
   1.15     1.15     1.15     1.15
------------------------------------------------------------
\end{verbatim}

To generate the pseudopotential do the following:

\shell{\$ sh ../../Utils/pg.sh O.tm2.inp}

\noindent
Now there should be a new subdirectory called \shell{O.tm2} (O for
oxygen) containing the \shell{O.tm2.vps} (binary) and
\shell{O.tm2.psf} (ASCII) files.

\shell{\$ cp O.tm2.psf path-to-working-dir/h2o/O.psf}

\noindent
copies the generated pseudopotential file to your working directory.
(The unformatted and ASCII files are functionally equivalent, but
the latter is more transportable and easier to look at, if you so
desire.) The same could be repeated for the pseudopotential for H,
but you may as well copy \shell{H.psf} from \shell{Examples/Vps/}
to your \shell{h2o} working directory.

\noindent
Now you are ready to run the program:

\shell{\$ ./siesta < h2o.fdf | tee h2o.out}

\noindent
(If you are running the parallel version you should use some other
invocation, such as \shell{mpirun -np 2 siesta ...}, but we cannot
go into that here --- see Sec.~\ref{sec:parallel}).

After a successful run of the program, you should have several
files in your directory including the following:
\begin{itemize}

\item fdf.log
 (contains all the data used, explicit or chosen by default)
\item O.ion and H.ion
 (complete information about the basis and KB projectors)
\item h2o.XV
 (contains positions and velocities)
\item h2o.STRUCT\_OUT
 (contains the final cell vectors and positions in
 ``crystallographic'' format)
\item h2o.DM
 (contains the density matrix to allow a restart)
\item h2o.ANI
 (contains the coordinates of every MD step, in this case only one)
\item h2o.FA
 (contains the forces on the atoms)
\item h2o.EIG
 (contains the eigenvalues of the Kohn-Sham Hamiltonian)
\item h2o.xml
 (XML marked-up output)
\end{itemize}

The prefix h2o of all these files is the 
\fdf{SystemLabel}
specified in the input h2o.fdf file (see \fdflib\ section below).
The standard output of the program, that you
have already seen passing on the screen, was copied to
file h2o.out by the tee command. Have a look at it
and refer to the output-explanation section if necessary.
You may also want to look at the fdf.log\index{fdf.log} file to see all
the default values that \siesta\ has chosen for you before
studying the input-explanation section and starting to change them.

Now look at the other data files in \shell{Examples}
(all with an \shell{.fdf} suffix), choose one, and repeat the process
for it.

\subsection{Specific execution options}

\siesta\ may be executed in different forms. The basic execution form
is
\begin{shellexample}
  siesta < RUN.fdf > RUN.out
\end{shellexample}
which redirects the input file to the standard input of \siesta.
%
\siesta\ 4.1 and later do not require the input file to be piped in;
it may instead be specified on the command line.
\begin{shellexample}
  siesta RUN.fdf > RUN.out
\end{shellexample}
This allows \siesta\ to accept the special flags described in what
follows. A flag argument containing spaces may be quoted, or the
spaces may be replaced by \fdf*{:}.
\begin{fdfoptions}

  \option[-h]%
  \fdfindex*{Command line options:-h}%
  print a help instruction and quit

  \option[-L]%
  \fdfindex*{Command line options:-L}%
  Override, temporarily, the \fdf{SystemLabel} flag. 

  \shell{siesta -L Hello}.

  \option[-out|-o]%
  \fdfindex*{Command line options:-out}%
  \fdfindex*{Command line options:-o}%
  Specify the output file (instead of printing to the terminal).

  \shell{siesta -out RUN.out}.

  \option[-electrode|-elec]%
  \fdfindex*{Command line options:-electrode}%
  \fdfindex*{Command line options:-elec}%
  \fdfoverwrite{TS!HS.Save,TS!DE.Save}
  denote this as an electrode calculation which forces the
  \sysfile{TSHS} and \sysfile{TSDE} files to be saved.

  \note This is equivalent to specifying \fdf{TS!HS.Save:true} and
  \fdf{TS!DE.Save:true} in the input file.

  \option[-V]%
  \fdfindex*{Command line options:-V}%
  \fdfoverwrite{TS!Voltage}
  specify the bias for the current \tsiesta\ run.

  \shell{siesta -V 0.25:eV} or \shell{siesta -V "0.25 eV"}
  which sets the applied bias to $0.25\,\mathrm{eV}$.

  \note This is equivalent to specifying \fdf{TS!Voltage} in the input
  file.

\end{fdfoptions}
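
The flags may be combined in a single invocation, for instance
(illustrative):
\begin{shellexample}
  siesta -L h2o -V "0.25 eV" -out h2o.out RUN.fdf
\end{shellexample}
which runs \siesta\ on \shell{RUN.fdf} with the system label
overridden to \shell{h2o}, an applied bias of $0.25\,\mathrm{eV}$, and
the output written to \shell{h2o.out}.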


\section{THE FLEXIBLE DATA FORMAT (FDF)}
\index{FDF}

The main input file,\index{input file}
which is read as the standard input (unit 5),
contains all the physical data of the system and the parameters of
the simulation to be performed.
This file is written in a special format called FDF, developed by
Alberto Garc\'{\i}a and Jos\'e M. Soler. This format allows data to be
given in any order, or to be omitted in favor of default values.
Refer to documentation in $\sim$/siesta/Src/fdf for details.
Here we offer a glimpse of it through the following rules:

\begin{itemize}

\item[$\bullet$] The \fdflib\ syntax is a 'data label' followed by its value.
Values that are not specified in the datafile are assigned
a default value.

\item[$\bullet$] \fdflib\ labels are case insensitive, and characters - \_ .
in a data label are ignored. Thus, LatticeConstant and
lattice\_constant represent the same label.

\item[$\bullet$] All text following the \# character is taken as comment.

\item[$\bullet$] Logical values can be specified as T, true, .true.,
yes, F, false, .false., no. Blank is also equivalent to true.

\item[$\bullet$] Character strings should \textbf{not} be in apostrophes.

\item[$\bullet$] Real values which represent a physical magnitude must be
followed by its units. Look at function fdf\_convfac in
file $\sim$/siesta/Src/fdf/fdf.f for the units that are currently supported.
It is important to include a decimal point in a real number to distinguish
it from an integer, in order to prevent ambiguities when mixing the types
on the same input line.

\item[$\bullet$] Complex data structures are called blocks and are
placed between `\%block label'\index{block@\%block} and a `\%endblock label'
(without the quotes).

\item[$\bullet$] You may `include' other \fdflib\ files and redirect the search
for a particular data label to another file.
If a data label appears more than once, its first appearance
is used.

\item[$\bullet$] If a label is misspelled it will not be recognized (there is no
  internal list of ``accepted'' tags in the program). You can check 
  the actual value used by siesta by looking for the label in the
  output \textit{fdf.log}\index{fdf.log} file.

\end{itemize}

\noindent
These are some examples:

\begin{verbatim}
           SystemName      Water molecule  # This is a comment
           SystemLabel     h2o
           SpinPolarized        yes
           SaveRho
           NumberOfAtoms         64
           LatticeConstant       5.42 Ang
           %block LatticeVectors
                    1.000  0.000  0.000
                    0.000  1.000  0.000
                    0.000  0.000  1.000
           %endblock LatticeVectors
           KgridCutoff < BZ_sampling.fdf

           # Reading the coordinates from a file
           %block AtomicCoordinatesAndAtomicSpecies < coordinates.data

           # Even reading more FDF information from somewhere else
           %include mydefaults.fdf
\end{verbatim}

The file \textit{fdf.log} contains all the parameters used by \siesta\
in a given run, both those specified in the input fdf file and
those taken by default. They are written in fdf format, so that
you may reuse them as input directly. Input data blocks are
copied to the fdf.log file only if you specify the \textit{dump} option
for them.

\section{PROGRAM OUTPUT}

\subsection{Standard output} \index{output!main output file}

\siesta\ writes a log of its workings to standard output (unit 6),
which is usually redirected to an ``output file''.

A brief description follows. See the example cases in the
siesta/Tests directory for illustration.

The program starts by writing the version of the code being
used. Then, the input \fdflib\ file is dumped into the output file as is
(except for empty lines). The program does part of the reading and
digesting of the data at the beginning within the \program{redata}
subroutine, and prints some of the information it digests. Note that
this is only part of the input: other information is accessed by the
different subroutines as they need it during the run (in the spirit of
\fdflib\ input).  A complete list of the input
used by the code can be found at the end in the file \shell{fdf.log},
including defaults used by the code in the run.

After that, the program reads the pseudopotentials, factorizes them
into Kleinman-Bylander form, and generates (or reads) the atomic basis
set to be used in the simulation. These stages are documented in the
output file.

The simulation begins after that, the output showing information on
the MD (or CG) steps and the SCF cycles within.  Basic descriptions of
the process and results are presented. The user has the option to
customize this output, however,\index{output!customization} by defining
different options that control the printing of information such as
coordinates, forces, $\vec k$ points, etc.  The options are discussed
in the appropriate sections, but take into account the behavior of the
legacy \fdf{LongOutput} option, as in the current implementation it
might silently activate output to the main .out file at the expense of
auxiliary files.

\begin{fdflogicalF}{LongOutput}
  \index{output!long}

  \siesta\ can write to standard output different data sets depending
  on the values for output options described below.  By default
  \siesta\ will not write most of them. They can be large for large
  systems (coordinates, eigenvalues, forces, etc.)  and, if written to
  standard output, they accumulate for all the steps of the
  dynamics. \siesta\ writes the information in other files (see Output
  Files) in addition to the standard output, and these can be
  cumulative or not.

  Setting \fdf{LongOutput} to \fdftrue\ changes the default of some
  options, obtaining more information in the output (verbose).  In
  particular, it redefines the defaults for the following:

  \begin{itemize}
    
    \item \fdf{WriteKpoints}%
    \index{output!grid $\vec k$ points}
    
    \item \fdf{WriteKbands}%
    \index{output!band $\vec k$ points}

    \item \fdf{WriteCoorStep}%
    \index{output!atomic coordinates!in a dynamics step}

    \item \fdf{WriteForces}%
    \index{output!forces}

    \item \fdf{WriteEigenvalues}%
    \index{output!eigenvalues}
    
    \item \fdf{WriteWaveFunctions}%
    \index{output!wave functions}
    
    \item \fdf{WriteMullikenPop}%
    \index{output!Mulliken analysis}%
    \index{Mulliken population analysis}%
    (it sets it to 1)

  \end{itemize}

  The specific changing of any of these options has precedence.
  
\end{fdflogicalF}


\subsection{Output to dedicated files}%
\index{output!dedicated files}

\siesta\ can produce a wealth of information in dedicated files,
with specific formats, that can be used for further analysis. See the
appropriate sections, and the appendix on file formats.
Please take into account the behavior of
\fdf{LongOutput}, as in the current implementation it might silently
activate output to the main .out file at the expense of auxiliary
files.


\section{DETAILED DESCRIPTION OF PROGRAM OPTIONS}


Here follows a description of the variables that you can define in
your \siesta\ input file, with their data types and default
values. For historical reasons the names of the tags do not have a
uniform structure, and can be confusing at times.

Almost all of the tags are optional: \siesta\ will assign a
default if a given tag is not found when needed (see \shell{fdf.log}).


\subsection{General system descriptors}

\begin{fdfentry}{SystemLabel}[string]<siesta>
  
  A \emph{single} word (max. 20 characters \emph{without blanks})
  containing a nickname of the system, used to name output files.

\end{fdfentry}


\begin{fdfentry}{SystemName}[string]

  A string of one or several words containing a descriptive name of
  the system (max. 150 characters).
  
\end{fdfentry}


\begin{fdfentry}{NumberOfSpecies}[integer]<\nonvalue{lines in \fdf{ChemicalSpeciesLabel}}>

  Number of different atomic species in the simulation.  Atoms of the
  same species, but with a different pseudopotential or basis set are
  counted as different species.

  \note This is not required to be set.
  
\end{fdfentry}

\begin{fdfentry}{NumberOfAtoms}[integer]<\nonvalue{lines in \fdf{AtomicCoordinatesAndAtomicSpecies}}>

  Number of atoms in the simulation.

  \note This is not required to be set.

\end{fdfentry}


\begin{fdfentry}{ChemicalSpeciesLabel}[block]

  It specifies the different chemical species\index{species} that are
  present, assigning them a number for further identification.
  \siesta\ recognizes the different atoms by the given atomic number.

  \begin{fdfexample}
     %block ChemicalSpeciesLabel
        1   6   C
        2  14   Si
        3  14   Si_surface
     %endblock ChemicalSpeciesLabel
  \end{fdfexample}
  The first number in a line is the species number; it is followed by
  the atomic number, and then by the desired label. This label will be
  used to identify corresponding files, namely, the pseudopotential
  file, user basis file, basis output file, and local pseudopotential
  output file.

  This construction allows you to have atoms of the same species but
  with different basis or pseudopotential, for example.

  Negative atomic numbers are used for \emph{ghost}
  atoms\index{ghost atoms} (see \fdf{PAO!Basis}).

  For atomic numbers over $200$ or below $-200$ you should read \fdf{SyntheticAtoms}.

  \note This block is mandatory.
  
\end{fdfentry}


\begin{fdfentry}{SyntheticAtoms}[block]

  This block is an additional block to complement
  \fdf{ChemicalSpeciesLabel} for special atomic numbers.
  
  Atomic numbers over $200$ are used to represent \emph{synthetic atoms}
  \index{synthetic atoms} (created for example as a ``mixture'' of two
  real ones for a ``virtual crystal'' (VCA)\index{VCA}
  calculation). In this special case a new \fdf{SyntheticAtoms} block
  must be present to give \siesta\ information about the ``ground
  state'' of the synthetic atom.

  \begin{fdfexample}
     %block ChemicalSpeciesLabel
        1   201 ON-0.50000
     %endblock ChemicalSpeciesLabel
     %block SyntheticAtoms
        1               # Species index
        2 2 3 4         # n numbers for valence states  with l=0,1,2,3
        2.0 3.5 0.0 0.0 # occupations of valence states with l=0,1,2,3
     %endblock SyntheticAtoms
  \end{fdfexample}

  Pseudopotentials for synthetic atoms can be created using the
  \program{mixps} and \program{fractional} programs \index{mixps
      program}\index{fractional program} in the \shell{Util/VCA}
  directory.

  Atomic numbers below $-200$ represent \emph{ghost synthetic atoms}.
  
\end{fdfentry}

\begin{fdfentry}{AtomicMass}[block]

  It allows the user to introduce the atomic masses of the different
  species used in the calculation, useful for the dynamics with
  isotopes,\index{isotopes} for example. If a species index is not
  found within the block, the natural mass for the corresponding
  atomic number is assumed. If the block is absent all masses are the
  natural ones. One line per species with the species index (integer)
  and the desired mass (real). The order is not important. If there is
  no integer and/or no real numbers within the line, the line is
  disregarded.

  \begin{fdfexample}
     %block AtomicMass
        3  21.5
        1  3.2
     %endblock AtomicMass
  \end{fdfexample}

  The default atomic masses are the natural masses. For \emph{ghost}
  atoms (i.e., floating orbitals) the mass is $10^{30}\,\mathrm{a.u.}$

\end{fdfentry}



\subsection{Pseudopotentials}
\index{pseudopotential!generation}

\siesta\ uses pseudopotentials to represent the electron-ion
interaction (as do most plane-wave codes and in contrast to so-called
``all-electron'' programs). In particular, the pseudopotentials are of
the ``norm-conserving'' kind, and can be generated by the \program{Atom} program,
(see \shell{Pseudo/README.ATOM}). Remember that \textbf{all pseudopotentials
  should be thoroughly tested} before using them. We refer you to the
standard literature on pseudopotentials and to the \program{ATOM} manual
for more information. A number of
other codes (such as \program{APE}) can generate pseudopotentials that
\siesta\ can use directly (typically in the \shell{.psf} format).

The pseudopotentials will be read by \siesta\ from different files, one
for each species defined in the block
\fdf{ChemicalSpeciesLabel}.\index{pseudopotential!files}
The name of the files should be:

\textit{Chemical\_label}\texttt{.vps} (unformatted) or
\textit{Chemical\_label}\texttt{.psf} (ASCII)

\noindent
where \textit{Chemical\_label} corresponds to the label defined in the
\fdf{ChemicalSpeciesLabel} block.
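
For example, with the species definitions
\begin{fdfexample}
   %block ChemicalSpeciesLabel
      1   8   O
      2   1   H
   %endblock ChemicalSpeciesLabel
\end{fdfexample}
\siesta\ will look for the pseudopotential files \shell{O.psf} (or
\shell{O.vps}) and \shell{H.psf} (or \shell{H.vps}).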


\subsection{Basis set and KB projectors}

\subsubsection{Overview of atomic-orbital bases implemented in \texorpdfstring{\siesta}{SIESTA}}

The main advantage of atomic orbitals is their efficiency (fewer orbitals
needed per electron for similar precision)
and their main disadvantage is the lack of systematics for optimal
convergence, an issue that quantum chemists have been working on for
many years. They have also clearly shown that there
is no limitation on precision intrinsic to LCAO.
This section provides some information about how basis sets can be
generated for \siesta.

It is important to stress at this point that neither the \siesta\
method nor the program
are bound to the use of any particular kind of atomic orbitals. The
user can feed into \siesta\ the atomic basis set of his/her choice by
means of radial tables (see \fdf{User.Basis} below), the
only limitations being: $(i)$ the functions have to be atomic-like (radial
functions multiplied by spherical harmonics), and $(ii)$ they have to be
of finite support, i.e., each orbital becomes strictly zero beyond some
cutoff radius chosen by the user.

Most users, however, do not have their own basis sets. For these users
we have devised some schemes to generate basis sets within the program
with a minimum input from the user.  If nothing is specified in the
input file, \siesta\ generates a default basis set of a reasonable
quality that might constitute a good starting point.  Of course,
depending on the accuracy required in the particular problem, the user
has the degree of freedom to tune several parameters that can be
important for quality and efficiency. A description of these basis
sets and some performance tests can be found in the references quoted
below.

\noindent
``Numerical atomic orbitals for linear-scaling calculations'',
J. Junquera, O. Paz, D. S\'anchez-Portal, and E. Artacho, Phys. Rev. B
\textbf{64}, 235111, (2001)

An important point here is that the basis set selection is a
variational problem and, therefore, minimizing the energy with respect
to any parameters defining the basis is an ``ab initio'' way to
define them.

We have also devised a quite simple and systematic way of generating
basis sets based on specifying only one main parameter (the energy shift)
besides the basis size. It does not offer the best NAO results one can get
for a given basis size but it has the important advantages mentioned above.
More about it in:

\noindent
``Linear-scaling ab-initio calculations for large and complex systems'',
E. Artacho, D. S\'anchez-Portal, P. Ordej\'on, A. Garc\'{\i}a and
J. M. Soler, Phys. Stat. Sol. (b) \textbf{215}, 809 (1999).

In addition to \siesta\ we provide the program \program{Gen-basis}
\index{Gen-basis@\program{Gen-basis}}, which reads \siesta's input and
generates basis files for later use. \program{Gen-basis} can be found
in \texttt{Util/Gen-basis}. 
It should be run from the \texttt{Tutorials/Bases} directory,
using the \texttt{gen-basis.sh} script. It is limited to a single species.

Of course, as with the pseudopotentials, it is the
responsibility of the user to check that the physical results obtained
are converged with respect to the basis set used before starting any
production run.

In the following we give some clues on the basics of the basis sets
that \siesta\ generates.
  The starting point is always the solution of Kohn-Sham's Hamiltonian
for the isolated pseudo-atoms, solved in a radial grid,
with the same approximations as for the solid or molecule
(the same exchange-correlation functional and  pseudopotential),
plus some way of confinement (see below).
  We describe in the following three main features of a
basis set of atomic orbitals: size, range, and radial shape.

\textbf{Size:} number of orbitals per atom

  Following the nomenclature of Quantum Chemistry, we establish
a hierarchy of basis sets, from single-$\zeta$ to multiple-$\zeta$
with polarization and diffuse orbitals, covering from quick calculations
of low quality to high precision, as high as the finest obtained in
Quantum Chemistry.
  A single-$\zeta$ (also called minimal) basis set (SZ in the following)
has one single radial function per angular momentum channel, and only for
those angular momenta with substantial electronic population in the valence of
the free atom.
  It offers quick calculations and some insight on qualitative trends
in the chemical bonding and other properties.
  It remains too rigid, however, for more quantitative calculations
requiring both radial and angular flexibilization.

  Starting by the radial flexibilization of SZ, a better basis is obtained
by adding a second function per channel: double-$\zeta$ (DZ).
  In Quantum Chemistry, the \textit{split valence} scheme
is widely used: starting from the expansion in Gaussians of one atomic
orbital, the most contracted Gaussians are used to define the first
orbital of the double-$\zeta$ and the most extended ones for the second.
  For strictly localized functions there was a first proposal
of using the excited states of the confined atoms, but it would work only
for tight confinement (see \fdf{PAO!BasisType} \texttt{nodes} below).
  This construction was proposed and tested in D. S\'anchez-Portal
\textit{et al.}, J. Phys.: Condens. Matter \textbf{8}, 3859-3880 (1996).

  We found that the basis set convergence is slow, requiring high levels
of multiple-$\zeta$ to achieve what other schemes do at the double-$\zeta$
level.
  This scheme is related to the basis sets used in the OpenMX project
[see T. Ozaki, Phys. Rev. B \textbf{67}, 155108 (2003); T. Ozaki and H. Kino,
Phys. Rev. B \textbf{69}, 195113 (2004)].

  We then proposed an extension of the split valence idea of Quantum Chemistry
to strictly localized NAO which has become the standard and has been used
quite successfully in many systems (see \fdf{PAO!BasisType} \texttt{split} below).
  It is based on the idea of supplementing the first $\zeta$ with, instead of
a Gaussian, a numerical orbital that reproduces the tail of the original PAO
outside a matching radius $r_{m}$, and continues smoothly towards the origin as
$r^l(a-br^2)$, with $a$ and $b$ ensuring continuity and differentiability
at $r_{m}$.
  Within exactly the same
Hilbert space, the second orbital can be chosen to be the difference between
the smooth one and the original PAO, which gives a basis orbital strictly
confined within the matching radius $r_{m}$ (smaller than the
original PAO!) continuously differentiable throughout.

  Extra parameters have thus appeared: one $r_m$ per orbital to be doubled.
The user can again introduce them by hand (see \fdf{PAO!Basis} below).
Alternatively, all the $r_m$'s can be defined at once by specifying
the value of the tail of the original PAO beyond $r_m$, the so-called
split norm. Variational optimization
of this split norm performed on different systems
shows a very general and stable performance for values around
15\% (except for the $\sim 50\%$ for hydrogen).
  It generalizes to multiple-$\zeta$ trivially by adding an additional
matching radius per new zeta.

Note: What is actually used is the norm of the tail \emph{plus} the
norm of the parabola-like inner function.

Angular flexibility is obtained by adding shells of higher angular
momentum.  Ways to generate these so-called polarization orbitals have
been described in the literature for Gaussians.  For NAOs there are
two ways for \siesta\ and \program{Gen-basis} to generate them: $(i)$
Use atomic PAO's of higher angular momentum with suitable confinement,
and $(ii)$ solve the pseudoatom in the presence of an electric field
and obtain the $l+1$ orbitals from the perturbation of the $l$
orbitals by the field. Experience shows that method $(i)$ tends to
give better results.

So-called diffuse orbitals, that might be important in the description
of open systems such as surfaces, can be simply added by specifying
extra ``n'' shells. [See S. Garcia-Gil, A. Garcia, N. Lorente,
  P. Ordejon, Phys. Rev. B \textbf{79}, 075441 (2009)]

Finally, the method allows the inclusion of off-site (ghost) orbitals
(not centered around any specific atom), useful for example in the
calculation of the counterpoise correction for basis-set superposition
errors.  Bessel functions for any radius and any excitation level can
also be added anywhere to the basis set.

\textbf{Range:} cutoff radii of orbitals.

Strictly localized orbitals (zero beyond a cutoff radius) are used in
order to obtain sparse Hamiltonian and overlap matrices for linear
scaling. One cutoff radius per angular momentum channel has to be
given for each species.

A balanced and systematic starting point for defining all the
different radii is achieved by giving one single parameter, the energy
shift, i.e., the energy increase experienced by the orbital when confined.
Allowing for system and physical-quantity variability, as a rule of
thumb $\Delta E_{\mathrm{PAO}} \approx 100$ meV gives typical
precisions within the accuracy of current GGA functionals.  The user
can, nevertheless, change the cutoff radii at will.

\textbf{Shape}

Within the pseudopotential framework it is important to keep the
consistency between the pseudopotential and the form of the
pseudoatomic orbitals in the core region.  The shape of the orbitals
at larger radii depends on the cutoff radius (see above) and on the
way the localization is enforced.

The first proposal (and quite a standard among \siesta\ users)
uses an infinite square-well potential.  It was originally proposed
and has been widely and successfully used by Otto Sankey and
collaborators, for minimal bases within the ab initio tight-binding
scheme, using the \program{Fireball} program, but also for more flexible
bases using the methodology of \siesta.  This scheme has the
disadvantage, however, of generating orbitals with a discontinuous
derivative at $r_c$.  This discontinuity is more pronounced for
smaller $r_c$'s and tends to disappear for long enough values of this
cutoff.  It does remain, however, appreciable for sensible values of
$r_c$ for those orbitals that would be very wide in the free atom.  It
is surprising how small an effect such a kink produces in the total
energy of condensed systems.  It is, on the other hand, a problem for
forces and stresses, especially if they are calculated using a
(coarse) finite three-dimensional grid.

Another problem of this scheme is that it defines the basis starting
from the free atoms.  Free atoms can present extremely extended
orbitals, whose extension is, besides problematic, of no practical
use for the calculation in condensed systems: the electrons far away
from the atom can be described by the basis functions of other atoms.

A traditional scheme to deal with this is one based on the radial
scaling of the orbitals by suitable scale factors.  In addition to
very basic bonding arguments, it is soundly based on restoring
the virial theorem for finite bases, in the case of Coulombic potentials
(all-electron calculations).  The use of pseudopotentials limits its
applicability, allowing only for extremely small deviations from unity
($\sim 1\%$) in the scale factors obtained variationally (with the
exception of hydrogen, which can contract up to 25\%). This possibility
is available to the user.

Another way of dealing with the above problem and that of the kink at the
same time is adding a soft confinement potential to the atomic
Hamiltonian used to generate the basis orbitals: it smoothens the kink
and contracts the orbital as suited. Two additional parameters are
introduced for the purpose, which can be defined again variationally.
The confining potential is flat (zero) in the core region, starts off
at some internal radius $r_i$ with all derivatives continuous and
diverges at $r_c$ ensuring the strict localization there.  It is
\begin{equation}
  V(r) = V_{\mathrm o} \frac{e^{- { \frac{r_c - r_i}{r - r_i} } }}{r_c -r}
\end{equation}
and both $r_i$ and $V_{\mathrm o}$ can be given to \siesta\ together
with $r_c$ in the input (see \fdf{PAO!Basis} below).
The kink is normally well smoothened by the default soft-confinement
values used when \fdf{PAO!SoftDefault} is \fdftrue, which
are $r_i = 0.9 r_c$ and $V_{\mathrm o} = 40\,\mathrm{Ry}$.
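As a minimal input sketch, soft confinement can be enabled globally with
the default parameters made explicit (the values shown are just the
defaults quoted above, set via the flags documented later in this section):
\begin{fdfexample}
     PAO.SoftDefault      true
     PAO.SoftInnerRadius  0.9
     PAO.SoftPotential    40.0 Ry
\end{fdfexample}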

When explicitly introducing orbitals in the basis that would be empty in
the atom (e.g.\ polarization orbitals), these tend to be extremely
extended, if not completely unbound. The above procedure produces
orbitals that bulge as far away from the nucleus as possible, to 
plunge abruptly at $r_c$. Soft confinement can be used to try
to force a more reasonable shape, but it is not ideal (for orbitals
peaking in the right region the tails tend to be far too short).
\textit{Charge confinement} \index{Charge confinement} produces
very good shapes for empty orbitals. Essentially a $Z/r$ potential
is added to the soft-confinement potential above. For flexibility
the charge confinement option in \siesta\ is defined as
\begin{equation}
  V_{\mathrm Q}(r) = \frac{Z e^{-\lambda r}}{\sqrt{r^2 + \delta^2}}
\end{equation}
where $\delta$ is there to avoid the singularity (default $\delta=0.01$ Bohr),
and $\lambda$ allows screening of the potential if longer tails are needed.
The description on how to introduce this option can be found in
the \fdf{PAO!Basis} entry below.

Finally, the shape of an orbital is also changed by the ionic
character of the atom.  Orbitals in cations tend to shrink, and they
swell in anions.  Introducing a $\delta Q$ in the basis-generating
free-atom calculations gives orbitals better adapted to ionic
situations in the condensed systems.

More information about basis sets can be found in the proposed
literature.


\noindent

There are quite a number of options for the input of the basis-set and
KB projector specification, and they are all optional! By default,
\siesta\ will use a DZP basis set with appropriate choices for the
determination of the range, etc. Of course, the more you experiment
with the different options, the better your basis set can get. To aid
in this process we offer an auxiliary program for optimization which
can be used in particular to obtain variationally optimal basis sets
(within a chosen basis size). See \texttt{Util/Optimizer}
for general information, and \texttt{Util/Optimizer/Examples/Basis\_Optim}
for an example. The directory \texttt{Tutorials/Bases} in the main \siesta\
distribution contains some tutorial material for the generation of
basis sets and KB projectors.

Finally, some optimized basis sets for particular elements are
available at the \siesta\ web page.  Again, it is the
responsibility of the users to test the transferability of the basis
set to their problem under consideration.


\subsubsection{Type of basis sets}

\begin{fdfentry}{PAO!BasisType}[string]<split>
  \index{basis!PAO}
  
  The kind of basis set to be generated is chosen here. All are based on
  finite-range pseudo-atomic orbitals\index{finite-range pseudo-atomic
      orbitals} [PAO's of Sankey and Niklewski, PRB 40, 3979
  (1989)]. The original PAO's were described only for minimal
  bases. \siesta\ generates extended bases
  (multiple-$\zeta$,\index{multiple-$\zeta$}
  polarization,\index{polarization orbitals} and diffuse
  orbitals\index{diffuse orbitals}) applying different schemes of
  choice:

  \begin{itemize}

    \item[-] Generalization of the PAO's: uses the excited orbitals of
    the finite-range pseudo-atomic problem, both for multiple-$\zeta$
    and for polarization [see S\'anchez-Portal, Artacho, and Soler,
    JPCM \textbf{8}, 3859 (1996)]. Adequate for short-range orbitals.

    \item[-] Multiple-$\zeta$ in the spirit of split
    valence,\index{split valence} decomposing the original PAO in
    several pieces of different range, either defining more (and
    smaller) confining radii, or introducing
    Gaussians\index{Gaussians} from known bases (Huzinaga's book).

  \end{itemize}

  \noindent
  All the remaining options give the same minimal basis\index{minimal
      basis}. The different options and their \fdflib\ descriptors are
  the following:

  \begin{fdfoptions}

    \option[split]%
    \fdfindex*{PAO!BasisType:split}%

    Split-valence scheme for multiple-zeta.
    The split is based on different radii.


    \option[splitgauss]%
    \fdfindex*{PAO!BasisType:splitgauss}%
    
    Same as \texttt{split} but using gaussian functions
    $e^{-(x/\alpha_i)^2}$. The gaussian widths $\alpha_i$ are read
    instead of the scale factors (see below). There is no cutting
    algorithm, so that a large enough $r_c$ should be defined for the
    gaussian to have decayed sufficiently.
    

    \option[nodes]%
    \fdfindex*{PAO!BasisType:nodes}%
    
    Generalized PAO's.


    \option[nonodes]%
    \fdfindex*{PAO!BasisType:nonodes}%

    The original PAO's are used, multiple-zeta is generated by
    changing the scale-factors, instead of using the excited orbitals.

    
    \option[filteret]
    \fdfindex*{PAO!BasisType:filteret}%
    
    Use the filterets as a systematic basis set.  The size of the
    basis set is controlled by the filter cut-off for the orbitals.
    
  \end{fdfoptions}
  
  \noindent
  Note that, for the \fdf*{split} and \fdf*{nodes} cases the whole
  basis can be generated by \siesta\ with no further information
  required. \siesta\ will use default values as defined in the
  following (\fdf{PAO!BasisSize}, \fdf{PAO!EnergyShift}, and
  \fdf{PAO!SplitNorm}, see below).
  
\end{fdfentry}


\subsubsection{Size of the basis set}

\begin{fdfentry}{PAO!BasisSize}[string]<DZP>
  \index{basis!PAO}

  It defines the usual basis sizes. It has an effect only if there is no
  block \fdf{PAO!Basis} present.

  \begin{fdfoptions}

    \option[SZ|minimal]%
    \fdfindex*{PAO!BasisSize:SZ}%
    \fdfindex*{PAO!BasisSize:minimal}%
    \index{basis!minimal}
    \index{single-$\zeta$}

    Use single-$\zeta$ basis.


    \option[DZ]%
    \fdfindex*{PAO!BasisSize:DZ}%
    \index{double-$\zeta$}

    Double zeta basis, in the scheme defined by \fdf{PAO!BasisType}.


    \option[SZP]%
    \fdfindex*{PAO!BasisSize:SZP}%

    Single-zeta basis plus polarization orbitals.

    
    \option[DZP|standard]%
    \fdfindex*{PAO!BasisSize:DZP}%
    
    Like \fdf*{DZ} plus polarization orbitals. Polarization orbitals
    are constructed from perturbation theory,\index{perturbative
        polarization} and they are defined so they
    have\index{basis!polarization} the minimum angular momentum $l$
    such that there are no occupied orbitals with the same $l$ in the
    valence shell of the ground-state atomic configuration. They
    polarize the corresponding $l-1$ shell.

    \note The ground-state atomic configuration used internally by
    \siesta\ is defined in the source file
    \shell{Src/periodic\_table.f}.  For some elements (e.g., Pd), the
    configuration might not be the standard one\index{Ground-state
        atomic configuration}.
    
  \end{fdfoptions}
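  For instance, to request the default double-$\zeta$ polarized basis
  explicitly for all species:
  \begin{fdfexample}
     PAO.BasisSize  DZP
  \end{fdfexample}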

\end{fdfentry}


\begin{fdfentry}{PAO!BasisSizes}[block]
  \index{basis!PAO}

  Block that allows one to specify a different value of the variable
  \fdf{PAO!BasisSize} for each species. For example,
  \begin{fdfexample}
     %block PAO.BasisSizes
         Si      DZ
         H       DZP
         O       SZP
     %endblock PAO.BasisSizes
  \end{fdfexample}
  
\end{fdfentry}



\subsubsection{Range of the orbitals}

\begin{fdfentry}{PAO!EnergyShift}[energy]<$0.02\,\mathrm{Ry}$>

  A standard for orbital-confining cutoff radii. It is the excitation
  energy of the PAO's due to the confinement to a finite-range. It
  offers a general procedure for defining the confining radii of the
  original (first-zeta) PAO's for all the species guaranteeing the
  compensation of the basis. It only has an effect when the block
  \fdf{PAO!Basis} is not present or when the radii specified in
  that block are zero for the first zeta.
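  For example, to use the $\approx 100$ meV rule-of-thumb value quoted
  earlier instead of the $0.02\,\mathrm{Ry}$ default:
  \begin{fdfexample}
     PAO.EnergyShift  100 meV
  \end{fdfexample}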

\end{fdfentry}  


\begin{fdfentry}{Write!Graphviz}[string]<none|atom|orbital|atom+orbital>

  Write out the sparsity pattern once the basis-orbital overlaps have
  been determined. This will generate \sysfile{ATOM.gv} or
  \sysfile{ORB.gv} which both may be converted to a graph using
  Graphviz's program \program{neato}:
  \begin{shellexample}
    neato -x -Tpng siesta.ATOM.gv -o siesta_ATOM.png
  \end{shellexample}
  The resulting graph will list each atom as $i (j)$ where $i$ is the
  atomic index and $j$ is the number of other atoms it is connected
  to.

\end{fdfentry}


\subsubsection{Generation of multiple-zeta orbitals}

\begin{fdfentry}{PAO!SplitNorm}[real]<$0.15$>
  \index{basis!split valence}

  A standard to define sensible default radii for the split-valence
  type of basis. It gives the amount of norm that the second-$\zeta$
  split-off piece has to carry. The split radius is defined
  accordingly. If multiple-$\zeta$\index{multiple-$\zeta$} is used,
  the corresponding radii are obtained by imposing smaller fractions
  (1/2, 1/4, \ldots) of the SplitNorm value as the norm carried by the higher
  zetas. It only has an effect when the block \fdf{PAO!Basis} is
  not present or when the radii specified in that block are zero for
  zetas higher than one.
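  For example, with an illustrative value slightly larger than the
  default:
  \begin{fdfexample}
     PAO.SplitNorm  0.20
  \end{fdfexample}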
  
\end{fdfentry}

\begin{fdfentry}{PAO!SplitNormH}[real]<\fdfvalue{PAO!SplitNorm}>
  \index{basis!split valence for H}  

  This option is as per \fdf{PAO!SplitNorm} but allows a separate
  default to be specified for hydrogen which typically needs larger
  values than those for other elements.
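  A sketch with illustrative values, giving hydrogen a larger split norm
  than the global default, as suggested above:
  \begin{fdfexample}
     PAO.SplitNorm   0.15
     PAO.SplitNormH  0.50
  \end{fdfexample}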
  
\end{fdfentry}

\begin{fdflogicalF}{PAO!NewSplitCode}
  \index{basis!new split-valence code}  

  Enables a new, simpler way to match the multiple-zeta radii.

  If an old-style (tail+parabola) calculation is being done, perform a
  scan of the tail+parabola norm in the whole range of the 1st-zeta
  orbital, and store that in a table. The construction of the 2nd-zeta
  orbital involves simply scanning the table to find the appropriate
  place. Due to the idiosyncrasies of the old algorithm, the new one
  is not guaranteed to produce exactly the same results, as it might
  settle on a neighboring grid point for the matching.
  
\end{fdflogicalF}

\begin{fdflogicalF}{PAO!FixSplitTable}
  \index{basis!fix split-valence table}  

  After the scan of the allowable split-norm values, apply a damping
  function to the tail to make sure that the table goes to zero at the
  radius of the first-zeta orbital.
  
\end{fdflogicalF}

\begin{fdflogicalF}{PAO!SplitTailNorm}
  \index{basis!new split-valence code}

  Use the norm of the tail instead of the full tail+parabola
  norm. This is the behavior described in the JPC paper. (But note
  that, for numerical reasons, the square root of the tail norm is
  used in the algorithm.) This is the preferred mode of operation for
  automatic operation, as in non-supervised basis-optimization runs.
  
\end{fdflogicalF}


As a summary of the above options:
\begin{itemize}
  \item%
  For complete backwards compatibility, do nothing.

  \item%
  To exercise the new code, set \fdf{PAO!NewSplitCode}.
  
  \item%
  To maintain the old split-norm heuristic, but making sure that the
  program finds a solution (even if not optimal, in the sense of
  producing a second-$\zeta$ $r_c$ very close to the first-$\zeta$
  one), set \fdf{PAO!FixSplitTable} (this will automatically set
  \fdf{PAO!NewSplitCode}).
  
  \item%
  If the old heuristic is of no interest (for example, if only a
  robust way of mapping split-norms to radii is needed), set
  \fdf{PAO!SplitTailNorm} (this will set \fdf{PAO!NewSplitCode}
  automatically).

\end{itemize}

\begin{fdfentry}{PAO!EnergyCutoff}[energy]<$20\,\mathrm{Ry}$>

  If the multiple zetas are generated using filterets then only the
  filterets with an energy lower than this cutoff are
  included. Increasing this value leads to a richer basis set
  (provided the cutoff is raised above the energy of any filteret that
  was previously not included) but a more expensive calculation.  It
  only has an effect when the option \fdf{PAO!BasisType} is set to
  \fdf*{filteret}.
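  For example, a filteret basis whose richness is controlled by this
  cutoff could be requested as follows (the cutoff value is illustrative):
  \begin{fdfexample}
     PAO.BasisType     filteret
     PAO.EnergyCutoff  30. Ry
  \end{fdfexample}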
  
\end{fdfentry}

\begin{fdfentry}{PAO!EnergyPolCutoff}[energy]<$20\,\mathrm{Ry}$>

  If the multiple zetas are generated using filterets then only the
  filterets with an energy lower than this cutoff are included for the
  polarization functions. Increasing this value leads to a richer
  basis set (provided the cutoff is raised above the energy of any
  filteret that was previously not included) but a more expensive
  calculation. It only has an effect when the option
  \fdf{PAO!BasisType} is set to \fdf*{filteret}.

\end{fdfentry}

\begin{fdfentry}{PAO!ContractionCutoff}[real]<$0$|$0-1$>

  If the multiple zetas are generated using filterets then any
  filterets that have a coefficient less than this threshold within
  the original PAO will be contracted together to form a single
  filteret.  Increasing this value leads to a smaller basis set but
  allows the underlying basis to have a higher kinetic energy cut-off
  for filtering. It only has an effect when the option
  \fdf{PAO!BasisType} is set to \fdf*{filteret}.
  
\end{fdfentry}



\subsubsection{Soft-confinement options}

\begin{fdflogicalF}{PAO!SoftDefault}
  \index{basis!default soft confinement}

  If set to \fdftrue, soft confinement becomes the default form of
  confining potential during orbital generation. The default potential
  strength and inner radius are set by the options given below.

\end{fdflogicalF}

\begin{fdfentry}{PAO!SoftInnerRadius}[real]<$0.9$>
  \index{basis!default soft confinement radius} 

  For default soft confinement, the inner radius is set at a fraction
  of the outer confinement radius determined by the energy shift. This
  option controls the fraction of the confinement radius to be used.
  
\end{fdfentry}

\begin{fdfentry}{PAO!SoftPotential}[energy]<$40\,\mathrm{Ry}$>
  \index{basis!default soft confinement potential}

  For default soft confinement, this option controls the value of the
  potential used for all orbitals.

  \note Soft-confinement options (inner radius, prefactor) have been
  traditionally used to optimize the basis set, even though formally
  they are just a technical necessity to soften the decay of the
  orbitals at $r_c$. To achieve this, it might be enough to use the above
  global options.
  
\end{fdfentry}


\subsubsection{Kleinman-Bylander projectors}


\begin{fdfentry}{PS!lmax}[block]

  Block with the maximum angular momentum of the Kleinman-Bylander
  projectors,\index{Kleinman-Bylander projectors} \texttt{lmxkb}.
  This information is optional. If the block is absent, or for a
  species which is not mentioned inside it, \siesta\ will take
  \texttt{lmxkb(is) = lmxo(is) + 1}, where \texttt{lmxo(is)} is the
  maximum angular momentum of the basis orbitals of species
  \texttt{is}.
However, the value of \texttt{lmxkb} is actually
limited by the highest-l channel in the pseudopotential file.
  \begin{fdfexample}
      %block Ps.lmax
          Al_adatom   3
          H           1
          O           2
      %endblock Ps.lmax
  \end{fdfexample}
  
By default \texttt{lmxkb} is the maximum angular momentum of the basis
plus one, again limited by the highest-$l$ channel
in the pseudopotential file.
\end{fdfentry}


\begin{fdfentry}{PS!KBprojectors}[block]

  This block provides
  information about the number of Kleinman-Bylander projectors per
  angular momentum, for each species, that will be used in the
  calculation. This block is optional.  If the block is absent, or for
  species not mentioned in it, only one projector will be used for
  each angular momentum (except for l-shells with semicore states, for
  which two projectors will be constructed). The projectors will be
  constructed using the eigenfunctions of the respective
  pseudopotentials.


This block allows one to specify the number of projectors for each $l$,
and also the reference energies of the wavefunctions used to build them.
The specification of the reference energies is optional. If these
energies are not given, the program will use the eigenfunctions with an
increasing number of nodes (if there is no bound state with
the corresponding number of nodes, the ``eigenstates'' are taken to be
functions forced to zero at a very long distance from the nucleus).
The units for the energies can optionally be specified; if not, the
program will assume that they are given in Rydbergs.
The data provided in this block must be consistent with those
read from the block \fdf{PS!lmax}. For example,

\begin{verbatim}
         %block PS.KBprojectors
             Si  3
              2   1
             -0.9     eV
              0   2
             -0.5  -1.0d4 Hartree
              1   2
             Ga  1
              1  3
             -1.0  1.0d5 -6.0
         %endblock PS.KBprojectors
\end{verbatim}

The reading is done this way (those variables in brackets are optional,
therefore they are only read if present):

\begin{verbatim}
 From is = 1 to  nspecies
     read: label(is), l_shells(is)
     From lsh=1 to l_shells(is)
         read: l, nkbl(l,is)
         read: {erefKB(ikb,l,is)}, from ikb = 1 to nkbl(l,is), {units}
\end{verbatim}

All angular momentum shells should be specified.  Default values are
assigned to missing shells with l below lmax, where lmax is the
highest angular momentum present in the block for that particular
species. High-l shells (beyond lmax) not specified in the block will
also be assigned default values.

Care should be taken for l-shells with semicore states. For them, two
KB projectors should be generated. This is not checked while
processing this block.

When a very high energy, higher than 1000 Ry, is specified, the
default is taken instead.  On the other hand, very low (negative)
energies, lower than -1000 Ry, are used to indicate that the energy
derivative of the last state must be used. For example, in the block
given above, two projectors will be used for the \textit{s}
pseudopotential of Si. One generated using a reference energy of -0.5
Hartree, and the second one using the energy derivative of this
state. For the \textit{p} pseudopotential of Ga, three projectors will be
used.  The second one will be constructed from an automatically
generated wavefunction with one node, and the other projectors from
states at -1.0 and -6.0 Rydberg.

The analysis looking for possible \textit{ghost} states is only performed
when a single projector is used.  Using several projectors some
attention should be paid to the ``KB cosine'' (kbcos), given in the
output of the program.  The KB cosine gives the value of the overlap
between the reference state and the projector generated from it.  If
these numbers are very small ($< 0.01$, for example) for \textbf{all}
the projectors of some angular momentum, one can have problems related
to the presence of ghost states.

The default is \emph{one} KB projector for each angular momentum,
constructed from the nodeless eigenfunction, except for l-shells with
semicore states, for which two projectors will be constructed.
Note that the value of \texttt{lmxkb} is actually limited by the
highest-l channel in the pseudopotential file.

For full spin-orbit calculations, the program generates $lj$
projectors using the $l+1/2$ and $l-1/2$ components of the
(relativistic) pseudopotentials. In this case the specification of the
reference energies for projectors is not changed: only $l$ is
relevant.

\end{fdfentry}  


\begin{fdflogicalF}{KB.New.Reference.Orbitals}

  If \fdftrue, the routine to generate KB projectors will use slightly
  different parameters for the construction of the reference orbitals
  involved (Rmax=60 Bohr both for integration and normalization).
  
\end{fdflogicalF}


\subsubsection{The PAO.Basis block}

\begin{fdfentry}{PAO!Basis}[block]
  \index{basis!PAO}

  Block with data to define explicitly the basis to be used.  It
  allows the definition by hand of all the parameters that are used to
  construct the atomic basis. There is no need to enter information
  for all the species present in the calculation. The
  basis\index{basis!PAO} for the species not mentioned in this block
  will be generated automatically using the parameters
  \fdf{PAO!BasisSize}, \fdf{PAO!BasisType}, \fdf{PAO!EnergyShift},
  \fdf{PAO!SplitNorm} (or \fdf{PAO!SplitNormH}), and the
  soft-confinement defaults, if used (see \fdf{PAO!SoftDefault}).

  Some parameters can be set to zero, or left out completely.  In
  these cases the values will be generated from the magnitudes defined
  above, or from the appropriate default values. For example, the
  radii\index{cutoff radius} will be obtained from
  \fdf{PAO!EnergyShift} or from \fdf{PAO!SplitNorm} if they are
  zero; the scale factors will be put to 1 if they are zero or not
  given in the input.  An example block for a two-species calculation
  (H and O) is the following (\texttt{opt} means optional):
  
  \begin{fdfexample}
%block PAO.Basis     # Define Basis set
O    2  nodes  1.0   # Label, l_shells, type (opt), ionic_charge (opt)
 n=2 0 2  E 50.0 2.5 # n (opt if not using semicore levels),l,Nzeta,Softconf(opt)
     3.50  3.50      #     rc(izeta=1,Nzeta)(Bohr)
     0.95  1.00      #     scaleFactor(izeta=1,Nzeta) (opt)
     1 1  P 2        # l, Nzeta, PolOrb (opt), NzetaPol (opt)
     3.50            #     rc(izeta=1,Nzeta)(Bohr)
H    2               # Label, l_shells, type (opt), ionic_charge (opt)
     0 2 S 0.2       # l, Nzeta, Per-shell split norm parameter
     5.00  0.00      #     rc(izeta=1,Nzeta)(Bohr)
     1 1 Q 3. 0.2    # l, Nzeta, Charge conf (opt): Z and screening
     5.00            #    rc(izeta=1,Nzeta)(Bohr)
%endblock PAO.Basis
  \end{fdfexample}
   

  \noindent
  The reading is done this way (those variables in brackets are
  optional, therefore they are only read if present) (See
  the routines in \shell{Src/basis\_specs.f} for detailed information):
  
\begin{shellexample}
    From is = 1 to  nspecies
       read: label(is), l_shells(is), { type(is) }, { ionic_charge(is) }
       From lsh=1 to l_shells(is)
        read:
         { n }, l(lsh), nzls(lsh,is), { PolOrb(l+1) }, { NzetaPol(l+1) },
         {SplitNormFlag(lsh,is)}, {SplitNormValue(lsh,is)}
         {SoftConfFlag(lsh,is)}, {PrefactorSoft(lsh,is)}, {InnerRadSoft(lsh,is)},
         {FilteretFlag(lsh,is)}, {FilteretCutoff(lsh,is)}
         {ChargeConfFlag(lsh,is)}, {Z(lsh,is)}, {Screen(lsh,is)}, {delta(lsh,is)}
           read: rcls(izeta,lsh,is), from izeta = 1 to nzls(lsh,is)
           read: { contrf(izeta,lsh,is) }, from izeta = 1 to nzls(lsh,is)
\end{shellexample}

  \noindent
  And here is the variable description:
  \begin{itemize}
    \item[-] %
    \texttt{Label}: Species label, this label determines
    the species index \texttt{is} according to the block
    \fdf{ChemicalSpeciesLabel}

    \item[-]%
    \texttt{l\_shells(is)}: Number of shells of orbitals
    with different angular momentum for species \texttt{is}

    \item[-]%
    \texttt{type(is)}: \textit{Optional input}.  Kind of basis set
    generation procedure for species \texttt{is}.  Same options as
    \fdf{PAO!BasisType}

    \item[-]%
    \texttt{ionic\_charge(is)}: \textit{Optional input}.  Net charge
    of species \texttt{is}. This is only used for basis set generation
    purposes. \textit{Default value}: \texttt{0.0} (neutral
    atom). Note that if the pseudopotential was generated in an ionic
    configuration, and no charge is specified in \fdf{PAO!Basis}, the ionic
    charge used will be that of the pseudopotential generation.

    \item[-]%
    \texttt{n}: Principal quantum number of the shell. This is an
    optional input for normal atoms, however it must be specified when
    there are \textit{semicore} states (i.e. when states that usually
    are not considered to belong to the valence shell have been
    included in the calculation)

    \item[-]%
    \texttt{l}: Angular momentum of basis orbitals of this shell

    \item[-]%
    \texttt{nzls(lsh,is)}: Number of ``zetas'' for this shell. For a
    filteret basis this number is ignored since the number is
    controlled by the cutoff.
    For Bessel floating orbitals, the different ``zetas'' map to
    increasingly excited states with the same angular momentum (with
    increasing number of nodes).
    

    \item[-]%
    \texttt{PolOrb(l+1)}: \textit{Optional input}. If set equal to
    \texttt{P}, a shell of polarization functions (with angular
    momentum $l+1$) will be constructed from the first-zeta orbital of
    angular momentum $l$. \textit{Default value}: blank (no
    polarization orbitals).

    \item[-]%
    \texttt{NzetaPol(l+1)}: \textit{Optional input}. Number of
    ``zetas'' for the polarization shell (generated automatically in a
    split-valence fashion).  For a filteret basis this number is
    ignored since the number is controlled by the cutoff.  Only active
    if \texttt{PolOrb = P}. \textit{Default value}: \texttt{1}

    \item[-]%
    \texttt{SplitNormFlag(lsh,is)}:\index{basis!per-shell split norm}
    \textit{Optional input}. If set equal to \texttt{S}, the following
    number sets the split-norm parameter for that shell.

    \item[-]%
    \texttt{SoftConfFlag(l,is)}:\index{basis!soft confinement
        potential} \textit{Optional input}. If set equal to
    \texttt{E}, the soft confinement potential proposed in equation
    (1) of the paper by J. Junquera \textit{et al.}, Phys. Rev. B
    \textbf{64}, 235111 (2001), is used instead of the Sankey
    hard-well potential.

    \item[-]%
    \texttt{PrefactorSoft(l,is)}: \textit{Optional input}. Prefactor
    of the soft confinement potential ($V_{0}$ in the formula). Units
    in Ry.  \textit{Default value}: 0 Ry.

    \item[-]%
    \texttt{InnerRadSoft(l,is)}: \textit{Optional input}. Inner radius
    where the soft confinement potential starts off ($r_{i}$ in the
    formula).  If negative, the inner radius will be computed as the
    given fraction of the PAO cutoff radius.  Units in
    bohrs. \textit{Default value}: 0 bohrs.

    \item[-]%
    \texttt{FilteretFlag(l,is)}:\index{basis!filteret basis set}
    \textit{Optional input}. If set equal to \texttt{F}, then an
    individual filter cut-off can be specified for the shell.

    \item[-]%
    \texttt{FilteretCutoff(l,is)}: \textit{Optional
        input}. Shell-specific value for the filteret basis
    cutoff. Units in Ry.  \textit{Default value}: The same as the
    value given by \fdf{FilterCutoff}.

    \item[-]%
    \texttt{ChargeConfFlag(lsh,is)}: \textit{Optional input}. If set
    equal to \texttt{Q}, the charge confinement potential in equation
    (2) above is added to the confining potential. If present it
    requires at least one number after it (\texttt{Z}), but it can be
    followed by two or three numbers.  \index{Charge confinement}

    \item[-]%
    \texttt{Z(lsh,is)}: \textit{Optional input, needed if \texttt{Q}
        is set}. $Z$ charge in equation (2) above for charge
    confinement (units of $e$).

    \item[-]%
    \texttt{Screen(lsh,is)}: \textit{Optional input}. Yukawa screening
    parameter $\lambda$ in equation (2) above for charge confinement
    (in Bohr$^{-1}$).

    \item[-]%
    \texttt{delta(lsh,is)}: \textit{Optional input}. Singularity
    regularisation parameter $\delta$ in equation (2) above for charge
    confinement (in Bohr).

    \item[-]%
    \texttt{rcls(izeta,lsh,is)}: Cutoff radius (Bohr) of each ``zeta'' for
    this shell. For the second zeta onwards, if this value is
    negative, the actual rc used will be the given fraction of the
    first zeta's rc.
    If the number of rc's for a given shell is less than the number of
    'zetas', the program will assign the last rc value to the remaining
    zetas, rather than stopping with an error. This is particularly
    useful for Bessel suites of orbitals.

    \item[-]%
    \texttt{contrf(izeta,lsh,is)}: \textit{Optional input}.  Contraction
    factor\index{scale factor} of each ``zeta'' for this shell.   
    If the number of entries for a given shell is less than the number of
    'zetas', the program will assign the last contraction value to the remaining
    zetas, rather than stopping with an error.
    \textit{Default value}: \texttt{1.0}
\end{itemize}

Polarization orbitals\index{perturbative
    polarization}\index{basis!polarization} are generated by solving
the atomic problem in the presence of a polarizing electric field. The
orbitals are generated applying perturbation theory to the first-zeta
orbital of lower angular momentum.  They have the same cutoff radius
as the orbitals from which they are constructed.

Note: The perturbative method has traditionally used the 'l' component
of the pseudopotential. It can be argued that it should use the 'l+1'
component. By default, for backwards compatibility, the traditional
method is used, but the alternative one can be activated by setting
the logical \fdf{PAO!OldStylePolOrbs} variable to \fdffalse.
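For example, to select the alternative ($l+1$) scheme:
\begin{fdfexample}
     PAO.OldStylePolOrbs  false
\end{fdfexample}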

There is a different possibility for generating polarization orbitals:
by introducing them explicitly in the \fdf{PAO!Basis} block.
It has to be remembered, however, that these sometimes correspond to
unbound states of the atom, whose shape depends strongly on the
cutoff radius and does not converge with increasing it, similarly to the
multiple-zeta orbitals generated with the \texttt{nodes} option.
Using \fdf{PAO!EnergyShift} then makes no sense, and a cutoff
radius different from zero must be given explicitly (the same cutoff radius
as the orbitals they polarize is usually a sensible choice).

A species with atomic number = -100 will be considered by \siesta\ as
a constant-pseudopotential atom, \textit{i.e.}, the basis functions
generated will be spherical Bessel functions\index{Bessel functions}
with the specified $r_c$. In this case, $r_c$ has to be given, as
\fdf{PAO!EnergyShift} will not calculate it.\index{basis!Bessel functions}
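As an illustrative sketch (the species label \texttt{Bessel} and the
radius are arbitrary choices), a floating Bessel-function species with
one $s$ orbital of $r_c = 4$ Bohr could be declared as:
\begin{fdfexample}
     %block ChemicalSpeciesLabel
       3  -100  Bessel
     %endblock ChemicalSpeciesLabel
     %block PAO.Basis
     Bessel   1
      0  1
       4.0
     %endblock PAO.Basis
\end{fdfexample}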

Other negative atomic numbers will be interpreted by \siesta\ as
\textit{ghosts}\index{ghost atoms}\index{basis!ghost atoms}
of the corresponding positive value: the orbitals
are generated and put in position as determined by the coordinates,
but neither pseudopotential nor electrons are considered for that
ghost atom. Useful for BSSE\index{basis!basis set superposition
error (BSSE)} correction.
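For example (labels are illustrative), a ghost oxygen for a BSSE
correction might be declared alongside the real species as:
\begin{fdfexample}
  %block ChemicalSpeciesLabel
    1   8  O
    2   1  H
    3  -8  O_ghost
  %endblock ChemicalSpeciesLabel
\end{fdfexample}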

\textit{Use:} This block is optional, except when Bessel functions or
semicore states are present.

\textit{Default:} Basis characteristics defined by global definitions given
above.

\end{fdfentry}

\subsubsection{Filtering}
\label{sec:filtering}

\begin{fdfentry}{FilterCutoff}[energy]<$0\,\mathrm{eV}$>
  \index{basis!filtering}

  Kinetic energy cutoff of plane waves used to filter all the atomic
  basis functions, the pseudo-core densities for partial core
  corrections, and the neutral-atom potentials.  The basis functions
  (which must be squared to obtain the valence density) are really
  filtered with a cutoff reduced by an empirical factor
  $0.7^2 \simeq 0.5$. The \fdf{FilterCutoff} should be similar to or
  lower than the \fdf{Mesh!Cutoff} to avoid the \emph{eggbox
      effect} on the atomic forces.  However, one should not try to
  converge \fdf{Mesh!Cutoff} while simultaneously changing
  \fdf{FilterCutoff}, since the latter in fact changes the used
  basis functions. Rather, fix a sufficiently large
  \fdf{FilterCutoff} and converge only \fdf{Mesh!Cutoff}.  If
  \fdf{FilterCutoff} is not explicitly set, its value is calculated
  from \fdf{FilterTol}.

\end{fdfentry}

\begin{fdfentry}{FilterTol}[energy]<$0\,\mathrm{eV}$>
  \index{basis!filtering}

  Residual kinetic-energy leaked by filtering each basis function.
  While \fdf{FilterCutoff} sets a common reciprocal-space cutoff for
  all the basis functions, \fdf{FilterTol} sets a specific cutoff for
  each basis function, much as the \fdf{PAO!EnergyShift} sets their
  real-space cutoff. Therefore, it is reasonable to use similar values
  for both parameters.  The maximum cutoff required to meet the
  \fdf{FilterTol}, among all the basis functions, is used (multiplied
  by the empirical factor $1/0.7^2 \simeq 2$) to filter the
  pseudo-core densities and the neutral-atom
  potentials. \fdf{FilterTol} is ignored if \fdf{FilterCutoff} is
  present in the input file.  If neither \fdf{FilterCutoff} nor
  \fdf{FilterTol} are present, no filtering is performed.  See
  \citet{SOLER20091134} for details of the filtering procedure.

  \textbf{Warning:} If the value of \fdf{FilterCutoff} is made too
  small (or \fdf{FilterTol} too large) some of the filtered basis
  orbitals may be meaningless, leading to incorrect results or even a
  program crash.

  To be implemented: If \fdf{Mesh!Cutoff} is not present in the
  input file, it can be set using the maximum filtering cutoff used
  for the given \fdf{FilterTol} (for the time being, you can use
  \fdf{AtomSetupOnly} \fdftrue\ to stop the program after basis generation,
  look at the maximum filtering cutoff used, and set the mesh-cutoff
  manually in a later run.)
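  As a sketch of typical usage (the values are illustrative, not
  recommendations): fix the filter cutoff and then converge the mesh
  cutoff independently, as advised above.
  \begin{fdfexample}
    FilterCutoff  100 Ry
    Mesh.Cutoff   150 Ry
  \end{fdfexample}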

\end{fdfentry}



\subsubsection{Saving and reading basis-set information}

\index{basis}\index{output!basis}
\siesta\ and the standalone program \textsc{Gen-basis}
\index{basis!Gen-basis standalone program}
always generate the files
\textit{Atomlabel}\texttt{.ion}, where \textit{Atomlabel} is the atomic label
specified in block \fdf{ChemicalSpeciesLabel}.  Optionally, if
NetCDF support is compiled in, the programs generate
NetCDF files \index{NetCDF format}
\textit{Atomlabel}\texttt{.ion.nc} (except for ghost atoms).
See an Appendix for information on the optional NetCDF package.

These files can be used to read back information into \siesta.

\begin{fdflogicalF}{User!Basis}
  \index{basis!User basis}

  If true, the basis, KB projector, and other information is read from
  files \textit{Atomlabel}\texttt{.ion}, where \textit{Atomlabel} is
  the atomic species label specified in block
  \fdf{ChemicalSpeciesLabel}. These files can be generated by a
  previous \siesta\ run or (one by one) by the standalone program
  \program{Gen-basis}.\index{Gen-basis
      program@\program{Gen-basis}}\index{basis!Gen-basis standalone
      program} No pseudopotential files are necessary.

\end{fdflogicalF}

\begin{fdflogicalF}{User!Basis.NetCDF}
  \index{basis!User basis (NetCDF format)}%
  \index{NetCDF format}%

  If true, the basis, KB projector, and other information is read from
  NetCDF files \textit{Atomlabel}\texttt{.ion.nc}, where
  \textit{Atomlabel} is the atomic label specified in block
  \fdf{ChemicalSpeciesLabel}. These files can be generated by a
  previous \siesta\ run or by the standalone program
  \program{Gen-basis}.\index{Gen-basis
      program@\program{Gen-basis}}\index{basis!Gen-basis standalone
      program} No pseudopotential files are necessary. NetCDF support
  is needed. Note that ghost atoms cannot yet be adequately treated
  with this option.
  
\end{fdflogicalF}



\subsubsection{Tools to inspect the orbitals and KB projectors}

The program \texttt{ioncat} in \texttt{Util/Gen-basis} can be used to
extract orbital, KB projector, and other information contained in the
\texttt{.ion} files. The output can be easily plotted with a graphics
program.  If the option \fdf{WriteIonPlotFiles} is enabled, \siesta\ 
will generate an extra set of files that can be plotted
with the \texttt{gnuplot} scripts in \texttt{Tutorials/Bases}.
The stand-alone program \texttt{gen-basis} sets that option by default, and
the script \texttt{Tutorials/Bases/gen-basis.sh} can be used to automate
the process. See also the NetCDF-based utilities in \texttt{Util/PyAtom}.

\subsubsection{Basis optimization}

There are quite a number of options for the input of the basis-set and
KB projector specification, and they are all optional! By default,
\siesta\ will use a DZP basis set with appropriate choices for the
determination of the range, etc. Of course, the more you experiment
with the different options, the better your basis set can get. To aid
in this process we offer an auxiliary program for optimization which
can be used in particular to obtain variationally optimal basis sets
(within a chosen basis size). See \texttt{Util/Optimizer}
for general information, and \texttt{Util/Optimizer/Examples/Basis\_Optim}
for an example.

\begin{fdfentry}{BasisPressure}[pressure]<$0.2\,\mathrm{GPa}$>

  \siesta\ will compute and print the value of the ``effective basis
  enthalpy'' constructed by adding a term of the form
  $p_{basis}V_{orbs}$ to the total energy. Here $p_{basis}$ is a
  fictitious basis pressure and $V_{orbs}$ is the volume of the
  system's orbitals. This is a useful quantity for basis optimization
  (See Anglada \emph{et al.\/}). The total basis enthalpy is also
  written to the ASCII file \file{BASIS\_ENTHALPY}.
  
\end{fdfentry}



\subsubsection{Low-level options regarding the radial grid}

For historical reasons, the basis-set and KB projector code in
\siesta\ uses a logarithmic radial grid, which is taken from the
pseudopotential file. Any ``interesting'' radii have to fall on a grid
point, which introduces a certain degree of coarseness that can limit
the accuracy of the results and the faithfulness of the mapping of
input parameters to actual operating parameters. For example, the same
orbital will be produced by a finite range of \fdf{PAO!EnergyShift}
values, and any user-defined cutoffs will not be exactly reflected in
the actual cutoffs. This is particularly troublesome for automatic
optimization procedures (such as those implemented in
\texttt{Util/Optimizer}), as the engine might be confused by the extra
level of indirection. The following options can be used to fine-tune
the mapping.  They are not enabled by default, as they change the
numerical results appreciably (in effect, they lead to different basis
orbitals and projectors).
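Taken together, a reparametrization run might enable the options below
(shown with the default values of the new grid parameters; these are a
sketch, not recommended settings for production):
\begin{fdfexample}
  Reparametrize.Pseudos    true
  Restricted.Radial.Grid   false
  New.A.Parameter          0.001
  New.B.Parameter          0.01
\end{fdfexample}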

\begin{fdflogicalF}{Reparametrize.Pseudos}

  By changing the $a$ and $b$ parameters of the logarithmic grid, a new
  one with a more adequate grid-point separation can be used for the
  generation of basis sets and projectors. For example, by using
  $a=0.001$ and $b=0.01$, the grid point separations at $r=0$ and 10
  bohrs are 0.00001 and 0.01 bohrs, respectively. More points are needed
  to reach r's of the order of a hundred bohrs, but the extra
  computational effort is negligible.  The net effect of this option
  (notably when coupled to \fdf{Restricted.Radial.Grid} \fdffalse)
  is a closer mapping of any user-specified cutoff radii and of the
  radii implicitly resulting from other input parameters to the actual
  values used by the program. (The small grid-point separation near r=0
  is still needed to avoid instabilities for s channels that occurred
  with the previous (reparametrized) default spacing of 0.005 bohr. This
  effect is not yet completely understood. )

\end{fdflogicalF}

\begin{fdfentry}{New!A.Parameter}[real]<$0.001$>
  \index{basis!reparametrization of pseudopotential}

  New setting for the pseudopotential grid's $a$ parameter

\end{fdfentry}

\begin{fdfentry}{New!B.Parameter}[real]<$0.01$>
  \index{basis!reparametrization of pseudopotential}

  New setting for the pseudopotential grid's $b$ parameter

\end{fdfentry}

\begin{fdfentry}{Rmax.Radial.Grid}[real]<$50.0$>
  \index{basis!point at infinity}

  New setting for the maximum value of the radial coordinate for
  integration of the atomic Schr\"odinger equation.

  If \fdf{Reparametrize.Pseudos} is \fdffalse\ this will be the
  maximum radius in the pseudopotential file.

\end{fdfentry}

\begin{fdflogicalT}{Restricted.Radial.Grid}

  In normal operation of the basis-set and projector generation code
  the various cutoff radii are restricted to fall on an
  odd-numbered grid point, and are shifted accordingly. This restriction
  can be lifted by setting this parameter to \fdffalse.

\end{fdflogicalT}


\subsection{Structural information}

There are many ways to give \siesta\ structural information.

\begin{itemize}
  \item%
  Directly from the fdf file in traditional format.

  \item%
  Directly from the fdf file in the newer Z-Matrix format, using
  a \fdf{Zmatrix} block.

  \item%
  From an external data file
\end{itemize}

Note that, regardless of the way in which the structure is described,
the \fdf{ChemicalSpeciesLabel} block is mandatory.

In the following sections we document the different structure input
methods, and provide a guide to their precedence.

\subsubsection{Traditional structure input in the fdf file}

Firstly, the size of the cell itself should be specified, using some
combination of the options \fdf{LatticeConstant},
\fdf{LatticeParameters}, \fdf{LatticeVectors}, and
\fdf{SuperCell}.  If nothing is specified, \siesta\ will construct a
cubic cell in which the atoms will reside as a cluster.
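For example, an fcc cell (here for bulk Si, with an illustrative
lattice constant) can be specified as:
\begin{fdfexample}
  LatticeConstant  5.43 Ang
  %block LatticeVectors
    0.0  0.5  0.5
    0.5  0.0  0.5
    0.5  0.5  0.0
  %endblock LatticeVectors
\end{fdfexample}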

Secondly, the positions of the atoms within the cells must be
specified, either in the traditional \siesta\ input format (a
modified xyz format), described within an
\fdf{AtomicCoordinatesAndAtomicSpecies} block, or in the Z-matrix
format (see Sec.~\ref{sec:Zmatrix}).

\begin{fdfentry}{LatticeConstant}[length]

  Lattice constant. This is just to define the scale of the lattice
  vectors.

  \textit{Default value:} Minimum size to include the system (assumed
  to be a molecule) without intercell interactions, plus 10\%.

  \note A LatticeConstant value, even if redundant, might be needed
  for other options, such as the units of the $k$-points used for
  band-structure calculations. This mis-feature will be corrected in
  future versions.
  
\end{fdfentry}

\begin{fdfentry}{LatticeParameters}[block]

  Crystallographic way of specifying the lattice vectors, by giving
  six real numbers: the three vector modules, $a$, $b$, and $c$, and
  the three angles $\alpha$ (angle between $\vec b$ and $\vec c$),
  $\beta$, and $\gamma$. The three modules are in units of
  \fdf{LatticeConstant}, the three angles are in degrees.

  This defaults to a cubic cell with side lengths equal to \fdf{LatticeConstant}.
  \begin{fdfexample}
    1.0  1.0  1.0   90.  90.  90.
  \end{fdfexample}
  
\end{fdfentry}

\begin{fdfentry}{LatticeVectors}[block]

  The cell vectors are read in units of the lattice constant defined
  above.  They are read as a matrix \texttt{CELL(ixyz,ivector)}, each
  vector being one line.

  This defaults to a cubic cell with side lengths equal to \fdf{LatticeConstant}.
  \begin{fdfexample}
    1.0  0.0  0.0
    0.0  1.0  0.0
    0.0  0.0  1.0
  \end{fdfexample}

  \noindent
  If the \fdf{LatticeConstant} default is used, the default of
  \fdf{LatticeVectors} is still diagonal but not necessarily cubic.
  
\end{fdfentry}

\begin{fdfentry}{SuperCell}[block]

  Integer $3\times3$ matrix defining a supercell in terms of the unit cell.
  Any value larger than $1$ will expand the unit cell (and its atoms)
  along that lattice-vector direction (if possible).
  \begin{fdfexample}
     %block SuperCell
        M(1,1)  M(2,1)  M(3,1) 
        M(1,2)  M(2,2)  M(3,2) 
        M(1,3)  M(2,3)  M(3,3) 
     %endblock SuperCell
  \end{fdfexample}
  and the supercell is defined as
  $\mathrm{SuperCell}(ix,i) = \sum_j \mathrm{CELL}(ix,j)*M(j,i)$.
  Notice that the matrix indexes are inverted: each input line
  specifies one supercell vector.
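  For instance, a simple $2\times2\times2$ supercell corresponds to a
  diagonal matrix:
  \begin{fdfexample}
     %block SuperCell
        2  0  0
        0  2  0
        0  0  2
     %endblock SuperCell
  \end{fdfexample}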

  \textit{Warning:} \fdf{SuperCell} is disregarded if the geometry is
  read from the XV file, which can happen inadvertently.

  \textit{Use:} The atomic positions must be given only for the unit
  cell, and they are 'cloned' automatically in the rest of the
  supercell. The \fdf{NumberOfAtoms} given must also be that in a
  single unit cell. However, all values in the output are given for
  the entire supercell. In fact, \texttt{CELL} is immediately
  redefined as the whole supercell and the program no longer knows the
  existence of an underlying unit cell.  All other input (apart from
  \fdf{NumberOfAtoms} and atomic positions), including
  \fdf{kgrid!MonkhorstPack}, must refer to the supercell (this is a
  change over previous versions). Therefore, to avoid confusion, we
  recommend using \fdf{SuperCell} only to generate atomic positions,
  and then copying them from the output to a new input file with all
  the atoms specified explicitly and with the supercell given as a
  normal unit cell.
  
\end{fdfentry}

\begin{fdfentry}{AtomicCoordinatesFormat}[string]<Bohr>

  Character string to specify the format of the atomic positions in
  input. These can be expressed in four forms:

  \begin{fdfoptions}
    \option[Bohr|NotScaledCartesianBohr]%
    \fdfindex*{AtomicCoordinatesFormat:Bohr}%
    \fdfindex*{AtomicCoordinatesFormat:NotScaledCartesianBohr}%

    atomic positions are given directly in Bohr, in Cartesian
    coordinates

    \option[Ang|NotScaledCartesianAng]%
    \fdfindex*{AtomicCoordinatesFormat:Ang}%
    \fdfindex*{AtomicCoordinatesFormat:NotScaledCartesianAng}%

    atomic positions are given directly in \AA ngstr\"om, in Cartesian
    coordinates

    \option[LatticeConstant|ScaledCartesian]%
    \fdfindex*{AtomicCoordinatesFormat:ScaledCartesian}%
    \fdfindex*{AtomicCoordinatesFormat:LatticeConstant}%

    atomic positions are given in Cartesian coordinates, in units of
    the lattice constant

    \option[Fractional|ScaledByLatticeVectors]%
    \fdfindex*{AtomicCoordinatesFormat:Fractional}%
    \fdfindex*{AtomicCoordinatesFormat:ScaledByLatticeVectors}%

    atomic positions are given referred to the lattice vectors

  \end{fdfoptions}
  
\end{fdfentry}


\begin{fdfentry}{AtomCoorFormatOut}[string]<\fdfvalue{AtomicCoordinatesFormat}>

  Character string to specify the format of the atomic positions in
  output. 

  Same possibilities as for input \fdf{AtomicCoordinatesFormat}.
  
\end{fdfentry}

\begin{fdfentry}{AtomicCoordinatesOrigin}[block/string]

  The user can request a rigid shift of the coordinates, for example
  to place a molecule near the center of the cell. This shift can be
  specified in two ways:

  \begin{itemize}
    \item By an explicit vector,
  given in the same format and units as the coordinates. Notice that the atomic
  positions (shifted or not) need not be within the cell formed by
  \fdf{LatticeVectors}, since periodic boundary conditions are
  always assumed.

  This defaults to the origin:
  \begin{fdfexample}
    0.0   0.0   0.0
  \end{fdfexample}

    \item By a string that indicates an automatic shift that places
      the ``center'' of the system at the center of the unit cell, or
      that places the system near the borders of the
      cell. In this case, the contents of the block, or the values
      associated directly to the label (see below) can be:

  \begin{fdfoptions}
    \option[COP]%
    \fdfindex*{AtomicCoordinatesOrigin:COP}

    Place the center of coordinates in the middle of the unit-cell.

    \option[COM]%
    \fdfindex*{AtomicCoordinatesOrigin:COM}

    Place the center of mass in the middle of the unit-cell.

    \option[MIN]%
    \fdfindex*{AtomicCoordinatesOrigin:MIN}

    Shift the coordinates so that the minimum value along each
    cartesian axis is $0$.

  \end{fdfoptions}

   \note  Ghost atoms are not taken into account for the above ``centering''
    calculations (but their coordinates are indeed shifted).

  All string options may be given an optional suffix. For instance,
  \fdf*{COP-XZ} limits the \fdf*{COP} option to affect only the $x$
  and $z$ Cartesian coordinates.

  The accepted suffixes are: \fdf*{-X}, \fdf*{-Y}, \fdf*{-Z},
  \fdf*{-XY}/\fdf*{-YX}, \fdf*{-YZ}/\fdf*{-ZY}, \fdf*{-XZ}/\fdf*{-ZX};
  anything else will be regarded as all directions.

  \begin{fdfexample}
    AtomicCoordinatesOrigin COP-X ! COP only for x-direction
    AtomicCoordinatesOrigin COM-ZY ! COM only for y- and z-directions
    AtomicCoordinatesOrigin MIN-Z ! MIN only for z-direction
    AtomicCoordinatesOrigin MIN-XYZ ! MIN for all directions
    AtomicCoordinatesOrigin MIN ! MIN for all directions
  \end{fdfexample}

  \end{itemize}
  
\end{fdfentry}

\begin{fdfentry}{AtomicCoordinatesAndAtomicSpecies}[block]

  Block specifying the position and species of each atom.  One line
  per atom, the reading is done this way:
  \begin{shellexample}
       From ia = 1 to natoms
            read: xa(ix,ia), isa(ia)
  \end{shellexample}
  where \shell{xa(ix,ia)} is the \shell{ix} coordinate of atom
  \shell{ia} in the format (units) specified by
  \fdf{AtomicCoordinatesFormat}, and \shell{isa(ia)} is the species
  index of atom \shell{ia}.

  \note This block \emph{must} be present in the fdf file. If
  \fdf{NumberOfAtoms} is not specified, it defaults to the number of
  atoms in this block.

  \note \fdf{Zmatrix} has precedence if specified.
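  As an illustration, a water molecule (assuming O and H are species 1
  and 2 in \fdf{ChemicalSpeciesLabel}, and \fdf{AtomicCoordinatesFormat}
  set to \fdf*{Ang}; the coordinates are approximate) would read:
  \begin{fdfexample}
    %block AtomicCoordinatesAndAtomicSpecies
       0.000   0.000   0.000   1
       0.757   0.586   0.000   2
      -0.757   0.586   0.000   2
    %endblock AtomicCoordinatesAndAtomicSpecies
  \end{fdfexample}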
  
\end{fdfentry}


\subsubsection{Z-matrix format and constraints}
\label{sec:Zmatrix}

The advantage of the traditional format is that it makes it
much easier to set up a system. However, when working
on systems with constraints, only a limited
number of (very simple) constraints may be expressed
in this format, and recompilation is needed for each
new constraint.

For any more involved set of constraints, a
full \fdf{Zmatrix} formulation should be used. This
offers much more control and may be specified fully at
run time (thus not requiring recompilation), but
it is more work to generate the input files in this form.


\begin{fdfentry}{Zmatrix}[block]
  
  This block provides a means for inputting the system geometry using
  a Z-matrix format, as well as controlling the optimization
  variables. This is particularly useful when working with molecular
  systems or restricted optimizations (such as locating transition
  states or rigid unit movements). The format also allows for hybrid
  use of Z-matrices and Cartesian or fractional blocks, as is
  convenient for the study of a molecule on a surface.  As is always
  the case for a Z-matrix, the responsibility falls to the user to
  choose a sensible relationship between the variables to avoid triads
  of atoms that become linear.

  Below is an example of a Z-matrix input for a water molecule:
  \begin{fdfexample}
    %block Zmatrix
    molecule fractional
      1 0 0 0   0.0 0.0 0.0 0 0 0
      2 1 0 0   HO1 90.0 37.743919 1 0 0
      2 1 2 0   HO2 HOH 90.0 1 1 0
    variables
        HO1 0.956997
        HO2 0.956997
        HOH 104.4
    %endblock Zmatrix
  \end{fdfexample}

  The sections that can be used within the Zmatrix block are as
  follows:

  Firstly, all atomic positions must be specified within either a
  ``\texttt{molecule}'' block or a ``\texttt{cartesian}'' block.  Any
  atoms subject to constraints more complicated than ``do not change
  this coordinate of this atom'' must be specified within a
  ``\texttt{molecule}'' block.

  \begin{fdfoptions}
    
    \option[molecule]%
    There must be one of these blocks for each independent set of
    constrained atoms within the simulation.
    
    This specifies the atoms that make up each molecule and their
    geometry. In addition, an option of ``\texttt{fractional}'' or
    ``\texttt{scaled}'' may be passed, which indicates that distances are
    specified in scaled or fractional units. In the absence of such an
    option, the distance units are taken to be the value of
    ``\texttt{ZM.UnitsLength}''.

    A line is needed for each atom in the molecule; the format of each
    line should be:
    \begin{fdfexample}
      Nspecies i j k r a t ifr ifa ift
    \end{fdfexample}

    Here the values \texttt{Nspecies}, \texttt{i}, \texttt{j}, \texttt{k},
    \texttt{ifr}, \texttt{ifa}, and \texttt{ift} are integers and
    \texttt{r}, \texttt{a}, and \texttt{t} are double precision reals.

    For most atoms, \texttt{Nspecies} is the species number of the atom,
    \texttt{r} is distance to atom number \texttt{i}, \texttt{a} is the
    angle made by the present atom with atoms \texttt{j} and \texttt{i},
    while \texttt{t} is the torsional angle made by the present atom with
    atoms \texttt{k}, \texttt{j}, and \texttt{i}. The values \texttt{ifr},
    \texttt{ifa} and \texttt{ift} are integer flags that indicate whether
    \texttt{r}, \texttt{a}, and \texttt{t}, respectively, should be
    varied; 0 for fixed, 1 for varying.


    The first three atoms in a molecule are a special case. Because there
    are insufficient atoms defined to specify a distance/angle/torsion,
    the values are set differently. For atom 1, \texttt{r}, \texttt{a},
    and \texttt{t}, are the Cartesian coordinates of the atom.  For the
    second atom, \texttt{r}, \texttt{a}, and \texttt{t} are the
    coordinates in spherical form of the second atom relative to the
    first: first the radius, then the polar angle (angle between the
    $z$-axis and the displacement vector) and then the azimuthal angle
    (angle between the $x$-axis and the projection of the displacement
    vector on the $x$-$y$ plane). Finally, for the third atom, the numbers
    take their normal form, but the torsional angle is defined relative to
    a notional atom 1 unit in the z-direction above the atom \texttt{j}.

    Secondly, blocks of atoms all of which are subject to the simplest of
    constraints may be specified in one of the following three ways,
    according to the units used to specify their coordinates:

    \option[cartesian]%
    This section specifies a block of atoms
    whose coordinates are to be specified in Cartesian coordinates. Again,
    an option of ``\texttt{fractional}'' or ``\texttt{scaled}'' may be
    added, to specify the units used; and again, in their absence, the
    value of ``\texttt{ZM.UnitsLength}'' is taken.

    The format of each atom in the block will look like:
    \begin{fdfexample}
      Nspecies x y z ix iy iz
    \end{fdfexample}

    Here \texttt{Nspecies}, \texttt{ix}, \texttt{iy}, and \texttt{iz} are
    integers and \texttt{x}, \texttt{y}, \texttt{z} are
    reals. \texttt{Nspecies} is the species number of the atom being
    specified, while \texttt{x}, \texttt{y}, and \texttt{z} are the
    Cartesian coordinates of the atom in whichever units are being
    used. The values \texttt{ix}, \texttt{iy} and \texttt{iz} are integer
    flags that indicate whether the \texttt{x}, \texttt{y}, and \texttt{z}
    coordinates, respectively, should be varied or not. A value of 0
    implies that the coordinate is fixed, while 1 implies that it should
    be varied.  \textbf{NOTE}: When performing ``variable cell''
    optimization while using a Zmatrix format for input, the algorithm
    will not work if some of the coordinates of an atom in a
    \texttt{cartesian} block are variables and others are not (i.e.,
    \texttt{ix iy iz} above must all be 0 or 1). This will be fixed in
    future versions of the program.

    A Zmatrix block may also contain the following additional sections,
    which are designed to make it easier to read.
    

    \option[constants]%
    Instead of specifying a numerical value, it is possible to specify
    a symbol within the above geometry definitions. This section
    allows the user to define the value of the symbol as a
    constant. The format is just a symbol followed by the value:
    \begin{fdfexample}
      HOH 104.4
    \end{fdfexample}


    \option[variables]%
    Instead of specifying a numerical value, it is possible to specify
    a symbol within the above geometry definitions. This section
    allows the user to define the value of the symbol as a
    variable. The format is just a symbol followed by the value:
    \begin{fdfexample}
      HO1 0.956997
    \end{fdfexample}
    
    Finally, constraints must be specified in a \fdf*{constraints} block.

    
    \option[constraint]%
    This sub-section allows the user to create
    constraints between symbols used in a Z-matrix:
    \begin{fdfexample}
      constraint
        var1 var2 A B
    \end{fdfexample}
    Here \fdf*{var1} and \fdf*{var2} are text symbols for two
    quantities in the Z-matrix definition, and $A$ and $B$ are real
    numbers. The variables are related by $\fdf*{var1} = A*\fdf*{var2}
    + B$.
    
  \end{fdfoptions}
  
  An example of a Z-matrix input for a benzene molecule over a metal surface is:
  \begin{fdfexample}
    %block Zmatrix
      molecule
       2 0 0 0 xm1 ym1 zm1 0 0 0
       2 1 0 0 CC 90.0 60.0 0 0 0
       2 2 1 0 CC CCC 90.0 0 0 0
       2 3 2 1 CC CCC 0.0 0 0 0
       2 4 3 2 CC CCC 0.0 0 0 0
       2 5 4 3 CC CCC 0.0 0 0 0
       1 1 2 3 CH CCH 180.0 0 0 0
       1 2 1 7 CH CCH 0.0 0 0 0
       1 3 2 8 CH CCH 0.0 0 0 0
       1 4 3 9 CH CCH 0.0 0 0 0
       1 5 4 10 CH CCH 0.0 0 0 0
       1 6 5 11 CH CCH 0.0 0 0 0
      fractional
       3 0.000000 0.000000 0.000000 0 0 0
       3 0.333333 0.000000 0.000000 0 0 0
       3 0.666666 0.000000 0.000000 0 0 0
       3 0.000000 0.500000 0.000000 0 0 0
       3 0.333333 0.500000 0.000000 0 0 0
       3 0.666666 0.500000 0.000000 0 0 0
       3 0.166667 0.250000 0.050000 0 0 0
       3 0.500000 0.250000 0.050000 0 0 0
       3 0.833333 0.250000 0.050000 0 0 0
       3 0.166667 0.750000 0.050000 0 0 0
       3 0.500000 0.750000 0.050000 0 0 0
       3 0.833333 0.750000 0.050000 0 0 0
       3 0.000000 0.000000 0.100000 0 0 0
       3 0.333333 0.000000 0.100000 0 0 0
       3 0.666666 0.000000 0.100000 0 0 0
       3 0.000000 0.500000 0.100000 0 0 0
       3 0.333333 0.500000 0.100000 0 0 0
       3 0.666666 0.500000 0.100000 0 0 0
       3 0.166667 0.250000 0.150000 0 0 0
       3 0.500000 0.250000 0.150000 0 0 0
       3 0.833333 0.250000 0.150000 0 0 0
       3 0.166667 0.750000 0.150000 0 0 0
       3 0.500000 0.750000 0.150000 0 0 0
       3 0.833333 0.750000 0.150000 0 0 0
     constants
       ym1 3.68
     variables
       zm1 6.9032294
       CC 1.417
       CH 1.112
       CCH 120.0
       CCC 120.0
     constraints
       xm1 CC -1.0 3.903229
   %endblock Zmatrix
  \end{fdfexample}
  
  Here the species 1, 2 and 3 represent H, C, and the metal of the
  surface, respectively.
   
  (Note: the above example shows the usefulness of symbolic names
  for the relevant coordinates, in particular for those which are
  allowed to vary. The current output options for Zmatrix information
  work best when this approach is taken. By using a ``fixed'' symbolic
  Zmatrix block and specifying the actual coordinates in a ``variables''
  section, one can monitor the progress of the optimization and
  easily reconstruct the coordinates of intermediate steps in the
  original format.)

\end{fdfentry}

\begin{fdfentry}{ZM!UnitsLength}[string]<Bohr>

  Parameter that specifies the units of length used during Z-matrix
  input.

  Specify \fdf*{Bohr} or \fdf*{Ang} for the corresponding unit of length.
  
\end{fdfentry}

\begin{fdfentry}{ZM!UnitsAngle}[string]<rad>

  Parameter that specifies the units of angles used during Z-matrix input.

  Specify \fdf*{rad} or \fdf*{deg} for the corresponding unit of angle.

\end{fdfentry}


\subsubsection{Output of structural information}

\siesta\ is able to generate several kinds of files containing
structural information (maybe too many).

\begin{itemize}

  \item\sysfile{STRUCT\_OUT}:%
  Siesta always produces a \sysfile*{STRUCT\_OUT} file with cell
  vectors in {\AA} and atomic positions in fractional
  coordinates. This file, renamed to \sysfile*{STRUCT\_IN} can be used
  for crystal-structure input.  Note that the geometry reported is the
  last one for which forces and stresses were computed.  See
  \fdf{UseStructFile}.
  
  \item\sysfile{STRUCT\_NEXT\_ITER}:%
  This file is always written, in the same format as
  \sysfile*{STRUCT\_OUT} file. The only difference is that it contains
  the structural information \emph{after} it has been updated by the
  relaxation or the molecular-dynamics algorithms, and thus it could
  be used as input (renamed as \sysfile*{STRUCT\_IN}) for a
  continuation run, in the same way as the \sysfile*{XV} file.
  
  See \fdf{UseStructFile}.

  \item\sysfile{XV}:%
  The coordinates are always written in the \sysfile*{XV} file, and
  overwritten at every step.
  
  \item\file{OUT.UCELL.ZMATRIX}:%
  This file is produced if the Zmatrix format is being used for
  input. (Please note that \fdf{SystemLabel} is not used as a
  prefix.)  It contains the structural information in fdf form, with
  blocks for unit-cell vectors and for Zmatrix coordinates. The
  Zmatrix block is in a ``canonical'' form with the following
  characteristics:

\begin{verbatim}
1. No symbolic variables or constants are used.
2. The position coordinates of the first atom in each molecule
   are absolute Cartesian coordinates.
3. Any coordinates in ``cartesian'' blocks are also absolute Cartesians.
4. There is no provision for output of constraints.
5. The units used are those initially specified by the user, and are
   noted also in fdf form.
\end{verbatim}

  Note that the geometry reported is the last one for which forces and
  stresses were computed.

  \item\file{NEXT\_ITER.UCELL.ZMATRIX}:%
  A file with the same format as \file{OUT.UCELL.ZMATRIX} but with
  a possibly updated geometry.
  
  \item The coordinates can be also accumulated
  in the \sysfile{MD} or \sysfile{MDX} files
  depending on \fdf{WriteMDHistory}.

  \item Additionally, several optional formats are supported:
  \begin{fdflogicalF}{WriteCoorXmol}
    \index{JMol@\textsc{JMol}}
    \index{XMol@\textsc{XMol}}
    \index{Molden@\textsc{Molden}}

    If \fdftrue\ it triggers the writing of an extra file named
    \sysfile{xyz} containing the final atomic coordinates in a format
    directly readable by \method{XMol}.\footnote{XMol is under
        \copyright\ copyright of Research Equipment Inc., dba
        Minnesota Supercomputer Center Inc.} Coordinates come out in
    \AA ngstr\"om independently of what is specified in
    \fdf{AtomicCoordinatesFormat} and in
    \fdf{AtomCoorFormatOut}. There is a \method{Java}
    implementation of \method{XMol} called \method{JMol}.
    
  \end{fdflogicalF}
  
  \begin{fdflogicalF}{WriteCoorCerius}
    \index{Cerius2@\textsc{Cerius2}}

    If \fdftrue\ it triggers the writing of an extra file named
    \sysfile{xtl} containing the final atomic coordinates in a format
    directly readable by \method{Cerius}.\footnote{\method{Cerius} is
        under \copyright\ copyright of Molecular Simulations Inc.}
    Coordinates come out in \fdf*{Fractional} format (the same as
    \fdf*{ScaledByLatticeVectors}) independently of what is specified
    in \fdf{AtomicCoordinatesFormat} and in
    \fdf{AtomCoorFormatOut}.  If negative coordinates are to be
    avoided, it has to be done from the start by shifting all the
    coordinates rigidly to have them positive, by using
    \fdf{AtomicCoordinatesOrigin}.  See the
    \program{Sies2arc}\index{Sies2arc@\textsc{Sies2arc}} utility in the
    \program{Util/} directory for generating \sysfile*{arc} files for CERIUS animation.

  \end{fdflogicalF}

  \begin{fdflogicalF}{WriteMDXmol}
    \index{XMol@\textsc{XMol}}
    \index{Molden@\textsc{Molden}}

    If \fdftrue\ it causes the writing of an extra file
    named \sysfile{ANI} containing all the atomic
    coordinates of the simulation in a format directly readable by
    \method{XMol} for animation.\index{animation} Coordinates come out in
    \AA ngstr\"om, independently of what is specified in
    \fdf{AtomicCoordinatesFormat} and in \fdf{AtomCoorFormatOut}.
    This file is accumulative even for different runs.
    
    An alternative for animation is to generate a \sysfile*{arc} file for
    \method{CERIUS} with the
    \method{Sies2arc}\index{Sies2arc@\program{Sies2arc}} postprocessing
    utility in the Util/ directory; this
    requires the coordinates to be accumulated in the output file, i.e.,
    \fdf{WriteCoorStep} \fdftrue.

  \end{fdflogicalF}

\end{itemize}



\subsubsection{Input of structural information from external files}

The structural information can also be read from external files. Note
that \fdf{ChemicalSpeciesLabel} is mandatory in the fdf file.

\begin{fdflogicalF}{MD!UseSaveXV}
  \index{reading saved data!XV}

  Logical variable which instructs \siesta\ to read the atomic
  positions and velocities stored in file \sysfile{XV} by a previous
  run.
  
  If the file does not exist, a warning is printed but the
  program does not stop. Overrides \fdf{UseSaveData}, but can be
  implicitly set by it.

\end{fdflogicalF}

\begin{fdflogicalF}{UseStructFile}

  Controls whether the structural information is read from an external
  file of name \sysfile{STRUCT\_IN}. If \fdftrue, all other
  structural information in the fdf file will be ignored.

  The format of the file is implied by the following code:
\begin{verbatim}
read(iu,*) ((cell(ixyz,ivec),ixyz=1,3),ivec=1,3) ! Cell vectors, in Angstroms
read(iu,*) na
do ia = 1,na
   read(iu,*) isa(ia), dummy, xfrac(1:3,ia)  ! Species number
                                             ! Dummy numerical column
                                             ! Fractional coordinates
enddo
\end{verbatim}
  
  \textit{Warning:} Note that the resulting geometry could be clobbered if
  an \sysfile*{XV} file is read after this file. It is up to the user to remove
  any \sysfile*{XV} files.
  
\end{fdflogicalF}
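  The implied format can be mirrored in a few lines of Python. The
  sketch below is only illustrative and is not part of \siesta; it
  assumes one cell vector per line, which Fortran list-directed reads
  do not strictly require:

```python
# Illustrative STRUCT_IN parser (assumption: 3 lines of cell vectors in
# Angstrom, a line with the number of atoms, then per atom: species index,
# a dummy numerical column, and three fractional coordinates).
def read_struct_in(lines):
    it = iter(lines)
    cell = [[float(x) for x in next(it).split()[:3]] for _ in range(3)]
    na = int(next(it).split()[0])
    species, xfrac = [], []
    for _ in range(na):
        tok = next(it).split()
        species.append(int(tok[0]))   # species number (ChemicalSpeciesLabel)
        # tok[1] is the dummy numerical column, ignored here
        xfrac.append([float(x) for x in tok[2:5]])  # fractional coordinates
    return cell, species, xfrac

example = """\
5.43 0.0 0.0
0.0 5.43 0.0
0.0 0.0 5.43
2
1 0 0.00 0.00 0.00
1 0 0.25 0.25 0.25
""".splitlines()
cell, species, xfrac = read_struct_in(example)
```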

\begin{fdflogicalF}{MD!UseSaveZM}
  \index{reading saved data!ZM}

  Instructs \siesta\ to read the Zmatrix information stored
  in file \sysfile*{ZM} by a previous run.

  If the required file does not exist, a warning is
  printed but the program does not stop. Overrides \fdf{UseSaveData},
  but can be implicitly set by it.
  
  \textit{Warning:} Note that the resulting geometry could be clobbered if
  an \sysfile*{XV} file is read after this file. It is up to the user to remove
  any \sysfile*{XV} files.
  
\end{fdflogicalF}



\subsubsection{Input from a FIFO file}

See the ``Forces'' option in \fdf{MD.TypeOfRun}. Note that
\fdf{ChemicalSpeciesLabel} is still mandatory in the fdf file.

\subsubsection{Precedence issues in structural input}
\index{structure input precedence issues}

\begin{itemize}
  \item If the ``Forces'' option is active, it takes precedence over
  everything (it will overwrite all other input with the information it
  gets from the FIFO file).

  \item If \fdf{MD!UseSaveXV} is active, it takes precedence over the options below.

  \item If \fdf{UseStructFile} (or \fdf*{MD!UseStructFile}) is active, it takes precedence
  over the options below.

  \item For atomic coordinates, the traditional and Zmatrix formats in
  the fdf file are mutually exclusive. If \fdf{MD!UseSaveZM} is
  active, the contents of the ZM file, if found, take precedence over
  the Zmatrix information in the fdf file.

\end{itemize}

\subsubsection{Interatomic distances}

\begin{fdfentry}{WarningMinimumAtomicDistance}[length]<$1\,\mathrm{Bohr}$>

  Fixes a threshold interatomic distance below which a warning
  message is printed.

\end{fdfentry}

\begin{fdfentry}{MaxBondDistance}[length]<$6\,\mathrm{Bohr}$>

  \siesta\ prints the interatomic distances\index{interatomic
      distances}, up to a range of \fdf{MaxBondDistance}, to file
  \sysfile{BONDS} upon first reading the structural information, and
  to file \sysfile{BONDS\_FINAL} after the last geometry
  iteration. The reference atoms are all the atoms in the unit
  cell. The routine now prints the real location of the neighbor atoms
  in space, and not, as in earlier versions, the location of the
  equivalent representative in the unit cell.

\end{fdfentry}
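As an illustration of what such a neighbor listing involves (including
the ``real location'' of periodic images), here is a hedged Python
sketch, not \siesta\ code; it scans only first-neighbor image cells,
which suffices when the range is smaller than the cell dimensions:

```python
import itertools
import math

# Distances from each atom in the unit cell to all neighbors within
# max_dist, reporting the neighbor's actual position in space (periodic
# images included, not folded back into the cell).
def bond_list(cell, xyz, max_dist):
    bonds = []
    for i, ri in enumerate(xyz):
        for j, rj in enumerate(xyz):
            # only the (-1, 0, 1) shell of image cells is scanned here
            for n in itertools.product((-1, 0, 1), repeat=3):
                if i == j and n == (0, 0, 0):
                    continue  # skip the atom itself
                img = [rj[k] + sum(n[v] * cell[v][k] for v in range(3))
                       for k in range(3)]
                d = math.dist(ri, img)
                if d <= max_dist:
                    bonds.append((i, j, d))
    return bonds
```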



\subsection{\texorpdfstring{$k$}{k}-point sampling}
\label{ssec:k-points}

These are options for the $k$-point grid used in the SCF cycle. For
other specialized grids, see Secs.~\ref{sec:macroscopic-polarization}
and \ref{sec:dos}. The following keywords are listed in order of
precedence: earlier keywords override later ones.

\begin{fdfentry}{kgrid!MonkhorstPack}[block/list]<$\Gamma$-point>
  
  Real-space supercell, whose reciprocal unit cell is that of the
  k-sampling grid, and grid displacement for each grid coordinate.
  Specified as an integer matrix and a real vector:

  \begin{fdfexample}
     %block kgrid.MonkhorstPack
        Mk(1,1)  Mk(2,1)  Mk(3,1)   dk(1)
        Mk(1,2)  Mk(2,2)  Mk(3,2)   dk(2)
        Mk(1,3)  Mk(2,3)  Mk(3,3)   dk(3)
     %endblock
     kgrid.MonkhorstPack [Mk(1,1) Mk(2,2) Mk(3,3)]
  \end{fdfexample}
  
  where \texttt{Mk(j,i)} are integers and \texttt{dk(i)} are usually
  either 0.0 or 0.5 (the program will warn the user if the displacements
  chosen are not optimal).
  The k-grid supercell is defined from \texttt{Mk}
  as in block \fdf{SuperCell} above, i.e.:
  $KgridSuperCell(ix,i) = \sum_j CELL(ix,j)*Mk(j,i)$.
  Note again that the matrix indexes are inverted: each input line
  gives the decomposition of a supercell vector in terms of the unit
  cell vectors.
  
  
  \textit{Use:} Used only if \fdf{SolutionMethod} is \fdf*{diagon}.  The
  k-grid supercell is independent of (and unrelated to) the
  \fdf{SuperCell} specifier, except for the default value (see
  below). Both supercells are given in terms of the CELL specified by the
  \fdf{LatticeVectors} block.  If \texttt{Mk} is the identity matrix
  and \texttt{dk} is zero, only the $\Gamma$ point of the
  \textbf{unit} cell is used.  Overrides \fdf{kgrid!Cutoff}.

  One may also use the \emph{list} input (last line of the example
  above); in that case the block input must not be present, and the
  displacement vector cannot be specified.

\end{fdfentry}
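  The index convention above is easy to get wrong; the following
  Python fragment (illustrative only, with cell vectors stored one per
  row as in the \fdf{LatticeVectors} block, and \texttt{Mk} given
  row-by-row as in the fdf block) writes out the quoted relation
  explicitly:

```python
# KgridSuperCell(ix, i) = sum_j CELL(ix, j) * Mk(j, i):
# supercell vector i is an integer combination of the unit-cell vectors,
# with the coefficients taken from line i of the block (line i, column j
# holds Mk(j, i)).
def kgrid_supercell(cell, Mk):
    return [[sum(Mk[i][j] * cell[j][ix] for j in range(3))
             for ix in range(3)]
            for i in range(3)]
```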

\begin{fdfentry}{kgrid!Cutoff}[length]<$0.\,\mathrm{Bohr}$>

  Parameter which determines the fineness of the $k$-grid used for
  Brillouin zone sampling. It is half the length of the smallest
  lattice vector of the supercell required to obtain the same sampling
  precision with a single k point.  Ref: Moreno and Soler, PRB 45,
  13891 (1992).

  \textit{Use:} If it is zero, only the gamma point is used.  The resulting
  k-grid is chosen in an optimal way, according to the method of Moreno
  and Soler (using an effective supercell which is as spherical as
  possible, thus minimizing the number of k-points for a given
  precision). The grid is displaced for even numbers of effective mesh
  divisions.  This parameter is not used if \fdf{kgrid!MonkhorstPack}
  is specified. If the unit cell changes during the calculation (for
  example, in a cell-optimization run), the k-point
  grid will change accordingly (see \fdf{ChangeKgridInMD} for the case
  of variable-cell molecular-dynamics runs, such as Parrinello-Rahman).
  This is analogous to the changes in the
  real-space grid, whose fineness is specified by an energy cutoff. If
  sudden changes in the number of k-points are not desired, then the
  Monkhorst-Pack data block should be used instead. In this case there
  will be an implicit change in the quality of the sampling as the cell
  changes. Both methods should be equivalent for a well-converged
  sampling.
  
\end{fdfentry}
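The full Moreno-Soler construction searches for the most nearly
spherical effective supercell, but the basic cutoff criterion can be
illustrated with a diagonal-only sketch (a simplification for an
orthogonal cell, not the actual algorithm used by \siesta):

```python
import math

# Diagonal-only reading of the cutoff criterion: pick enough divisions
# n_i along each cell vector that half the effective supercell length
# reaches the cutoff, i.e. n_i * |a_i| / 2 >= lc.
def diagonal_kgrid(cell_lengths, kgrid_cutoff):
    return [max(1, math.ceil(2.0 * kgrid_cutoff / a)) for a in cell_lengths]
```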

\begin{fdfentry}{kgrid!File}[string]<none>

  Specify a file from which the $k$-points are read.  The format of
  the file is identical to that of the \sysfile{KP} file, with the exception
  that the $k$-points are given in units of the reciprocal lattice
  vectors, i.e.\ each component lies in the range $]-1/2, 1/2]$.

  An example input may be (not physically justified in any sense):
  \begin{shellexample}
    4
    1 0.0 0.0 0.0 0.25
    2 0.5 0.5 0.5 0.25
    3 0.2 0.2 0.2 0.25
    4 0.3 0.3 0.3 0.25
  \end{shellexample}
  The first integer specifies the total number of $k$-points in the
  file. The first column is an index; the next 3 columns are the
  $k$-point specification for each of the reciprocal lattice vectors
  while the fifth column is the weight for the $k$-point.

  \siesta\ checks whether the sum of the weights equals 1. If not,
  \siesta\ will stop with an error.

\end{fdfentry}
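A small Python validator for such a file (illustrative only, not a
\siesta\ utility) makes the constraints explicit:

```python
# Validate a kgrid.File-style k-point list: first line gives the number
# of points; each following line holds an index, three reduced
# coordinates in ]-1/2, 1/2], and a weight. The weights must sum to 1.
def check_kpoint_file(text):
    lines = text.strip().splitlines()
    n = int(lines[0])
    assert len(lines) == n + 1, "header count must match number of k-points"
    wsum = 0.0
    for line in lines[1:]:
        tok = line.split()
        k = [float(x) for x in tok[1:4]]
        assert all(-0.5 < c <= 0.5 for c in k), "k outside ]-1/2, 1/2]"
        wsum += float(tok[4])
    assert abs(wsum - 1.0) < 1e-8, "weights must sum to 1"
    return n, wsum
```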

\begin{fdflogicalF}{ChangeKgridInMD}

  If \fdftrue, the $k$-point grid is recomputed at every
  iteration during MD runs that potentially change the unit cell:
  Parrinello-Rahman, Nose-Parrinello-Rahman, and Anneal. Regardless of
  the setting of this flag, the k-point grid is always updated at
  every iteration of a variable-cell optimization and after each step
  in a ``siesta-as-server'' run.

  It is defaulted to \fdffalse\ for historical reasons. The rationale
  was to avoid sudden jumps in some properties when the sampling
  changes, but if the calculation is well-converged there should be no
  problems if the update is enabled.
  
\end{fdflogicalF}


\begin{fdflogicalT}{TimeReversalSymmetryForKpoints}

If \fdftrue, the k-points in the BZ generated by the methods above
are paired as (k,-k) and only one member of the pair is retained. This
symmetry is valid in the absence of external magnetic fields or
non-collinear/spin-orbit interaction.

This flag is only honored for spinless or collinear-spin calculations,
as the code will produce wrong results if there is no support for the
appropriate symmetrization.

The default value is \fdftrue\ unless: a) the option \fdf{Spin!Spiral}
is used, in which case time-reversal symmetry is explicitly broken; b)
non-collinear spin calculations are performed. The latter case is less
clear-cut, but time-reversal symmetry is not used, to avoid possible
symmetry breakings due to subtle implementation details and to make
the set of wavefunctions compatible with the spin-orbit case in
analysis tools.

\end{fdflogicalT}
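The pairing described above can be sketched in Python (illustrative
only; this simple version compares reduced coordinates directly and
does not fold equivalences modulo a reciprocal-lattice vector, such as
$0.5 \equiv -0.5$, which the actual generator must handle):

```python
# For each (k, -k) pair in reduced coordinates, keep one member and fold
# the partner's weight into it. Points that have no partner in the list
# (e.g. Gamma) are kept unchanged.
def fold_time_reversal(kpts):
    kept = {}
    for kx, ky, kz, w in kpts:
        key = (round(kx, 8), round(ky, 8), round(kz, 8))
        minus = (round(-kx, 8), round(-ky, 8), round(-kz, 8))
        if minus in kept:
            k0, w0 = kept[minus]
            kept[minus] = (k0, w0 + w)   # fold weight into the partner
        else:
            kept[key] = (key, w)
    return [(k[0], k[1], k[2], w) for k, w in kept.values()]
```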


\subsubsection{Output of k-point information}
\index{output!grid $\vec k$ points}

The coordinates of the $\vec k$ points used in the sampling
are always stored in the file \sysfile{KP}.

\begin{fdflogicalF}{WriteKpoints}
  \index{output!grid $\vec k$ points}
  
  If \fdftrue\ it writes the coordinates of the $\vec k$ vectors used
  in the grid for $k$-sampling, into the main output file.
  
  Default depends on \fdf{LongOutput}.

\end{fdflogicalF}



\subsection{Exchange-correlation functionals}


\begin{fdfentry}{XC!Functional}[string]<LDA>
  
  Exchange-correlation functional type. May be \fdf*{LDA} (local
  density approximation, equivalent to \fdf*{LSD}), \fdf*{GGA}
  (Generalized Gradient Approximation), or \fdf*{VDW} (van der
  Waals).

\end{fdfentry}

\begin{fdfentry}{XC!Authors}[string]<PZ>
  \newcommand\xcidx[1]{\index{exchange-correlation!#1}}
  
  Particular parametrization of the exchange-correlation
  functional. Options are:
  \begin{itemize}
    \item%
    \fdf*{CA} (equivalent to \fdf*{PZ}): \xcidx{CA} \xcidx{PZ}
    (Spin) local density approximation (LDA/LSD). \xcidx{LDA}
    \xcidx{LSD} Quantum Monte Carlo calculation of the homogeneous
    electron gas by D. M. Ceperley and B. J. Alder,
    Phys. Rev. Lett. \textbf{45}, 566 (1980), as parametrized by
    J. P. Perdew and A. Zunger, Phys. Rev. B \textbf{23}, 5075 (1981)
    
    \item% 
    \fdf*{PW92}: \xcidx{PW92}
    LDA/LSD, as parametrized by 
    J. P. Perdew and Y. Wang, Phys. Rev. B \textbf{45}, 13244 (1992)
    
    \item%
    \fdf*{PW91}: \xcidx{PW91}%
    Generalized gradient approximation (GGA) \xcidx{GGA} of Perdew and Wang. 
    Ref: P\&W, J. Chem. Phys., \textbf{100}, 1290 (1994)

    \item%
    \fdf*{PBE}: \xcidx{PBE}%
    GGA of J. P. Perdew, K. Burke and M. Ernzerhof, 
    Phys. Rev. Lett. \textbf{77}, 3865 (1996) 
    
    \item%
    \fdf*{revPBE}: \xcidx{revPBE}%
    Modified GGA-PBE functional of Y. Zhang and W. Yang, 
    Phys. Rev. Lett. \textbf{80}, 890 (1998)

    \item%
    \fdf*{RPBE}: \xcidx{RPBE}%
    Modified GGA-PBE functional of 
    B. Hammer, L. B. Hansen and J. K. Norskov Phys. Rev. B \textbf{59}, 7413 (1999)
    
    \item%
    \fdf*{WC}: \xcidx{WC}%
    Modified GGA-PBE functional of 
    Z. Wu and R. E. Cohen, Phys. Rev. B \textbf{73}, 235116 (2006)

    \item%
    \fdf*{AM05}: \xcidx{AM05}%
    Modified GGA-PBE functional of 
    R. Armiento and A. E. Mattsson, Phys. Rev. B \textbf{72}, 085108 (2005)
    
    \item%
    \fdf*{PBEsol}: \xcidx{PBEsol}%
    Modified GGA-PBE functional of 
    J. P. Perdew et al, Phys. Rev. Lett. \textbf{100}, 136406 (2008)

    \item%
    \fdf*{PBEJsJrLO}: \xcidx{PBEJsJrLO}%
    GGA-PBE functional with parameters $\beta, \mu$, and $\kappa$ fixed by 
    the jellium surface (Js), jellium response (Jr), 
    and Lieb-Oxford bound (LO) criteria, respectively, as described by 
    L. S. Pedroza, A. J. R. da Silva, and K. Capelle, 
    Phys. Rev. B \textbf{79}, 201106(R) (2009), and by 
    M. M. Odashima, K. Capelle, and S. B. Trickey, 
    J. Chem. Theory Comput. \textbf{5}, 798 (2009)

    \item%
    \fdf*{PBEJsJrHEG}: \xcidx{PBEJsJrHEG}%
    Same as PBEJsJrLO, with parameter $\kappa$ fixed by the  Lieb-Oxford bound 
    for the low density limit of the homogeneous electron gas (HEG)

    \item%
    \fdf*{PBEGcGxLO}: \xcidx{PBEGcGxLO}%
    Same as PBEJsJrLO, with parameters $\beta$ and $\mu$ fixed by the 
    gradient expansion of correlation (Gc) and exchange (Gx), respectively
    
    \item%
    \fdf*{PBEGcGxHEG}: \xcidx{PBEGcGxHEG}%
    Same as previous ones, with parameters $\beta,\mu$, and $\kappa$ fixed by 
    the Gc, Gx, and HEG criteria, respectively.

    \item%
    \fdf*{BLYP} (equivalent to \fdf*{LYP}): \xcidx{BLYP}%
    GGA with Becke exchange (A. D. Becke, Phys. Rev. A \textbf{38}, 3098 (1988)) 
    and Lee-Yang-Parr correlation 
    (C. Lee, W. Yang, R. G. Parr, Phys. Rev. B \textbf{37}, 785 (1988)), 
    as modified by B. Miehlich, A. Savin, H. Stoll, and H. Preuss,
    Chem. Phys. Lett. \textbf{157}, 200 (1989). 
    See also B. G. Johnson, P. M. W. Gill and J. A. Pople,
    J. Chem. Phys. \textbf{98}, 5612 (1993). (Some errors were detected in this
    last paper, so not all of their expressions correspond exactly to those
    implemented in \siesta)

    \item%
    \fdf*{DRSLL} (equivalent to \fdf*{DF1}): \xcidx{vdW-DF1}
    \xcidx{DRSLL}%
    van der Waals \xcidx{vdW} density functional (vdW-DF) \xcidx{vdW-DF}
    of M. Dion, H. Rydberg, E. Schr\"{o}der, D. C. Langreth, and B. I. Lundqvist,
    Phys. Rev. Lett. \textbf{92}, 246401 (2004), with the efficient implementation of 
    G. Rom\'an-P\'erez and J. M. Soler, Phys. Rev. Lett. \textbf{103},  096102 (2009)
    
    \item%
    \fdf*{LMKLL} (equivalent to \fdf*{DF2}): \xcidx{vdW-DF2} \xcidx{LMKLL}%
    vdW-DF functional of Dion \textit{et al} (same as DRSLL)
    reparametrized by K. Lee, E. Murray, L. Kong, B. I. Lundqvist and 
    D. C. Langreth, Phys. Rev. B \textbf{82}, 081101 (2010)

    \item%
    \fdf*{KBM}: \xcidx{KBM}%
    vdW-DF functional of Dion \textit{et al} (same as DRSLL)
    with exchange modified by J. Klimes, D. R. Bowler, and A. Michaelides, 
    J. Phys.: Condens. Matter \textbf{22}, 022201 (2010) (optB88-vdW version)
    
    \item%
    \fdf*{C09}: \xcidx{C09}%
    vdW-DF functional of Dion \textit{et al} (same as DRSLL)
    with exchange modified by V. R. Cooper, Phys. Rev. B \textbf{81}, 161104 (2010)
    
    \item%
    \fdf*{BH}: \xcidx{BH}%
    vdW-DF functional of Dion \textit{et al} (same as DRSLL) 
    with exchange modified by 
    K. Berland and P. Hyldgaard, Phys. Rev. B 89, 035412 (2014)
    
    \item%
    \fdf*{VV}: \xcidx{VV}%
    vdW-DF functional of O. A. Vydrov and T. Van Voorhis, 
    J. Chem. Phys. \textbf{133}, 244103 (2010)
    
  \end{itemize}

\end{fdfentry}


\begin{fdfentry}{XC!Hybrid}[block]
  
  This data block allows the user to create a ``cocktail'' functional by
  mixing the desired amounts of exchange and correlation from each of
  the functionals described under \fdf{XC!Authors}. Note that these ``mixed''
  functionals do \emph{not} include the exact Hartree-Fock exchange which
  is a key ingredient of true ``hybrid'' functionals. The use of
  the word ``hybrid'' in the label is unfortunate in this regard, and
  might be deprecated in a future version.

  The first line of the block must contain the number of functionals to
  be mixed. Each subsequent line gives the values of \fdf{XC!Functional}
  and \fdf{XC!Authors}, followed by the weights for the exchange and
  correlation, in that order. If only one number is given, the same
  weight is applied to both exchange and correlation.

  The following is an example in which a 75:25 mixture of Ceperley-Alder
  and PBE correlation is made, with an equal split of the exchange
  energy:
  
  \begin{fdfexample}
     %block XC.hybrid
        2
        LDA CA  0.5 0.75
        GGA PBE 0.5 0.25
     %endblock XC.hybrid
  \end{fdfexample}

\end{fdfentry}
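  The weight bookkeeping of the block can be sketched as follows (the
  exchange/correlation energies here are placeholders, purely for
  illustration; this is not how \siesta\ stores them internally):

```python
# "Cocktail" combination of XC terms: the mixed functional is
#   E_xc = sum_i (wx_i * Ex_i + wc_i * Ec_i),
# with one (wx, wc) weight pair per line of the XC.hybrid block.
def cocktail_xc(terms):
    # terms: list of (Ex_i, Ec_i, wx_i, wc_i) tuples, one per functional
    return sum(wx * Ex + wc * Ec for Ex, Ec, wx, wc in terms)
```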

\begin{fdflogicalF}{XC!Use.BSC.CellXC}
  \index{exchange-correlation!cellXC}

  If \fdftrue, the version of \texttt{cellXC} from the BSC's mesh
  suite is used instead of the default SiestaXC version. BSC's version
  might be slightly better for GGA operations. SiestaXC's version is
  mandatory when dealing with van der Waals functionals.

\end{fdflogicalF}


\subsection{Spin polarization}


\begin{fdfentry}{Spin}[string]<non-polarized>
  \fdfdeprecates{SpinPolarized,NonCollinearSpin,SpinOrbit}

  Choose the number of spin components used in the simulation.

  \note This flag has precedence over \fdf*{SpinOrbit}, \fdf*{NonCollinearSpin} and
  \fdf*{SpinPolarized}, although these deprecated flags may still be used.
  \begin{fdfoptions}

    \option[non-polarized]%
    \fdfindex*{Spin:non-polarized}%
    Perform a calculation with spin-degeneracy (only one component).

    \option[polarized]%
    \fdfindex*{SpinPolarized}%
    \fdfindex*{Spin:polarized}%
    Perform a calculation with collinear spin (two spin components).

    \option[non-colinear]%
    \fdfindex*{NonCollinearSpin}%
    \fdfindex*{Spin:non-colinear}%
    Perform a calculation with non-collinear spin (4 spin components):
    up-down and angles.

    Refs: T. Oda et al, PRL, \textbf{80}, 3622 (1998); 
    V. M. Garc\'{\i}a-Su\'arez et al, Eur. Phys. Jour. B \textbf{40}, 371 (2004);
    V. M. Garc\'{\i}a-Su\'arez et al, Journal of
    Phys: Cond. Matt \textbf{16}, 5453 (2004).

    \option[spin-orbit]%
    \fdfindex*{SpinOrbit}%
    \fdfindex*{Spin:spin-orbit}%
    Performs a calculation including spin-orbit coupling. By default the 
    off-site SO option is set to \fdftrue. To perform an on-site SO calculation 
    this option has to be \fdf*{spin-orbit+onsite}. This requires the
    pseudopotentials to be relativistic.

    See Sect.~\ref{sec:spin-orbit} for further specific spin-orbit options.

  \end{fdfoptions}

  \siesta\ can read a \sysfile*{DM} file with a different spin structure by
  adapting the information to the currently selected spin
  multiplicity, averaging or splitting the spin components equally, as
  needed. This may be used to greatly speed up convergence.

  Certain options may not be used together with specific
  parallelization routines.

\end{fdfentry}

\begin{fdflogicalF}{Spin!Fix}
  \index{fixed spin state}\index{LSD}
  \fdfindex*{FixSpin}[|Spin!Fix]%

  If \fdftrue, the calculation is done with a fixed value of the spin
  of the system, defined by the variable \fdf{Spin!Total}. This option can
  only be used for collinear spin-polarized calculations.

\end{fdflogicalF}

\begin{fdfentry}{Spin!Total}[real]<$0$>
  \index{fixed spin state}\index{LSD}
  \index{spin}
  \fdfindex*{TotalSpin}[|Spin!Total]
  
  Value of the imposed total spin polarization of the system (in units
  of the electron spin, 1/2). It is only used if \fdf{Spin!Fix} \fdftrue.
  
\end{fdfentry}

\begin{fdflogicalF}{SingleExcitation}

  If \fdftrue, \siesta\ calculates a very rough approximation to the
  lowest excited state by swapping the populations of the HOMO and the
  LUMO. If there is no spin polarization, only half of the population
  is swapped. The swap is done for the first spin component (up) and
  first $k$ vector.
  
\end{fdflogicalF}


\subsection{Spin-Orbit coupling}
\label{sec:spin-orbit}

\siesta\ includes the possibility of performing fully relativistic
calculations by including in the total Hamiltonian not
only the Darwin and mass-velocity correction terms~(scalar-relativistic
calculations), but also the spin-orbit~(SO) contribution. There are
two approaches to the SO formalism: on-site and off-site.
Within the on-site approximation only the intra-atomic SO
contribution is taken into account. In the off-site scheme additional
neighboring interactions are also included in the SO term. By default,
the off-site SO formalism is switched on; the \fdf{Spin} flag must be
changed in the input file if the on-site approximation is to be
used. See \fdf{Spin} on how to handle the spin-orbit
coupling.

The on-site spin-orbit scheme in this version of \siesta\ has been implemented by
Dr. Ram\'on Cuadrado based on the original on-site SO formalism and
implementation developed by Prof. Jaime Ferrer and collaborators~(L
Fern\'andez--Seivane, M Oliveira, S Sanvito, and J Ferrer, Journal of
Physics: Condensed Matter, \textbf{18}, 7999 (2006); L Fern\'andez--Seivane 
and Jaime Ferrer, Phys. Rev. Lett. \textbf{99}, 183401 (2007)).

The off-site scheme has been implemented by
Dr. Ram\'on Cuadrado and Dr. Jorge I. Cerd\'a based on their initial 
work~(R. Cuadrado and J. I. Cerd\'a ``Fully relativistic pseudopotential 
formalism under an atomic orbital basis: spin-orbit splittings and 
magnetic anisotropies'', J. Phys.: Condens. Matter \textbf{24}, 086005 (2012); 
``In-plane/out-of-plane disorder influence on the magnetic anisotropy of 
Fe$_{1-y}$Mn$_y$Pt-L1(0) bulk alloy'', R. Cuadrado, Kai Liu, Timothy 
J. Klemmer and R. W. Chantrell, Applied Physics Letters, \textbf{108}, 
123102 (2016)).  

The inclusion of the SO term in the Hamiltonian (and in the Density
Matrix) causes an increase in the number of non-zero elements in their
off-diagonal parts, i.e., for some $(\mu,\nu)$ pair of basis
orbitals, $\mathbf H^{\sigma\sigma'}_{\mu\nu}$ ($\mathbf{DM}^{\sigma\sigma'}_{\mu\nu}$)
[$\sigma,\sigma'=\uparrow,\downarrow$] will be $\neq0$. This is
mainly due to the fact that the $\mathbf L\cdot\mathbf S$ operator
will promote the mixing between different spin-up/down components.
In addition, these $\mathbf H^{\sigma\sigma'}_{\mu\nu}$ (and
$\mathbf{DM}^{\sigma\sigma'}_{\mu\nu}$) elements will be complex, in contrast
with typical polarized/non-polarized calculations where these
matrices are purely real. Since the spin-up and spin-down manifolds
are essentially mixed, the solver has to deal with matrices whose
dimensions are twice as large as for the collinear (unmixed) spin
problem. Due to this, we advise paying special
attention to the memory needed to perform a spin-orbit calculation.


Unless explicitly noted otherwise, the following types of calculations can be
carried out regardless of whether the on-site or off-site approximation is
employed: 
\begin{itemize}
  % 
  \item Selfconsistent calculations at the $\Gamma$ point as well as
  for bulk systems.
  %
  \item Structure optimizations.
  %
  %%% *** Incompatible... \item LDA+U calculations~(See Sect.\ref{sec:lda+u} for further info).
  % 
  \item Magnetic Anisotropy Energy~(MAE) can be easily
    calculated. From first principles, it is obtained by subtracting
    the total selfconsistent energies calculated for two different
    magnetic orientations. In \siesta\ it is possible to perform
    calculations with different initial magnetic orderings
    by means of the block \fdf{DM.InitSpin} in the fdf
    file. There one can specify the initial
    orientation angles of the magnetization for each atom, as well as
    an initial value of its net magnetic moment.
  % 
  \item After the selfconsistent procedure, local spin and orbital
  moments can be calculated via Mulliken analysis, using the flags
  \fdf{WriteMullikenPop} and \fdf{WriteOrbMom}.
  % 
\end{itemize}
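The MAE recipe in the list above amounts to a single subtraction of
total energies from two converged runs; the numbers below are
placeholders, purely for illustration:

```python
# MAE from two selfconsistent total energies (hypothetical values, in eV):
# run the same system with the magnetization initialized along two
# directions and subtract. A positive value means direction 2 is easier.
def mae_eV(e_dir1_eV, e_dir2_eV):
    return e_dir1_eV - e_dir2_eV

delta = mae_eV(-1000.0010, -1000.0025)  # ~1.5 meV anisotropy
```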

Note: Due to the small SO contribution to the total energy, the level
of precision required to perform a proper fully relativistic
calculation during the selfconsistent process is quite demanding. The
following values must be carefully converged and checked for each
specific system to ensure that the results are accurate enough:
\fdf{SCF.H!Tolerance} during the selfconsistency (typically between
$10^{-3}\,\mathrm{eV}$ and $10^{-4}\,\mathrm{eV}$),
\fdf{ElectronicTemperature}, \textbf{k}--point sampling, and high
values of \fdf{MeshCutoff}~(specifically for extended solids). In
general, a good calculation will have a high number of
\textbf{k}--points, a low \fdf{ElectronicTemperature}, an extremely small
\fdf{SCF.H!Tolerance}, and a high value of \fdf{MeshCutoff}.  We
encourage the user to test these options carefully for each system. An
additional point to take into account is the mixing scheme
employed. You are encouraged to use \fdf{SCF.Mix:Hamiltonian}
(currently this is the default) instead of density-matrix mixing,
since it speeds up the convergence.  The pseudopotentials have to be
properly generated and tested for each specific system, and they have
to be in their fully relativistic form, together with the non-linear
core corrections. Finally, it is worth mentioning that
selfconsistent convergence for some magnetization directions that are
not highly symmetric with respect to the physical symmetry axes
could still be difficult.

\begin{fdfentry}{Spin!OrbitStrength}[real]<1.0>

  Allows one to vary the strength of the 
  spin-orbit interaction from zero to any positive value. It can be
  used for both the on-site and off-site SOC flavors, but only for
  debugging and testing purposes, as the only physical value is 1.0.
  Note that this feature is implemented by modifying the SO parts of the
  semilocal potentials read from a \code{.psf} file. Care must be
  taken when re-using any \code{.ion} files produced.
  
\end{fdfentry}

\begin{fdflogicalF}{WriteOrbMom}

  If \fdftrue, a table is provided in the output file that
  includes an estimate of the vector orbital magnetic
  moments, in units of the Bohr magneton, projected 
  onto each orbital and also onto each atom. The estimate for the 
  orbital moments is based on a two-center approximation, and makes use 
  of the Mulliken population analysis.

  If \fdf{MullikenInScf} is \fdftrue, this information is printed at
  every scf step.

\end{fdflogicalF}

\begin{fdflogicalT}{SOC.Split.SR.SO}

  In calculations with spin-orbit-coupling (SOC) the program carries
  out a splitting of the contributions to the Hamiltonian and energies
  into scalar-relativistic (SR) and spin-orbit (SO) parts. The
  splitting procedure for the off-site flavor of SOC (involving full
  lj projectors) can sometimes be ill-defined, and in those cases the
  program relies on a heuristic to compute the two contributions. A
  warning is printed.
  
  If this option is set to \fdffalse, the program will not
  attempt the splitting (but it will still be able to detect a
  possible problem and report an informational message).
  
  For the on-site flavor of SOC this problem does not appear, but the
  option is also available for generality.
  
  When the SO contribution is not split, the relevant energy
  contributions in the output file are tagged 
  \code{Enl(+so)} and \code{Eso(nil)}.
  
  The CML file is thus not changed (but there is a new parameter
  \code{Split-SR-SO}).
  
  Note that this is only a cosmetic change affecting the reporting of
  some components of the energy. All the other results should be
  unchanged.

\end{fdflogicalT}

\subsection{The self-consistent-field loop}
\label{sec:scf}

\textbf{IMPORTANT NOTE: Convergence of the Kohn-Sham energy and forces}

In versions prior to 4.0 of the program, the Kohn-Sham energy was computed
using the ``in'' DM. The typical DM used as input for the calculation
of H was not directly computed from a set of wave-functions (it was
either the product of mixing or of the initialization from atomic
values). In this case, the ``kinetic energy'' term in the total energy
computed in the way stated in the Siesta paper had an error which
decreased with the approach to self-consistency, but was non-zero. The
net result was that the Kohn-Sham energy converged more slowly than
the ``Harris'' energy (which is correctly computed).

When mixing H (see below under ``Mixing Options''), the KS energy is
in effect computed from DM(out), so this error vanishes.

As a related issue, the forces and stress computed after SCF
convergence were calculated using the DM coming out of the cycle,
which by default was the product of a final mixing. This also
introduced errors which grew with the degree of non-selfconsistency.

The current version introduces several changes:

\begin{itemize}
\item When mixing the DM, the Kohn-Sham energy may be corrected to make it
  variational. This involves an extra call to \texttt{dhscf} (although
  with neither forces nor matrix elements being calculated, i.e. only
  calls to \texttt{rhoofd}, \texttt{poison}, and \texttt{cellxc}), and is
  turned on by the option \fdf{SCF!Want.Variational.EKS}.


\item The program now prints a new column labeled ``dHmax'' for the
  self-consistent cycle. The value represents the maximum absolute
  value of the changes in the entries of H, but its actual meaning
  depends on whether DM or H mixing is in effect: if mixing the DM,
  dHmax refers to the change in H(in) with respect to the previous
  step; if mixing H, dHmax refers to H(out)-H(in) in the previous(?)
  step.

  \item When achieving convergence, the loop might be exited without a
  further mixing of the DM, thus preserving DM(out) for further
  processing (including the calculation of forces and the analysis of
  the electronic structure) (see the \fdf{SCF.Mix!AfterConvergence}
  option).

  \index{Variational character of E\_KS}

  \item It remains to be seen whether the forces, being computed
  ``right'' on the basis of DM(out), exhibit somehow better
  convergence as a function of the scf step. In order to gain some
  more data and heuristics on this we have implemented a
  force-monitoring option, activated by setting to \fdftrue\ the
  variable \fdf{SCF!MonitorForces}. The program will then print the
  maximum absolute value of the change in forces from one step to the
  next. Other statistics could be implemented.

  \item While the (mixed) DM is saved at every SCF step, as was
  standard practice, the final DM(out) overwrites the \sysfile{DM}
  file at the end of the SCF cycle. Thus it is still possible to use a
  ``mixed'' DM for restarting an interrupted loop, but a ``good'' DM
  will be used for any other post-processing.


\end{itemize}


\begin{fdfentry}{MinSCFIterations}[integer]<0>

  Minimum number of SCF\index{SCF} iterations per time step. In MD
  simulations it can be beneficial to set this to 3.
  
\end{fdfentry}

\begin{fdfentry}{MaxSCFIterations}[integer]<1000>

  Maximum number of SCF\index{SCF} iterations per time step. 
  
\end{fdfentry}

\begin{fdflogicalT}{SCF!MustConverge}

  Defines the behaviour if convergence is not reached in the maximum
  number of SCF iterations. The default is to stop on the first SCF
  convergence failure. Increasing \fdf{MaxSCFIterations} to a large
  number may be advantageous when this is \fdftrue.
  
\end{fdflogicalT}
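
For instance, to run at least 3 and at most 500 SCF iterations per
time step, stopping on a convergence failure, the three
iteration-control flags above could be combined as follows (the
values shown are illustrative, not recommendations):
\begin{fdfexample}
  MinSCFIterations  3
  MaxSCFIterations  500
  SCF.MustConverge  true
\end{fdfexample}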

\subsubsection{Harris functional}

\begin{fdflogicalF}{Harris!Functional}
  
  Logical variable to choose between the self-consistent Kohn-Sham
  functional and the non-self-consistent Harris functional to calculate
  energies and forces.
  \begin{itemize}
    \item \fdffalse: Fully self-consistent Kohn-Sham functional.
    \item \fdftrue: Non-self-consistent Harris functional. Cheap but
    rather crude for some systems. The forces are computed within the
    Harris functional in the first SCF step. Only implemented for LDA in
    the Perdew-Zunger parametrization. It really only applies to starting
    densities which are superpositions of atomic charge densities.

    When this option is chosen, the values of \fdf{DM!UseSaveDM},
    \fdf{SCF!MustConverge} and \fdf{SCF.Mix!First} are automatically
    set to \fdffalse\ and \fdf{MaxSCFIterations} is set to $1$,
    regardless of any other specifications in the input file.
  \end{itemize}

\end{fdflogicalF}


\subsubsection{Mixing options}
\label{sec:scf:mix}
\index{SCF!mixing}

Whether a calculation reaches self-consistency in a moderate number of
steps depends strongly on the mixing parameters used. The available
mixing options should be carefully tested for a given calculation
type. This search for optimal parameters can repay itself handsomely
by potentially saving many self-consistency steps in production runs.


\begin{fdfentry}{SCF.Mix}[string]<Hamiltonian|density|charge>
  \index{SCF!mixing}
  Control what physical quantity to mix in the self-consistent cycle.

  The default is to mix the Hamiltonian, which typically performs
  better than density-matrix mixing.

  \begin{fdfoptions}
    \option[Hamiltonian]%
    \fdfindex*{SCF.Mix:Hamiltonian}%
    \index{SCF!mixing!Hamiltonian}
    Mix the Hamiltonian matrix (default).

    \option[density]%
    \fdfindex*{SCF.Mix:density}%
    \index{SCF!mixing!Density}
    Mix the density matrix.

    \option[charge]%
    \fdfindex*{SCF.Mix:charge}%
    \index{SCF!mixing!Charge}
    Mix the real-space charge density. Note this is an experimental
    feature.

  \end{fdfoptions}
  
  \note Real-space charge density mixing does not follow the regular
  options that apply to density-matrix or Hamiltonian mixing. It is
  also not recommended to use real-space charge density mixing with
  \tsiesta.

\end{fdfentry}
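
For example, density-matrix mixing (the pre-v4 behaviour) would be
selected with:
\begin{fdfexample}
  SCF.Mix density
\end{fdfexample}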

\begin{fdfentry}{SCF.Mix!Spin}[string]<all|spinor|sum|sum+diff>

  Controls how the mixing is performed when carrying out
  spin-polarized calculations.
  
  \begin{fdfoptions}
    \option[all] %
    Use all spin components in the mixing.

    \option[spinor] %
    Estimate mixing coefficients using the spinor components.

    \option[sum] %
    Estimate mixing coefficients using the sum of the spinor
    components.

    \option[sum+diff] %
    Estimate mixing coefficients using the sum \emph{and} the
    difference of the spinor components.
  \end{fdfoptions}

  \note This option only influences density-matrix ($\DM$) or
  Hamiltonian ($\Ham$) mixing when using anything but the
  \fdf*{linear} mixing scheme. It does not influence charge
  ($\rho$) mixing.
  
\end{fdfentry}

\begin{fdflogicalT}{SCF.Mix!First}
  \fdfindex*{DM.MixSCF1}[|see SCF.Mix.First]
  \fdfdeprecates{DM.MixSCF1}%
  \fdfdepend{SCF.Mix!First.Force}%

  This flag is used to decide whether mixing (of the DM or H) should
  be done in the first SCF step. If mixing is not performed the output
  DM or H generated in the first SCF step is used as input in the next
  SCF step. When mixing the DM, this ``reset'' has the effect of
  avoiding potentially undesirable memory effects: for example, a DM
  read from file which corresponds to a different structure might not
  satisfy the correct symmetry, and mixing will not fix it. On the
  other hand, when reusing a DM for a restart of an interrupted
  calculation, a full reset might not be advised.
  
  The value of this flag is one of the ingredients used by \siesta\ to
  decide what to do. If \fdftrue\ (the default), mixing will be
  performed in all cases, except when a DM has been read from file and
  the sparsity pattern of the DM on file is different from the current
  one. To ensure that a first-step mixing is done even in this case,
  \fdf{SCF.Mix!First.Force} should be set to \fdftrue.
  
  If the flag is \fdffalse, no mixing in the first step will be
  performed, except if overridden by \fdf{SCF.Mix!First.Force}.
  
  \note that the default value for this flag has changed from the old
  (pre-version 4) setting in \siesta. The new setting is most
  appropriate for the case of restarting calculations. On the other
  hand, it means that mixing in the first SCF step will also be
  performed for the standard case in which the initial DM is built as
  a (diagonal) superposition of atomic orbital occupation values. In
  some cases (e.g. spin-orbit calculations) better results might be
  obtained by avoiding this mixing.

\end{fdflogicalT}

\begin{fdflogicalF}{SCF.Mix!First.Force}

  Force the mixing (of DM or H) in the first SCF step, regardless of
  what \siesta\ may heuristically decide.

  This overrules \fdf{SCF.Mix!First}.
  
\end{fdflogicalF}


In the following, the density matrix ($\DM$) will be used in the
equations; for Hamiltonian mixing, $\DM$ should be replaced by the
Hamiltonian matrix.
%
Also we define the residual $\Res[i] = \DM^i_{\mathrm{out}} - \DM^i_{\mathrm{in}}$ and 
its difference $\RRes[i] = \Res[i] - \Res[i-1]$.

\begin{fdfentry}{SCF.Mixer!Method}[string]<Pulay|Broyden|Linear>

  Choose the mixing algorithm between different methods. Each method
  may have different variants, see \fdf{SCF.Mixer!Variant}.
  
  \begin{fdfoptions}

    \option[Linear] %
    \index{SCF!mixing!Linear}
    A simple linear extrapolation of the input matrix as
    \begin{equation}
      \DM^{n+1}_{\mathrm{in}} = \DM^n_{\mathrm{in}} + w \Res[n].
    \end{equation}


    \option[Pulay] %
    \index{SCF!mixing!Pulay}
    The Pulay mixing method defaults to the \citet{Kresse1996}
    variant. It relies on the previous $N$ steps and
    uses those for estimating an optimal input
    $\DM^{n+1}_{\mathrm{in}}$ for the following iteration. The
    equation can be written as
    \begin{equation}
      \DM^{n+1}_{\mathrm{in}} = \DM^n_{\mathrm{in}} + G \Res[n]
      + \sum_{i=n-N+1}^{n-1} \alpha_i (\Res[i] + G \RRes[i]),
    \end{equation}
    where $G$ is the damping factor of the Pulay mixing (also known as
    the mixing weight).
    The values $\alpha_i$ are calculated as
    \begin{equation}
      \alpha_i = - \sum_{j=n-N+1}^{n-1}\mathbf A_{ji}^{-1} 
         \langle \RRes[j] | \Res[n] \rangle,
    \end{equation}    
    with $\mathbf A_{ji} = \langle \RRes[j] | \RRes[i] \rangle$.

    In \siesta\ $G$ is a constant, and not a matrix.

    \note Pulay mixing is a special case of Broyden mixing, see the
    Broyden method.


    \option[Broyden] %
    \index{SCF!mixing!Broyden}
    The Broyden mixing method relies on the previous $N$
    steps in the history for calculating an optimal input
    $\DM^{n+1}_{\mathrm{in}}$ for the following iteration.  The
    equation can be written as
    \begin{equation}
      \DM^{n+1}_{\mathrm{in}} = \DM^n_{\mathrm{in}} + G \Res[n]
      - \sum_{i=n-N+1}^{n-1}\sum_{j=n-N+1}^{n-1} w_iw_j c_j \beta_{ij} (\Res[i] + G \RRes[i]),
    \end{equation}
    where $G$ is the damping factor (also known as
    the mixing weight).
    The weights may be expressed as
    \begin{align}
      w_i &= 1 \quad \text{for } i>0,
      \\
      c_i &= \langle \RRes[i] | \Res[n] \rangle,
      \\
      \beta_{ij} &= \Big[\big(w_0^2\mathbf I + \mathbf
      A\big)^{-1}\Big]_{ij},
      \\
      A_{ij} &= w_iw_j \langle \RRes[i] | \RRes[j] \rangle.
    \end{align}
    It should be noted that $w_i$ for $i>0$ may be chosen arbitrarily.
    Comparing with the Pulay mixing scheme, it is clear that Broyden
    and Pulay mixing are equivalent for a suitable choice of parameters.

  \end{fdfoptions}
  
\end{fdfentry}
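
The core of the Pulay/DIIS update above can be sketched in a few lines
of Python (a toy illustration only, not \siesta's Fortran
implementation; the linear fixed-point problem, the history length
$N=5$ and the weight $G=0.25$ are invented for the example):

```python
import numpy as np

def pulay_step(xs, rs, G):
    """One DIIS/Pulay update: find coefficients c (summing to 1) that
    minimize |sum_i c_i R_i|, then mix the extrapolated (x, R) pair."""
    n = len(rs)
    # Bordered Gram matrix for the constrained least-squares problem
    A = np.zeros((n + 1, n + 1))
    for i in range(n):
        for j in range(n):
            A[i, j] = rs[i] @ rs[j]
    A[:n, :n] /= max(A[:n, :n].max(), 1e-30)   # scale for conditioning
    A[n, :n] = A[:n, n] = 1.0
    rhs = np.zeros(n + 1)
    rhs[n] = 1.0                                # sum of coefficients = 1
    c = np.linalg.solve(A, rhs)[:n]
    return sum(ci * (xi + G * ri) for ci, xi, ri in zip(c, xs, rs))

# Toy linear fixed-point problem x = f(x) = b + M x, with M a contraction
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
M *= 0.25 / np.linalg.norm(M, 2)               # spectral norm 0.25
b = rng.standard_normal(6)
x_star = np.linalg.solve(np.eye(6) - M, b)     # exact fixed point

x, xs, rs = np.zeros(6), [], []
for it in range(50):
    r = (b + M @ x) - x                        # residual R = f(x) - x
    if np.linalg.norm(r) < 1e-10:
        break
    xs, rs = (xs + [x])[-5:], (rs + [r])[-5:]  # keep N = 5 history steps
    x = pulay_step(xs, rs, G=0.25)

assert np.linalg.norm(x - x_star) < 1e-8
```

On a linear problem the DIIS extrapolation over the stored residuals
converges far faster than plain linear mixing with the same weight.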

\begin{fdfentry}{SCF.Mixer!Variant}[string]<original>

  Choose the variant of the mixing method.

  \begin{fdfoptions}

    \option[Pulay] %
    This is implemented in two variants:
    \begin{fdfoptions}

      \option[original$\vert$kresse]%
      The original\footnote{The ``original'' version is itself a
          variant, but it is more stable in the vast majority
          of cases.} Pulay mixing scheme, as implemented in
      \citet{Kresse1996}.
      
      \option[GR] %
      The ``guaranteed-reduction'' variant of
      Pulay~\cite{Bowler2000}. This variant has a special convergence
      path: it alternates between linear and Pulay mixing, thus using
      the exact gradient at each $\DM^n_{\mathrm{in}}$.  For
      relatively simple systems this may be advantageous.
      However, for complex systems it may perform worse until it
      reaches a convergence basin.

      To obtain the original guaranteed-reduction variant one should
      set \fdf{SCF.Mixer.<>!weight.linear} to $1$.

    \end{fdfoptions}

  \end{fdfoptions}

\end{fdfentry}

\begin{fdfentry}{SCF.Mixer!Weight}[real]<0.25>%
  \fdfdeprecates{DM!MixingWeight}%
  \fdfindex*{DM!MixingWeight}[|see SCF.Mixer.Weight]

  The mixing weight used to mix the quantity.
  In the linear mixing case this refers to
  \begin{equation}
    \DM^{n+1}_{\mathrm{in}} = \DM^n_{\mathrm{in}} + w \Res[n].
  \end{equation}
  For details regarding the other methods please see
  \fdf{SCF.Mixer!Method}.

  \note the older keyword \fdf{DM!MixingWeight} is used if this key is
  not found in the input.

\end{fdfentry}


\begin{fdfentry}{SCF.Mixer!History}[integer]<2>%
  \fdfdeprecates{DM.NumberPulay,DM.NumberBroyden}%
  \fdfindex*{DM.NumberPulay}[|see SCF.Mixer.History]%
  \fdfindex*{DM.NumberBroyden}[|see SCF.Mixer.History]

  Number of previous SCF steps used in estimating the following input.
  Increasing this number typically improves stability; values of
  around 6 or above are advisable.

  \note the older keyword \fdf{DM.NumberPulay}/\fdf{DM.NumberBroyden}
  is used if this key is not found in the input.

\end{fdfentry}


\begin{fdfentry}{SCF.Mixer!Kick}[integer]<0>%
  \fdfindex*{DM.NumberKick}[|see SCF.Mixer.Kick]

  After every $N$ SCF steps a linear mix is inserted to \emph{kick}
  the SCF cycle out of a possible local minimum. 

  The mixing weight for this linear kick is determined by \fdf{SCF.Mixer!Kick.Weight}.
  
\end{fdfentry}

\begin{fdfentry}{SCF.Mixer!Kick.Weight}[real]<\fdfvalue{SCF.Mixer!Weight}>%
  \fdfindex*{DM!KickMixingWeight}[|see SCF.Mixer.Kick.Weight]

  The mixing weight for the linear kick (if used).
  
\end{fdfentry}
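
For example, a linear kick every 20 SCF steps with its own weight
could be requested as follows (the values are illustrative only):
\begin{fdfexample}
  SCF.Mixer.Kick 20
  SCF.Mixer.Kick.Weight 0.1
\end{fdfexample}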



\begin{fdfentry}{SCF.Mixer!Restart}[integer]<0>

  When using advanced mixers (Pulay/Broyden) the mixing scheme may
  periodically restart the history. This may greatly improve the
  convergence path, as local constraints in the minimization process
  are periodically removed. This method is similar to the one
  proposed in \citet{Banerjee2016} and is a special case of the
  \fdf{SCF.Mixer!Kick} method.

  Please see \fdf{SCF.Mixer!Restart.Save} which is advised to be set
  simultaneously. 
  
\end{fdfentry}

\begin{fdfentry}{SCF.Mixer!Restart.Save}[integer]<1>

  When restarting the history of saved SCF steps one may choose to
  save a subset of the latest history steps.
  %
  When using \fdf{SCF.Mixer!Restart} it is encouraged to also save a
  couple of previous history steps.
  
\end{fdfentry}


\begin{fdfentry}{SCF.Mixer!Linear.After}[integer]<-1>

  After reaching convergence one may run additional SCF cycles using a
  linear mixing scheme. If this has a value $\ge 0$ \siesta\ will
  perform linear mixing after it has converged using the regular
  mixing method (\fdf{SCF.Mixer!Method}).

  The mixing weight for this linear mixing is controlled by \fdf{SCF.Mixer!Linear.After.Weight}.

\end{fdfentry}



\begin{fdfentry}{SCF.Mixer!Linear.After.Weight}[real]<\fdfvalue{SCF.Mixer!Weight}>

  The mixing weight used for the additional linear-mixing SCF cycles
  performed after convergence; see \fdf{SCF.Mixer!Linear.After}.

\end{fdfentry}

In conjunction with the simple settings above controlling the SCF
cycle, \siesta\ employs a very configurable mixing scheme. In essence,
one may switch mixing methods arbitrarily during the SCF cycle via
control commands. This can greatly speed up convergence.

\begin{fdfentry}{SCF.Mixers}[block]
  
  Each line in this block defines a separate mixer that is defined in
  a subsequent \fdf{SCF.Mixer.<>} block.

  The first line is the initial mixer used.

  See the following options for controlling individual mixing
  methods. 
  
  \note If this block is defined you \emph{must} define all mixing
  parameters individually.

\end{fdfentry}


\begin{fdfentry}{SCF.Mixer.<>}[block]

  This block controls the mixer named \fdf*{<>}. 

  \begin{fdfoptions}

    \option[method]%
    \fdfindex*{SCF.Mixer.<>!method}%
    Define the method for the mixer, see \fdf{SCF.Mixer!Method} for
    possible values.

    \option[variant]%
    \fdfindex*{SCF.Mixer.<>!variant}%
    Define the variant of the method, see \fdf{SCF.Mixer!Variant} for
    possible values.

    \option[weight|w]%
    \fdfindex*{SCF.Mixer.<>!weight}%
    Define the mixing weight for the mixing scheme, see
    \fdf{SCF.Mixer!Weight}.

    \option[history]%
    \fdfindex*{SCF.Mixer.<>!history}%
    Define number of previous history steps used in the minimization process, see
    \fdf{SCF.Mixer!History}.

    \option[weight.linear|w.linear]%
    \fdfindex*{SCF.Mixer.<>!weight.linear}%
    Define the linear mixing weight for the mixing scheme. This only
    has meaning for Pulay or Broyden mixing. It defines the initial
    linear mixing weight. 

    To obtain the original Pulay guaranteed-reduction variant one
    should set this to $1$.

    \option[restart]%
    \fdfindex*{SCF.Mixer.<>!restart}%
    Define the periodic restart of the saved history, see
    \fdf{SCF.Mixer!Restart}.

    \option[restart.save]%
    \fdfindex*{SCF.Mixer.<>!restart.save}%
    Define number of latest history steps retained when restarting the
    history, see \fdf{SCF.Mixer!Restart.Save}.

    \option[iterations]%
    \fdfindex*{SCF.Mixer.<>!iterations}%
    Define the maximum number of iterations this mixer should run
    before changing to another mixing method.

    \note This \emph{must} be used in conjunction with the \fdf*{next} setting.

    \option[next \fdf*{<>}]%
    \fdfindex*{SCF.Mixer.<>!next}%
    Specify the name of the next mixing scheme after having conducted
    \fdf*{iterations} SCF cycles using this mixing method.

    \option[next.conv \fdf*{<>}]%
    \fdfindex*{SCF.Mixer.<>!next.conv}%
    If SCF convergence is reached using this mixer, switch to the
    mixing scheme via \fdf*{<>}. Then proceed with the SCF cycle.

    \option[next.p]%
    \fdfindex*{SCF.Mixer.<>!next.p}%
    If the relative difference between the latest two residuals is
    below this quantity, the mixer will switch to the method given in
    \fdf*{next}.
    Thus if
    \begin{equation}
      \frac{\langle \Res[i]|\Res[i]\rangle - \langle
          \Res[i-1]|\Res[i-1]\rangle}%
      {\langle \Res[i-1]|\Res[i-1]\rangle} <
      \fdf*{next.p}
    \end{equation}
    is fulfilled it will skip to the next mixer.

    \option[restart.p]%
    \fdfindex*{SCF.Mixer.<>!restart.p}%
    If the relative difference between the latest two residuals is
    below this quantity, the mixer will restart the history.
    Thus if
    \begin{equation}
      \frac{\langle \Res[i]|\Res[i]\rangle - \langle
          \Res[i-1]|\Res[i-1]\rangle}%
      {\langle \Res[i-1]|\Res[i-1]\rangle} <
      \fdf*{restart.p}
    \end{equation}
    is fulfilled it will reset the history.
    
  \end{fdfoptions}
    
\end{fdfentry}


The options covered above may be illustrated with a few examples.
Suppose the input file contains:
\begin{fdfexample}
  SCF.Mixer.Method pulay
  SCF.Mixer.Weight 0.05
  SCF.Mixer.History 10
  SCF.Mixer.Restart 25
  SCF.Mixer.Restart.Save 4
  SCF.Mixer.Linear.After 0
  SCF.Mixer.Linear.After.Weight 0.1
\end{fdfexample}

This may be equivalently set up using the more advanced input blocks:
\begin{fdfexample}
  %block SCF.Mixers
    init
    final
  %endblock

  %block SCF.Mixer.init
     method pulay
     weight 0.05
     history 10
     restart 25
     restart.save 4
     next.conv final
  %endblock

  %block SCF.Mixer.final
     method linear
     weight 0.1
  %endblock
\end{fdfexample}

This advanced setup may be used to change mixers during the SCF cycle,
either to adjust certain parameters of the mixing method or to change
the method entirely. For instance, it may be advantageous to increase
the mixing weight once a certain degree of self-consistency has been
reached. In the following example we switch to a different scheme by
increasing the weight and decreasing the number of history steps:
\begin{fdfexample}
  %block SCF.Mixers
    init
    final
  %endblock

  %block SCF.Mixer.init
     method pulay
     weight 0.05
     history 10
     next final
     # Switch when the relative residual goes below 5%
     next.p 0.05
  %endblock

  %block SCF.Mixer.final
     method pulay
     weight 0.1
     history 6
  %endblock
\end{fdfexample}

In essence, very complicated convergence schemes may be created
using these block inputs.

The following options refer to the global treatment of how/when mixing
should be performed.


% Only show if not showing the other options
\begin{fdflogicalF}{Compat!Pre-v4-DM-H}
  \index{Backward compatibility}
  \index{SCF!compat-pre4-dm-h}

  This controls the default values of \fdf{SCF.Mix!AfterConvergence},
  \fdf{SCF!RecomputeHAfterSCF} and \fdf{SCF.Mix!First}.
  
  In versions prior to v4 the former two options were defaulted to
  \fdftrue\ while the latter was defaulted to \fdffalse.
  
\end{fdflogicalF}

\begin{fdflogicalF}{SCF.Mix!AfterConvergence}
  \index{SCF!mixing}%
  \index{SCF!mixing!end of cycle}%

  Indicate whether mixing is done in the last SCF cycle (after
  convergence has been achieved) or not. Not mixing after convergence
  improves the quality of the final Kohn-Sham energy and of the forces
  when mixing the DM.

  \note See \fdf{Compat!Pre-v4-DM-H}.

\end{fdflogicalF}

\begin{fdflogicalF}{SCF!RecomputeHAfterSCF}
  \index{SCF!Recomputing H}%
  
  Indicate whether the Hamiltonian is updated after the SCF cycle,
  while computing the final energy, forces, and stresses. Not
  recomputing H makes further analysis tasks (such as the computation
  of band structures) more consistent, as they will be able to use the
  same H used to generate the last density matrix.
  
  \note See \fdf{Compat!Pre-v4-DM-H}.

\end{fdflogicalF}


\ifdeprecated
\subsubsection{Deprecated mixing options}

\begin{description}

\index{MixHamiltonian@\textbf{MixHamiltonian}}\index{SCF!mixing!Hamiltonian}
\item[\textbf{MixHamiltonian}] (\textit{logical}):
Mixing of the Hamiltonian instead of the density matrix, a
feature previously available only for \tsiesta\ runs, has been
implemented for general use. It is enabled by setting either the
\textbf{MixHamiltonian} option (preferred) or the old-style \textbf{TS.MixH}
option (deprecated but retained due to its use in \tsiesta).

The evidence obtained so far from test runs indicates that Hamiltonian
mixing might be better than density-matrix mixing in most cases, and
is seldom worse. Users are encouraged to test this feature for their
favorite tough-converging systems, and report their experiences.

The H mixing algorithms available are exactly the same as for
density-matrix mixing (Pulay, Broyden, Fire), and the keywords
controlling it are also the same, e.g. \texttt{DM.MixingWeight} applies
both to H and DM mixing.

\item[\textbf{DM.MixSCF1}] (\textit{logical}):\index{SCF!mixing}
\index{DM.MixSCF1@\textbf{DM.MixSCF1}}\index{SCF!mixing}
Logical variable to indicate whether mixing is done in the
first SCF cycle or not. Usually, mixing should not be done in
the first cycle, to avoid non-idempotency in the density matrix
from Harris or previous steps. It can be useful, though,
for restarts of self-consistency runs.

\textit{Default value:} \texttt{.true.}

\item[\textbf{DM.MixingWeight}] (\textit{real}):
\index{DM.MixingWeight@\textbf{DM.MixingWeight}}
\index{DM.MixingWeight@\textbf{DM.MixingWeight}|see SCF.Mix.Weight}
\index{SCF!mixing!linear}
\index{SCF.Mix.Weight@\textbf{SCF.Mix.Weight}}

Proportion $\alpha$ of
output Density Matrix to be used for the input Density Matrix of
next SCF cycle (linear mixing):
$\rho^{n+1}_{in} = \alpha \rho^n_{out}
+(1 - \alpha) \rho^n_{in}$.

\textit{Default value:} \texttt{0.25}

Note that this parameter is also used in more sophisticated mixing 
approaches (i.e. Pulay).

Pulay mixing (also known as DIIS extrapolation) is the method of
choice for accelerating the convergence of the scf cycle.

\item[\textbf{DM.NumberPulay}] (\textit{integer}):\index{Pulay mixing}
\index{DM.NumberPulay@\textbf{DM.NumberPulay}}\index{SCF!mixing!Pulay}
It controls the Pulay convergence accelerator. Pulay mixing generally
accelerates convergence quite significantly, and can
reach convergence in cases where linear mixing cannot.
%One Pulay mixing will be performed every \textbf{DM.NumberPulay} SCF
%iterations, the other iterations using linear mixing. If
%it is less than 2, only linear mixing is used.
The guess for the $n+1$ iteration is constructed using the
input and output matrices of the \textbf{DM.NumberPulay} former
SCF cycles, in the following way:
$\rho^{n+1}_{in} = \alpha_P \bar{\rho}^{n}_{out}
+(1 - \alpha_P) \bar{\rho}^{n}_{in}$, where $\bar{\rho}^{n}_{out}$
and $\bar{\rho}^{n}_{in}$ are constructed from the previous
$N=$\textbf{DM.NumberPulay} cycles:
%
\begin{equation}
\bar{\rho}^{n}_{out} = \sum_{i=1}^N
\beta_i \rho_{out}^{(n-N+i)} \hspace{0.5truecm}; \hspace{0.5truecm}
\bar{\rho}^{n}_{in} = \sum_{i=1}^N
\beta_i \rho_{in}^{(n-N+i)}.
\nonumber
\end{equation}
%
The values of $\beta_i$ are obtained by minimizing the distance
between $\bar{\rho}^{n}_{out}$ and $\bar{\rho}^{n}_{in}$.
The value of $\alpha_P$ is given by default by the variable
\textbf{DM.MixingWeight}, although it can be set directly by
\textbf{SCF.Pulay.Damping}.

If \textbf{DM.NumberPulay} is 0 or 1, simple linear mixing is
performed.

\textit{Default value:} \texttt{0}


\item[\textbf{SCF.Pulay.Damping}] (\textit{real}):\index{SCF!mixing}
\index{SCF.Pulay.Damping@\textbf{SCF.Pulay.Damping}}\index{SCF!mixing!Pulay!damping}

Proportion $\alpha_P$ of the predicted
output Density Matrix to be used for the input Density Matrix of
next SCF cycle in Pulay mixing (see above). Typically, this can be
significantly higher than the mixing parameter used for linear
mixing. 

\textit{Default value:} \textit{(the value of \textbf{DM.MixingWeight})}

\item[\textbf{SCF.PulayMinimumHistory}] (\textit{integer}):\index{Pulay mixing}
\index{SCF.PulayMinimumHistory@\textbf{SCF.PulayMinimumHistory}}\index{SCF!mixing!Pulay}

Pulay mixing might kick in only after a specified number of history
steps have been built up.

\textit{Default value:} \texttt{2}

\item[\textbf{SCF.PulayDmaxRegion}] (\textit{real}):\index{SCF!mixing!Pulay}
\index{SCF.PulayDmaxRegion@\textbf{SCF.PulayDmaxRegion}}\index{SCF!mixing!Pulay!close region}

Pulay mixing might not work well if far from the fixed point. This option
will avoid inserting the current $X_{in}$, $X_{out}$ pair ($X$ could be
the density matrix or the Hamiltonian) in the history stack if
the maximum difference is above the specified number.

\textit{Default value:} \textit{(a very high number, so no effect by
  default)}

\item[\textbf{DM.NumberKick}] (\textit{integer}):\index{Linear mixing kick}
\index{DM.NumberKick@\textbf{DM.NumberKick}}
%\index{SCF!mixing!linear!Pulay!Broyden}
\index{SCF!mixing!linear}
Option to skip the Pulay (or Broyden) mixing every certain number of
iterations, and use a linear mixing instead. Linear mixing is done
every \textbf{DM.NumberKick} iterations, using a mixing coefficient
$\alpha$ given by the variable \textbf{DM.KickMixingWeight}
(instead of the usual \textbf{DM.MixingWeight}).
In some difficult cases this allows the SCF cycle to escape
loops in which the self-consistency is stuck.
If \textbf{DM.NumberKick} is 0, no linear kick is used.

\textit{Default value:} \texttt{0}

\item[\textbf{DM.KickMixingWeight}] (\textit{real}):\index{SCF!mixing!Pulay!Broyden}
\index{DM.KickMixingWeight@\textbf{DM.KickMixingWeight}}
%\index{SCF!mixing!linear!Pulay!Broyden}
\index{SCF!mixing!linear}
Proportion $\alpha$ of
output Density Matrix to be used for the input Density Matrix of
next SCF cycle (linear mixing):
$\rho^{n+1}_{in} = \alpha \rho^{n}_{out}
+(1 - \alpha) \rho^{n}_{in}$, for linear mixing kicks within the
Pulay or Broyden mixing schemes.
This mixing is done every \textbf{DM.NumberKick} cycles.

\textit{Default value:} \texttt{0.50}


\item[\textbf{SCF.Pulay.LinearMixing.Before}] (\textit{integer}):
\index{SCF.Pulay.LinearMixing.Before@\textbf{SCF.Pulay.LinearMixing.Before}}
\index{SCF!mixing!Pulay} 

Instead of starting the Pulay mixing immediately, one can perform
several linear mixing steps first. This keeps those initial steps out
of the Pulay history.

This can be particularly useful when switching \emph{on the fly}
between different solution methods.

\textit{Default value:} \texttt{0}

\item[\textbf{SCF.Pulay.MixingWeight.Before}] (\textit{real}):
\index{SCF.Pulay.MixingWeight.Before@\textbf{SCF.Pulay.MixingWeight.Before}}
\index{SCF!mixing!Pulay}

Proportion $\alpha$ of output $X_{out}$ to be used for the input
$X_{in}$ of the next SCF cycle, for the initial linear mixing steps
performed before Pulay mixing starts (see
\textbf{SCF.Pulay.LinearMixing.Before}).

\textit{Default value:} \texttt{0.01}

\item[\textbf{SCF.Pulay.LinearMixing.After}] (\textit{logical}):
\index{SCF.Pulay.LinearMixing.After@\textbf{SCF.Pulay.LinearMixing.After}}
\index{SCF!mixing!Pulay} 

The damping of the DIIS-predicted $X_{in}$ is done to avoid introducing
linear dependencies into the Pulay history stack. Alternatively (or
simultaneously) one can use the most recent $X_{in}$, $X_{out}$ pair in a
linear mixing step, and use a possibly different mixing
parameter. This would be akin to a ``kick'', but without removing all
the history information.

\textit{Default value:} \texttt{.false.}

\item[\textbf{SCF.Pulay.MixingWeight.After}] (\textit{real}):
\index{SCF.Pulay.MixingWeight.After@\textbf{SCF.Pulay.MixingWeight.After}}
\index{SCF!mixing!Pulay}

Proportion $\alpha$ of output Density Matrix to be used for the input
Density Matrix of next SCF cycle, for linear mixing
steps after a Pulay mixing step if the option \textbf{SCF.Pulay.LinearMixing.After} is activated.

\textit{Default value:} \texttt{0.50}

\item[\textbf{SCF.Pulay.UseSVD}] (\textit{logical}):
  \index{SCF.Pulay.UseSVD@\textbf{SCF.Pulay.UseSVD}}\index{SCF!mixing!Pulay!SVD} 

Instead of a direct matrix inversion, the more robust SVD algorithm can
be used to perform the DIIS extrapolation.

\textit{Default value:} \texttt{.false.}

\item[\textbf{SCF.Pulay.DebugSVD}] (\textit{logical}):
\index{SCF.Pulay.DebugSVD@\textbf{SCF.Pulay.DebugSVD}}
\index{SCF!mixing!Pulay!SVD} 

Print more information (effective rank, singular values) if the SVD algorithm
is used to perform the DIIS extrapolation.

\textit{Default value:} {\texttt{.true.} \textit{when using SVD}}

\item[\textbf{SCF.Pulay.RcondSVD}] (\textit{real}):\index{SCF!mixing!Pulay!SVD}
\index{SCF.Pulay.RcondSVD@\textbf{SCF.Pulay.RcondSVD}}
\index{SCF!mixing!Pulay!SVD}

Singular values which are smaller than \texttt{rcond} times the maximum
singular value are effectively discarded in the SVD algorithm for
solving the DIIS equations. This lowers the effective rank of the
problem, and is a sign of (near) linear dependencies in the
extrapolation data.

\textit{Default value:} \texttt{10$^{-8}$}


\item[\textbf{DM.PulayOnFile}] (\textit{logical}):
\index{DM.PulayOnFile@\textbf{DM.PulayOnFile}}

\textbf{NOTE:} This feature is temporarily disabled pending a proper
implementation that works well in parallel.

Store intermediate information of Pulay mixing in files
(\texttt{.true.}) or in memory (\texttt{.false.}).
Memory storage can increase considerably the
memory requirements for large systems.
If files are used, the filenames will be
\texttt{SystemLabel}.P1 and
\texttt{SystemLabel}.P2,
where SystemLabel is the name associated
to parameter \texttt{SystemLabel}.

\textit{Default value:} \texttt{.false.}



\item[\textbf{DM.NumberBroyden}] (\textit{integer}):\index{Broyden mixing}
\index{DM.NumberBroyden@\textbf{DM.NumberBroyden}}\index{SCF!mixing!Broyden}
It controls the Broyden-Vanderbilt-Louie-Johnson
convergence accelerator, which is based on the use of past information
(up to \textbf{DM.NumberBroyden} steps) to construct the input density
matrix for the next iteration.

See D.~D. Johnson, Phys. Rev. B \textbf{38}, 12807 (1988), and references therein;
Kresse and Furthm\"uller, Comp. Mat. Sci. \textbf{6}, 15 (1996).

If \textbf{DM.NumberBroyden} is 0, the program performs linear mixings,
or, if requested, Pulay mixings.

Broyden mixing takes precedence over Pulay mixing if both are
specified in the input file.

\textbf{Note:} The Broyden mixing algorithm is still in development,
notably with regard to the effect of its various modes of operation and
the assignment of weights. In its default mode, its effectiveness is
very similar to Pulay mixing. As memory usage is not yet optimized,
casual users might want to stick with Pulay mixing for now.

\textit{Default value:} \texttt{0}

\item[\textbf{DM.Broyden.Cycle.On.Maxit}] (\textit{logical}):
\index{DM.Broyden.Cycle.On.Maxit@\textbf{DM.Broyden.Cycle.On.Maxit}}
\index{SCF!mixing!Broyden}
Upon reaching the maximum number of historical data sets which are
kept for Broyden mixing (see description of variable \fdf{DM.NumberBroyden}), throw away the oldest and shift the rest to make
room for a new data set. This procedure tends, heuristically, to
perform better than the alternative, which is to re-start the Broyden
mixing algorithm from a first step of linear mixing.

\textit{Default value:} \texttt{.true.}

\item[\textbf{DM.Broyden.Variable.Weight}] (\textit{logical}):
\index{DM.Broyden.Variable.Weight@\textbf{DM.Broyden.Variable.Weight}}
\index{SCF!mixing!Broyden}
If \texttt{.true.}, the different historical data sets used in
the Broyden mixing (see description of variable \fdf{DM.NumberBroyden}) are assigned a weight depending on the
norm of their residual $\rho^n_{out}-\rho^n_{in}$.

\textit{Default value:} \texttt{.true.}


\end{description}

\fi


\subsubsection{Mixing of the Charge Density}
\index{SCF!mixing!Charge}

See \fdf{SCF.Mix} on how to enable charge density mixing. If charge
density mixing is enabled, the Fourier components of the charge density
are mixed, as done in some plane-wave codes (see for example Kresse
and Furthm\"uller, Comp. Mat. Sci. \textbf{6}, 15--50 (1996), KF in what
follows).

The charge mixing is implemented roughly as follows:
\begin{itemize}
  \item The charge density computed in dhscf is Fourier-transformed
  and stored in a new module. This is done both for
  ``$\rho(\mathbf{G})(\mathrm{in})$'' and
  ``$\rho(\mathbf{G})(\mathrm{out})$'' (the ``out'' charge is computed
  during the extra call to dhscf for correction of the variational
  character of the Kohn-Sham energy).

  \item The ``in'' and ``out''
  charges are mixed (see below), and the resulting ``in'' Fourier
  components are used by dhscf in successive iterations to reconstruct
  the charge density.

  \item The new arrays needed and the processing
  of most new options are handled in the new module m\_rhog.F90. The
  Fourier transforms are carried out by code in rhofft.F.

  \item
  Following standard practice, two options for mixing are offered:
  \begin{itemize}
    \item A simple Kerker mixing, with an optional Thomas-Fermi wavevector to 
    damp the contributions for small G's. The overall mixing weight is
    the same as for other kinds of mixing, read from \fdf{DM!MixingWeight}.
    
    \item A DIIS (Pulay) procedure that takes into account a sub-set of
    the G vectors (those within a smaller cutoff). Optionally, the
    scalar product used for the construction of the DIIS matrix from
    the residuals uses a weight factor. 
    
    The DIIS extrapolation is followed by a  Kerker mixing step.
    
    The code is m\_diis.F90. The DIIS history is kept in a circular
    stack, implemented using the new framework for reference-counted
    types. This might be overkill for this particular use, and there
    are a few rough edges, but it works well.

  \end{itemize}

\end{itemize}

The default convergence criterion remains based on the differences in
the density matrix, but in this case the differences are from step to
step, not the more fundamental \texttt{DM\_out-DM\_in}. Perhaps some
other criterion should be made the default ($\max |\Delta \rho(\mathbf{G})|$,
convergence of the free energy...)

Note that with charge mixing the Harris energy as it is currently
computed in Siesta loses its meaning, since there is no
\texttt{DM\_in}. The program prints zeroes in the Harris energy field.

Note that the KS energy is correctly computed throughout the scf
cycle, as there is an extra step for the calculation of the charge
stemming from \texttt{DM\_out}, which also updates the
energies. Forces and final energies are correctly computed with the
final \texttt{DM\_out}, regardless of the setting of the option for
mixing after scf convergence.

Initial tests suggest that charge mixing has some desirable properties
and could be a drop-in replacement for density-matrix mixing, but many
more tests are needed to calibrate its efficiency for different kinds
of systems, and the heuristics for the (perhaps too many) parameters:


\begin{fdfentry}{SCF.Kerker.q0sq}[energy]<$0\,\mathrm{Ry}$>

  Determines the parameter $q_0^2$ featuring in the Kerker
  preconditioning, which is always performed on all components of
  $\rho(\mathbf{G})$, even those treated with the DIIS scheme. 
  
\end{fdfentry}
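For orientation, the standard form of the Kerker-preconditioned linear
update (as in KF; the exact expression implemented in \siesta\ should
be checked against m\_rhog.F90) is
\[
\rho^{n+1}_{in}(\mathbf{G}) \;=\; \rho^{n}_{in}(\mathbf{G})
\;+\; \alpha\,\frac{G^{2}}{G^{2}+q_{0}^{2}}
\left[\rho^{n}_{out}(\mathbf{G})-\rho^{n}_{in}(\mathbf{G})\right],
\]
where $\alpha$ is the mixing weight read from
\fdf{DM!MixingWeight}. The factor $G^{2}/(G^{2}+q_{0}^{2})$ damps the
small-$G$ (long-wavelength) components responsible for charge
sloshing; with $q_{0}^{2}=0$ the update reduces to plain linear
mixing.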

\begin{fdfentry}{SCF.RhoGMixingCutoff}[energy]<$9\,\mathrm{Ry}$>

  Determines the sub-set of G vectors which will undergo the DIIS
  procedure.  Only those with kinetic energies below this cutoff will
  be considered.  The optimal extrapolation of the $\rho(\textbf{G})$
  elements will be replaced in the fourier series before performing
  the Kerker mixing.
  
\end{fdfentry}

\begin{fdfentry}{SCF.RhoG.DIIS.Depth}[integer]<0>

  Determines the maximum number of previous steps considered in the DIIS
  procedure. 
  
\end{fdfentry}


\textbf{NOTE}: The information from the first scf step is not included in
the DIIS history. There is no provision yet for any other kind of
``kick-starting'' procedure. The logic is in m\_rhog (rhog\_mixing routine).

\begin{fdfentry}{SCF.RhoG.Metric.Preconditioner.Cutoff}[energy]

  Determines the value of $q_1^2$ in the weighting of the different \textbf{G}
  components in the scalar products among residuals in the DIIS
  procedure. Following the KF ansatz, this parameter is chosen so that
  the smallest (non-zero) \textbf{G} has a weight 20 times larger than that of
  the largest G vector in the DIIS set.

  The default is the result of the KF prescription.

\end{fdfentry}
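As a guide, the KF metric for the DIIS scalar products weights each
component as follows (a sketch of the KF prescription; the implemented
form lives in m\_rhog.F90):
\[
\langle R_i | R_j \rangle \;=\; \sum_{\mathbf{G}} f(\mathbf{G})\,
R_i^{*}(\mathbf{G})\,R_j(\mathbf{G}),
\qquad
f(\mathbf{G}) \;=\; \frac{G^{2}+q_{1}^{2}}{G^{2}},
\]
which gives larger weight to the small-$G$ residual components.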

\begin{fdflogicalF}{SCF.DebugRhoGMixing}
  \index{SCF!mixing!Charge}

  Controls the level of debugging output in the mixing procedure
  (basically whether the first few stars worth of Fourier components are
  printed). Note that this feature will only display the components
  on the master node.

\end{fdflogicalF} 


\begin{fdflogicalF}{Debug!DIIS}
  \index{SCF!mixing!Charge}

  Controls the level of debugging output in the DIIS procedure. If set,
  the program prints the DIIS matrix and the extrapolation coefficients.
  
\end{fdflogicalF}

\begin{fdflogicalF}{SCF.MixCharge!SCF1}
  \index{SCF!mixing!Charge}


  Logical variable to indicate whether or not the charge is mixed in the
  first SCF cycle. Anecdotal evidence indicates that it might be
  advantageous, at least for calculations started from scratch, to skip
  that first mixing and retain the ``out'' charge density as ``in'' for
  the next step.

\end{fdflogicalF}
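By way of illustration, a charge-mixing setup combining the options in
this section might look like the following (all numerical values are
placeholders to be tuned per system, and \fdf{SCF.Mix} is assumed to
be set to its charge-mixing value):
\begin{fdfexample}
   SCF.Mix               charge
   DM.MixingWeight       0.15
   SCF.Kerker.q0sq       0.5 Ry
   SCF.RhoGMixingCutoff  9. Ry
   SCF.RhoG.DIIS.Depth   5
\end{fdfexample}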



\subsubsection{Initialization of the density-matrix}

NOTE: The conditions and options for density-matrix re-use are quite
varied and not completely orthogonal at this point. For further
information, see routine \file{Src/m\_new\_dm.F}. What follows is a
summary.

The Density matrix can be:

\begin{verbatim}
    1. Synthesized directly from atomic occupations.
       (See the options below for spin considerations)
    2. Read from a .DM file (if the appropriate options are set)
    3. Extrapolated from previous geometry steps
       (this includes as a special case the re-use of the DM 
        of the previous geometry iteration)

    In cases 2 and 3, the structure of the read or extrapolated DM
    is automatically adjusted to the current sparsity pattern.

    In what follows, "Initialization" of the DM means that the DM is
    either read from file (if available) or synthesized from atomic
    data. This is confusing, and better terminology should be used.


    Special cases:

           Harris functional: The matrix is always initialized

           Force calculation: The DM should be written to disk
                              at the time of the "no displacement"
                              calculation and read from file at
                              every subsequent step.

           Variable-cell calculation:
   
             If the auxiliary cell changes, the DM is forced to be
             synthesized (conceivably one could rescue some important
             information from an old DM, but it is too much trouble
             for now). NOTE that this is a change in policy with respect
             to previous versions of the program, in which a (blind?)
             re-use was allowed, except if 'ReInitialiseDM' was 'true'.
             Now 'ReInitialiseDM' is 'true' by default. Setting it to
             'false' is not recommended.

             In all other cases (including "server operation"), the
             default is to allow DM re-use (with possible extrapolation)
             from previous geometry steps.

             For "CG" calculations, the default is not to extrapolate the
             DM (unless requested by setting 'DM.AllowExtrapolation' to
             "true"). The previous step's DM is reused.

             The fdf variables 'DM.AllowReuse' and 'DM.AllowExtrapolation'
             can be used to turn off DM re-use and extrapolation.

\end{verbatim}


\begin{fdflogicalF}{DM!UseSaveDM}
  \index{reading saved data!density matrix}
  
  Instructs to read the density matrix stored in file
  \sysfile{DM} by a previous run.
  
  \siesta\ will continue even if \sysfile*{DM} is not found.

  \note If the spin settings have changed, \siesta\ allows reading
  a \sysfile*{DM} from a similar calculation with a different \fdf{Spin}
  option. This may be advantageous when going from non-polarized
  calculations to polarized ones, and beyond; see \fdf{Spin} for details.

\end{fdflogicalF}

\begin{fdflogicalT}{DM!Init.Unfold}
  \fdfdepend{DM!UseSaveDM}
  \index{reading saved data!density matrix}

  When reading the DM from a previous calculation there may be
  inconsistencies in the auxiliary supercell. E.g. if the previous
  calculation did not use an auxiliary supercell and the current
  calculation does (adding $k$-point sampling). \siesta\ will
  automatically \emph{unfold} the $\Gamma$-only DM to the auxiliary
  supercell elements (if \fdftrue).

  For \fdffalse\ the DM elements are assumed to originate from an
  auxiliary supercell calculation and the sparse elements are not
  unfolded but directly copied.

  \note Generally this should not be touched. However, if the
  initial DM is generated using \sisl\cite{sisl} and only on-site DM
  elements are set, this should be set to \fdffalse.

\end{fdflogicalT}

\begin{fdflogicalF}{DM!FormattedFiles}
  \index{reading saved data!density matrix}

  Setting this alters the default for \fdf{DM!FormattedInput} and
  \fdf{DM!FormattedOutput}.
  It instructs the program to use formatted files for reading and writing
  the density matrix. In this case, the files are labelled
  \sysfile{DMF}.

  Only useful if one has problems transferring binary files from one
  computer to another.

\end{fdflogicalF}

\begin{fdflogicalF}{DM!FormattedInput}
  \index{reading saved data!density matrix}

  Instructs to use formatted files for reading the density
  matrix.
  
\end{fdflogicalF}

\begin{fdflogicalF}{DM!FormattedOutput}
  \index{reading saved data!density matrix}

  Instructs to use formatted files for writing the density
  matrix.
  
\end{fdflogicalF}

\begin{fdfentry}{DM!Init}<atomic>
  \index{spin!initialization}%

  Specifies the initial density matrix composition. The methods are
  compatible with a possible specification of \fdf{DM!InitSpin!AF}.
  Only a single option is available at present, but more could be
  implemented. See also \fdf{DM!Init.RandomStates}.
  
  \begin{fdfoptions}
    
    \option[atomic]%
    \fdfindex*{DM!Init!atomic}

    Only initialize the diagonal (on-site) elements of the density matrix according
    to the atomic ground-state populations of the atomic orbitals.

  \end{fdfoptions}

\end{fdfentry}


\begin{fdflogicalF}{DM!InitSpin!AF}
  \index{spin!initialization}%
  \index{ferromagnetic initial DM}%
  \index{antiferromagnetic initial DM}
  
  It defines the initial spin density for a spin polarized calculation.
  The spin density is initially constructed with the maximum possible
  spin polarization for each atom in its atomic configuration.
  This variable defines the relative orientation of the atomic
  spins:

  If \fdffalse\ the initial spin-configuration is a ferromagnetic
  order (all spins up).
  %
  If \fdftrue\ all odd atoms are initialized to spin-up, all even
  atoms are initialized to spin-down.

\end{fdflogicalF}
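For example, an antiferromagnetic starting configuration for a
spin-polarized run could be requested with:
\begin{fdfexample}
   Spin            polarized
   DM.InitSpin.AF  true
\end{fdfexample}
For more fine-grained control over individual atoms, use the
\fdf{DM!InitSpin} block instead.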

\begin{fdfentry}{DM!InitSpin}[block]
  
  Define the initial spin density for a spin polarized calculation
  atom by atom. In the block there is one line per atom to be
  spin-polarized, containing the atom index (integer, ordinal in the
  block \fdf{AtomicCoordinatesAndAtomicSpecies}) and the desired
  initial spin-polarization (real, positive for spin up, negative for
  spin down). A value larger than possible will be reduced to the
  maximum possible polarization, keeping its sign. Maximum
  polarization can also be given by introducing the symbol \texttt{+}
  or \texttt{-} instead of the polarization value. There is no need
  to include a line for every atom, only for those to be
  polarized. Atoms not listed in the block are given a
  non-polarized initialization.

  For non-colinear spin, the spin direction may be specified for each
  atom by the polar angle $\theta$ and the azimuthal angle $\phi$
  (using the physics ISO convention), given as the last two arguments
  in degrees. If not specified, $\theta=0$ is assumed
  ($z$-polarized). \fdf{Spin} must be set to use non-colinear or
  spin-orbit for the directions to have effect.

  Example:
  \begin{fdfexample}
     %block DM.InitSpin
        5  -1.   90.   0.   # Atom index, spin, theta, phi (deg)
        3   +    45. -90.
        7   -
     %endblock DM.InitSpin
  \end{fdfexample}
  In the above example, atom 5 is polarized in the $x$-direction.

  If this block is defined but empty, no atoms are polarized.
  This block has precedence over \fdf{DM!InitSpin!AF}.

\end{fdfentry}

\begin{fdfentry}{DM!Init.RandomStates}[integer]<0>

The program will 'remove' $N$ electrons from the initial density
matrix and add $N$ electrons in randomized 'states' (i.e., $N$ random
vectors which are normalized according to the S metric are used as
``synthetic states''). These extra states are not orthogonal to the
occupied manifold. The orbital coefficients of these states are scaled
with the atomic charges, to avoid populating high-lying shells.

This procedure is wholly experimental and meant to provide a kick to
the DM. It is inspired by the ``random-wavefunction'' initialization
used in some plane-wave codes. It is turned off by default.

This option only has an effect if the density matrix is initialized from an
atomic density and/or when using \fdf{DM!InitSpin}.

In case it is used together with \fdf{DM!InitSpin} it also randomizes
the spin-configuration, which may be undesirable.

\note This option is currently experimental since the randomized states are
not ensured to be orthogonal. This flag may be removed in later
revisions or superseded by other options. If testing this, start with
a value of $1$ to see if it has an effect; any higher numbers will
probably be worse.

\end{fdfentry}


\begin{fdflogicalT}{DM!AllowReuse}

  Controls whether density matrix information from previous geometry
  iterations is re-used to start the new geometry's SCF cycle.
  
\end{fdflogicalT}

\begin{fdflogicalT}{DM!AllowExtrapolation}

  Controls whether the density matrix information from several
  previous geometry iterations is extrapolated to start the new
  geometry's SCF cycle.  This feature is useful for molecular dynamics
  simulations and possibly also for geometry relaxations.  The number
  of geometry steps saved is controlled by the variable
  \fdf{DM!History.Depth}.

  This is default \fdftrue\ for molecular-dynamics simulations, but
  \fdffalse, for now, for geometry-relaxations (pending further tests
  which users are kindly requested to perform).

\end{fdflogicalT}


\begin{fdfentry}{DM!History.Depth}[integer]<1>

  Sets the number of geometry steps for which density-matrix information
  is saved for extrapolation.
  
\end{fdfentry}
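As an illustration, to re-use and extrapolate the DM over the last
three geometry steps one might set (the depth value is illustrative):
\begin{fdfexample}
   DM.AllowReuse          true
   DM.AllowExtrapolation  true
   DM.History.Depth       3
\end{fdfexample}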



\subsubsection{Initialization of the SCF cycle with charge densities}


\begin{fdflogicalF}{SCF.Read.Charge.NetCDF}
  \index{reading saved data!charge density} 

  Instructs \siesta\ to read the charge density stored in the netCDF
  file \file{Rho.IN.grid.nc}. This feature allows the easier re-use of
  electronic-structure information from a previous run. It is not
  necessary that the basis sets are ``similar'' (a requirement if
  density-matrices are to be read in).

  \note This is an experimental feature. Until robust checks are
  implemented, care must be taken to make sure that the FFT grids in
  the \sysfile*{grid.nc} file and in \siesta\ are the same.

\end{fdflogicalF}


\begin{fdflogicalF}{SCF.Read.Deformation.Charge.NetCDF}
  \index{reading saved data!deformation charge density} 
  
  Instructs Siesta to read the deformation charge density stored in
  the netCDF file \file{DeltaRho.IN.grid.nc}. This feature allows the
  easier re-use of electronic-structure information from a previous
  run. It is not necessary that the basis sets are ``similar'' (a
  requirement if density-matrices are to be read in). The deformation
  charge is particularly useful to give a good starting point for
  slightly different geometries.

  \note This is an experimental feature. Until robust checks are
  implemented, care must be taken to make sure that the FFT grids in
  the \sysfile*{grid.nc} file and in Siesta are the same.

\end{fdflogicalF}


\subsubsection{Output of density matrix and Hamiltonian}
\index{output!density matrix}

\textbf{Performance Note}: For large-scale calculations, writing the DM
at every scf step can have a severe impact on performance. 
The sparse-matrix I/O is undergoing a re-design to facilitate
data analysis and to increase efficiency.

\begin{fdflogicalF}{Use.Blocked.WriteMat}

  By using blocks of orbitals (according to the underlying default
  block-cyclic distribution), the sparse-matrix I/O can be sped up
  significantly, both by saving MPI communication and by reducing the
  number of file accesses. This is essential for large systems, for
  which the I/O could take a significant fraction of the total
  computation time.
  
  To enable this ``blocked format'' (recommended for large-scale
  calculations) use the option \fdf{Use.Blocked.WriteMat}
  \fdftrue. Note that it is off by default.
  
  The new format is not backwards compatible. A converter program
  (\shell{Util/DensityMatrix/dmUnblock.F90}) has been written to
  post-process those files intended for further analysis or re-use in
  Siesta. This is the best option for now, since it allows liberal
  checkpointing with a much smaller time consumption, and only incurs
  costs when re-using or analyzing files.
  
  Note that \tsiesta\ will continue to produce \sysfile{DM} files in
  the old format (see save\_density\_matrix.F).

  To test the new features, the option \fdf{S.Only} \fdftrue\ can be
  used. It will produce three files: a standard one, another one with
  optimized MPI communications, and a third, blocked one.

\end{fdflogicalF}

\begin{fdflogicalT}{Write!DM}

  Controls writing the current iteration's density matrix to a
  file for restart purposes and post-processing. If \fdffalse\ nothing
  will be written.

  If \fdf{Use.Blocked.WriteMat} is \fdffalse\ the \sysfile{DM} file
  will be written. Otherwise these density matrix files will be
  created: \file{DM\_MIXED.blocked} and \file{DM\_OUT.blocked}, which
  are the mixed and the diagonalization output, respectively.

\end{fdflogicalT}

\begin{fdfentry}{Write!DM.end.of.cycle}[logical]<\fdfvalue{Write!DM}>

  Equivalent to \fdf{Write!DM}, but will only write at the end of each
  SCF loop.

  \note The file generated depends on \fdf{SCF.Mix!AfterConvergence}.

\end{fdfentry}  

\begin{fdflogicalF}{Write!H}

  Whether restart Hamiltonians should be written (not intrinsically
  supported in 4.1). 

  If \fdftrue\ these files will be created: \file{H\_MIXED} and
  \file{H\_DMGEN}, which are the mixed Hamiltonian and the Hamiltonian
  generated from the current density matrix, respectively. If
  \fdf{Use.Blocked.WriteMat} is \fdftrue, the files just mentioned will
  have the additional suffix \fdf*{.blocked}.
  
\end{fdflogicalF}

\begin{fdfentry}{Write!H.end.of.cycle}[logical]<\fdfvalue{Write!H}>

  Equivalent to \fdf{Write!H}, but will only write at the end of each
  SCF loop.

  \note The file generated depends on \fdf{SCF.Mix!AfterConvergence}.

\end{fdfentry}  
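For example, to skip per-step Hamiltonian output but still keep a
restart Hamiltonian from the end of each SCF loop, one might combine
(an illustrative setting, not a recommendation):
\begin{fdfexample}
   Write.H               false
   Write.H.end.of.cycle  true
\end{fdfexample}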

The following options control the creation of netCDF files. The
relevant routines have not been optimized yet for large-scale
calculations, so in this case the options should not be turned on
(they are off by default).


\begin{fdflogicalT}{Write!DM.NetCDF}
  \index{output!density matrix}
  
  It determines whether the density matrix (after the mixing step) is
  output as a \file{DM.nc} netCDF file or not.

  The file is overwritten at every SCF step. Use the
  \fdf{Write!DM.History.NetCDF} option if a complete history is
  desired.

  The \file{DM.nc} and standard DM file formats can be converted at
  will with the programs in \texttt{Util/DensityMatrix}
  directory. Note that the DM values in the \file{DM.nc} file are in
  single precision.

\end{fdflogicalT}

\begin{fdflogicalT}{Write!DMHS.NetCDF}
  \index{output!density matrix}%
  \index{output!Hamiltonian}%
  \index{output!overlap matrix}

  
  If true, the input density matrix, Hamiltonian, and output density
  matrix are stored in a netCDF file named \file{DMHS.nc}. The file
  also contains the overlap matrix S.

  The file is overwritten at every SCF step. Use the
  \fdf{Write!DMHS.History.NetCDF} option if a complete history is
  desired.

\end{fdflogicalT}


\begin{fdflogicalF}{Write!DM.History.NetCDF}
  \index{output!density matrix}%
  \index{output!density matrix history}

  If \fdftrue, a series of netCDF files with names of the form
  \file{DM-NNNN.nc} is created to hold the complete history of the
  density matrix (after mixing). (See also \fdf{Write!DM.NetCDF}). Each
  file corresponds to a geometry step.

\end{fdflogicalF}

\begin{fdflogicalF}{Write!DMHS.History.NetCDF}
  \index{output!density matrix history}%
  \index{output!Hamiltonian history}%
  \index{output!overlap matrix}

  If \fdftrue, a series of netCDF files with names of the form
  \file{DMHS-NNNN.nc} is created to hold the complete history of the
  input and output density matrix, and the Hamiltonian.  (See also
  \fdf{Write!DMHS.NetCDF}). Each file corresponds to a geometry
  step. The overlap matrix is stored only once per SCF cycle.

\end{fdflogicalF}

\begin{fdflogicalF}{Write!TSHS.History}
  \index{output!Hamiltonian history}%
  \index{output!overlap matrix history}%

  If true, a series of TSHS files with names of the form
  \sysfile{N.TSHS} is created to hold the complete history of the
  Hamiltonian and overlap matrix. Each file corresponds to a geometry
  step. The overlap matrix is stored only once per SCF cycle. This
  option only works with \tsiesta.

\end{fdflogicalF}


\subsubsection{Convergence criteria}
\index{SCF convergence criteria}

\textbf{NOTE}: The older options with a \fdf*{DM} prefix still
work for backwards compatibility. However, the following flags have
precedence.

Note that all convergence criteria are additive and may thus be used
simultaneously for complete control.

\begin{fdflogicalT}{SCF.DM!Converge}
  \index{SCF!mixing!Density matrix convergence}

  Logical variable to use the density matrix elements as monitor
  of self-consistency.
  
\end{fdflogicalT}

\begin{fdfentry}{SCF.DM!Tolerance}[real]<$10^{-4}$>%
  \fdfdepend{SCF.DM!Converge}
  \fdfindex*{DM.Tolerance}

  Tolerance of Density Matrix.
  %
  When the maximum difference between the output and the input on each
  element of the DM in a SCF cycle is smaller than
  \fdf{SCF.DM!Tolerance}, the self-consistency has been achieved.


  \note \fdf{DM.Tolerance} is the actual default for this flag.

\end{fdfentry}

\begin{fdfentry}{DM.Normalization.Tolerance}[real]<$10^{-5}$>

  Tolerance for unnormalized density matrices (typically the product
  of solvers such as PEXSI which have a built-in electron-count
  tolerance). If this tolerance is exceeded, the program stops. It is
  understood as a fractional tolerance. For example, the default will
  allow an excess or shortfall of 0.01 electrons in a 1000-electron
  system.

\end{fdfentry}



\begin{fdflogicalT}{SCF.H!Converge}
  \index{SCF!mixing!Hamiltonian convergence}

  Logical variable to use the Hamiltonian matrix elements as monitor
  of self-consistency: this is considered achieved when the maximum
  absolute change (dHmax) in the H matrix elements is below
  \fdf{SCF.H!Tolerance}. The actual meaning of dHmax depends on
  whether DM or H mixing is in effect: if mixing the DM, dHmax refers
  to the change in H(in) with respect to the previous step; if mixing
  H, dHmax refers to H(out)-H(in) in the previous(?) step. 
  
\end{fdflogicalT}

\begin{fdfentry}{SCF.H!Tolerance}[energy]<$10^{-3}\,\mathrm{eV}$>
  \fdfdepend{SCF.H!Converge}

  If \fdf{SCF.H!Converge} is \fdftrue, then self-consistency is
  achieved when the maximum absolute change in the Hamiltonian matrix
  elements is below this value.
  
\end{fdfentry}


\begin{fdflogicalT}{SCF.EDM!Converge}
  \index{SCF!mixing!energy density matrix convergence}

  Logical variable to use the energy density matrix elements as monitor
  of self-consistency: this is considered achieved when the maximum
  absolute change (dEmax) in the energy density matrix elements is below
  \fdf{SCF.EDM!Tolerance}. The meaning of dEmax is equivalent to that
  of \fdf{SCF.DM!Tolerance}.
  
\end{fdflogicalT}

\begin{fdfentry}{SCF.EDM!Tolerance}[energy]<$10^{-3}\,\mathrm{eV}$>
  \fdfdepend{SCF.EDM!Converge}

  If \fdf{SCF.EDM!Converge} is \fdftrue, then self-consistency is
  achieved when the maximum absolute change in the energy density
  matrix elements is below this value.
  
\end{fdfentry}


\begin{fdflogicalF}{SCF.FreeE!Converge}
  \index{SCF!mixing!energy convergence}
  \fdfindex*{DM.RequireEnergyConvergence}

  Logical variable to request an additional requirement for
  self-consistency: it is considered achieved when the change in the
  total (free) energy between cycles of the SCF procedure is below
  \fdf{SCF.FreeE!Tolerance} and the density matrix change criterion is
  also satisfied.

\end{fdflogicalF}

\begin{fdfentry}{SCF.FreeE!Tolerance}[energy]<$10^{-4}\,\mathrm{eV}$>
  \fdfdepend{SCF.FreeE!Converge}
  \fdfindex*{DM.EnergyTolerance}

  If \fdf{SCF.FreeE!Converge} is \fdftrue, then self-consistency is
  achieved when the change in the total (free) energy between cycles
  of the SCF procedure is below this value and the density matrix
  change criterion is also satisfied.

\end{fdfentry}

\begin{fdflogicalF}{SCF.Harris!Converge}
  \index{SCF!mixing!harris energy convergence}
  \fdfindex*{DM.Require.Harris.Convergence}

  Logical variable to use the Harris energy as monitor of
  self-consistency: this is considered achieved when the change in the
  Harris energy between cycles of the SCF procedure is below
  \fdf{SCF.Harris!Tolerance}. This is useful if only
  energies are needed, as the Harris energy tends to converge faster
  than the Kohn-Sham energy. The user is responsible for using the
  correct energies in further processing, e.g., the Harris energy if
  the Harris criterion is used.

  To help in basis-optimization tasks, a new file
  \file{BASIS\_HARRIS\_ENTHALPY} is provided, holding the same
  information as \file{BASIS\_ENTHALPY} but using the Harris energy
  instead of the Kohn-Sham energy.

  \note Setting this to \fdftrue\ makes \fdf{SCF.DM!Converge} and
  \fdf{SCF.H!Converge} default to \fdffalse.

\end{fdflogicalF}

\begin{fdfentry}{SCF.Harris!Tolerance}[energy]<$10^{-4}\,\mathrm{eV}$>
  \fdfdepend{SCF.Harris!Converge}

  If \fdf{SCF.Harris!Converge} is \fdftrue, then self-consistency is
  achieved when the change in the Harris energy between cycles of the
  SCF procedure is below this value. This is useful if only energies
  are needed, as the Harris energy tends to converge faster than the
  Kohn-Sham energy.
  
\end{fdfentry}
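Since the criteria are additive, they can be combined; an illustrative
(not prescriptive) tight-convergence setup could be:
\begin{fdfexample}
   SCF.DM.Converge      true
   SCF.DM.Tolerance     1.d-4
   SCF.H.Converge       true
   SCF.H.Tolerance      1.d-3 eV
   SCF.FreeE.Converge   true
   SCF.FreeE.Tolerance  1.d-5 eV
\end{fdfexample}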



\vspace{5pt}
\subsection{The real-space grid and the eggbox-effect}

\siesta\ uses a finite 3D grid for the calculation of some
integrals and the representation of charge densities and potentials.
Its fineness is determined by its plane-wave cutoff, as
given by the \fdf{Mesh!Cutoff} option. It means that all periodic
plane waves with kinetic energy lower than this cutoff 
can be represented in the grid without aliasing. In turn,
this implies that if a function (e.g. the density or the 
effective potential) is an expansion of
only these plane waves, it can be Fourier transformed
back and forth without any approximation.

The existence of the grid causes the breaking of translational
symmetry (the egg-box effect, due to the fact that the density
and potential \emph{do have} plane wave components above
the mesh cutoff).  This symmetry breaking is clear when
moving one single atom in an otherwise empty simulation cell. The
total energy and the forces oscillate with the grid periodicity when
the atom is moved, as if the atom were moving on an eggbox. In the
limit of infinitely fine grid (infinite mesh cutoff) this effect
disappears.

For reasonable values of the mesh cutoff, the effect of the eggbox
on the total energy or on the relaxed structure is normally unimportant.
However, it can substantially affect the process of relaxation, by
increasing the number of steps considerably, and can also spoil the
calculation of vibrations, which is usually much more demanding than a
relaxation.

The \program{Util/Scripting/eggbox\_checker.py} script can be used to
diagnose the eggbox effect to be expected for a particular
pseudopotential/basis-set combination.

Apart from increasing the mesh cutoff (see the \fdf{Mesh!Cutoff} option),
the following options might help in lessening a given eggbox problem. But
note also that a filtering of the orbitals and the relevant parts of
the pseudopotential and the pseudocore charge might be enough to solve
the issue (see Sec.~\ref{sec:filtering}).

\begin{fdfentry}{Mesh!Cutoff}[energy]<$300\,\mathrm{Ry}$>
  \index{grid}%
  \index{mesh}

  Defines the plane wave cutoff for the grid.
  
  % JMS/AG. To be implemented:
  % \textit{Default value:} If not present, \fdf{Mesh!Cutoff} is made equal 
  % to \fdf{FilterCutoff}, if present. If not, it is obtained from
  % \fdf{FilterTol}, if present, as explained in that parameter.
  % If none of these parameters is present, the default value for 
  % \fdf{Mesh!Cutoff} is \texttt{100 Ry}
 
\end{fdfentry}

\begin{fdfentry}{Mesh!Sizes}[list]<\fdfvalue{Mesh!Cutoff}>
  \index{grid}%
  \index{mesh}

  Manual definition of the grid size along each lattice vector. The value
  must be divisible by \fdf{Mesh!SubDivisions}, otherwise the
  program will die. The numbers should also contain only the factors $2$,
  $3$ and $5$, as required by the FFT algorithms.

  This option may be specified as a block, or a list:
  \begin{fdfexample}
    %block Mesh.Sizes
      100 202 210
    %endblock
    # Or equivalently:
    Mesh.Sizes [100 202 210]  
  \end{fdfexample}

  By default the grid size is determined via \fdf{Mesh!Cutoff}. This
  option has precedence if both are specified.

\end{fdfentry}

\begin{fdfentry}{Mesh!SubDivisions}[integer]<$2$>
  \index{grid}%
  \index{mesh}
  
  Defines the number of sub-mesh points in each direction used
  to save index storage on the mesh. It affects the memory
  requirements and the CPU time, but not the results. 

  \note The default value might be a bit conservative. Users might
  experiment with higher values, 4 or 6, to lower the memory and
  CPU-time usage.

\end{fdfentry}
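For instance, a finer grid combined with a larger index sub-mesh might
be requested as follows (illustrative values):
\begin{fdfexample}
   Mesh.Cutoff        400. Ry
   Mesh.SubDivisions  4
\end{fdfexample}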

\begin{fdfentry}{Grid.CellSampling}[block]
  \index{egg-box effect}%
  \index{rippling}

  It specifies points within the grid cell for a symmetrization
  sampling.

  For a given grid the grid-cutoff convergence can be improved (and
  the eggbox lessened) by recovering the lost symmetry: by
  symmetrizing the sensitive quantities. The full symmetrization
  implies an integration (averaging) over the grid cell. Instead, a
  finite sampling can be performed.

  It is a sampling of rigid displacements of the system with respect
  to the grid. The original grid-system setup (one point of the grid
  at the origin) is always calculated. It is the (0,0,0) displacement.
  The block \fdf{Grid.CellSampling} gives the additional
  displacements wanted for the sampling. They are given relative to
  the grid-cell vectors, i.e., (1,1,1) would displace to the next grid
  point across the body diagonal, giving an equivalent grid-system
  situation (a useless displacement for a sampling).

  Examples: Assume a cubic cell, and therefore a (smaller) cubic grid
  cell.  If there is no block or the block is empty, then the original
  (0,0,0) will be used only. The block:
  \begin{fdfexample}
     %block Grid.CellSampling
        0.5    0.5    0.5
     %endblock Grid.CellSampling
  \end{fdfexample}
  would use the body center as a second point in the sampling. Or:
  \begin{fdfexample}
     %block Grid.CellSampling
        0.5    0.5    0.0
        0.5    0.0    0.5
        0.0    0.5    0.5
     %endblock Grid.CellSampling
  \end{fdfexample}
  gives an fcc kind of sampling, and
  \begin{fdfexample}
     %block Grid.CellSampling
        0.5    0.0    0.0
        0.0    0.5    0.0
        0.0    0.0    0.5
        0.0    0.5    0.5
        0.5    0.0    0.5
        0.5    0.5    0.0
        0.5    0.5    0.5
     %endblock Grid.CellSampling
  \end{fdfexample}
  gives again a cubic sampling with half the original side length.  It
  is not trivial to choose the right set of displacements so as to
  maximize the new `effective' cutoff. It depends on the kind of
  cell. It may be automated in the future, but for now it is left to the
  user, who introduces the displacements manually through this block.

  The quantities which are symmetrized are: ($i$) energy terms that
  depend on the grid, ($ii$) forces, ($iii$) stress tensor, and ($iv$)
  electric dipole.

  The symmetrization is performed at the end of every SCF cycle. The
  whole cycle is done for the (0,0,0) displacement, and, when the
  density matrix is converged, the same (now fixed) density matrix is
  used to obtain the desired quantities at the other displacements
  (the density matrix itself is \emph{not} symmetrized as it gives a
  much smaller egg-box effect). The CPU time needed for each
  displacement in the \fdf{Grid.CellSampling} block is of the order
  of one extra SCF iteration.

  This may be required in systems where very precise forces are needed,
  and/or if partial cores are used. A quick way to test whether the
  forces are sufficiently converged is to sample a single extra point.

  Alternatively, this flag may be given as a list of 3 integers, which
  corresponds to a ``Monkhorst-Pack''-like grid sampling. I.e.
  \begin{fdfexample}
     Grid.CellSampling [2 2 2]
  \end{fdfexample}
  is equivalent to
  \begin{fdfexample}
     %block Grid.CellSampling
        0.5    0.0    0.0
        0.0    0.5    0.0
        0.5    0.5    0.0
        0.0    0.0    0.5
        0.5    0.0    0.5
        0.0    0.5    0.5
        0.5    0.5    0.5
     %endblock Grid.CellSampling
  \end{fdfexample}
  This is an easy way to check whether this flag is important for your
  system.

\end{fdfentry}

\begin{fdfentry}{EggboxRemove}[block]
  \index{egg-box effect}%
  \index{rippling}

  For recovering translational invariance in an approximate way.

  It works by subtracting from the Kohn-Sham total energy (and forces)
  an approximation to the eggbox energy, a sum of atomic contributions.
  Each atom has a predefined eggbox energy depending on where it sits
  in the cell. This atomic contribution is species dependent and is
  obviously invariant under grid-cell translations. Each species
  contribution is thus expanded in the appropriate Fourier series.  It
  is important to have a smooth eggbox, so that it can be represented
  by a few Fourier components. A jagged eggbox (unless very small, in
  which case it is unimportant) is often an indication of a problem
  with the pseudopotential.

  In the block there is one line per Fourier component. The first
  integer is for the atomic species it is associated with. The other
  three represent the reciprocal lattice vector of the grid cell (in
  units of the basis vectors of the reciprocal cell). The real number
  is the Fourier coefficient in units of the energy scale given in
  \fdf{EggboxScale} (see below), normally 1 eV.

  The number and choice of Fourier components is free, as well as
  their order in the block. One can choose to correct only some
  species and not others if, for instance, there is a substantial
  difference in hardness of the cores. The 0 0 0 components will add a
  species-dependent constant energy per atom. It is thus irrelevant
  except if comparing total energies of different calculations, in
  which case they have to be considered with care (for instance by
  putting them all to zero, i.e. by not introducing them in the list).
  The other components average to zero representing no bias in the
  total energy comparisons.

  If the total energies of the free atoms are entered as the 0 0 0
  coefficients (with spin polarisation if appropriate, etc.), the
  corrected total energy will be the cohesive energy of the system (per
  unit cell).

  \emph{Example:} For a two-species system, this example would give a
  quite sufficient set in many instances (the actual values of the
  Fourier coefficients are not realistic).
  \begin{fdfexample}
     %block EggBoxRemove
       1   0   0   0 -143.86904
       1   0   0   1    0.00031
       1   0   1   0    0.00016
       1   0   1   1   -0.00015
       1   1   0   0    0.00035
       1   1   0   1   -0.00017
       2   0   0   0 -270.81903
       2   0   0   1    0.00015
       2   0   1   0    0.00024
       2   1   0   0    0.00035
       2   1   0   1   -0.00077
       2   1   1   0   -0.00075
       2   1   1   1   -0.00002
     %endblock EggBoxRemove
  \end{fdfexample}

  It represents an alternative to grid-cell sampling (above).  It is
  only approximate, but once the Fourier components for each species
  are given, it adds essentially no computational effort (neither
  memory nor time), while the grid-cell sampling requires CPU time
  (roughly one extra SCF step per point at every MD step).

  It will be particularly helpful in atoms with substantial partial
  core or semicore electrons.

  \note This should only be used for fixed cell calculations, i.e. not
  with \fdf{MD.VariableCell}.

  For the time being, it is up to the user to obtain the Fourier
  components to be introduced. They can be obtained by moving one
  isolated atom through the cell to be used in the calculation (for a
  given cell size, shape and mesh), once for each species.  The
  Util/Scripting/eggbox\_checker.py script can be used as a starting
  point for this.

\end{fdfentry}

\begin{fdfentry}{EggboxScale}[energy]<$1\,\mathrm{eV}$>
  \index{egg-box effect}%
  \index{rippling}

  Defines the scale in which the Fourier components of the egg-box
  energy are given in the \fdf{EggboxRemove} block.
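  For instance, to give the Fourier coefficients in units of
  $1\,\mathrm{mRy}$ rather than the default $1\,\mathrm{eV}$ (the value
  is purely illustrative), one would set:
  \begin{fdfexample}
     EggboxScale  0.001 Ry
  \end{fdfexample}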

\end{fdfentry}

\subsection{Matrix elements of the Hamiltonian and overlap}


\begin{fdflogicalF}{NeglNonOverlapInt}
  
  Logical variable to neglect or compute interactions between orbitals
  that do not overlap. Such interactions arise through the KB
  projectors. Neglecting them makes the Hamiltonian more sparse, and
  the calculation faster.
  
  \note Use with care!

\end{fdflogicalF}

\begin{fdflogicalF}{SCF!Write.Extra}
  \index{output!Hamiltonian \& overlap}

  Instructs \siesta\ to write out a variety of files with the
  Hamiltonian and density matrix.

  The output depends on whether a Hamiltonian mixing or density
  matrix mixing is performed (see \fdf{SCF!Mixing}).
  
  These files are created:
  \begin{itemize}
    \item \file{H\_MIXED}; the Hamiltonian after
    mixing
    
    \item \file{DM\_OUT}; the density matrix as calculated by the
    current iteration

    \item \file{H\_DMGEN}; the Hamiltonian used to calculate the
    density matrix
    
    \item \file{DM\_MIXED}; the density matrix after mixing

  \end{itemize}
  
\end{fdflogicalF}


\begin{fdflogicalF}{SaveHS}
  \index{output!Hamiltonian \& overlap}

  Instructs \siesta\ to write the Hamiltonian and overlap matrices, as well as
  other data required to generate bands and density of states, in file
  \sysfile{HSX}. The \sysfile*{HSX} format is more compact than the
  traditional \sysfile*{HS}, and the Hamiltonian, overlap matrix, and
  relative-positions array (which is always output, even for
  gamma-point only calculations) are in single precision.

  The program \program{hsx2hs} in \program{Util/HSX} can be used to
  generate an old-style \sysfile*{HS} file if needed.

  \siesta\ also produces an \sysfile*{HSX} file if the \fdf{COOP.Write} option
  is active.  \index{output!HSX file}


  See also the \fdf{Write!DMHS.NetCDF} and \fdf{Write!DMHS.History.NetCDF}
  options.

\end{fdflogicalF}

\subsubsection{The auxiliary supercell}

When using k-points, an auxiliary supercell is needed to compute properly
the matrix elements involving orbitals in different unit cells.
It is computed automatically by the program at every geometry step.

Note that for gamma-point-only calculations there is an implicit
``folding'' of matrix elements corresponding to the images of orbitals
outside the unit cell. If information about the specific values of
these matrix elements is needed (as for COOP/COHP analysis), one has
to make sure that the unit cell is large enough, or force the use
of an auxiliary supercell.
\index{COOP/COHP curves!Folding in Gamma-point calculations}
 
\begin{fdflogicalF}{ForceAuxCell}

If \fdftrue, the program uses an auxiliary cell even for gamma-point-only
calculations. This might be needed for COOP/COHP calculations, as
noted above, \index{COOP/COHP curves!Folding in Gamma-point
  calculations} or in degenerate cases, such as when the cell is so
small that a given orbital ``self-interacts'' with its own images (via
direct overlap or through a KB projector). In this case, the diagonal
value of the overlap matrix S for this orbital is different from 1, and an
initialization of the DM via atomic data would be faulty. The
program corrects the problem to zeroth-order by dividing the DM value
by the corresponding overlap matrix entry, but the initial charge
density would exhibit distortions from a true atomic superposition
(See routine \file{m\_new\_dm.F}). The distortion of the charge density
is a serious problem for Harris functional calculations, so this
option must be enabled for them if self-folding is present. (Note that
this should not happen in any serious calculation...)

\end{fdflogicalF}



\subsection{Calculation of the electronic structure}

\siesta\ can use three qualitatively different methods to determine
the electronic structure of the system. The first is standard
diagonalization, which works for all systems and scales cubically
with system size. The second is based on
the direct minimization of a special functional over a set of
trial orbitals. These orbitals can either extend over the entire system,
resulting in a cubic scaling algorithm, or be constrained within a
localization radius, resulting in a linear scaling algorithm. The former
is a recent implementation (described in \ref{SolverOMM}), that can
be viewed as an equivalent approach to diagonalization in terms of the
accuracy of the solution; the latter is the historical O(N) method used by
\siesta\ (described in \ref{SolverON}); it scales in principle
linearly with the size of the system (only if the size is larger than
the radial cutoff for the local solution wave-functions), but is quite
fragile and substantially more difficult to use, and only works for
systems with clearly separated occupied and empty states. The default is
to use diagonalization. The third method (PEXSI) is based on the
pole expansion of the Fermi-Dirac function and the direct computation
of the density matrix via an efficient scheme of selected
inversion (see Sec~\ref{SolverPEXSI}).

The calculation of the H and S matrix elements is always done with an
O(N) method. The actual scaling is not linear for small systems, but
it becomes O(N) when the system dimensions are larger than the scale
of the orbital $r_c$'s.

The relative importance of both parts of the computation (matrix
elements and solution) depends on the size and quality of the
calculation. The mesh cutoff affects only the matrix-element
calculation; orbital cutoff radii affect the matrix elements and all
solvers except diagonalization; the need for \textbf{k}-point sampling
affects the solvers only, and the number of basis orbitals affects
them all.

In practice, the vast majority of users employ diagonalization (or the
OMM method) for the calculation of the electronic structure. This is
so because the vast majority of calculations (done for intermediate
system sizes) would not benefit from the O(N) or PEXSI solvers.

\begin{fdfentry}{SolutionMethod}[string]<diagon>

  Character string to choose among diagonalization (\fdf*{diagon}),
  cubic-scaling minimization (\fdf*{OMM}), Order-N (\fdf*{OrderN})
  solution of the Kohn-Sham Hamiltonian, \fdf*{transiesta}, the
  PEXSI method (\fdf*{PEXSI}) or the \fdf*{CheSS} solver.
  In addition, the \fdf*{Dummy} solver will just return a
  slightly perturbed density-matrix without actually solving for
  the electronic structure. This is useful for timing other routines.

  
\end{fdfentry}


\subsubsection{Diagonalization options}

\begin{fdfentry}{NumberOfEigenStates}[integer]<\nonvalue{all orbitals}>
  \fdfdepend{Diag!Algorithm}

  This parameter allows the user to reduce the number of eigenstates
  that are calculated from the maximum possible. The benefit is that,
  for any calculation, the cost of the diagonalization is reduced by
  finding fewer eigenvalues/eigenvectors. For example, during a
  geometry optimisation, only the occupied states are required rather
  than the full set of virtual orbitals. Note that if the electronic
  temperature is greater than zero, the number of partially
  occupied states increases, depending on the band gap. The value
  specified must be greater than the number of occupied states and
  less than the number of basis functions.

  If a \emph{negative} number is passed it corresponds to the number
  of orbitals above the total charge of the system. In effect it
  corresponds to the number of orbitals above the Fermi level for zero
  temperature. I.e. if $-2$ is specified for a system with $20$
  orbitals and $10$ electrons it is equivalent to $12$.
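  The example above can be written either way:
  \begin{fdfexample}
     NumberOfEigenStates   12   # explicit count
     NumberOfEigenStates   -2   # 2 states above the total charge (10)
  \end{fdfexample}
  (only one of the two lines would appear in an actual input file).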

  Using this option can \emph{greatly} speed up your calculations if
  used correctly.

  \note If experiencing \shell{PDORMTR} errors in $\Gamma$
  calculations with the \fdf*{MRRR} algorithm, this is due to a buggy
  ScaLAPACK implementation; simply use another algorithm.

  \note This only affects the \fdf*{MRRR}, \fdf*{ELPA} and
  \fdf*{Expert} diagonalization routines.
  
\end{fdfentry}


\begin{fdfentry}{Diag!WFS.Cache}[string]<none|cdf>
  \fdfdeprecates{UseNewDiagk}

  Specify whether \siesta\ should cache wavefunctions in the
  diagonalization routine. Without a cache, a standard two-pass
  procedure is used. First eigenvalues are obtained to determine the
  Fermi level, and then the wavefunctions are computed to build the
  density matrix.

  Using a cache one can do everything in one go. However, this
  requires substantial IO and performance may vary.

  \begin{fdfoptions}

    \option[none]%
    \fdfindex*{Diag!WFS.Cache:none}%

    The wavefunctions will not be cached and the standard two-pass
    diagonalization method is used.

    \option[cdf]%
    \fdfindex*{Diag!WFS.Cache:cdf}%

    The wavefunctions are stored in \file{WFS.nc} (NetCDF format) and
    created from a single root node. This requires NetCDF support, see
    Sec.~\ref{sec:libs}.

    \note This is an experimental feature.

    \note It is not compatible with the \fdf{Diag!ParallelOverK}
    option.

  \end{fdfoptions}

\end{fdfentry}


\begin{fdflogicalT}{Diag!Use2D}

  Determine whether a 1D or 2D data decomposition should be used when
  calling ScaLAPACK. The use of 2D leads to superior scaling on large
  numbers of processors and is therefore the default. This option only
  influences the parallel performance.

  If \fdf{Diag!BlockSize} is different from \fdf{BlockSize} this flag
  defaults to \fdftrue, else if \fdf{Diag!ProcessorY} is $1$ or the
  total number of processors, then this flag will default to
  \fdffalse.

\end{fdflogicalT}

\begin{fdfentry}{Diag!ProcessorY}[integer]<$\sim \sqrt{\mathrm N}$>
  \fdfdepend{Diag!Use2D}

  Set the number of processors in the 2D distribution along the rows.
  Its default is the largest divisor of $\mathrm N$ (the number
  of MPI cores) not exceeding $\sqrt{\mathrm N}$, such that, ideally,
  the distribution will be a square grid.

  The input is required to be a divisor of the total number of MPI
  cores; \siesta\ will reduce the input value until it satisfies this
  requirement.

  Once the divisor closest to $\sqrt{\mathrm N}$, or the input, is
  determined, the 2D distribution will be $\mathrm{ProcessorY}
  \times\mathrm{N}/\mathrm{ProcessorY}$, rows $\times$ columns.

  \note If the automatically determined divisor is $1$, the default of
  \fdf{Diag!Use2D} will be \fdffalse.
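  As an illustration, with $\mathrm N = 12$ MPI cores the largest
  divisor of $12$ not exceeding $\sqrt{12}\approx 3.46$ is $3$, giving
  a $3\times4$ process grid. The same layout could be requested
  explicitly with:
  \begin{fdfexample}
     Diag.ProcessorY   3
  \end{fdfexample}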

\end{fdfentry}

\begin{fdfentry}{Diag!BlockSize}[integer]<\fdfvalue{BlockSize}>
  \fdfdepend{Diag!Use2D}
  
  The block-size used for the 2D distribution in the ScaLAPACK calls.
  This number greatly affects the performance of ScaLAPACK.

  If the ScaLAPACK library is threaded this parameter should not be
  too small. In any case it may be advantageous to run a few tests to
  find a suitable value.
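  For instance, a benchmarking run could start from a value such as
  (the number is an arbitrary starting point, not a recommendation):
  \begin{fdfexample}
     Diag.BlockSize   24
  \end{fdfexample}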

  \note If \fdf{Diag!Use2D} is set to \fdffalse\ this flag is not
  used. 

\end{fdfentry}


\begin{fdfentry}{Diag!Algorithm}[string]<Divide-and-Conquer|...>
  \fdfdeprecates{Diag!DivideAndConquer,Diag!MRRR,Diag!ELPA,Diag!NoExpert}

  Select the algorithm when calculating the eigenvalues and/or
  eigenvectors.

  The fastest routines are typically MRRR or ELPA, which may be made
  significantly faster by specifying a suitable
  \fdf{NumberOfEigenStates} value.
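  For example, to select the ELPA 2-stage solver together with a
  reduced number of states (the value $120$ is purely illustrative):
  \begin{fdfexample}
     Diag.Algorithm        ELPA-2stage
     NumberOfEigenStates   120
  \end{fdfexample}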

  Currently the implemented solvers are:

  \begin{fdfoptions}

    \option[divide-and-Conquer]%
    \fdfindex*{Diag!Algorithm:Divide-and-Conquer}
    
    Use the divide-and-conquer algorithm.

    \option[divide-and-Conquer-2stage]%
    \fdfindex*{Diag!Algorithm:Divide-and-Conquer-2stage}
    
    Use the 2-stage divide-and-conquer algorithm (falls back to
    divide-and-conquer if not available).

    
    \option[MRRR]%
    \fdfindex*{Diag!Algorithm:MRRR}%
    \fdfdepend{NumberOfEigenStates}
    
    Use the multiple relatively robust algorithm.
    
    \note The MRRR method is not compiled in by default. However, if
    your ScaLAPACK library does contain the relevant sources, one may
    add the pre-processor flag \texttt{-DSIESTA\_\_MRRR}.
    \index{compile!pre-processor!-DSIESTA\_\_MRRR}

    \option[MRRR-2stage]%
    \fdfindex*{Diag!Algorithm:MRRR-2stage}%
    \fdfdepend{NumberOfEigenStates}
    
    Use the 2-stage multiple relatively robust algorithm.


    \option[expert]%
    \fdfindex*{Diag!Algorithm:Expert}%
    \fdfdepend{NumberOfEigenStates}
    
    Use the expert algorithm which allows calculating a subset of the
    eigenvalues/eigenvectors.


    \option[expert-2stage]%
    \fdfindex*{Diag!Algorithm:Expert-2stage}%
    \fdfdepend{NumberOfEigenStates}

    Use the 2-stage expert algorithm which allows calculating a subset
    of the eigenvalues/eigenvectors.

    
    \option[noexpert|QR]%
    \fdfindex*{Diag!Algorithm:NoExpert}%
    \fdfindex*{Diag!Algorithm:QR}%

    Use the QR algorithm.

    \option[noexpert-2stage|QR-2stage]%
    \fdfindex*{Diag!Algorithm:NoExpert-2stage}%

    Use the 2-stage QR algorithm.

    
    \option[ELPA-1stage]%
    \fdfindex*{Diag!Algorithm:ELPA-1stage}%
    \fdfdepend{NumberOfEigenStates}
    
    Use the ELPA\cite{ELPA,ELPA-1} 1-stage solver. Requires
    compilation of \siesta\ with ELPA, see Sec.~\ref{sec:libs}.

    Not compatible with \fdf{Diag!ParallelOverK}.

    \option[ELPA|ELPA-2stage]%
    \fdfindex*{Diag!Algorithm:ELPA-2stage}%
    \fdfdepend{NumberOfEigenStates}
    
    Use the ELPA\cite{ELPA,ELPA-1} 2-stage solver. Requires
    compilation of \siesta\ with ELPA, see Sec.~\ref{sec:libs}.

    Not compatible with \fdf{Diag!ParallelOverK}.

  \end{fdfoptions}

  \note All the 2-stage solvers are (as of July 2017) only
  implemented in the LAPACK library, so they are only usable in
  serial or when using \fdf{Diag!ParallelOverK}.

  To enable the 2-stage solvers add this flag to the \file{arch.make}
  \index{compile!pre-processor!-DSIESTA\_\_DIAG\_2STAGE}
  \begin{shellexample}
    FPPFLAGS += -DSIESTA__DIAG_2STAGE
  \end{shellexample}

  If one uses the shipped LAPACK library the 2-stage solvers are added
  automatically.


  \note This flag has precedence over the deprecated flags:
  \fdf{Diag!DivideAndConquer}, \fdf{Diag!MRRR}, \fdf{Diag!ELPA} and
  \fdf{Diag!NoExpert}. However, the default is taken from the
  deprecated flags.
  
\end{fdfentry}

\begin{fdflogicalF}{Diag!ELPA!UseGPU}

  Newer versions of the ELPA library have optional support for GPUs. 
  This flag will request that GPU-specific code be used by the
  library.

  To use this feature, GPU support has to be explicitly enabled during
  compilation of the ELPA library. At present, detection of GPU
  support in the code is not fool-proof, so this flag should only be
  enabled if GPU support is indeed available.

  At present, ELPA offers GPU support in the `elpa-1' solver in
  released versions, whereas GPU support for the `elpa-2' solver has
  not been released officially.

\end{fdflogicalF}

\begin{fdflogicalF}{Diag!ParallelOverK}
  
  For the diagonalization there is a choice of strategy: whether
  to parallelise over the $\mathbf k$ points (\fdftrue) or over the
  orbitals (\fdffalse). $\mathbf k$-point diagonalization is close to
  perfectly parallel, but it is only useful when the number of
  $\mathbf k$ points is much larger than the number of processors;
  orbital parallelisation is therefore generally preferred. The
  exception is metals, where the unit cell is small but the number
  of $\mathbf k$ points to be sampled is very large. In this last case
  it is recommended that this option be used.
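  For a small metallic cell sampled with many $\mathbf k$ points one
  might therefore set:
  \begin{fdfexample}
     Diag.ParallelOverK   true
  \end{fdfexample}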

  \note This scheme is not used for the diagonalizations involved in
  the generation of the band-structure (as specified with
  \fdf{BandLines} or \fdf{BandPoints}) or in the generation of
  wave-function information (as specified with
  \fdf{WaveFuncKPoints}). In these cases the program falls back to
  using parallelization over orbitals.

\end{fdflogicalF}

\begin{fdfentry}{Diag!AbsTol}[real]<$10^{-16}$>

  The absolute tolerance for the orthogonality of the eigenvectors.
  This tolerance is only applicable for the solvers:

  \fdf*{expert} for both the serial and parallel solvers.

  \fdf*{mrrr} for the serial solver.
  
\end{fdfentry}

\begin{fdfentry}{Diag!OrFac}[real]<$10^{-3}$>

  Re-orthogonalization factor to determine when the eigenvectors
  should be re-orthogonalized. 

  Only applicable for the \fdf*{expert} serial and parallel solvers.
  
\end{fdfentry}


\begin{fdfentry}{Diag!Memory}[real]<$1$>
  
  Whether the parallel diagonalization of a matrix is successful or
  not can depend on how much workspace is available to the routine
  when there are clusters of eigenvalues. \fdf{Diag!Memory} allows
  the user to increase the memory available, when necessary, to
  achieve successful diagonalization and is a scale factor relative to
  the minimum amount of memory that ScaLAPACK might need.

\end{fdfentry}


\begin{fdfentry}{Diag!UpperLower}[string]<lower|upper>

  Which triangular part of the symmetric matrix should be used in the
  solvers.

  \note Do not change this variable unless you are performing
  benchmarks. It should be fastest with the \fdf*{lower} part.

\end{fdfentry}

\paragraph{Deprecated diagonalization options}


\begin{fdflogicalF}{Diag!MRRR}
  \fdfdepend{NumberOfEigenStates}

  Use the MRRR method in ScaLAPACK for diagonalization. Specifying a
  number of eigenvectors to store is possible through the symbol
  \fdf{NumberOfEigenStates} (see above).

  \note The MRRR method is not compiled in by default. However, if
  your ScaLAPACK library does contain the relevant sources, one may
  add the pre-processor flag \texttt{-DSIESTA\_\_MRRR}.
  \index{compile!pre-processor!-DSIESTA\_\_MRRR}

  \note Use \fdf{Diag!Algorithm} instead.
  
\end{fdflogicalF}

\begin{fdflogicalT}{Diag!DivideAndConquer}

  Logical to select whether the normal or Divide and Conquer
  algorithms are used within the ScaLAPACK/LAPACK diagonalization
  routines.

  \note Use \fdf{Diag!Algorithm} instead.

\end{fdflogicalT}

\begin{fdflogicalF}{Diag!ELPA}
  \fdfdepend{NumberOfEigenStates}

  See the ELPA articles\cite{ELPA,ELPA-1} for additional information. 

  \note It is not compatible with the \fdf{Diag!ParallelOverK}
  option.

  \note Use \fdf{Diag!Algorithm} instead.

\end{fdflogicalF}


\begin{fdflogicalF}{Diag!NoExpert}

  Logical to select whether the simple or expert versions of the
  ScaLAPACK/LAPACK routines are used. Usually the expert routines are
  faster, but may require slightly more memory.

  \note Use \fdf{Diag!Algorithm} instead.

\end{fdflogicalF}



\subsubsection{Output of eigenvalues and wavefunctions}

This section focuses on the output of eigenvalues and wavefunctions
produced during the (last) iteration of the self-consistent cycle,
and associated to the appropriate k-point sampling.

For band-structure calculations (which typically use a different set
of k-points) and specific requests for wavefunctions, see
Secs.~\ref{sec:band-structure} and~\ref{sec:wf-output-user}, respectively.

The complete set of wavefunctions obtained during the last
iteration of the SCF loop will be written to a NetCDF file
\file{WFS.nc} if the \fdf{Diag!WFS.Cache:cdf} option is in effect.

The complete set of wavefunctions obtained during the last
iteration of the SCF loop will be written to \sysfile{fullBZ.WFSX}
if the \fdf{COOP.Write} option is in effect.


\begin{fdflogicalF}{WriteEigenvalues}
  \index{output!eigenvalues}

  If \fdftrue\ it writes the Hamiltonian eigenvalues for the sampling
  $\vec k$ points, in the main output file.  If \fdffalse, it
  writes them in the file \sysfile{EIG}, which can be used
  by the \program{Eig2DOS}\index{Eig2DOS@\textsc{Eig2DOS}}
  postprocessing utility (in the Util/Eig2DOS directory) for obtaining
  the density of states.\index{density of states}

  \note this option only takes effect for a \fdf{SolutionMethod} that
  calculates the eigenvalues.

\end{fdflogicalF}


\subsubsection{Occupation of electronic states and Fermi level}
\label{electronic-occupation}


\begin{fdfentry}{OccupationFunction}[string]<FD>
  
  String variable to select the function that determines the
  occupation of the electronic states. These options are available:
  \begin{fdfoptions}
    \option[FD]%
    The usual Fermi-Dirac occupation function is used.

    \option[MP]%
    The occupation function proposed by Methfessel and
    Paxton (Phys. Rev. B \textbf{40}, 3616 (1989)) is used.

    \option[Cold]%
    The occupation function proposed by Marzari, Vanderbilt
    \emph{et al.} (PRL \textbf{82}, 16 (1999)) is used; this is
    commonly referred to as \emph{cold smearing}.

  \end{fdfoptions}
  The smearing of the electronic occupations is done, in all cases,
  using an energy width defined by the \fdf{ElectronicTemperature}
  variable. Note that, while in the case of Fermi-Dirac the
  occupations correspond to the physical ones if the electronic
  temperature is set to the physical temperature of the system, this
  is not the case for the Methfessel-Paxton function. There, the
  temperature is just a mathematical artifact to obtain a more accurate
  integration of the physical quantities at a lower cost. In
  particular, the Methfessel-Paxton scheme has the advantage that,
  even for quite large smearing temperatures, the obtained energy is
  very close to the physical energy at $T=0$. Also, it allows a much
  faster convergence with respect to $k$-points, especially for
  metals. Finally, the convergence to self-consistency is very much
  improved (allowing the use of larger mixing coefficients).

  For the Methfessel-Paxton case, and similarly for cold smearing, one
  can use relatively large values for the \fdf{ElectronicTemperature}
  parameter. How large depends on the specific system. A guide can be
  found in the article by G. Kresse and J. Furthm\"uller,
  Comp. Mat. Sci. \textbf{6}, 15 (1996).

  If Methfessel-Paxton smearing is used, the order of the
  corresponding Hermite polynomial expansion must also be chosen (see
  description of variable \fdf{OccupationMPOrder}).
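  A typical smearing setup for a metal could then look like this (the
  values are illustrative, not recommendations):
  \begin{fdfexample}
     OccupationFunction      MP
     OccupationMPOrder       2
     ElectronicTemperature   0.2 eV
  \end{fdfexample}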

  We finally note that, in all these cases, once a finite
  temperature has been chosen, the relevant energy is not the
  Kohn-Sham energy, but the free energy. In particular, the atomic
  forces are derivatives of the free energy, not of the KS energy. See
  R. Wentzcovitch \textit{et al.}, Phys. Rev. B \textbf{45}, 11372
  (1992); S. de Gironcoli, Phys. Rev. B \textbf{51}, 6773 (1995);
  G. Kresse and J. Furthm\"uller, Comp. Mat. Sci.  \textbf{6}, 15
  (1996), for details.

\end{fdfentry}

\begin{fdfentry}{OccupationMPOrder}[integer]<1>
  
  Order of the Hermite-Gauss polynomial expansion for the electronic
  occupation function in the Methfessel-Paxton scheme (see
  Phys. Rev. B \textbf{40}, 3616 (1989)).  Especially for metals,
  higher-order expansions provide better convergence to the
  ground-state result, even with larger smearing temperatures, and
  also provide better convergence with k-points.

  \note only used if \fdf{OccupationFunction} is \fdf*{MP}.

\end{fdfentry}


\begin{fdfentry}{ElectronicTemperature}[temperature/energy]<$300\,\mathrm{K}$>
  
  Temperature for the occupation function. Useful especially for
  metals, and to accelerate self-consistency in some cases.

\end{fdfentry}



\subsubsection{Orbital minimization method (OMM)}
\label{SolverOMM}

The OMM is an alternative cubic-scaling solver that uses a
minimization algorithm instead of direct diagonalization to find the
occupied subspace.  The main advantage over diagonalization is the
possibility of iteratively reusing the solution from each SCF/MD step
as the starting guess of the following one, thus greatly reducing the
time to solution. Typically, therefore, the first few SCF cycles of
the first MD step of a simulation will be slower than diagonalization,
but the rest will be faster. The main disadvantages are that
individual Kohn-Sham eigenvalues are not computed, and that only a
fixed, integer number of electrons at each k point/spin is
allowed. Therefore, spin-polarized calculations are only possible
with \fdf{Spin!Fix}, and \fdf{Spin!Total} must be chosen
appropriately. For non-$\Gamma$-point calculations, the number of
electrons is set to be equal at all k points. Non-collinear
calculations (see \fdf{Spin}) are not supported at present.
The OMM implementation was initially developed by Fabiano Corsetti.

It is important to note that the OMM requires all occupied Kohn-Sham
eigenvalues to be negative; this can be achieved by applying a shift
to the eigenspectrum, controlled by \fdf{ON.eta} (in this case,
\fdf{ON.eta} simply needs to be higher than the HOMO level). If the
OMM exhibits a pathologically slow or unstable convergence, this is
almost certainly due to the fact that the default value of
\fdf{ON.eta} (\fdf*{0.0 eV}) is too low, and should be raised by
a few eV.
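Putting these requirements together, a minimal OMM setup might look as
follows (the \fdf{ON.eta} shift is illustrative and system dependent):
\begin{fdfexample}
   SolutionMethod   OMM
   ON.eta           3.0 eV
\end{fdfexample}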

\begin{fdflogicalT}{OMM!UseCholesky}

  Select whether to perform a Cholesky factorization of the
  generalized eigenvalue problem; this removes the overlap matrix from
  the problem but also destroys the sparsity of the Hamiltonian
  matrix.

\end{fdflogicalT}

\begin{fdflogicalT}{OMM!Use2D}
  
  Select whether to use a 2D data decomposition of the matrices for
  parallel calculations. This generally leads to superior scaling for
  large numbers of MPI processes.

\end{fdflogicalT}

\begin{fdflogicalF}{OMM!UseSparse}

  Select whether to make use of the sparsity of the Hamiltonian and
  overlap matrices where possible when performing matrix-matrix
  multiplications (these operations are thus reduced from $O(N^3)$ to
  $O(N^2)$ without loss of accuracy).

  \note not compatible with \fdf{OMM!UseCholesky},
  \fdf{OMM!Use2D}, or non-$\Gamma$ point calculations

\end{fdflogicalF}

\begin{fdfentry}{OMM!Precon}[integer]<-1>
  
  Number of SCF steps for \emph{all} MD steps for which to apply a
  preconditioning scheme based on the overlap and kinetic energy
  matrices; for negative values the preconditioning is always
  applied. Preconditioning is usually essential for fast and accurate
  convergence (note, however, that it is not needed if a Cholesky
  factorization is performed; in such cases this variable will have no
  effect on the calculation).

  \note cannot be used with \fdf{OMM!UseCholesky}.

\end{fdfentry}


\begin{fdfentry}{OMM!PreconFirstStep}[integer]<\fdfvalue{OMM!Precon}>
  
  Number of SCF steps in the \emph{first} MD step for which to apply
  the preconditioning scheme; if present, this will overwrite the
  value given in \fdf{OMM!Precon} for the first MD step only.

\end{fdfentry}

\begin{fdfentry}{OMM!Diagon}[integer]<0>

  Number of SCF steps for \emph{all} MD steps for which to use a
  standard diagonalization before switching to the OMM; for negative
  values diagonalization is always used, and so the calculation is
  effectively equivalent to \fdf{SolutionMethod} \fdf*{diagon}.
  In general, using diagonalization for the first few SCF steps can
  speed up the calculation by removing the costly initial minimization
  (at present this works best for $\Gamma$-point calculations).

\end{fdfentry}

\begin{fdfentry}{OMM!DiagonFirstStep}[integer]<\fdfvalue{OMM!Diagon}>
  
  Number of SCF steps in the \emph{first} MD step for which to use a
  standard diagonalization before switching to the OMM; if present,
  this will overwrite the value given in \fdf{OMM!Diagon} for the
  first MD step only.

\end{fdfentry}
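As an indicative example, the following input performs ten diagonalization steps in the first MD step and two in every subsequent MD step before switching to the OMM:
\begin{fdfexample}
   OMM.Diagon           2
   OMM.DiagonFirstStep 10
\end{fdfexample}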

\begin{fdfentry}{OMM!BlockSize}[integer]<\fdfvalue{BlockSize}>
  
  Blocksize used for distributing the elements of the matrix over MPI
  processes. Specifically, this variable controls the dimension
  relating to the trial orbitals used in the minimization (equal to
  the number of occupied states at each k point/spin); the equivalent
  variable for the dimension relating to the underlying basis orbitals
  is controlled by \fdf{BlockSize}.

\end{fdfentry}

\begin{fdfentry}{OMM!TPreconScale}[energy]<$10\,\mathrm{Ry}$>
  
  Scale of the kinetic energy preconditioning (see C.~K.~Gan \emph{et
      al.}, Comput. Phys. Commun.  \textbf{134}, 33 (2001)). A smaller
  value indicates more aggressive kinetic energy preconditioning,
  while an infinite value indicates no kinetic energy
  preconditioning. In general, the kinetic energy preconditioning is
  much less important than the tensorial correction brought about by
  the overlap matrix, and so this value will have fairly little impact
  on the overall performance of the preconditioner; however, too
  aggressive kinetic energy preconditioning can have a detrimental
  effect on performance and accuracy.

\end{fdfentry}

\begin{fdfentry}{OMM!RelTol}[real]<$10^{-9}$>
  
  Relative tolerance in the conjugate gradients minimization of the
  Kohn-Sham band energy (see \fdf{ON!Etol}).

\end{fdfentry}

\begin{fdflogicalF}{OMM!Eigenvalues}

  Select whether to perform a diagonalization at the end of each MD
  step to obtain the Kohn-Sham eigenvalues.

\end{fdflogicalF}

\begin{fdflogicalF}{OMM!WriteCoeffs}

  Select whether to write the coefficients of the solution orbitals to
  file at the end of each MD step.

\end{fdflogicalF}

\begin{fdflogicalF}{OMM!ReadCoeffs}

  Select whether to read the coefficients of the solution orbitals
  from file at the beginning of a new calculation. Useful for
  restarting an interrupted calculation, especially when used in
  conjunction with \fdf{DM.UseSaveDM}. Note that the same number of
  MPI processes and values of \fdf{OMM!Use2D},
  \fdf{OMM!BlockSize}, and \fdf{BlockSize} must be used when
  restarting.

\end{fdflogicalF}


\begin{fdflogicalF}{OMM!LongOutput}
  
  Select whether to output detailed information of the conjugate
  gradients minimization for each SCF step.

\end{fdflogicalF}


\subsubsection{Order(N) calculations}
\label{SolverON}

The Order(N) subsystem is quite fragile and only works for systems
with clearly separated occupied and empty states. Note also that the
option to compute the chemical potential automatically does not yet
work in parallel.

NOTE: Since it is used less often, bugs in the O(N) solver have tended
to linger longer than in more heavily exercised parts of the code.
Work is ongoing to clean up and automate the O(N) process, to make the
solver more user-friendly and robust.

\begin{fdfentry}{ON.functional}[string]<Kim>
  
  Choice of order-N minimization functionals:
  \begin{fdfoptions}
    \option[Kim]%
    Functional of Kim, Mauri and Galli, PRB 52, 1640 (1995).

    \option[Ordejon-Mauri]%
    Functional of Ordej\'on et al, or Mauri et al, see PRB 51, 1456
    (1995).  The number of localized wave functions (LWFs) used must
    coincide with $N_{el}/2$ (unless spin polarized).  For the initial
    assignment of LWF centers to atoms, atoms with an even number of
    electrons, $n$, get $n/2$ LWFs. Atoms with an odd number of
    electrons get $(n+1)/2$ and $(n-1)/2$ LWFs in an alternating
    sequence, in order of appearance (controlled by the input in the
    atomic coordinates block).

    \option[files]%
    Reads localized-function information from a file and chooses
    automatically the functional to be used.
  \end{fdfoptions}

\end{fdfentry}

\begin{fdfentry}{ON.MaxNumIter}[integer]<1000>
  
  Maximum number of iterations in the conjugate minimization of the
  electronic energy, in each SCF cycle.

\end{fdfentry}

\begin{fdfentry}{ON.Etol}[real]<$10^{-8}$>
  
  Relative-energy tolerance in the conjugate minimization of the
  electronic energy. The minimization finishes if 
  $2 (E_n - E_{n-1}) / (E_n + E_{n-1}) \leq $ \fdf{ON.Etol}.

\end{fdfentry}

\begin{fdfentry}{ON.eta}[energy]<$0\,\mathrm{eV}$>

  Fermi level parameter of Kim \textit{et al.}. This should be in the
  energy gap, and tuned to obtain the correct number of electrons. If
  the calculation is spin polarised, then separate Fermi levels for
  each spin can be specified.
  
\end{fdfentry}

\begin{fdfentry}{ON.eta.alpha}[energy]<$0\,\mathrm{eV}$>

  Fermi level parameter of Kim \textit{et al.} for alpha spin
  electrons.  This should be in the energy gap, and tuned to obtain
  the correct number of electrons. Note that if the Fermi level is not
  specified individually for each spin then the same global eta will
  be used.

\end{fdfentry}

\begin{fdfentry}{ON.eta.beta}[energy]<$0\,\mathrm{eV}$>

  Fermi level parameter of Kim \textit{et al.} for beta spin
  electrons.  This should be in the energy gap, and tuned to obtain
  the correct number of electrons. Note that if the Fermi level is not
  specified individually for each spin then the same global eta will
  be used.

\end{fdfentry}
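For spin-polarized calculations, separate shifts can thus be given for each spin channel; for example (the values are purely illustrative):
\begin{fdfexample}
   ON.eta.alpha  -2.0 eV
   ON.eta.beta   -1.5 eV
\end{fdfexample}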

\begin{fdfentry}{ON.RcLWF}[length]<$9.5\,\mathrm{Bohr}$>
  \index{Localized Wave Functions}

  Localization radius for the Localized Wave Functions (LWFs).

\end{fdfentry}

\begin{fdflogicalF}{ON.ChemicalPotential}
  \index{Chemical Potential}
  
  Specifies whether to calculate an order-$N$ estimate of the Chemical
  Potential, by the projection method (Goedecker and Teter, PRB
  \textbf{51}, 9455 (1995); Stephan, Drabold and Martin, PRB
  \textbf{58}, 13472 (1998)). This is done by expanding the Fermi
  function (or density matrix) at a given temperature, by means of
  Chebyshev polynomials\index{Chebyshev Polynomials}, and imposing a
  real space truncation on the density matrix.  To obtain a realistic
  estimate, the temperature should be small enough (typically, smaller
  than the energy gap), the localization range large enough (of the
  order of the one you would use for the Localized Wannier Functions),
  and the order of the polynomial expansion sufficiently large (how
  large depends on the temperature; typically, 50-100).


  \note this option does not work in parallel. An alternative is to
  obtain the approximate value of the chemical potential using an
  initial diagonalization.

\end{fdflogicalF}


\begin{fdflogicalF}{ON.ChemicalPotential.Use}
  \index{Chemical Potential}

  Specifies whether to use the calculated estimate of the Chemical
  Potential, instead of the parameter
  \fdf{ON.eta} for the
  order-$N$ energy functional minimization.  This is useful if
  you do not know the position of the Fermi level, typically in the
  beginning of an order-$N$ run.

  \note this overrides the value of \fdf{ON.eta} and \fdf{ON.ChemicalPotential}.
  Also, this option does not work in parallel. An alternative
  is to obtain the approximate value of the chemical potential using
  an initial diagonalization.
  
\end{fdflogicalF}

\begin{fdfentry}{ON.ChemicalPotential.Rc}[length]<$9.5\,\mathrm{Bohr}$>
  \index{Chemical Potential}

  Defines the cutoff radius for the density matrix or Fermi operator
  in the calculation of the estimate of the Chemical Potential.

\end{fdfentry}

\begin{fdfentry}{ON.ChemicalPotential.Temperature}[temperature/energy]<$0.05\,\mathrm{Ry}$>
  \index{Chemical Potential}

  Defines the temperature to be used in the Fermi function expansion
  in the calculation of the estimate of the Chemical Potential.  To
  obtain accurate results, this temperature should be smaller than
  the gap of the system.

\end{fdfentry}

\begin{fdfentry}{ON.ChemicalPotential.Order}[integer]<$100$>
  \index{Chemical Potential}

  Order of the Chebyshev expansion to calculate the estimate of the
  Chemical Potential.

\end{fdfentry}

\begin{fdflogicalF}{ON.LowerMemory}
  \index{Lower order N memory}

  If \fdftrue, then a slightly reduced memory algorithm is used in the
  3-point line search during the order N minimisation. Only affects
  parallel runs.

\end{fdflogicalF}


\paragraph{Output of localized wavefunctions}
\index{Localized Wave Functions}

At the end of each conjugate gradient minimization of the energy
functional, the LWF's are stored on disk. These can be used as an
input for the same system in a restart, or in case something goes
wrong.  The LWFs are stored in sparse form in the file \sysfile{LWF}.

It is important to take very good care of this file, since the first
minimizations can take MANY steps. Losing it will mean performing
the whole minimization again. It is also good practice to save it
periodically during the simulation, in case a mid-run restart is
necessary.

\begin{fdflogicalF}{ON.UseSaveLWF}
  \index{reading saved data!localized wave functions (order-$N$)}
  \index{Restart of O(N) calculations}

  Instructs to read the localized wave functions stored in file
  \sysfile{LWF} by a previous run.

\end{fdflogicalF}


\subsection{The CheSS solver}
\label{SolverCheSS}
\index{CheSS solver}

The CheSS solver uses an expansion based on Chebyshev polynomials to calculate
the density matrix, thereby exploiting the sparsity of the overlap
and Hamiltonian matrices.
It works best for systems exhibiting a finite HOMO-LUMO gap and a small spectral width.

CheSS features a two-level parallelization using MPI and OpenMP and can
scale to many thousands of cores.
It can be downloaded and installed freely from
\url{https://launchpad.net/chess}.

See Sec.~\ref{sec:libs} for details on installing \siesta\ with CheSS.

\subsubsection{Input parameters}
CheSS usually requires little user input, as the default values for the input
parameters generally work quite well. Moreover, CheSS can determine certain
optimal values on its own. The only input parameters that usually require some
human attention are the buffers required for the matrix multiplications used
to calculate the Chebyshev polynomials.
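For example, the buffers can be enlarged from their defaults as follows (the values shown are merely illustrative; suitable values depend on the basis set and system):
\begin{fdfexample}
   CheSS.Buffer.Kernel  5.0 Bohr
   CheSS.Buffer.Mult    7.0 Bohr
\end{fdfexample}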

\begin{fdfentry}{CheSS!Buffer!Kernel}[length]<$4.0\,\mathrm{Bohr}$>
  Buffer for the density kernel within the CheSS calculation.
\end{fdfentry}

\begin{fdfentry}{CheSS!Buffer!Mult}[length]<$6.0\,\mathrm{Bohr}$>
  Buffer for the matrix vector multiplication within the CheSS calculation.
\end{fdfentry}

\begin{fdfentry}{CheSS!Fscale}[energy]<$10^{-1}\,\mathrm{Ry}$>
	Initial guess for the error function decay length (will be adjusted automatically).
\end{fdfentry}

\begin{fdfentry}{CheSS!FscaleLowerbound}[energy]<$10^{-2}\,\mathrm{Ry}$>
	Lower bound for the error function decay length.
\end{fdfentry}

\begin{fdfentry}{CheSS!FscaleUpperbound}[energy]<$10^{-1}\,\mathrm{Ry}$>
	Upper bound for the error function decay length.
\end{fdfentry}

\begin{fdfentry}{CheSS!evlowH}[energy]<$-2.0\,\mathrm{Ry}$>
	Initial guess for the lower bound of the eigenvalue spectrum of the Hamiltonian matrix; it will be adjusted automatically if chosen improperly.
\end{fdfentry}

\begin{fdfentry}{CheSS!evhighH}[energy]<$2.0\,\mathrm{Ry}$>
	Initial guess for the upper bound of the eigenvalue spectrum of the Hamiltonian matrix; it will be adjusted automatically if chosen improperly.
\end{fdfentry}

\begin{fdfentry}{CheSS!evlowS}[real]<$0.5$>
	Initial guess for the lower bound of the eigenvalue spectrum of the overlap matrix; it will be adjusted automatically if chosen improperly.
\end{fdfentry}

\begin{fdfentry}{CheSS!evhighS}[real]<$1.5$>
	Initial guess for the upper bound of the eigenvalue spectrum of the overlap matrix; it will be adjusted automatically if chosen improperly.
\end{fdfentry}

\subsection{The PEXSI solver}
\label{SolverPEXSI}
\index{PEXSI solver}

The PEXSI solver is based on the combination of the pole expansion of
the Fermi-Dirac function and the computation of only a selected
(sparse) subset of the elements of the matrices $(H-z_lS)^{-1}$ at
each pole $z_l$.

This solver can efficiently use the sparsity pattern of
the Hamiltonian and overlap matrices generated in SIESTA, and for
large systems has a much lower computational complexity than that
associated with the matrix diagonalization procedure. It is also
highly scalable.

The PEXSI technique can be used to evaluate the electron density, free
energy, atomic forces, density of states and local density of states
without computing any eigenvalue or eigenvector of the Kohn-Sham
Hamiltonian. It can achieve accuracy fully comparable to that obtained
from a matrix diagonalization procedure for general systems, including
metallic systems at low temperature.  

The current implementation of the PEXSI solver in \siesta\ makes
use of the full fine-grained-level interface in the PEXSI library
(\url{http://pexsi.org}), and can deal with spin-polarization, but it
is still restricted to $\Gamma$-point calculations. 

The following is a brief description of the input-file parameters
relevant to the workings of the PEXSI solver. For more background,
including a discussion of the conditions under which this solver is
competitive, the user is referred to the paper \citet{Lin2014}, and
references therein.

The technology involved in the PEXSI solver can also be used
to compute densities of states and ``local densities of
states''. These features are documented in this section and also
linked to in the relevant general sections.

\subsubsection{Pole handling}

Note that the temperature for the Fermi-Dirac distribution which is
pole-expanded is taken directly from the \fdf{ElectronicTemperature}
parameter (see Sec.~\ref{electronic-occupation}).

\begin{fdfentry}{PEXSI!NumPoles}[integer]<40>

  Effective number of poles used to expand the Fermi-Dirac function.
  
\end{fdfentry}

\begin{fdfentry}{PEXSI!deltaE}[energy]<$3\,\mathrm{Ry}$>
  
  In principle \fdf{PEXSI!deltaE} should be $E_{\max}-\mu$, where
  $E_{\max}$ is the largest eigenvalue for ($H$,$S$), and $\mu$ is the
  chemical potential. However, due to the fast decay of the
  Fermi-Dirac function, \fdf{PEXSI!deltaE} can often be chosen to be
  much lower.  In practice we set the default to 3 Ry.  This
  number should be set larger if the difference between
  $\Tr[\mathrm H\cdot\mathrm{DM}]$ and $\Tr[\mathrm S\cdot\mathrm{EDM}]$
  (displayed in the output if \fdf{PEXSI!Verbosity} is at least 2)
  does not decrease as the number of poles is increased.

\end{fdfentry}


\begin{fdfentry}{PEXSI!Gap}[energy]<$0\,\mathrm{Ry}$>

  Spectral gap. This can be set to 0 in most cases.

\end{fdfentry}


\subsubsection{Parallel environment and control options}

\begin{fdfentry}{MPI!Nprocs.SIESTA}[integer]<\nonvalue{total processors}>

  Specifies the number of MPI processes to be used in those parts of
  the program (such as Hamiltonian setup and computation of forces)
  which are outside of the PEXSI solver itself. This is needed in
  large-scale calculations, for which the number of processors that
  can be used by the PEXSI solver is much higher than those needed by
  other parts of the code.
  
  Note that when the PEXSI solver is not used, this parameter will
  simply reduce the number of processors actually used by all parts of
  the program, leaving the rest idle for the whole calculation. This
  will adversely affect the computing budget, so take care not to use
  this option in that case.
  
\end{fdfentry}

\begin{fdfentry}{PEXSI!NP-per-pole}[integer]<4>

  Number of MPI processes used to perform the PEXSI computations in
  one pole. If the total number of MPI processes is smaller than this
  number times the number of poles (times the spin multiplicity), the
  PEXSI library will compute appropriate groups of poles in
  sequence. The minimum time to solution is achieved by increasing
  this parameter as much as it is reasonable for parallel efficiency,
  and using enough MPI processes to allow complete parallelization
  over poles. On the other hand, the minimum computational cost (in
  the sense of computing budget) is obtained by using the minimum
  value of this parameter which is compatible with the memory
  footprint. The additional parallelization over poles will be
  irrelevant for cost, but it will obviously affect the time to
  solution.

  Internally, \siesta\ computes the processor grid parameters
  \shell{nprow} and \shell{npcol} for the PEXSI library, with
  \shell{nprow} $\geq$ \shell{npcol}, and as similar as possible. So it
  is best to choose \fdf{PEXSI!NP-per-pole} as the product of two
  similar numbers.
  
  \note The total number of MPI processes must be divisible by
  \fdf{PEXSI!NP-per-pole}. In case of spin-polarized calculations, the
  total number of MPI processes must be divisible by
  \fdf{PEXSI!NP-per-pole} times 2.

\end{fdfentry}

\begin{fdfentry}{PEXSI!Ordering}[integer]<1>

  For large matrices, symbolic factorization should be performed in
  parallel to reduce the wall clock time.  This can be done using
  ParMETIS/PT-Scotch by setting \fdf{PEXSI!Ordering} to 0.  However,
  we have experienced some instability problems in the symbolic
  factorization phase when ParMETIS/PT-Scotch is used.  In such cases,
  for relatively small matrices one can either use the sequential
  METIS (\fdf{PEXSI!Ordering} = 1) or set \fdf{PEXSI!NP-symbfact} to
  1.
  
\end{fdfentry}


\begin{fdfentry}{PEXSI!NP-symbfact}[integer]<1>

  Number of MPI processes used to perform the symbolic factorizations
  needed in the PEXSI procedure.  The default value is chosen
  conservatively to reduce instability problems.  Experience so far
  indicates that setting this to 1 is most stable, and that going
  beyond 64 does not usually improve performance much.

\end{fdfentry}

\begin{fdfentry}{PEXSI!Verbosity}[integer]<1>

  Determines the amount of information logged by the solver in
  different places. A value of zero gives minimal information.
  \begin{itemize}

    \item%
    In the files logPEXSI[0-9]+, the verbosity level is interpreted by
    the PEXSI library itself. In the latest version, when PEXSI is
    compiled in RELEASE mode, only logPEXSI0 is given in the output.
    This is because we have observed that simultaneous output for all
    processors can have very significant cost for a large number of
    processors ($>$10000).  

    \item%
    In the SIESTA output file, a verbosity level of 1 and above will
    print lines (prefixed by \shell{\&o}) indicating the various heuristics
    used at each scf step. A verbosity level of 2 and above will print
    extra information.

  \end{itemize}
  The design of the output logging is still in flux.
  
\end{fdfentry}

\subsubsection{Electron tolerance and the PEXSI solver}


\begin{fdfentry}{PEXSI!num-electron-tolerance}[real]<$10^{-4}$>

  Tolerance in the number of electrons for the PEXSI solver. At each
  iteration of the solver, the number of electrons is computed as the
  trace of the density matrix times the overlap matrix, and compared
  with the total number of electrons in the system. This tolerance can
  be fixed, or dynamically determined as a function of the degree of
  convergence of the self-consistent-field loop.
  
\end{fdfentry}

\begin{fdfentry}{PEXSI!num-electron-tolerance-lower-bound}[real]<$10^{-2}$>

  See \fdf{PEXSI!num-electron-tolerance-upper-bound}.

\end{fdfentry}

\begin{fdfentry}{PEXSI!num-electron-tolerance-upper-bound}[real]<$0.5$>

  The upper and lower bounds for the electron tolerance are used to
  dynamically change the tolerance in the PEXSI solver, following the
  simple algorithm:
\begin{verbatim}
  tolerance = Max(lower_bound,Min(dDmax, upper_bound))
\end{verbatim}
  The first scf step uses the upper bound of the tolerance range, and
  subsequent steps use progressively lower values, in correspondence
  with the convergence-monitoring variable \fdf*{dDmax}.
  
  \note This simple update schedule tends to work quite well. There is
  an experimental algorithm, documented only in the code itself, which
  allows a finer degree of control of the tolerance update.

\end{fdfentry}


\begin{fdfentry}{PEXSI!mu-max-iter}[integer]<$10$>

  Maximum number of iterations of the PEXSI solver. Note that in this
  implementation there is no fallback procedure if the solver fails to
  converge in this number of iterations to the prescribed
  tolerance. In this case, the resulting density matrix might still be
  re-normalized, and the calculation able to continue, if the
  tolerance for non-normalized DMs is not set too tight. For example,
\begin{verbatim}
  # (true_no_electrons/no_electrons) - 1.0
    DM.NormalizationTolerance 1.0e-3  
\end{verbatim}
  will allow a 0.1\% error in the number of electrons. For obvious
  reasons, this feature, which is also useful in connection with the
  dynamic tolerance update, should not be abused.

  If the parameters of the PEXSI solver are adjusted correctly
  (including a judicious use of inertia-counting to refine the $\mu$
  bracket), we should expect that the maximum number of solver
  iterations needed is around 3.

\end{fdfentry}

\begin{fdfentry}{PEXSI!mu}[energy]<$-0.6\,\mathrm{Ry}$>
  
  The starting guess for the chemical potential for the PEXSI
  solver. Note that this value does not affect the initial $\mu$
  bracket for the inertia-count refinement, which is controlled by
  \fdf{PEXSI!mu-min} and \fdf{PEXSI!mu-max}. After an inertia-count
  phase, $\mu$ will be reset, and further iterations inherit this
  estimate, so this parameter is only relevant if there is no
  inertia-counting phase.

\end{fdfentry}

\begin{fdfentry}{PEXSI!mu-pexsi-safeguard}[energy]<$0.05\,\mathrm{Ry}$>

  \note This feature has been deactivated for now. The condition for
  starting a new phase of inertia-counting is that the Newton
  estimation falls outside the current bracket. The bracket is
  expanded accordingly.
        
  The PEXSI solver uses Newton's method to update the estimate of
  $\mu$.  If the attempted change in $\mu$ is larger than
  \fdf{PEXSI!mu-pexsi-safeguard}, the solver cycle is stopped and a
  fresh phase of inertia-counting is started.

\end{fdfentry}

\subsubsection{Inertia-counting}


\begin{fdfentry}{PEXSI!Inertia-Counts}[integer]<3>

  In a given scf step, the PEXSI procedure can optionally employ a
  $\mu$ bracket-refinement procedure based on
  inertia-counting. Typically, this is used only in the first few scf
  steps, and this parameter determines how many. If positive,
  inertia-counting will be performed for exactly that number of scf
  steps. If negative, inertia-counting will be performed for at least
  that number of scf steps, and then for as long as the scf cycle is
  not yet deemed to be near convergence (as determined by the
  \fdf{PEXSI!safe-dDmax-no-inertia} parameter).

  \note Since it is cheaper to perform an inertia-count phase than to
  execute one iteration of the solver, it pays to call the solver only
  when the $\mu$ bracket is sufficiently refined.

\end{fdfentry}

\begin{fdfentry}{PEXSI!mu-min}[energy]<$-1\,\mathrm{Ry}$>

  The lower bound of the initial range for $\mu$ used in the
  inertia-count refinement. In runs with multiple geometry iterations,
  it is used only for the very first scf iteration at the first
  geometry step. Further iterations inherit possibly refined values of
  this parameter.

\end{fdfentry}

\begin{fdfentry}{PEXSI!mu-max}[energy]<$0\,\mathrm{Ry}$>

  The upper bound of the initial range for $\mu$ used in the
  inertia-count refinement. In runs with multiple geometry iterations,
  it is used only for the very first scf iteration at the first
  geometry step. Further iterations inherit possibly refined values of
  this parameter.

\end{fdfentry}
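As an indicative example, the following input performs inertia-counting for at least three scf steps, starting from an initial bracket of $[-1.0, 0.5]$ Ry (the values are purely illustrative):
\begin{fdfexample}
   PEXSI.Inertia-Counts  -3
   PEXSI.mu-min          -1.0 Ry
   PEXSI.mu-max           0.5 Ry
\end{fdfexample}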

\begin{fdfentry}{PEXSI!safe-dDmax-no-inertia}[real]<0.05>

  During the scf cycle, the variable conventionally called
  \fdf*{dDmax} monitors how far the cycle is from convergence. If
  \fdf{PEXSI!Inertia-Counts} is negative, an inertia-counting phase
  will be performed in a given scf step for as long as \fdf*{dDmax} is
  greater than \fdf{PEXSI!safe-dDmax-no-inertia}.

  \note Even though \fdf*{dDmax} represents historically how far from
  convergence the density-matrix is, the same mechanism applies to
  other forms of mixing in which other magnitudes are monitored for
  convergence (Hamiltonian, charge density...).
  
\end{fdfentry}

\begin{fdfentry}{PEXSI!lateral-expansion-inertia}[energy]<$3\,\mathrm{eV}$>

  If the correct $\mu$ is outside the bracket provided to the
  inertia-counting phase, the bracket is expanded in the appropriate
  direction(s) by this amount.

\end{fdfentry}

\begin{fdfentry}{PEXSI!Inertia-mu-tolerance}[energy]<$0.05\,\mathrm{Ry}$>

  One of the criteria for early termination of the inertia-counting
  phase.  The value of the estimated $\mu$ (basically the center of
  the resulting brackets) is monitored, and the cycle stopped if its
  change from one iteration to the next is below this parameter.
  
\end{fdfentry}

\begin{fdfentry}{PEXSI!Inertia-max-iter}[integer]<$5$>

  Maximum number of inertia-count iterations per cycle.

\end{fdfentry}

\begin{fdfentry}{PEXSI!Inertia-min-num-shifts}[integer]<$10$>
  
  Minimum number of sampling points for inertia counts.

\end{fdfentry}

\begin{fdfentry}{PEXSI!Inertia-energy-width-tolerance}[energy]<\fdfvalue{PEXSI!Inertia-mu-tolerance}>

  One of the criteria for early termination of the inertia-counting
  phase.  The cycle stops if the width of the resulting bracket is
  below this parameter.
  
\end{fdfentry}


\subsubsection{Re-use of \texorpdfstring{$\mu$}{u} information across iterations}

This is an important issue, as the efficiency of the PEXSI procedure
depends on how close a guess of $\mu$ we have at our
disposal. There are two types of information re-use:

\begin{itemize}
  \item%
  Bracketing information used in the inertia-counting phase.
  
  \item%
  The values of $\mu$ itself for the solver.
\end{itemize}

\begin{fdfentry}{PEXSI!safe-width-ic-bracket}[energy]<$4\,\mathrm{eV}$>

  By default, the $\mu$ bracket used for the inertia-counting phase in
  scf steps other than the first is taken as an interval of width
  \fdf{PEXSI!safe-width-ic-bracket} around the latest estimate of
  $\mu$.
  
\end{fdfentry}


\begin{fdfentry}{PEXSI!safe-dDmax-ef-inertia}[real]<$0.1$>

  The change in $\mu$ from one scf iteration to the next can be
  crudely estimated by assuming that the change in the band structure
  energy (estimated as Tr$\Delta H$DM) is due to a rigid shift.  When
  the scf cycle is near convergence, this $\Delta\mu$ can be used to
  estimate the new initial bracket for the inertia-counting phase,
  rigidly shifting the output bracket from the previous scf step.  The
  cycle is assumed to be near convergence when the monitoring variable
  \fdf*{dDmax} is smaller than \fdf{PEXSI!safe-dDmax-ef-inertia}.

  \note Even though \fdf*{dDmax} represents historically how far from
  convergence the density-matrix is, the same mechanism applies to
  other forms of mixing in which other magnitudes are monitored for
  convergence (Hamiltonian, charge density...).

  \note This criterion will in general lead to tighter brackets than
  the previous one, but oscillations in H in the first few iterations
  might make it more dangerous. More information from real use cases
  is needed to refine the heuristics in this area.

\end{fdfentry}

\begin{fdfentry}{PEXSI!safe-dDmax-ef-solver}[real]<0.05>
  
  When the scf cycle is near convergence, the $\Delta\mu$ estimated as
  above can be used to shift the initial guess for $\mu$ for the PEXSI
  solver.  The cycle is assumed to be near convergence when the
  monitoring variable \fdf*{dDmax} is smaller than \fdf{PEXSI!safe-dDmax-ef-solver}.

  \note Even though \fdf*{dDmax} represents historically how far from
  convergence the density-matrix is, the same mechanism applies to
  other forms of mixing in which other magnitudes are monitored for
  convergence (Hamiltonian, charge density...).

\end{fdfentry}

\begin{fdfentry}{PEXSI!safe-width-solver-bracket}[energy]<$4\,\mathrm{eV}$>

  In all cases, a ``safe'' bracket around $\mu$ is provided even in
  direct calls to the PEXSI solver, in case a fallback to executing
  internally a cycle of inertia-counting is needed. The size of the
  bracket is given by \fdf{PEXSI!safe-width-solver-bracket}.

\end{fdfentry}

\subsubsection{Calculation of the density of states by
  inertia-counting}
\label{pexsi-dos}

The cumulative or integrated density of states (INTDOS) can be easily
obtained by inertia-counting, which involves a factorization of
$H-\sigma S$ for varying $\sigma$ (see SIESTA-PEXSI paper).  Apart
from the DOS-specific options below, the ``ordering'', ``symbolic
factorization'', and ``pole group size'' (re-interpreted as the number
of MPI processes dealing with a given $\sigma$) options are honored.

The current version of the code generates a file with the
energy-INTDOS information, \file{PEXSI\_INTDOS}, which can be later
processed to generate the DOS by direct numerical differentiation, or
a \siesta-style \sysfile{EIG} file (using the \program{Util/PEXSI/intdos2eig}
program).
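A minimal input fragment to request the INTDOS could look like the following (the energy window and number of points are illustrative only; with \fdf{PEXSI!DOS!Ef.Reference} at its default of true, the window is relative to the Fermi level):
\begin{fdfexample}
   PEXSI.DOS           T
   PEXSI.DOS.Emin     -1.0 Ry
   PEXSI.DOS.Emax      1.0 Ry
   PEXSI.DOS.NPoints   400
\end{fdfexample}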

\begin{fdflogicalF}{PEXSI!DOS}

  Whether to compute the DOS (actually, the INTDOS --- see above)
  using the PEXSI technology.
  
\end{fdflogicalF}

\begin{fdfentry}{PEXSI!DOS!Emin}[energy]<$-1\,\mathrm{Ry}$>

  Lower bound of energy window to compute the DOS in.

  See \fdf{PEXSI!DOS!Ef.Reference}.

\end{fdfentry}

\begin{fdfentry}{PEXSI!DOS!Emax}[energy]<$1\,\mathrm{Ry}$>

  Upper bound of energy window to compute the DOS in.

  See \fdf{PEXSI!DOS!Ef.Reference}.

\end{fdfentry}

\begin{fdflogicalT}{PEXSI!DOS!Ef.Reference}

  If this flag is true, the bounds of the energy window
  (\fdf{PEXSI!DOS!Emin} and \fdf{PEXSI!DOS!Emax}) are with respect to
  the Fermi level.

\end{fdflogicalT}

\begin{fdfentry}{PEXSI!DOS!NPoints}[integer]<200>

  The number of points in the energy interval at which the DOS is
  computed. It is rounded up to the nearest multiple of the number of
  available factorization groups, as the operations are perfectly
  parallel and there will be no extra cost involved.
  
\end{fdfentry}

\subsubsection{Calculation of the LDOS by selected-inversion}
\label{pexsi-ldos}

The local-density-of-states (LDOS) around a given reference energy
$\varepsilon$, representing the contribution to the charge density of
the states with eigenvalues in the vicinity of $\varepsilon$, can be
obtained formally by a ``one-pole expansion'' with suitable broadening
(see SIESTA-PEXSI paper).

Apart from the LDOS-specific options below, the ``ordering'',
``verbosity'', and ``symbolic factorization'' options are honored.

The current version of the code generates a real-space grid file with
extension \sysfile{LDSI}, and (if netCDF is compiled-in) a file
\file{Rho.grid.nc} (which unfortunately will overwrite any other
charge-density files produced in the same run).

NOTE: The LDOS computed with this procedure is not exactly the same as
the vanilla \siesta\ LDOS, which uses an explicit energy
interval. Here the broadening acts around a single value of the
energy.
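A minimal input fragment to request the LDOS could look like the following (the energy and broadening values are illustrative only):
\begin{fdfexample}
   PEXSI.LDOS             T
   PEXSI.LDOS.Energy      0.0 Ry
   PEXSI.LDOS.Broadening  0.01 Ry
\end{fdfexample}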


\begin{fdflogicalF}{PEXSI!LDOS}

  Whether to compute the LDOS using the PEXSI technology.
  
\end{fdflogicalF}

\begin{fdfentry}{PEXSI!LDOS!Energy}[energy]<$0\,\mathrm{Ry}$>

  The (absolute) energy at which to compute the LDOS.

\end{fdfentry}

\begin{fdfentry}{PEXSI!LDOS!Broadening}[energy]<$0.01\,\mathrm{Ry}$>

  The broadening parameter for the LDOS.

\end{fdfentry}

\begin{fdfentry}{PEXSI!LDOS!NP-per-pole}[integer]<\fdfvalue{PEXSI!NP-per-pole}>

  The value of this parameter supersedes \fdf{PEXSI!NP-per-pole} for
  the calculation of the LDOS, which otherwise would keep idle all but
  \fdf{PEXSI!NP-per-pole} MPI processes, as it essentially consists of
  a ``one-pole'' procedure.

\end{fdfentry}

\subsection{Band-structure analysis}
\label{sec:band-structure}

The calculation of the band structure is optionally performed after
the geometry loop finishes, and the output information is written
to the \sysfile{bands} file (see below for the format).

\begin{fdfentry}{BandLinesScale}[string]<pi/a>
  
  Specifies the scale of the $k$ vectors given in \fdf{BandLines}
  and \fdf{BandPoints} below.  The options are:
  \begin{fdfoptions}
    \option[pi/a]%
    k-vector coordinates are given in Cartesian coordinates, in units
    of $\pi/a$, where $a$ is the lattice constant

    \option[ReciprocalLatticeVectors]%
    $k$ vectors are given in reciprocal-lattice-vector coordinates
  \end{fdfoptions}

  \note you might need to define a LatticeConstant tag explicitly in
  your fdf file if you do not already have one, and make it consistent
  with the scale of the $k$-points and any unit-cell vectors you might
  have already defined.

\end{fdfentry}

\begin{fdfentry}{BandLines}[block]

  Specifies the lines along which band energies are calculated
  (usually along high-symmetry directions).  An example for an FCC
  lattice is:
  \begin{fdfexample}
     %block BandLines
       1  1.000  1.000  1.000  L        # Begin at L
      20  0.000  0.000  0.000  \Gamma   # 20 points from L to gamma
      25  2.000  0.000  0.000  X        # 25 points from gamma to X
      30  2.000  2.000  2.000  \Gamma   # 30 points from X to gamma
     %endblock BandLines
  \end{fdfexample}
  where the last column is an optional \LaTeX\ label for use in the
  band plot. If only given points (not lines) are required, simply
  specify 1 in the first column of each line. The first column of the
  first line must always be 1.

  \note this block is not used if \fdf{BandPoints} is present.

\end{fdfentry}

\begin{fdfentry}{BandPoints}[block]
  
  Band energies are calculated for the list of arbitrary $k$ points
  given in the block. Units defined by \fdf{BandLinesScale} as for
  \fdf{BandLines}. The generated \sysfile{bands} file will contain the
  $k$ point coordinates (in a.u.) and the corresponding band energies
  (in eV). Example:
  \begin{fdfexample}
     %block BandPoints
        0.000  0.000  0.000   # This is a comment. eg this is gamma
        1.000  0.000  0.000
        0.500  0.500  0.500
     %endblock BandPoints
  \end{fdfexample}

  See also \fdf{BandLines}.
\end{fdfentry}


\begin{fdflogicalF}{WriteKbands}
  \index{output!band $\vec k$ points}

  If \fdftrue, it writes the coordinates of the $\vec k$ vectors
  defined for band plotting to the main output file.

\end{fdflogicalF}

\begin{fdflogicalF}{WriteBands}
  \index{output!band structure}%
  \index{band structure}

  If \fdftrue, it writes the Hamiltonian eigenvalues corresponding to
  the $\vec k$ vectors defined for band plotting, in the main output
  file. 

\end{fdflogicalF}


\subsubsection{Format of the .bands file}

\begin{verbatim}

FermiEnergy (all energies in eV)
kmin, kmax (along the k-lines path, i.e. range of k in the band plot)
Emin, Emax (range of all eigenvalues)
NumberOfBands, NumberOfSpins (1 or 2), NumberOfkPoints
k1, ((ek(iband,ispin,1),iband=1,NumberOfBands),ispin=1,NumberOfSpins)
k2, ek
 .
 .
 .
klast, ek
NumberOfkLines
kAtBegOfLine1, kPointLabel
kAtEndOfLine1, kPointLabel
  .
  .
  .
kAtEndOfLastLine, kPointLabel
\end{verbatim}
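As a sketch, the layout above can be read with a few lines of Python (treating the file as whitespace-separated tokens; the function and variable names are illustrative, and the reader stops after the eigenvalue records, ignoring the line labels):

```python
def read_bands(fname):
    """Minimal reader for the .bands layout documented above."""
    with open(fname) as f:
        tokens = f.read().split()
    pos = 0
    def take(n):
        nonlocal pos
        pos += n
        return tokens[pos - n:pos]
    e_fermi = float(take(1)[0])
    kmin, kmax = map(float, take(2))
    emin, emax = map(float, take(2))
    nbands, nspins, nk = map(int, take(3))
    kpath, eigs = [], []
    for _ in range(nk):
        kpath.append(float(take(1)[0]))          # k along the path
        eigs.append([float(x) for x in take(nbands * nspins)])
    return e_fermi, (kmin, kmax), (emin, emax), kpath, eigs
```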

\noindent
The \program{gnubands}\index{gnubands@\texttt{gnubands}} postprocessing
utility program (found in the Util/Bands directory) reads the
\sysfile{bands} file for plotting.  See the \fdf{BandLines} data
descriptor above for more information.


\subsubsection{Output of wavefunctions associated with bands}
\label{sec:wf-bands}

The user can optionally request that the wavefunctions corresponding
to the computed bands be written to file.  They are written to the
\sysfile{bands.WFSX} file.
The relevant options are:

\begin{fdflogicalF}{WFS.Write.For.Bands}
  \index{output of wave functions for bands}
  
  Instructs the program to compute and write the wave functions
  associated with the bands specified (by a \fdf{BandLines} or a
  \fdf{BandPoints} block) to the file \sysfile{bands.WFSX}.

  The information in this file might be useful, among other things, to
  generate ``fatbands'' plots, in which both band eigenvalues and
  information about orbital projections is presented.
  \index{fatbands} See the \program{fat} program in the
  \texttt{Util/COOP} directory for details.

\end{fdflogicalF}

\begin{fdfentry}{WFS.Band.Min}[integer]<1>
  \index{output of wave functions for bands}
  
  Specifies the lowest band index of the wave-functions to be written
  to the file \sysfile{WFSX} for each $k$-point (all $k$-points in the
  band set are affected).

\end{fdfentry}

\begin{fdfentry}{WFS.Band.Max}[integer]<number of orbitals>
  \index{output of wave functions for bands}
  
  Specifies the highest band index of the wave-functions to be written
  to the file \sysfile{WFSX} for each $k$-point (all $k$-points in the
  band set are affected).

\end{fdfentry}

\subsection{Output of selected wavefunctions}
\label{sec:wf-output-user}

The user can optionally request that specific wavefunctions be
written to file. These wavefunctions are re-computed after the
geometry loop (if any) finishes, using the last (presumably converged)
density matrix produced during the last self-consistent field loop
(after a final mixing). They are written to the
\sysfile{selected.WFSX} file.

Note that the complete set of wavefunctions obtained during the last
iteration of the SCF loop will be written to \sysfile{fullBZ.WFSX}
if the \fdf{COOP.Write} option is in effect.

Note that the complete set of wavefunctions obtained during the last
iteration of the SCF loop will be written to a NetCDF file
\file{WFS.nc} if the \fdf{Diag!UseNewDiagk} option is in effect.

\begin{fdfentry}{WaveFuncKPointsScale}[string]<pi/a>
  
  Specifies the scale of the $k$ vectors given in
  \fdf{WaveFuncKPoints} below.  The options are:
  \begin{fdfoptions}
    \option[pi/a]%
    k-vector coordinates are given in Cartesian coordinates, in units
    of $\pi/a$, where $a$ is the lattice constant

    \option[ReciprocalLatticeVectors]%
    $k$ vectors are given in reciprocal-lattice-vector coordinates

  \end{fdfoptions}

\end{fdfentry}

\begin{fdfentry}{WaveFuncKPoints}[block]
  
  Specifies the $k$-points at which the electronic wavefunction
  coefficients are written.  An example for an FCC lattice is:
  \begin{fdfexample}
     %block WaveFuncKPoints
     0.000  0.000  0.000  from 1 to 10   # Gamma wavefuncs 1 to 10
     2.000  0.000  0.000  1 3 5          # X wavefuncs 1,3 and 5
     1.500  1.500  1.500                 # K wavefuncs, all
     %endblock WaveFuncKPoints
  \end{fdfexample}
  The index of a wavefunction is defined by its energy, so that the
  first one has the lowest energy.

  The user can also narrow the energy range used with the
  \fdf{WFS.Energy.Min} and \fdf{WFS.Energy.Max} options (both take an
  energy, with units, as an extra argument -- see
  section~\ref{sec:coop}). Care should be taken to make sure that the
  actual values of the options make sense.

  The output of the wavefunctions is described in
  Section~\ref{sec:wf-output-user}.

\end{fdfentry}


\begin{fdflogicalF}{WriteWaveFunctions}
  \index{output!wave functions}

  If \fdftrue, it writes to the output file a list of the
  wavefunctions actually written to the \sysfile{selected.WFSX} file,
  which is always produced.

\end{fdflogicalF}

The unformatted WFSX file contains the information on the $k$-points
for which wavefunction coefficients are written, and the energies and
coefficients of each wavefunction specified in the input file (see the
\fdf{WaveFuncKPoints} descriptor above). It also contains information
on the atomic species and the orbitals, for postprocessing purposes.

\textbf{NOTE:} The \sysfile{WFSX} file is in a more compact
form than the old WFS, and the wavefunctions are output in single
precision. The \program{Util/WFS/wfsx2wfs} program can be used to
convert to the old format.

\noindent
The \program{readwf}\index{readwf} and
\program{readwfsx}\index{readwfsx} postprocessing utility programs
(found in the Util/WFS directory) read the \sysfile{WFS} or
\sysfile{WFSX} files, respectively, and generate a readable file.



\subsection{Density of states}
\label{sec:dos}

\subsubsection{Total density of states}
There are several options to obtain the
total density of states:
\begin{itemize}
  \index{output!eigenvalues}
  
  \item The Hamiltonian eigenvalues for the SCF sampling $\vec k$
  points can be dumped into \sysfile{EIG} in a format analogous to
  that of \sysfile{bands}, but without the kmin, kmax, Emin, Emax
  information, and without the abscissa. The
  \program{Eig2DOS}\index{Eig2DOS@\textsc{Eig2DOS}} postprocessing
  utility can then be used to obtain the density of
  states.\index{density of states} See the \fdf{WriteEigenvalues}
  descriptor.

  \item As a side-product of a partial-density-of-states calculation
  (see below)

  \item As one of the files produced by the \program{Util/COOP/mprop}
  program during the off-line analysis of the electronic
  structure. This method allows the flexibility of specifying energy
  ranges and resolutions at will, without re-running \siesta. See
  Sec.~\ref{sec:coop}.

  \item Using the inertia-counting routines in the PEXSI solver (see
  Sec.~\ref{pexsi-dos}).
  
\end{itemize}

The k-point specification for the partial and local density of states
calculations described in the following two sections may optionally be
given by 

\begin{fdfentry}{DOS.kgrid.?}<kgrid.?>

  The generic DOS k-grid specification.

  See Sec.~\ref{ssec:k-points} for details. If \emph{any} of
  \fdf*{DOS.kgrid.MonkhorstPack}, \fdf*{DOS.kgrid.Cutoff} or
  \fdf*{DOS.kgrid.File} is present, it will be used; otherwise the
  calculation falls back to the SCF k-point sampling (\fdf*{kgrid.?}).

  \note \fdf{DOS.kgrid.?} options are the default values for
  \fdf{ProjectedDensityOfStates} and \fdf{LocalDensityOfStates}, but
  they do not affect the sampling used to generate the \sysfile{EIG}
  file. This feature might be implemented in a later version.

\end{fdfentry}


\subsubsection{Partial (projected) density of states}

There are two options to obtain the partial density of states
\begin{itemize}
  
  \item Using the options below
  
  \item Using the \program{Util/COOP/mprop} program for the off-line analysis of
  the electronic structure in PDOS mode. This method allows the
  flexibility of specifying energy ranges, orbitals, and resolutions
  at will, without re-running \siesta. See Sec.~\ref{sec:coop}.
  
\end{itemize}

\begin{fdfentry}{ProjectedDensityOfStates}[block]
  \index{output!projected density of states}

  Instructs the program to write the Total Density Of States (Total
  DOS) and the Projected Density Of States (PDOS) on the basis
  orbitals, between two given energies, in files \sysfile{DOS} and
  \sysfile{PDOS}, respectively.  The block must be a single line with
  the energies of the range for the PDOS projection (relative to the
  program's zero, i.e. the same as the eigenvalues printed by the
  program), the peak width (an energy) for broadening the eigenvalues,
  the number of points in the energy window, and the energy units.  An
  example is:
  \begin{fdfexample}
     %block ProjectedDensityOfStates
        -20.00  10.00  0.200  500  eV
     %endblock ProjectedDensityOfStates
  \end{fdfexample}
  Optionally one may start the line with \shell{EF}, like this:
  \begin{fdfexample}
     %block ProjectedDensityOfStates
        EF -20.00  10.00  0.200  500  eV
     %endblock ProjectedDensityOfStates
  \end{fdfexample}
  This specifies the energies with respect to the Fermi level.

  By default the projected density of states is generated for the same
  grid of points in reciprocal space as used for the SCF calculation.
  However, a separate set of $k$-points, usually on a finer grid, can
  be generated by using \fdf{PDOS.kgrid.?}. Note that if a gamma-point
  calculation is used in the SCF part, especially as part of a
  geometry optimisation, and a grid of $k$-points is then to be used
  for the PDOS calculation, it is more efficient to run the SCF phase
  first and then restart to perform the PDOS evaluation using the
  density matrix saved from the SCF phase.

  \note the two energies of the range must be ordered, with lowest
  first.

  The total DOS is stored in a file called \sysfile{DOS}.  The format
  of this file is:
  \begin{shellexample}
   Energy value, Total DOS (spin up), Total DOS (spin down)
  \end{shellexample}

  The Projected Density Of States for all the orbitals in the unit
  cell is dumped sequentially into a file called \sysfile{PDOS}. This
  file is structured using spacing and XML tags. A machine-readable
  (but not very human-readable) XML file \sysfile{PDOS.xml} is also
  produced. Both can be processed by the program in
  \program{Util/pdosxml}. The \sysfile{PDOS} file can be processed by
  utilities in \program{Util/Contrib/APostnikov}.

  In all cases, the units for the DOS are (number of states/eV), and the
  Total DOS, $g(\epsilon)$, is normalized as follows:
  \begin{equation}
    \int_{-\infty}^\infty g (\epsilon) d\epsilon =
    \text{number of basis orbitals in unit cell}
  \end{equation}

\end{fdfentry}
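The normalization above can be checked with a small script. The following is an illustrative sketch, assuming each eigenvalue is broadened with a unit-area Gaussian (as the block's peak-width parameter suggests); it is not the algorithm \siesta\ implements internally:

```python
import math

def broadened_dos(eigenvalues, energies, sigma):
    """Total DOS (states/eV) on an energy grid, from a list of
    eigenvalues (eV), broadening each with a unit-area Gaussian
    of width sigma."""
    norm = 1.0 / (sigma * math.sqrt(math.pi))
    return [sum(norm * math.exp(-((e - ek) / sigma) ** 2)
                for ek in eigenvalues)
            for e in energies]

# Integrating the result over all energies recovers the number of
# states (i.e. of basis orbitals), as in the normalization above.
```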

\begin{fdfentry}{PDOS.kgrid.?}<\fdfvalue{DOS.kgrid.?}>

  This specifies the k-points for the PDOS calculation only, i.e. for
  when one wishes to use a specific k-point sampling for it. These
  options are equivalent to the \fdf{kgrid!Cutoff},
  \fdf{kgrid!MonkhorstPack} and \fdf{kgrid!File} options. Refer to
  them for additional details.

  If \fdf{PDOS.kgrid.?} does not exist, then \fdf{DOS.kgrid.?} is
  checked, and if that does not exist either, the \fdf*{kgrid.?}
  options are used.

\end{fdfentry}


\subsubsection{Local density of states}

The LDOS is formally the DOS weighted by the amplitude of the
corresponding wavefunctions at different points in space, and is then
a function of energy and position. \siesta\ can output the LDOS
integrated over a range of energies. This information can be used to
obtain simple STM images in the Tersoff-Hamann approximation (See
\program{Util/STM/simple-stm}).

\begin{fdfentry}{LocalDensityOfStates}[block]
  \index{output!local density of states}
  
  Instructs to write the LDOS, integrated between two given energies,
  at the mesh used by DHSCF, in file \sysfile{LDOS}. This file can be
  read by routine IORHO, which may be used by an application program
  in later versions.  The block must be a single line with the
  energies of the range for LDOS integration (relative to the
  program's zero, i.e. the same as the eigenvalues printed by the
  program) and their units.  An example is:
  \begin{fdfexample}
     %block LocalDensityOfStates
        -3.50  0.00   eV
     %endblock LocalDensityOfStates
  \end{fdfexample}

  One may optionally write \shell{EF} as the first word to specify
  that the energies are with respect to the Fermi level:
  \begin{fdfexample}
     %block LocalDensityOfStates
       EF -3.50  0.00   eV
     %endblock LocalDensityOfStates
  \end{fdfexample}
  This example would calculate the LDOS from $3.5\,\mathrm{eV}$ below
  the Fermi level up to the Fermi level.

  One may use \fdf{LDOS.kgrid.?} to fine-tune the k-point sampling in
  the LDOS calculation.
  
  \note the two energies of the range must be ordered, with lowest
  first.

\end{fdfentry}

\begin{fdfentry}{LDOS.kgrid.?}<\fdfvalue{DOS.kgrid.?}>

  This specifies the k-points for the LDOS calculation only, i.e. for
  when one wishes to use a specific k-point sampling for it. These
  options are equivalent to the \fdf{kgrid!Cutoff},
  \fdf{kgrid!MonkhorstPack} and \fdf{kgrid!File} options. Refer to
  them for additional details.

  If \fdf{LDOS.kgrid.?} does not exist, then \fdf{DOS.kgrid.?} is
  checked, and if that does not exist either, the \fdf*{kgrid.?}
  options are used.

\end{fdfentry}



\subsection{Options for chemical analysis}

\subsubsection{Mulliken charges and overlap populations}

\begin{fdfentry}{WriteMullikenPop}[integer]<0>
  \index{Mulliken population analysis}%
  \index{output!Mulliken analysis}
  
  It determines the level of Mulliken population analysis printed:
  \begin{fdfoptions}
    \option[0]% 
    none

    \option[1]%
    atomic and orbital charges

    \option[2]%
    atomic, orbital and atomic overlap populations

    \option[3]%
    atomic, orbital, atomic overlap and orbital overlap populations
    
  \end{fdfoptions}
  The order of the orbitals in the population lists is defined by the
  order of atoms. For each atom, populations for the PAO orbitals and
  the double-$\zeta$, triple-$\zeta$, etc. orbitals derived from them
  are displayed first for all the angular momenta. Then, populations
  for perturbative polarization orbitals are written.  Within an
  $l$-shell be aware that the order is not conventional, being $y$,
  $z$, $x$ for $p$ orbitals, and $xy$, $yz$, $z^2$, $xz$, and
  $x^2-y^2$ for $d$ orbitals.

\end{fdfentry}


\begin{fdflogicalF}{MullikenInSCF}

  If \fdftrue, the Mulliken populations will be written for every SCF
  step at the level of detail specified in
  \fdf{WriteMullikenPop}. Useful when dealing with SCF problems,
  otherwise too verbose.
  
\end{fdflogicalF}


\begin{fdflogicalT}{SpinInSCF}

If true, the size and components of the (total) spin polarization will
be printed at every SCF step.  This is analogous to the
\fdf{MullikenInSCF} feature.  Enabled by default for calculations
involving spin.

\end{fdflogicalT}

\subsubsection{Voronoi and Hirshfeld atomic population analysis}


\begin{fdflogicalF}{Write!HirshfeldPop}
  \index{Hirshfeld population analysis}%
  \index{output!Hirshfeld analysis} 

  If \fdftrue, the program calculates and prints the Hirshfeld ``net''
  atomic populations on each atom in the system. For a definition of
  the Hirshfeld charges, see Hirshfeld, Theor. Chim. Acta \textbf{44},
  129 (1977) and Fonseca et al., J. Comp. Chem. \textbf{25}, 189
  (2003). Hirshfeld charges are more reliable than Mulliken charges,
  especially for large basis sets. The value (\code{dQatom}) is the
  total net charge of the atom: the variation from the neutral charge,
  in units of $|e|$; positive (negative) values indicate a deficiency
  (excess) of electrons in the atom.

  The output (here shown for a non-collinear calculation) looks like
  this:
\begin{output}[fontsize=\footnotesize]
Hirshfeld Atomic Populations:
Atom #     dQatom  Atom pop         S        Sx        Sy        Sz  Species
     1    0.01003   7.98997   3.04744   0.18550   0.00000   3.04179  fe_nc
     2   -0.02008   8.02008   1.41240   1.41240   0.00000  -0.00000  fe_nc
     3    0.01003   7.98997   3.04744   0.18550   0.00000  -3.04179  fe_nc
-------------------------------------------------------------------
 Total                        1.78340   1.78340   0.00000   0.00000
\end{output}

Here the column \code{dQatom} is the net atomic charge as noted
above. Column \code{Atom pop} is the number of electrons on the atom
(comparable to Mulliken charges). Columns \code{S}, \code{Sx},
\code{Sy} and \code{Sz} are the accumulated spin components for the atom.

\end{fdflogicalF}

\begin{fdflogicalF}{Write!VoronoiPop}
  \index{Voronoi population analysis}%
  \index{output!Voronoi analysis} 

  If \fdftrue, the program calculates and prints the Voronoi ``net''
  atomic populations on each atom in the system. For a definition of
  the Voronoi charges, see Bickelhaupt et al., Organometallics
  \textbf{15}, 2923 (1996) and Fonseca et al.,
  J. Comp. Chem. \textbf{25}, 189 (2003).  Voronoi charges are more
  reliable than Mulliken charges, especially for large basis
  sets. The value (\code{dQatom}) is the total net charge of the atom:
  the variation from the neutral charge, in units of $|e|$; positive
  (negative) values indicate a deficiency (excess) of electrons in the
  atom.

  See \fdf{Write!HirshfeldPop} for detailed output explanation.

\end{fdflogicalF}

The Hirshfeld and Voronoi populations (partial charges) are computed
by default only at the end of the program (i.e., for the final
geometry, after self-consistency). The following options allow more
control:

\begin{fdflogicalF}{PartialChargesAtEveryGeometry}
  \index{Voronoi population analysis}%
  \index{output!Voronoi analysis} %
  \index{Hirshfeld population analysis} %
  \index{output!Hirshfeld analysis}

  The Hirshfeld and Voronoi populations are computed after
  self-consistency is achieved, for all the geometry steps.

\end{fdflogicalF}

\begin{fdflogicalF}{PartialChargesAtEverySCFStep}
  \index{Voronoi population analysis}%
  \index{output!Voronoi analysis}%
  \index{Hirshfeld population analysis}%
  \index{output!Hirshfeld analysis}

  The Hirshfeld and Voronoi populations are computed for every step of
  the self-consistency process.

\end{fdflogicalF}

\textbf{Performance note:}
The default behavior (computing at the end of the program) involves
an extra calculation of the charge density.



\subsubsection{Crystal-Orbital overlap and hamilton populations (COOP/COHP)}
\label{sec:coop}
\index{COOP/COHP curves}

These curves are quite useful to analyze the electronic structure to
get insight about bonding characteristics. See the \program{Util/COOP}
directory for more details. The \fdf{COOP.Write} option must be
activated to get the information needed.

References:
\begin{itemize}
  \item%
  Original COOP reference:
  Hughbanks, T.; Hoffmann, R., J. Am. Chem. Soc., 1983, 105, 3528.

  \item%
  Original COHP reference: Dronskowski, R.; Blöchl, P. E., J. Phys. Chem., 1993, 97, 8617.

  \item%
  A tutorial introduction: Dronskowski, R. Computational Chemistry of Solid State
  Materials; Wiley-VCH: Weinheim, 2005.

  \item%
  Online material maintained by R. Dronskowski's group: \url{http://www.cohp.de/}
\end{itemize}


\begin{fdflogicalF}{COOP.Write}
  \index{output!Information for COOP/COHP curves}
  
  Instructs the program to generate \sysfile{fullBZ.WFSX} (packed
  wavefunction file) and \sysfile{HSX} (H, S and $X_{ij}$ file),
  to be processed by \program{Util/COOP/mprop} to generate COOP/COHP curves,
  (projected) densities of states, etc.

  The \sysfile*{WFSX} file is in a more compact form than the usual
  \sysfile*{WFS}, and the wavefunctions are output in single
  precision. The \program{Util/WFS/wfsx2wfs} program can be used to
  convert to the old format.  The HSX file is in a more compact form
  than the usual HS, and the Hamiltonian, overlap matrix, and
  relative-positions array (which is always output, even for
  gamma-point-only calculations) are in single precision.

  The user can narrow the energy-range used (and save some file space)
  by using the \fdf{WFS.Energy.Min} and \fdf{WFS.Energy.Max} options
  (both take an energy (with units) as extra argument), and/or the
  \fdf{WFS.Band.Min} and \fdf{WFS.Band.Max} options. Care should be
  taken to make sure that the actual values of the options make sense.

  Note that the band range options could also affect the output of
  wave-functions associated with bands (see section~\ref{sec:wf-bands}),
  and that the energy range options could also affect the output of
  user-selected wave-functions with the \fdf{WaveFuncKPoints} block
  (see section~\ref{sec:wf-output-user}).

\end{fdflogicalF}


\begin{fdfentry}{WFS.Energy.Min}[energy]<$-\infty$>
  
  Specifies the lowest value of the energy (eigenvalue) of the
  wave-functions to be written to the file
  \sysfile{fullBZ.WFSX} for each $k$-point (all $k$-points in
  the BZ sampling are affected).

\end{fdfentry}

\begin{fdfentry}{WFS.Energy.Max}[energy]<$\infty$>
  
  Specifies the highest value of the energy (eigenvalue) of the
  wave-functions to be written to the file \sysfile{fullBZ.WFSX} for
  each $k$-point (all $k$-points in the BZ sampling are affected).

\end{fdfentry}




\subsection{Optical properties}

\begin{fdflogicalF}{OpticalCalculation}
  \index{Dielectric function,optical absorption}

  If specified, the imaginary part of the dielectric function will be
  calculated and stored in a file called \sysfile{EPSIMG}. The
  calculation is performed using the simplest approach based on the
  dipolar transition matrix elements between different eigenfunctions
  of the self-consistent Hamiltonian. For molecules the calculation is
  performed using the position operator matrix elements, while for
  solids the calculation is carried out in the momentum space
  formulation. Corrections due to the non-locality of the
  pseudopotentials are introduced in the usual way.

\end{fdflogicalF}

\begin{fdfentry}{Optical.Energy.Minimum}[energy]<$0\,\mathrm{Ry}$>

  This specifies the minimum of the energy range in which the
  frequency spectrum will be calculated.

\end{fdfentry}

\begin{fdfentry}{Optical.Energy.Maximum}[energy]<$10\,\mathrm{Ry}$>

  This specifies the maximum of the energy range in which the
  frequency spectrum will be calculated.

\end{fdfentry}

\begin{fdfentry}{Optical.Broaden}[energy]<$0\,\mathrm{Ry}$>

  If this value is set, a Gaussian broadening will be applied to the
  frequency values.

\end{fdfentry}

\begin{fdfentry}{Optical.Scissor}[energy]<$0\,\mathrm{Ry}$>

  Because of the tendency of DFT calculations to underestimate the
  band gap, a rigid shift of the unoccupied states, known as the
  scissor operator, can be added to correct the gap and thereby
  improve the calculated results. This shift is applied only to the
  optical calculation and nowhere else within the calculation.

\end{fdfentry}
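As a minimal illustration of the scissor shift (a hypothetical helper, not part of \siesta):

```python
def apply_scissor(eigenvalues, n_occupied, shift):
    """Rigidly shift the unoccupied eigenvalues upward by `shift`,
    leaving the occupied ones untouched (the scissor operator)."""
    return (eigenvalues[:n_occupied]
            + [e + shift for e in eigenvalues[n_occupied:]])
```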

\begin{fdfentry}{Optical.NumberOfBands}[integer]<all bands>
  
  This option controls the number of bands that are included in the
  optical property calculation. Clearly this number must be larger
  than the number of occupied bands and less than or equal to the
  number of basis functions (which determines the number of unoccupied
  bands available). Note that, while including all the bands may be
  the most accurate choice, it is also the most expensive.

\end{fdfentry}

\begin{fdfentry}{Optical.Mesh}[block]

  This block contains 3 numbers that determine the mesh size used for
  the integration across the Brillouin zone. For example:
  \begin{fdfexample}
      %block  Optical.Mesh
        5 5 5
      %endblock  Optical.Mesh
  \end{fdfexample}
  The three values represent the number of mesh points in the
  direction of each reciprocal lattice vector.

\end{fdfentry}


\begin{fdflogicalF}{Optical.OffsetMesh}
  
  If set to true, then the mesh is offset away from the gamma point
  for odd numbers of points.

\end{fdflogicalF}

\begin{fdfentry}{Optical.PolarizationType}[string]<polycrystal>

  This option has three possible values that represent the type of
  polarization to be used in the calculation. The options are
  \begin{fdfoptions}
    \option[polarized]%
    implies the application of an electric field in a given direction

    \option[unpolarized]%
    implies the propagation of light in a given direction

    \option[polycrystal]%
    corresponds to an average over directions (no direction needs to
    be specified)

  \end{fdfoptions}
  In the case of the first two options a direction in space must be
  specified for the electric field or propagation using the
  \fdf{Optical.Vector} data block.
  
\end{fdfentry}

\begin{fdfentry}{Optical.Vector}[block]
  
  This block contains 3 numbers that specify the vector direction for
  either the electric field or light propagation, for a polarized or
  unpolarized calculation, respectively. A typical block might look
  like:
  \begin{fdfexample}
      %block  Optical.Vector
        1.0 0.0 0.5
      %endblock  Optical.Vector
  \end{fdfexample}

\end{fdfentry}




\subsection{Macroscopic polarization}
\label{sec:macroscopic-polarization}

\begin{fdfentry}{PolarizationGrids}[block]
  \index{bulk polarization}%
  \index{Berry phase}

  If specified, the macroscopic polarization will be calculated using
  the geometric Berry phase approach (R.D. King-Smith, and
  D. Vanderbilt, PRB \textbf{47}, 1651 (1993)). In this method the
  electronic contribution to the macroscopic polarization, along a
  given direction, is calculated using a discretized version of the
  formula
  \begin{equation}
    \label{pol_formula}
    P_{e,\parallel}=\frac{ifq_e}{8\pi^3} \int_A d\mathbf{k}_\perp
    \sum_{n=1}^M \int_0^{|G_\parallel|} dk_{\parallel}
    \langle u_{\mathbf{k} n} |\frac\delta{\delta k_{\parallel}} |
    u_{\mathbf{k} n} \rangle
  \end{equation}
  where $f$ is the occupation (2 for a non-magnetic system), $q_e$ the
  electron charge, $M$ is the number of occupied bands (the system
  \textbf{must} be an insulator), and $u_{\mathbf{k} n}$ are the
  periodic Bloch functions. $\mathbf{G}_\parallel$ is the shortest
  reciprocal vector along the chosen direction.

  As can be seen in formula \eqref{pol_formula}, to compute each
  component of the polarization we must perform a surface integration
  of the result of a 1-D integral in the selected direction.  The
  grids for the calculation along the direction of each of the three
  lattice vectors are specified in the block \fdf{PolarizationGrids}.
  \begin{fdfexample}
     %block PolarizationGrids
        10   3  4      yes
         2  20  2       no
         4   4 15
     %endblock PolarizationGrids
  \end{fdfexample}

  All three grids must be specified, therefore a $3\times3$ matrix of
  integer numbers must be given: the first row specifies the grid that
  will be used to calculate the polarization along the direction of
  the first lattice vector, the second row will be used for the
  calculation along the direction of the second lattice vector,
  and the third row for the third lattice vector.  The numbers in the
  diagonal of the matrix specify the number of points to be used in
  the one-dimensional line integrals along the different
  directions. The other numbers specify the mesh used in the surface
  integrals.  The last column specifies whether the bidimensional
  grids are going to be displaced from the origin or not, as in the
  Monkhorst-Pack algorithm (PRB \textbf{13}, 5188 (1976)).  This last
  column is optional.  If the number of points in one of the grids is
  zero, the calculation will not be performed for this particular
  direction.

  In the given example, for the computation along the direction of the
  first lattice vector, 10 points will be used for the line integrals,
  while a $3\times4$ mesh will be used for the surface
  integration. This last grid will be displaced from the origin, so
  $\Gamma$ will not be included in the bidimensional integral. For the
  directions of the second and third lattice vectors, the number of
  points will be $20$ and $2\times2$, and $15$ and $4\times4$,
  respectively.

  It has to be stressed that the macroscopic polarization can only be
  meaningfully calculated using this approach for insulators.
  Therefore, the presence of an energy gap is necessary, and no band
  can cross the Fermi level. The program performs a simple check of
  this condition, just by counting the electrons in the unit cell (the
  number must be even for a non-magnetic system, and the total spin
  polarization must have an integer value for spin-polarized systems);
  however, it is the responsibility of the user to check that the
  system under study is actually an insulator (for both spin
  components if spin polarized).

  The total macroscopic polarization, given in the output of the
  program, is the sum of the electronic contribution (calculated as
  the Berry phase of the valence bands), and the ionic contribution,
  which is simply defined as the sum of the atomic positions within
  the unit cell multiplied by the ionic charges
  ($\sum_i^{N_a} Z_i \mathbf{r}_i$).  In the case of magnetic
  systems, the bulk polarization for each spin component is
  defined as
  \begin{equation}
    \mathbf{P}^\sigma = \mathbf{P}_e^\sigma +
    \frac12 \sum_i^{N_a}  Z_i \mathbf{r}_i
  \end{equation}
  where $N_a$ is the number of atoms in the unit cell, and $\mathbf{r}_i$
  and $Z_i$ are the positions and charges of the ions.

  It is also worth noting that the macroscopic polarization given by
  formula \eqref{pol_formula} is only defined modulo a ``quantum'' of
  polarization (the bulk polarization per unit cell is only well
  defined modulo $fq_e\mathbf{R}$, where $\mathbf{R}$ is an arbitrary
  lattice vector). However, the experimentally observable quantities
  are associated with changes in the polarization induced by changes in
  the atomic positions (dynamical charges), strains (piezoelectric
  tensor), etc. The calculation of those changes, between different
  configurations of the solid, will be well defined as long as they
  are smaller than the ``quantum'', i.e.\ the perturbations are small
  enough to create small changes in the polarization.

\end{fdfentry}

\begin{fdflogicalF}{BornCharge}
  \index{Born effective charges}

  If true, the Born effective charge tensor is calculated for each
  atom by finite differences, by calculating the change in electric
  polarization (see \fdf{PolarizationGrids}) induced by the small
  displacements generated for the force constants calculation (see
  \fdf{MD.TypeOfRun:FC}):
  \begin{equation}
    \label{eq:effective_charge}
    Z^*_{i,\alpha,\beta}=\frac{\Omega_0}{e} \left. {\frac{\partial{P_\alpha}}
          {\partial{u_{i,\beta}}}}\right|_{q=0}
  \end{equation}
  where $e$ is the charge of an electron and $\Omega_0$ is the unit
  cell volume.

  To calculate the Born charges it is necessary to specify both the
  Born charge flag and the mesh used to calculate the polarization,
  for example:
  \begin{fdfexample}
    %block PolarizationGrids
      7  3  3
      3  7  3
      3  3  7
    %endblock PolarizationGrids
    BornCharge True
  \end{fdfexample}

  The Born effective charge matrix is then written to the file
  \sysfile{BC}.

  The method by which the polarization is calculated may introduce an
  arbitrary phase (polarization quantum), which in general is far
  larger than the change in polarization which results from the atomic
  displacement. It is removed during the calculation of the Born
  effective charge tensor.

  The Born effective charges allow the calculation of LO-TO splittings
  and infrared activities. The version of the Vibra utility code in
  which these magnitudes are calculated is not yet distributed with
  \siesta, but can be obtained from Tom Archer (archert@tcd.ie).

\end{fdflogicalF}



\subsection[Maximally Localized Wannier Functions]%
{Maximally Localized Wannier Functions. \\
    Interface with the \textsc{wannier90} code}

\program{wannier90} (\url{http://www.wannier.org}) is a code to generate
maximally localized Wannier functions according to the original
Marzari and Vanderbilt recipe.

It is strongly recommended to read the original papers on which this
method is based and the documentation of the \program{wannier90} code.
Here we shall focus only on those internal \siesta\ variables
required to produce the files that will be processed
by \program{wannier90}.

A complete list of examples and tests (including molecules, metals, 
semiconductors, insulators, magnetic systems, plotting of Fermi surfaces
or interpolation of bands), can be downloaded from

 \url{http://personales.unican.es/junqueraj/Wannier-examples.tar.gz}

\textbf{NOTE}: The Bloch functions produced by a first-principles code
      have arbitrary phases that depend on the number of processors
      used and other possibly non-reproducible details of the
      calculation. In what follows it is essential to maintain
      consistency in the handling of the overlap and Bloch-function
      files produced and fed to \program{wannier90}.


\begin{fdflogicalF}{Siesta2Wannier90.WriteMmn}
        
  This flag determines whether the overlaps between the periodic part
  of the Bloch states at neighbour k-points are computed and dumped
  into a file in the format required by \program{wannier90}.  These
  overlaps are defined in Eq. (27) in the paper by N. Marzari
  \textit{et al.}, Reviews of Modern Physics \textbf{84}, 1419 (2012),
  or Eq. (1.7) of the Wannier90 User Guide, Version 2.0.1.

  The k-points for which the overlaps will be computed are read from a
  \sysfile*{nnkp} file produced by \program{wannier90}. It is strongly
  recommended for the user to read the corresponding user guide.

  The overlap matrices are written in a file with extension
  \sysfile*{mmn}.

\end{fdflogicalF}

\begin{fdflogicalF}{Siesta2Wannier90.WriteAmn}
  
  This flag determines whether the overlaps between Bloch states and
  trial localized orbitals are computed and dumped into a file in the
  format required by \program{wannier90}.  These projections are
  defined in Eq. (16) in the paper by N. Marzari \textit{et al.},
  Reviews of Modern Physics \textbf{84}, 1419 (2012), or Eq. (1.8) of
  the Wannier90 User Guide, Version 2.0.1.

  The localized trial functions to use are taken from the
  \sysfile*{nnkp} file produced by \program{wannier90}. It is strongly
  recommended for the user to read the corresponding user guide.

  The overlap matrices are written in a file with extension
  \sysfile*{amn}.

\end{fdflogicalF}

\begin{fdflogicalF}{Siesta2Wannier90.WriteEig}
  
  Flag that determines whether the Kohn-Sham eigenvalues (in eV) at
  each point in the Monkhorst-Pack mesh required by
  \program{wannier90} are written to file.  This file is mandatory in
  \program{wannier90} if any of the disentanglement, plot\_bands,
  plot\_fermi\_surface or hr\_plot options is set to true in the
  \program{wannier90} input file.

  The eigenvalues are written in a file with extension \sysfile*{eigW}.
  This extension is chosen to avoid name clashes with \siesta's
  standard eigenvalue file in case-insensitive filesystems.

\end{fdflogicalF}

\begin{fdflogicalF}{Siesta2Wannier90.WriteUnk}
  
  Produces \file{UNKXXXXX.Y} files which contain the periodic part
  of a Bloch function in the unit cell, on the grid specified by the
  \fdf{Siesta2Wannier90.UnkGrid1}, \fdf{Siesta2Wannier90.UnkGrid2},
  and \fdf{Siesta2Wannier90.UnkGrid3} variables.  The names of the
  output files follow the form above, where \texttt{XXXXX} refers to
  the k-point index (from 00001 to the total number of k-points
  considered), and \texttt{Y} refers to the spin component (1 or 2).

  The periodic part of the Bloch functions is defined by
  \begin{equation}
    u_{n \vec{k}} (\vec{r}) =
    \sum_{\vec{R} \mu} c_{n \mu}(\vec{k})
    e^{i \vec{k} \cdot ( \vec{r}_{\mu} + \vec{R} - \vec{r} )}
    \phi_{\mu} (\vec{r} - \vec{r}_{\mu} - \vec{R} ) ,
  \end{equation}
  where $\phi_{\mu} (\vec{r} - \vec{r}_{\mu} - \vec{R} )$ is a basis
  set atomic orbital centered on atom $\mu$ in the unit cell
  $\vec{R}$, and $c_{n \mu}(\vec{k})$ are the coefficients of the wave
  function. The latter must be identical to the ones used for
  wannierization in $M_{mn}$. (See the above comment about arbitrary
  phases.)

\end{fdflogicalF}

\begin{fdfentry}{Siesta2Wannier90.UnkGrid1}[integer]<\nonvalue{mesh
      points along $A$}>

  Number of points along the first lattice vector in the grid where
  the periodic part of the wave functions will be plotted.

\end{fdfentry}

\begin{fdfentry}{Siesta2Wannier90.UnkGrid2}[integer]<\nonvalue{mesh
      points along $B$}>

  Number of points along the second lattice vector in the grid where
  the periodic part of the wave functions will be plotted.

\end{fdfentry}

\begin{fdfentry}{Siesta2Wannier90.UnkGrid3}[integer]<\nonvalue{mesh
      points along $C$}>

  Number of points along the third lattice vector in the grid where
  the periodic part of the wave functions will be plotted.

\end{fdfentry}
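
As an illustration, a uniform $20\times20\times20$ grid (the value is
merely a hypothetical choice) would be requested as:
\begin{fdfexample}
  Siesta2Wannier90.UnkGrid1  20
  Siesta2Wannier90.UnkGrid2  20
  Siesta2Wannier90.UnkGrid3  20
\end{fdfexample}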


\begin{fdflogicalT}{Siesta2Wannier90.UnkGridBinary}
  
  Flag that determines whether the periodic part of the wave function
  in the real space grid is written in binary format (default) or in
  ASCII format.

\end{fdflogicalT}

\begin{fdfentry}{Siesta2Wannier90.NumberOfBands}[integer]<occupied bands>
  
  In spin-unpolarized calculations, number of bands that will be
  initially considered by \siesta\ to generate the information
  required by \program{wannier90}. Note that it should be at least as
  large as the index of the highest-lying band in the
  \program{wannier90} post-processing. For example, if the
  wannierization is going to involve bands 3 to 5, the \siesta\ number
  of bands should be at least 5. Bands 1 and 2 should appear in an
  ``excluded'' list.

  \note you are highly encouraged to explicitly specify the number of
  bands.

\end{fdfentry}

\begin{fdfentry}{Siesta2Wannier90.NumberOfBandsUp}[integer]<\fdfvalue{Siesta2Wannier90.NumberOfBands}>

  In spin-polarized calculations, number of bands with spin up that
  will be initially considered by \siesta\ to generate the information
  required by \program{wannier90}. 

\end{fdfentry}

\begin{fdfentry}{Siesta2Wannier90.NumberOfBandsDown}[integer]<\fdfvalue{Siesta2Wannier90.NumberOfBands}>

  In spin-polarized calculations, number of bands with spin down that
  will be initially considered by \siesta\ to generate the information
  required by \program{wannier90}.

\end{fdfentry}
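
For a typical spin-unpolarized run, the flags documented above can be
combined to produce all the files required by \program{wannier90} in a
single \siesta\ run (the number of bands shown is merely illustrative):
\begin{fdfexample}
  Siesta2Wannier90.WriteMmn       true
  Siesta2Wannier90.WriteAmn       true
  Siesta2Wannier90.WriteEig       true
  Siesta2Wannier90.WriteUnk       true
  Siesta2Wannier90.NumberOfBands  12
\end{fdfexample}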



\subsection{Systems with net charge or dipole, and electric fields}

\begin{fdfentry}{NetCharge}[real]<$0$>%
  \index{Charge of the system}
  \index{Doping}%
  \index{SCF!Doping}%

  Specify the net charge of the system (in units of $|e|$).  For
  charged systems, the energy converges very slowly versus cell
  size. For molecules or atoms, a Madelung correction term is applied
  to the energy to make it converge much faster with cell size (this
  is done only if the cell is SC, FCC or BCC). For other cells, or for
  periodic systems (chains, slabs or bulk), this energy correction
  term cannot be applied, and the user is warned by the program. It
  is not advisable to treat charged systems other than atoms and
  molecules in SC, FCC or BCC cells, unless you know what you are doing.

  \textit{Use:} For example, the F$^-$ ion would have \fdf{NetCharge}
  \fdf*{-1}, and the Na$^+$ ion would have \fdf{NetCharge} \fdf*{1}.
  Fractional charges can also be used.

  \note Doing non-neutral charge calculations with
  \fdf{Slab.DipoleCorrection} is discouraged.
  
\end{fdfentry}


\begin{fdflogicalF}{SimulateDoping}
  \index{Slabs with net charge}

  This option instructs the program to add a background charge density
  to simulate doping.  The ``doping'' routine calculates the net
  charge of the system, and adds a compensating background charge that
  makes the system neutral. This background charge is constant at
  points of the mesh near the atoms, and zero at points far from the
  atoms. This properly simulates situations like doped slabs, where the
  extra electrons (or holes) are compensated by opposite charges in the
  material (the ionized dopant impurities), but not in the vacuum,
  and so allows the treatment of doped systems containing large
  portions of vacuum.

  See \shell{Tests/sic-slab}.

\end{fdflogicalF}

\begin{fdfentry}{ExternalElectricField}[block]
  
  It specifies an external electric field for molecules, chains and
  slabs.  The electric field should be orthogonal to `bulk
  directions', like those parallel to a slab (bulk electric fields,
  like in dielectrics or ferroelectrics, are not allowed). If it is
  not, an error message is issued and the components of the field in
  bulk directions are suppressed automatically. The input is a vector
  in Cartesian coordinates, in the specified units. Example:
  \begin{fdfexample}
     %block ExternalElectricField
        0.000  0.000  0.500  V/Ang
     %endblock ExternalElectricField
  \end{fdfexample}

  Starting with version 4.0, applying an electric field perpendicular
  to a slab will by default enable the slab dipole correction, see
  \fdf{Slab.DipoleCorrection}. To reproduce older calculations, set
  this correction option explicitly to \fdffalse\ in the input file.

\end{fdfentry}

\begin{fdfentry}{Slab.DipoleCorrection}[string]<?|\fdftrue|\fdffalse|charge|vacuum|none>
  \fdfdepend{ExternalElectricField}%
  
  If not \fdffalse, \siesta\ calculates the electric field required to
  compensate the dipole of the system at every iteration of the
  self-consistent cycle.

  The dipole correction only works when the Poisson equation for the
  Hartree potential is solved by Fourier transforms, since that
  introduces a compensating field in the vacuum region to counter any
  inherent dipole in the system. Do not use this option together with
  \fdf{NetCharge} (charged systems).

  There are two ways of calculating the dipole of the system:
  \begin{fdfoptions}

    \option[charge|\fdftrue]%
    \fdfindex*{Slab.DipoleCorrection:charge}%

    The dipole of the system is calculated via
    \begin{equation}
      \mathbf D = - e \int(\mathbf r - \mathbf r_0) \delta\boldsymbol\rho(\mathbf r)
    \end{equation}
    where $\mathbf r_0$ is the dipole origin, see \fdf{Slab.DipoleCorrection!Origin},
    and $\delta\boldsymbol\rho$ is the valence pseudocharge density minus
    the sum of the atomic valence pseudocharge densities.

    \option[vacuum]%
    \fdfindex*{Slab.DipoleCorrection:vacuum}%

    The electric field of the system is calculated via
    \begin{equation}
      \mathbf E \propto \left.\iint \mathrm d \mathbf r_{\perp \mathbf D} V(\mathbf
        r)\right|_{\mathbf r_{\mathrm{vacuum}}}
    \end{equation}
    where $\mathbf r_{\mathrm{vacuum}}$ is a point located in the
    vacuum region, see \fdf{Slab.DipoleCorrection!Vacuum}. Once the field is
    determined it is converted to an intrinsic system dipole.

    This feature is mainly intended for \fdf{Geometry!Charge}
    calculations where \fdf{Slab.DipoleCorrection:charge} may fail if the dipole
    center is determined incorrectly.

    For regular systems both this and \fdf*{charge} should yield the
    same dipole moments to within numerical precision.

  \end{fdfoptions}

  The dipole correction should exactly compensate the electric field
  at the vacuum level thus allowing one to treat asymmetric slabs
  (including systems with an adsorbate on one surface) and compute
  properties such as the work function of each of the surfaces.

  \note If the program is fed a starting density matrix from an
  uncorrected calculation (i.e., with an exaggerated dipole), the first
  iteration might use a compensating field that is too big, with the
  risk of taking the system out of the convergence basin. In that
  case, it is advisable to use the
  \fdf{SCF.Mix!First}\index{Slab dipole correction} option to request a mix of the input and
  output density matrices after that first iteration.

  \note \fdf*{charge} and \fdf*{vacuum} will for many systems yield
  the same result. If in doubt try both and see which one gives the
  best result.

  See \shell{Tests/sic-slab}, \shell{Tests/h2o\_2\_dipol\_gate}.

  This will default to \fdftrue\ if an external field is applied to a
  slab calculation, otherwise it will default to \fdffalse.
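
  For instance, the vacuum-based determination of the dipole can be
  selected explicitly with:
  \begin{fdfexample}
    Slab.DipoleCorrection vacuum
  \end{fdfexample}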

\end{fdfentry}

\begin{fdfentry}{Slab.DipoleCorrection!Origin}[block]
  \fdfdepend{Slab.DipoleCorrection:charge}

  Specify the origin of the dipole in the calculation of the dipole
  from the charge distribution.

  Its format is
  \begin{fdfexample}
     %block Slab.DipoleCorrection.Origin
        0.000  10.000  0.500  Ang
     %endblock
  \end{fdfexample}

  If this block is not specified the origin of the dipole will be the
  average position of the atoms.

  \note this will only be read if \fdf{Slab.DipoleCorrection:charge}
  is used.
  %
  \note this should only affect calculations with
  \fdf{Geometry!Charge} due to the non-trivial dipole origin; see
  e.g. \shell{Tests/h2o\_2\_dipol\_gate} and check whether you can
  manually place the dipole origin to achieve results similar to
  those of the vacuum method.

\end{fdfentry}

\begin{fdfentry}{Slab.DipoleCorrection!Vacuum}[block]
  \fdfdepend{Slab.DipoleCorrection:vacuum}

  Options for the vacuum field determination.

  \begin{fdfoptions}

    \option[direction]%

    Mandatory input for chain and molecule calculations.

    Specify along which direction we should determine the electric
    field/dipole.

    For slabs this defaults to the non-bulk direction.

    \option[position]%

    Specify a point in the vacuum region.

    Defaults to the vacuum region based on the atomic coordinates.

    \option[tolerance]%

    Tolerance for determining whether we are in a vacuum region.
    The premise of the electric field calculation in the vacuum region
    is that the derivative of the potential ($\mathbf E$) is
    constant. When the electric field changes by more than this
    tolerance, the region is no longer considered vacuum and the point
    is disregarded.

    Defaults to $10^{-4}\,\mathrm{eV/Ang/e}$.

  \end{fdfoptions}

  Its format is
  \begin{fdfexample}
     %block Slab.DipoleCorrection.Vacuum
        # this is optional
        # default position is the center of system + 0.5 lattice vector
        # along 'direction'
        position 0.000  10.000  0.500  Ang
        # this is optional
        # default is 1e-4 eV/Ang/e
        tolerance 0.001 eV/Ang/e
        # this is mandatory
        direction 0.000  1.000  0.
     %endblock
  \end{fdfexample}

  \note this will only be read if \fdf{Slab.DipoleCorrection:vacuum} is used.

\end{fdfentry}


\begin{fdfentry}{Geometry!Hartree}[block]%
  \index{SCF!Potential}%
  \index{Gate}%
  
  Allows the introduction of regions with a modified Hartree
  potential. The added potential acts as a repulsive (positive value)
  or attractive (negative value) region.

  The regions are defined as geometrical objects and there are no
  limits to the number of defined geometries.

  Details regarding this implementation may be found in
  \citet{Papior2016a}.

  Currently 4 different kinds of geometries are allowed:
  \begin{fdfoptions}
    

     \option[Infinite plane]%
    \index{Gate!infinite plane}%

    Define a geometry by an infinite plane which cuts the unit-cell.

    This geometry is defined by a single point which is in the plane
    and a vector normal to the plane.

    This geometry has 3 different settings:
    \begin{fdfoptions}
      \option[delta] %
      An infinite plane with $\delta$-height.

      \option[gauss] %
      An infinite plane with a Gaussian distributed height profile.

      \option[exp] %
      An infinite plane with an exponentially distributed height
      profile.

    \end{fdfoptions}


    \option[Bounded plane] %
    \index{Gate!bounded plane}%

    Define a geometric plane which is bounded, i.e. not infinite.

    This geometry is defined by an origin of the bounded plane and two
    vectors which span the plane, both originating at that
    origin.

    This geometry has 3 different settings:
    \begin{fdfoptions}

      \option[delta] %
      A plane with $\delta$-height.

      \option[gauss] %
      A plane with a Gaussian distributed height profile.

      \option[exp] %
      A plane with an exponentially distributed height profile.

    \end{fdfoptions}


    \option[Box]%
    \index{Gate!box}%

    This geometry is defined by an origin of the box and three vectors
    which span the box, all originating at that origin.

    This geometry has 1 setting:
    \begin{fdfoptions}
      \option[delta] %
      No decay-region outside the box.
      
    \end{fdfoptions}


    \option[Spheres]%
    \index{Gate!spheres}%
    
    This geometry is defined by a list of spheres and a common radius.
    
    This geometry has 2 settings:
    \begin{fdfoptions}
      
      \option[gauss] %
      All spheres have a Gaussian distribution about their centre.
      
      \option[exp] %
      All spheres have an exponential decay.
      
    \end{fdfoptions}
    
  \end{fdfoptions}

  Here is a list of all options combined in one block:
  \begin{fdfexample}
%block Geometry.Hartree
 plane   1. eV       # The lifting potential on the geometry
   delta
    1.0 1.0 1.0 Ang  # An intersection point, in the plane
    1.0 0.5 0.2      # The normal vector to the plane
 plane  -1. eV       # The lifting potential on the geometry
   gauss 1. 2.  Ang  # the std. and the cut-off length
    1.0 1.0 1.0 Ang  # An intersection point, in the plane
    1.0 0.5 0.2      # The normal vector to the plane
 plane   1. eV       # The lifting potential on the geometry
   exp 1. 2. Ang     # the half-length and the cut-off length
    1.0 1.0 1.0 Ang  # An intersection point, in the plane
    1.0 0.5 0.2      # The normal vector to the plane
 square  1. eV       # The lifting potential on the geometry
   delta
    1.0 1.0 1.0 Ang  # The starting point of the square
    2.0 0.5 0.2 Ang  # The first spanning vector
    0.0 2.5 0.2 Ang  # The second spanning vector
 square  1. eV       # The lifting potential on the geometry
   gauss 1. 2. Ang   # the std. and the cut-off length
    1.0 1.0 1.0 Ang  # The starting point of the square
    2.0 0.5 0.2 Ang  # The first spanning vector
    0.0 2.5 0.2 Ang  # The second spanning vector
 square  1. eV       # The lifting potential on the geometry
   exp 1. 2. Ang     # the half-length and the cut-off length
    1.0 1.0 1.0 Ang  # The starting point of the square
    2.0 0.5 0.2 Ang  # The first spanning vector
    0.0 2.5 0.2 Ang  # The second spanning vector
 box  1. eV          # The lifting potential on the geometry
   delta
    1.0 1.0 1.0 Ang  # Origin of the box
    2.0 0.5 0.2 Ang  # The first spanning vector
    0.0 2.5 0.2 Ang  # The second spanning vector
    0.0 0.5 3.2 Ang  # The third spanning vector
 coords 1. eV        # The lifting potential on the geometry
    gauss 2. 4. Ang  # First is std. deviation, second is cut-off radius
       2 spheres     # How many spheres in the following lines
       0.0 4. 2. Ang # The centre coordinate of 1. sphere
       1.3 4. 2. Ang # The centre coordinate of 2. sphere
 coords 1. eV        # The lifting potential on the geometry
    exp 2. 4. Ang    # First is half-length, second is cut-off radius
       2 spheres     # How many spheres in the following lines
       0.0 4. 2. Ang # The centre coordinate of 1. sphere
       1.3 4. 2. Ang # The centre coordinate of 2. sphere
%endblock Geometry.Hartree
     \end{fdfexample}
     
\end{fdfentry}

\begin{fdfentry}{Geometry!Charge}[block]%
  \index{SCF!Doping}%
  \index{Doping}%
  \index{Charge of the system}%

  This is similar to the \fdf{Geometry!Hartree} block. However,
  instead of specifying a potential, one defines the total charge that
  is spread on the geometry. 

  To see how the input should be formatted, see
  \fdf{Geometry!Hartree} and remove the unit specification. Note that
  the input value is the number of electrons (similar to
  \fdf{NetCharge}; however, this method ensures charge neutrality).
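
  For example, a (hypothetical) doping charge of $0.01$ electrons
  spread on a $\delta$-plane could be specified as:
  \begin{fdfexample}
    %block Geometry.Charge
     plane   0.01        # Number of electrons spread on the geometry
       delta
        1.0 1.0 1.0 Ang  # An intersection point, in the plane
        1.0 0.5 0.2      # The normal vector to the plane
    %endblock Geometry.Charge
  \end{fdfexample}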

  Details regarding this implementation may be found in
  \citet{Papior2016a}.

\end{fdfentry}




\subsection{Output of charge densities and potentials on the grid}

\siesta\ represents these magnitudes on the real-space grid. The
following options control the generation of the appropriate files,
which can be processed by the programs in the \program{Util/Grid}
directory, and also by Andrei Postnikov's utilities in
\program{Util/Contrib/APostnikov}. See also \program{Util/Denchar} for
an alternative way to plot the charge density (and wavefunctions).

\begin{fdflogicalF}{SaveRho}
  \index{output!charge density}

  Instructs to write the valence pseudocharge density at the mesh used
  by DHSCF, in file \sysfile{RHO}.

  \note file \sysfile*{RHO} is only written, not read, by siesta.
  This file can be read by routine IORHO, which may be used by other
  application programs.

  If netCDF support is compiled in, the file \file{Rho.grid.nc} is
  produced.
  
\end{fdflogicalF}

\begin{fdflogicalF}{SaveDeltaRho}
  \index{output!$\delta \rho(\vec r)$}

  Instructs to write
  $\delta \rho(\vec r) = \rho(\vec r) - \rho_{atm}(\vec r)$, i.e., the
  valence pseudocharge density minus the sum of atomic valence
  pseudocharge densities. It is done for the mesh points used by DHSCF
  and it comes in file \sysfile{DRHO}. This file can be
  read by routine IORHO, which may be used by an application program
  in later versions.

  \note file \sysfile*{DRHO} is only written, not read, by siesta.

  If netCDF support is compiled in, the file \file{DeltaRho.grid.nc}
  is produced.

\end{fdflogicalF}

\begin{fdflogicalF}{SaveRhoXC}
  \index{output!charge density}
  
  Instructs to write the valence pseudocharge density at the mesh,
  including the nonlinear core corrections used to calculate the
  exchange-correlation energy, in file \sysfile{RHOXC}.

  \textit{Use:} File \sysfile*{RHOXC} is only written, not read, by
  siesta.

  If netCDF support is compiled in, the file \file{RhoXC.grid.nc} is produced.

\end{fdflogicalF}

\begin{fdflogicalF}{SaveElectrostaticPotential}
  \index{output!electrostatic potential}

  Instructs to write the total electrostatic potential, defined as the
  sum of the Hartree potential plus the local pseudopotential, at the
  mesh used by DHSCF, in file \sysfile{VH}. This file can be read by
  routine IORHO, which may be used by an application program in later
  versions.

  \textit{Use:} File \sysfile*{VH} is only written, not read, by
  siesta.

  If netCDF support is compiled in, the file
  \file{ElectrostaticPotential.grid.nc} is produced.

\end{fdflogicalF}

\begin{fdflogicalF}{SaveNeutralAtomPotential}
  \index{output!electrostatic potential}

  Instructs to write the neutral-atom potential, defined as the sum of
  the Hartree potential of a ``pseudo atomic valence charge'' plus the
  local pseudopotential, at the mesh used by DHSCF, in file
  \sysfile{VNA}. It is written at the start of the
  self-consistency cycle, as this potential does not change.

  \textit{Use:} File \sysfile*{VNA} is only written, not read, by
  siesta.

  If netCDF support is compiled in, the file \file{Vna.grid.nc} is
  produced.

\end{fdflogicalF}

\begin{fdflogicalF}{SaveTotalPotential}
  \index{output!total potential}

  Instructs to write the valence total effective local potential
  (local pseudopotential + Hartree + Vxc), at the mesh used by DHSCF,
  in file \sysfile{VT}. This file can be read by routine
  IORHO, which may be used by an application program in later
  versions.

  \textit{Use:} File \sysfile*{VT} is only written, not read, by
  siesta.

  If netCDF support is compiled in, the file
  \file{TotalPotential.grid.nc} is produced.

  \note a side effect: the vacuum level, defined as the effective
  potential at grid points with zero density, is printed in the
  standard output whenever such points exist (molecules, slabs) and
  either \fdf{SaveElectrostaticPotential} or \fdf{SaveTotalPotential}
  is \fdftrue.  In a symmetric (nonpolar) slab, the work function can
  be computed as the difference between the vacuum level and the Fermi
  energy.

\end{fdflogicalF}

\begin{fdflogicalF}{SaveIonicCharge}
  \index{output!ionic charge}

  Instructs to write the soft diffuse ionic charge at the mesh used by
  DHSCF, in file \sysfile{IOCH}. This file can be read by routine
  IORHO, which may be used by an application program in later
  versions. Remember that, within the \siesta\ sign convention, the
  electron charge density is positive and the ionic charge density is
  negative.

  \textit{Use:} File \sysfile*{IOCH} is only written, not read, by siesta.

  If netCDF support is compiled in, the file \file{Chlocal.grid.nc} is produced.

\end{fdflogicalF}

\begin{fdflogicalF}{SaveTotalCharge}
  \index{output!total charge}

  Instructs to write the total charge density (ionic+electronic) at
  the mesh used by DHSCF, in file \sysfile{TOCH}. This file
  can be read by routine IORHO, which may be used by an application
  program in later versions.  Remember that, within the \siesta\ sign
  convention, the electron charge density is positive and the ionic
  charge density is negative.

  \textit{Use:} File \sysfile*{TOCH} is only written, not read, by
  siesta.

  If netCDF support is compiled in, the file
  \file{TotalCharge.grid.nc} is produced. 

\end{fdflogicalF}

\begin{fdflogicalF}{SaveBaderCharge}
  \index{output!Bader charge}

  Instructs the program to save the charge density for further
  post-processing by a Bader-analysis program.  This ``Bader
  charge'' is the sum of the electronic valence charge density and a
  set of ``model core charges'' placed at the atomic sites. For a
  given atom, the model core charge is a generalized Gaussian, but
  confined to a radius of 1.0 Bohr (by default), and integrating to
  the total core charge ($Z$-$Z_{\mathrm{val}}$). These core charges are
  needed to provide local maxima for the charge density at the atomic
  sites, which are not guaranteed in a pseudopotential calculation.
  For hydrogen, an artificial core of 1 electron is added,
  with a confinement radius of 0.6 Bohr by default. The Bader
  charge is projected on the grid points of the mesh used by DHSCF,
  and saved in file \sysfile{BADER}. This file can be
  post-processed by the program \program{Util/grid2cube} to convert it to
  the ``cube'' format, accepted by several Bader-analysis programs
  (for example, see \url{http://theory.cm.utexas.edu/bader/}).  Due to
  the need to represent a localized core charge, it is advisable to
  use a moderately high \fdf{Mesh!Cutoff} (300--500 Ry) when invoking
  this option. The size of the ``basin of attraction'' around each atom in
  the Bader analysis should be monitored to check that the model core
  charge is contained in it.

  The radii for the model core charges can be specified in the input
  fdf file. For example:

    \begin{fdfexample}
       bader-core-radius-standard  1.3 Bohr
       bader-core-radius-hydrogen  0.4 Bohr
    \end{fdfexample}
  
  The suggested way to run the Bader analysis with the Univ. of
  Texas code is to use both the RHO and BADER files (both in
  ``cube'' format), with the BADER file providing the ``reference''
  and the RHO file the actual significant valence charge data which
  is important in bonding. (See the notes for pseudopotential codes
  in the above web page.) For example, for the h2o-pop example:

  \begin{shellexample}
    bader h2o-pop.RHO.cube -ref h2o-pop.BADER.cube
  \end{shellexample}

  If netCDF support is compiled in, the file \file{BaderCharge.grid.nc}
  is produced.

\end{fdflogicalF}


\begin{fdflogicalF}{AnalyzeChargeDensityOnly}
 \index{output!charge density}

  If \fdftrue, the program optionally generates charge density files
  and computes partial atomic charges (Hirshfeld, Voronoi, Bader) from
  the information in the input density matrix, and stops.  This is
  useful to analyze the properties of the charge density without a
  diagonalization step, and with a user-selectable mesh cutoff.  Note
  that the \fdf{DM.UseSaveDM} option should be active.  Note also that
  if an initial density matrix (DM file) is used, it is not
  normalized. All the relevant fdf options for charge-density file
  production and partial charge calculation can be used with this option.

\end{fdflogicalF}

\begin{fdflogicalF}{SaveInitialChargeDensity}
  \fdfdeprecatedby{AnalyzeChargeDensityOnly}
  \index{output!charge density}

  If \fdftrue, the program generates a \sysfile{RHOINIT}
  file (and a \file{RhoInit.grid.nc} file if netCDF support is
  compiled in) containing the charge density used to start the first
  self-consistency step, and it stops. Note that if an initial density
  matrix (DM file) is used, it is not normalized. This is useful to
  generate the charge density associated with ``partial'' DMs, as
  created by programs such as \program{dm\_creator} and
  \program{dm\_filter}.

  (This option is deprecated in favor of \fdf{AnalyzeChargeDensityOnly}.)
\end{fdflogicalF}
  

\subsection{Auxiliary Force field}
\fdfindex{MM}

It is possible to supplement the DFT interactions with a limited
set of force-field options, typically useful to simulate dispersion
interactions. It is not yet possible to turn off DFT and base the
dynamics only on the force field. The \program{GULP} program should be
used for that.

\begin{fdfentry}{MM!Potentials}[block]

  This block allows the input
  of molecular mechanics potentials between species. The following
  potentials are currently implemented:
  \begin{itemize}
    \item%
    C6, C8, C10 powers of the Tang-Toennies damped dispersion
    potential.

    \item%
    A harmonic interaction.

    \item%
    A dispersion potential of the Grimme type (similar to the C6
    type but with a different damping function). (See S. Grimme,
    J. Comput. Chem. Vol 27, 1787-1799 (2006)). See also
    \fdf{MM!Grimme.D} and \fdf{MM!Grimme.S6} below. 

  \end{itemize}

  The format of the input is the two species numbers that are to
  interact, the potential name (C6, C8, C10, harm, or Grimme), followed
  by the potential parameters. For the damped dispersion potentials the
  first number is the coefficient and the second is the exponent of the
  damping term (i.e., a reciprocal length). A value of zero for the
  latter term implies no damping. For the harmonic potential the force
  constant is given first, followed by r0. For the Grimme potential C6
  is given first, followed by the (corrected) sum of the van der Waals
  radii for the interacting species (a real length). Positive values of
  the C6, C8, and C10 coefficients imply attractive potentials.

  \begin{fdfexample}
    %block MM.Potentials
      1 1 C6 32.0 2.0
      1 2 harm 3.0 1.4
      2 3 Grimme 6.0 3.2
    %endblock MM.Potentials
  \end{fdfexample}

  To automatically create the input for Grimme's method, see the
  utility \program{Util/Grimme}, which can read an fdf file and
  generate the correct input.

\end{fdfentry}

\begin{fdfentry}{MM!Cutoff}[length]<$30\,\mathrm{Bohr}$>

  Specifies the distance out to which the molecular mechanics
  potentials will act before being treated as zero.
  
\end{fdfentry}

\begin{fdfentry}{MM!UnitsEnergy}[unit]<eV>

  Specifies the units to be used for energy in the
  molecular mechanics potentials.
  
\end{fdfentry}

\begin{fdfentry}{MM!UnitsDistance}[unit]<Ang>
  
  Specifies the units to be used for distance in the
  molecular mechanics potentials.

\end{fdfentry}

\begin{fdfentry}{MM!Grimme.D}[real]<$20.0$>

  Specifies the scale factor $d$ for the scaling function
  in the Grimme dispersion potential (see above).

\end{fdfentry}

\begin{fdfentry}{MM!Grimme.S6}[real]<$1.66$>

  Specifies the overall fitting factor $s_6$ for the
  Grimme dispersion potential (see above). This number depends on the
  quality of the basis set, the exchange-correlation functional, and the
  fitting set.
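
  As an illustration, a possible dispersion setup might look like this
  (the $s_6$ value shown is purely indicative; the appropriate value
  depends on the functional and the basis set):

  \begin{fdfexample}
    MM.UnitsEnergy    eV
    MM.UnitsDistance  Ang
    MM.Grimme.D       20.0
    MM.Grimme.S6      0.75
  \end{fdfexample}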
  
\end{fdfentry}


\subsection{Parallel options}


\begin{fdfentry}{BlockSize}[integer]<\nonvalue{automatic}>

  The orbitals are distributed over the processors when running in
  parallel using a 1-D block-cyclic algorithm. \fdf{BlockSize} is
  the number of consecutive orbitals which are located on a given
  processor before moving to the next one. Large values of this
  parameter lead to poor load balancing, while small values can lead
  to inefficient execution. The performance of the parallel code can
  be optimised by varying this parameter until a suitable value is
  found.
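
  For example, one might try a few values and keep the fastest (the
  value shown is purely indicative):

  \begin{fdfexample}
    BlockSize 16
  \end{fdfexample}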

\end{fdfentry}

\begin{fdfentry}{ProcessorY}[integer]<\nonvalue{automatic}>
  
  The mesh points are divided in the Y and Z directions (more
  precisely, along the second and third lattice vectors) over the
  processors in a 2-D grid. \fdf{ProcessorY} specifies the
  dimension of the processor grid in the Y-direction and must be a
  factor of the total number of processors. Ideally the processors
  should be divided so that the number of mesh points per processor
  along each axis is as similar as possible.

  Defaults to an automatically chosen factor of the total number of
  processors.
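
  For example, on 16 processors a square processor grid could be
  requested with (value purely indicative):

  \begin{fdfexample}
    ProcessorY 4
  \end{fdfexample}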

\end{fdfentry}


\subsubsection{Parallel decompositions for O(N)}
\label{parallel-on}

Apart from the default block-cyclic decomposition of the orbital data,
O(N) calculations can use other schemes which should be more
efficient: spatial decomposition (based on atom proximity), and domain
decomposition (based on the most efficient abstract partition of the
interaction graph of the Hamiltonian). 


\begin{fdflogicalF}{UseDomainDecomposition}
  
  This option instructs the program to employ a graph-partitioning
  algorithm (using the \program{METIS} library; see
  \url{www.cs.umn.edu/~metis}) to find an efficient distribution of
  the orbital data over processors.  To use this option (meaningful
  only in parallel) the program has to be compiled with the
  preprocessor option \shell{SIESTA\_\_METIS} (or the deprecated
  \shell{ON\_DOMAIN\_DECOMP}) and the \program{METIS} library has to
  be linked in.

\end{fdflogicalF}

\begin{fdflogicalF}{UseSpatialDecomposition}

  When performing a parallel order N calculation, this option
  instructs the program to execute a spatial decomposition algorithm
  in which the system is divided into cells, which are then assigned,
  together with the orbitals centered in them, to the different
  processors. The size of the cells is, by default, equal to the
  maximum distance at which there is a non-zero matrix element in the
  Hamiltonian between two orbitals, or the radius of the Localized
  Wannier function, whichever is larger. If this is the case,
  then an orbital will only interact with other orbitals in the same
  or neighbouring cells. However, by decreasing the cell size and
  searching over more cells it is possible to achieve better load
  balance in some cases. This is controlled by the variable
  \fdf{RcSpatial}.

  \note the distribution algorithm is quite fragile and a careful
  tuning of \fdf{RcSpatial} might be needed. This option is therefore
  not enabled by default.

\end{fdflogicalF}

\begin{fdfentry}{RcSpatial}[length]<\nonvalue{maximum orbital range}>
  
  Controls the cell size during the spatial decomposition.
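
  An illustrative combination (the cell size shown is arbitrary and
  will generally need tuning):

  \begin{fdfexample}
    UseSpatialDecomposition  true
    RcSpatial                6.0 Bohr
  \end{fdfexample}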

\end{fdfentry}


\subsection{Efficiency options}

\begin{fdflogicalF}{DirectPhi}

  The calculation of the matrix elements on the mesh requires the
  values of the orbitals at the mesh points. This array represents one
  of the largest uses of memory within the code. If set to \fdftrue,
  the code generates the orbital values when needed rather than
  storing them. This costs more computer time but makes it possible to
  run larger jobs where memory is the limiting factor.
  
\end{fdflogicalF}


\subsection{Memory, CPU-time, and Wall time accounting options}

\begin{fdfentry}{AllocReportLevel}[integer]<$0$>

  Sets the level of the allocation report, printed in file
  \sysfile{alloc}. However, not all the allocated arrays are included
  in the report (this will be corrected in future versions). The
  allowed values are:
  \begin{itemize}
    \item%
    level 0 : no report at all (the default)
    \item%
    level 1 : only total memory peak and where it occurred
    \item%
    level 2 : detailed report printed only at
    normal program termination
    \item%
    level 3 : detailed report printed at every new memory peak
    \item%
    level 4 : print every individual (re)allocation or deallocation
  \end{itemize}

  \note In MPI runs, only node-0 peak reports are produced.
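
  For example, to obtain a detailed report at normal termination,
  listing only arrays above roughly 1\,MB (threshold value purely
  indicative):

  \begin{fdfexample}
    AllocReportLevel      2
    AllocReportThreshold  1000000.
  \end{fdfexample}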
  
\end{fdfentry}


\begin{fdfentry}{AllocReportThreshold}[real]<$0.$>

Sets the minimum size (in bytes) of the arrays whose memory use is
individually printed in the detailed allocation reports (levels 2 and
3). It does not affect the reported memory sums and peaks, which
always include all arrays.
  
\end{fdfentry}

\begin{fdfentry}{TimerReportThreshold}[real]<$0.$>

  Sets the minimum fraction of total CPU time that a subroutine or
  code section must use for its CPU time to be printed individually in
  the detailed timer reports. To obtain the accounting of MPI
  communication times
  in parallel executions, you must compile with option
  \shell{-DMPI\_TIMING}\index{compile!pre-processor!-DMPI\_TIMING}.
  In serial execution, the CPU times are printed at the end of the
  output file. In parallel execution, they are reported in a separated
  file named \sysfile{times}.
  
\end{fdfentry}

\begin{fdflogicalF}{UseTreeTimer}

  Enable an experimental timer which is based on wall time on the
  master node and is aware of the tree-structure of the timed
  sections. At the end of the program, a report is generated in the
  output file, and a \file{time.json} file in JSON format is also
  written. \index{JSON timing report} This file can be used by
  third-party scripts to process timing data.

  \note if used with the PEXSI solver (see Sec.~\ref{SolverPEXSI})
  this defaults to \fdftrue.
  
\end{fdflogicalF}


\begin{fdflogicalT}{UseParallelTimer}

  Determines whether timings are performed in parallel. This may
  introduce a slight overhead.

  \note if used with the PEXSI solver (see Sec.~\ref{SolverPEXSI})
  this defaults to \fdffalse.
  
\end{fdflogicalT}

\begin{fdflogicalF}{TimingSplitScfSteps}

  If \fdftrue, the timings for individual SCF steps are recorded
  separately.

  \note the tree timer (\fdf{UseTreeTimer}) should be used to make
  meaningful use of this information; it is enabled by default if this
  variable is \fdftrue.

\end{fdflogicalF}

\begin{fdfentry}{MaxWalltime}[real time]<Infinity>

  Set an internal limit to the wall time allotted to the
  program's execution. Typically this is related to the external limit
  imposed by queuing systems. The code checks its wall time periodically
  and will abort if nearing the limit, with some slack left for clean-up
  operations (proper closing of files, emergency output...), as determined
  by \fdf{MaxWalltime!Slack}. See Sec.~\ref{sec:fdf-units} for available
  units of time (\textbf{s}, \textbf{mins}, \textbf{hours}, \textbf{days}).
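
  For example, for a job submitted with a 12-hour queue limit one
  might set (values purely indicative):

  \begin{fdfexample}
    MaxWalltime        11.8 hours
    MaxWalltime.Slack  120 s
  \end{fdfexample}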

\end{fdfentry}


\begin{fdfentry}{MaxWalltime!Slack}[real time]<5 s>

  The code checks its wall time $T_{\mathrm{wall}}$ periodically and will
  abort if $T_{\mathrm{wall}} > T_{\mathrm{max}} - T_{\mathrm{slack}}$, so that
  some slack is left for any clean-up operations.

\end{fdfentry}


\subsection{The catch-all option UseSaveData}
\index{reading saved data}

This is a dangerous feature, and is deprecated, but retained for
historical compatibility. Use the individual options instead.


\begin{fdflogicalF}{UseSaveData}
  \index{reading saved data!all} 

  Instructs the program to use as much information as possible from
  previous runs, stored in the files \sysfile{XV}, \sysfile{DM} and
  \sysfile{LWF}.

  \note if a file does not exist, the corresponding information is
  read from the fdf file.

\end{fdflogicalF}



\subsection{Output of information for Denchar}
\index{denchar}

The program \program{denchar} in \program{Util/Denchar} can generate
charge-density and wavefunction information in real space.

\begin{fdflogicalF}{Write!Denchar}
  \index{output!charge density and/or wfs for DENCHAR code} 

  Instructs the program to write the information needed by the
  utility program DENCHAR
  (by J. Junquera and P. Ordej\'on) to generate valence charge
  densities and/or wavefunctions in real space (see
  \program{Util/Denchar}). The information is written in files
  \sysfile{PLD} and \sysfile{DIM}.

  To run DENCHAR you will need, apart from the \sysfile*{PLD} and \sysfile*{DIM} files,
  the Density-Matrix (DM) file and/or a wavefunction (\sysfile*{WFSX})
  file, and the .ion files containing the information about the basis
  orbitals.

\end{fdflogicalF}


\subsection{NetCDF (CDF4) output file}
\label{cdf-output}

\note this requires \siesta\ compiled with CDF4 support.

To unify the output of an entire \siesta\ calculation in a single
file, a generic NetCDF file is created if \siesta\ is compiled with
\program{ncdf} support; see Sec.~\ref{sec:libs} and the
\program{ncdf} section\index{compile!pre-processor!-DNCDF\_4}.

Generally, all flags controlling NetCDF output
(\fdf{SaveElectrostaticPotential}, etc.) apply to this file as well.

One may control compression and parallel I/O for this file, if
needed.

\begin{fdflogicalF}{CDF!Save}

  Create the \sysfile{nc} file which is a NetCDF file.
  
  This file will be created with a large set of \emph{groups} which
  make it easy to separate the quantities. The units of the stored
  quantities are also recorded in the file.

  \note this option is not available for MD/relaxations, only for
  force constant runs.
  
\end{fdflogicalF}


\begin{fdfentry}{CDF!Compress}[integer]<$0$>

  An integer between 0 and 9, where 0 means no compression and 9 the
  highest compression.

  The higher the number the more computation time is spent on
  compressing the data. A good compromise between speed and
  compression is $3$.
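
  For example:

  \begin{fdfexample}
    CDF.Save      true
    CDF.Compress  3
  \end{fdfexample}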

  \note if one requests parallel I/O (\fdf{CDF!MPI}) this will
  automatically be set to $0$. One cannot perform parallel I/O and
  compress the data simultaneously.

  \note instead of using \siesta\ for compression you may compress
  after execution by:
  \begin{shellexample}
    nccopy -d 3 -s noncompressed.nc compressed.nc
  \end{shellexample}
  
\end{fdfentry}


\begin{fdflogicalF}{CDF!MPI}

  Write \sysfile{nc} in parallel using MPI for increased
  performance. This has almost no memory overhead, but for very large
  numbers of processors it may saturate the file system.

  \note this is an experimental flag.

\end{fdflogicalF}

\begin{fdfentry}{CDF!Grid.Precision}[string]<single|double>

  The precision at which real-space grid quantities, such as the
  density and the electrostatic potential, are stored.
  
\end{fdfentry}



\vspace{5pt}
\section{STRUCTURAL RELAXATION, PHONONS, AND MOLECULAR DYNAMICS}

This functionality is not \siesta-specific, but is implemented to
provide a more complete simulation package. The program has an outer
geometry loop: it computes the electronic structure (and
thus the forces and stresses) for a given geometry, updates the
atomic positions (and maybe the cell vectors) accordingly and moves on
to the next cycle.
%
If a molecular dynamics option you need is missing, you are strongly
encouraged to look into \fdf{MD.TypeOfRun:Lua} or
\fdf{MD.TypeOfRun:Master}.


Several options for MD and structural optimizations are
implemented, selected by
\begin{fdfentry}{MD.TypeOfRun}[string]<CG>

  \begin{fdfoptions}
    
    \option[CG]%
    \fdfindex*{MD.TypeOfRun:CG}%

    Coordinate optimization by conjugate gradients. Optionally (see
    variable \fdf{MD.VariableCell} below), the optimization can include the
    cell vectors.

    \option[Broyden]%
    \fdfindex*{MD.TypeOfRun:Broyden}%
    Coordinate optimization by a modified Broyden scheme. Optionally
    (see variable \fdf{MD.VariableCell} below), the optimization can
    include the cell vectors.

    \option[FIRE]%
    \fdfindex*{MD.TypeOfRun:FIRE}%
    Coordinate optimization by the Fast Inertial Relaxation Engine
    (FIRE) (E. Bitzek et al., PRL 97, 170201 (2006)). Optionally (see
    variable \fdf{MD.VariableCell} below), the optimization can
    include the cell vectors.

    \option[Verlet]%
    \fdfindex*{MD.TypeOfRun:Verlet}%
    Molecular dynamics with the standard Verlet algorithm.

    \option[Nose]%
    \fdfindex*{MD.TypeOfRun:Nose}%
    MD with temperature controlled  by means of a Nos\'e
    thermostat

    \option[ParrinelloRahman]%
    \fdfindex*{MD.TypeOfRun:ParrinelloRahman}%
    MD with pressure controlled by the Parrinello-Rahman method

    \option[NoseParrinelloRahman]%
    \fdfindex*{MD.TypeOfRun:NoseParrinelloRahman}%
    MD with temperature controlled by means of a Nos\'e thermostat and
    pressure controlled by the Parrinello-Rahman method

    \option[Anneal]%
    \fdfindex*{MD.TypeOfRun:Anneal}%
    MD with annealing to a desired temperature and/or pressure (see
    variable \fdf{MD.AnnealOption} below)

    \option[FC]%
    \fdfindex*{MD.TypeOfRun:FC}%
    Compute force constants matrix\index{Force Constants Matrix} for
    phonon calculations.

    \option[Master|Forces]%
    \fdfindex*{MD.TypeOfRun:Master}%
    \fdfindex*{MD.TypeOfRun:Forces}%
    Receive coordinates from, and return forces to, an external
    driver program, using MPI, Unix pipes, or Inet sockets for
    communication. The routines in module \program{fsiesta} allow the
    user's program to perform this communication transparently, as if
    \siesta\ were a conventional force-field subroutine. See
    \shell{Util/SiestaSubroutine/README} for details. WARNING: if this
    option is specified without a driver program sending data,
    \siesta\ may hang without any notice.
    
    See directory Util/Scripting \index{Scripting} for other driving
    options.

    \option[Lua]%
    \fdfindex*{MD.TypeOfRun:Lua}%
    Fully control the MD cycle and convergence path using an external
    Lua script. 

    With an external Lua script one may control nearly everything
    from a script. One can query \emph{any} internal data structure in
    \siesta\ and, similarly, return \emph{any} data, thus overwriting
    the internals. Ideas which may be implemented in such a Lua script
    include:
    \begin{itemize}
      \item New geometry relaxation algorithms

      \item NEB calculations

      \item New MD routines

      \item Convergence tests of \fdf{Mesh!Cutoff} and
      \fdf{kgrid.MonkhorstPack}, or other parameters (currently basis
      set optimizations cannot be performed in the Lua script).

    \end{itemize}
    See Sec.~\ref{sec:lua} for additional details (and a description of
    \program{flos}, which implements some of the above mentioned items).

    Using this option requires the compilation of \siesta\ with the
    \program{flook} library.%
    \index{flook}\index{External library!flook}%
    If \siesta\ is not compiled as prescribed in Sec.~\ref{sec:libs}
    this option will make \siesta\ die.
    
    \option[TDED]%
    \fdfindex*{MD.TypeOfRun:TDED}%

    New option to perform time-dependent electron dynamics simulations
    (TDED) within RT-TDDFT. For more details see
    Sec.~\ref{sec:tddft}.

    The second run of \siesta\ uses this option with the files
    \sysfile{TDWF} and \sysfile{TDXV} present in the working
    directory.  In this option ions and electrons are assumed to move
    simultaneously. The occupied electronic states are time-evolved
    instead of the usual SCF calculations in each step.  Choose this
    option even if you intend to do only-electron dynamics. If you
    want to do an electron dynamics-only calculation set
    \fdf{MD.FinalTimeStep} equal to $1$. For optical response
    calculations switch off the external field during the second
    run. Unlike in a standard MD simulation, \fdf{MD.LengthTimeStep}
    is defined as the product of \fdf{TDED!TimeStep} and
    \fdf{TDED!Nsteps}; the user-defined \fdf{MD.LengthTimeStep} is
    ignored in TDDFT calculations.
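
    As an illustration, a second-run input for electron-only dynamics
    might contain (the time-step and step-count values are purely
    indicative):

    \begin{fdfexample}
      MD.TypeOfRun      TDED
      MD.FinalTimeStep  1
      TDED.TimeStep     0.002 fs
      TDED.Nsteps       1
    \end{fdfexample}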

     
  \end{fdfoptions}
  
  \note if \fdf{Compat!Pre-v4-Dynamics} is \fdftrue\ this will default
  to \fdf*{Verlet}.

  Note that some options specified in later variables (like quenching)
  modify the behavior of these MD options.

  Apart from being able to act as a force subroutine for a driver
  program that uses module fsiesta, \siesta\ is also prepared to
  communicate with the i-PI code (see
  \texttt{http://epfl-cosmo.github.io/gle4md/index.html?page=ipi}).
  To do this, \siesta\ must be started after i-PI (it acts as a client
  of i-PI, communicating with it through Inet or Unix sockets), and
  the following lines must be present in the .fdf data file:
  \begin{fdfexample}
     MD.TypeOfRun      Master     # equivalent to 'Forces'
     Master.code       i-pi       # ( fsiesta | i-pi )
     Master.interface  socket     # ( pipes | socket | mpi )
     Master.address    localhost  # or driver's IP, e.g. 150.242.7.140
     Master.port       10001      # 10000+siesta_process_order
     Master.socketType inet       # ( inet | unix )
  \end{fdfexample}

\end{fdfentry}



\subsection{Compatibility with pre-v4 versions}
\index{Backward compatibility}

Starting in the summer of 2015, some changes were made to the behavior
of the program regarding default dynamics options and choice of
coordinates to work with during post-processing of the electronic
structure. The changes are:

\begin{itemize}
  \item %
  The default dynamics option is ``CG'' instead of ``Verlet''.

  \item%
  The coordinates, if moved by the dynamics routines, are reset to
  their values at the previous step for the analysis of the electronic 
  structure (band structure calculations, DOS, LDOS, etc).

  \item%
  Some output files reflect the values of the ``un-moved''
  coordinates.

  \item%
  The default convergence criterion is now \emph{both} density and
  Hamiltonian convergence; see \fdf{SCF.DM!Converge} and
  \fdf{SCF.H!Converge}.

\end{itemize}

To recover the previous behavior, the user can turn on the
compatibility switch \fdf*{Compat!Pre-v4-Dynamics}, which is off by
default.

Note that complete compatibility cannot be guaranteed.

\subsection{Structural relaxation}

In this mode of operation, the program moves the atoms (and optionally
the cell vectors) trying to minimize the forces (and stresses) on
them.

These are the options common to all relaxation methods. If the Zmatrix
input option is in effect (see Sec.~\ref{sec:Zmatrix}) the
Zmatrix-specific options take precedence.  The ``MD'' prefix is
misleading but kept for historical reasons.

\begin{fdflogicalF}{MD.VariableCell}
  \index{cell relaxation} 

  If \fdftrue, the lattice is relaxed together with the atomic
  coordinates. It allows targeting hydrostatic pressures or arbitrary
  stress tensors. See \fdf{MD.MaxStressTol},
  \fdf{Target!Pressure}, \fdf{Target!Stress.Voigt},
  \fdf{Constant!Volume}, and
  \fdf{MD.PreconditionVariableCell}.

  \note only compatible with \fdf{MD.TypeOfRun:CG},
  \fdf*{Broyden} or \fdf*{fire}.
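
  A typical variable-cell relaxation setup might look like this (the
  target pressure is purely indicative):

  \begin{fdfexample}
    MD.TypeOfRun     CG
    MD.VariableCell  true
    Target.Pressure  5. GPa
  \end{fdfexample}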

\end{fdflogicalF}


\begin{fdflogicalF}{Constant!Volume}
  \fdfindex*{MD.ConstantVolume}
  \fdfdeprecates{MD.ConstantVolume}%
  \index{constant-volume cell relaxation} 

  If \fdftrue, the cell volume is kept constant in a variable-cell
  relaxation: only the cell shape and the atomic coordinates are
  allowed to change.  Note that it does not make much sense to specify
  a target stress or pressure in this case, except for anisotropic
  (traceless) stresses.  See \fdf{MD.VariableCell} and
  \fdf{Target!Stress.Voigt}.

  \note only compatible with \fdf{MD.TypeOfRun:CG},
  \fdf*{Broyden} or \fdf*{fire}.

\end{fdflogicalF}

\begin{fdflogicalF}{MD.RelaxCellOnly}
  \index{relaxation of cell parameters only}

  If \fdftrue, only the cell parameters are relaxed (by the Broyden or
  FIRE method, not CG). The atomic coordinates are re-scaled to the
  new cell, keeping the fractional coordinates constant. For
  \fdf{Zmatrix} calculations, the fractional position of the first
  atom in each molecule is kept fixed, and no attempt is made to
  rescale the bond distances or angles.

  \note only compatible with \fdf{MD.TypeOfRun:Broyden} or \fdf*{fire}.

\end{fdflogicalF}

\begin{fdfentry}{MD.MaxForceTol}[force]<$0.04\,\mathrm{eV/Ang}$>
  
  Force tolerance in coordinate optimization.
  Run stops if the maximum atomic force is
  smaller than \fdf{MD.MaxForceTol} (see \fdf{MD.MaxStressTol}
  for variable cell).

\end{fdfentry}

\begin{fdfentry}{MD.MaxStressTol}[pressure]<$1\,\mathrm{GPa}$>
  
  Stress tolerance in variable-cell CG optimization. Run stops if the
  maximum atomic force is smaller than \fdf{MD.MaxForceTol} and the
  maximum stress component is smaller than \fdf{MD.MaxStressTol}.

  Special consideration is needed if used with Sankey-type basis sets,
  since the combination of orbital kinks at the cutoff radii and the
  finite-grid integration originate discontinuities in the stress
  components, whose magnitude depends on the cutoff radii (or energy
  shift) and the mesh cutoff. The tolerance has to be larger than the
  discontinuities to avoid endless optimizations if the target stress
  happens to be in a discontinuity.
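
  For example, a tighter-than-default relaxation might use (values
  purely indicative):

  \begin{fdfexample}
    MD.MaxForceTol   0.01 eV/Ang
    MD.MaxStressTol  0.5 GPa
  \end{fdfexample}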

\end{fdfentry}

\begin{fdfentry}{MD.Steps}[integer]<0>
  \fdfindex*{MD.NumCGsteps}
  \fdfdeprecates{MD.NumCGsteps}
  
  Maximum number of steps in a minimization routine
  (the minimization will stop if the tolerance is reached earlier; see
  \fdf{MD.MaxForceTol} above).

  \note The old flag \fdf{MD.NumCGsteps} will remain for historical
  reasons.

\end{fdfentry}

\begin{fdfentry}{MD.MaxDispl}[length]<$0.2\,\mathrm{Bohr}$>
  \fdfindex*{MD.MaxCGDispl}
  \fdfdeprecates{MD.MaxCGDispl}
  
  Maximum atomic displacements in an optimization move.

  In the Broyden optimization method, it is also possible to limit
  indirectly the \textit{initial\/} atomic displacements using
  \fdf{MD.Broyden.Initial.Inverse.Jacobian}. For the \fdf*{FIRE} method, the
  same result can be obtained by choosing a small time step.

  Note that there are Zmatrix-specific options that override this option.

  \note The old flag \fdf{MD.MaxCGDispl} will remain for historical
  reasons.

\end{fdfentry}

\begin{fdfentry}{MD.PreconditionVariableCell}[length]<$5\,\mathrm{Ang}$>
  
  A length by which the strain components are multiplied in a
  variable-cell optimization. The strain components enter the minimization on the
  same footing as the coordinates. For good efficiency, this length
  should make the scale of energy variation with strain similar to the
  one due to atomic displacements. It is also used for the application
  of the \fdf{MD.MaxDispl} value to the strain components.

\end{fdfentry}


\begin{fdfentry}{ZM.ForceTolLength}[force]<$0.00155574\,\mathrm{Ry/Bohr}$>
  
  Parameter that controls the convergence with respect to forces on
  Z-matrix lengths.

\end{fdfentry}


\begin{fdfentry}{ZM.ForceTolAngle}[torque]<$0.00356549\,\mathrm{Ry/rad}$>
  
  Parameter that controls the convergence with respect to forces on
  Z-matrix angles.

\end{fdfentry}

\begin{fdfentry}{ZM.MaxDisplLength}[length]<$0.2\,\mathrm{Bohr}$>
  
  Parameter that controls the maximum change in a Z-matrix length
  during an optimisation step.

\end{fdfentry}

\begin{fdfentry}{ZM.MaxDisplAngle}[angle]<$0.003\,\mathrm{rad}$>
  
  Parameter that controls the maximum change in a Z-matrix angle
  during an optimisation step.

\end{fdfentry}



\subsubsection{Conjugate-gradients optimization}

This was historically the default geometry-optimization method, and
all the above options were introduced specifically for it, hence their
names. The following pertains only to this method:

\index{Conjugate-gradient history information}
\begin{fdflogicalF}{MD.UseSaveCG}
  \index{reading saved data!CG}

  Instructs the program to read the conjugate-gradient history
  information stored in the file \sysfile{CG} by a previous run.

  \note to get an actual continuation of interrupted CG runs, use this
  together with \fdf{MD.UseSaveXV} \fdftrue\ and the \sysfile*{XV}
  file generated in the same run as the CG file.  If the required file
  does not exist, a warning is printed but the program does not
  stop. Overrides \fdf{UseSaveData}.

  \note no such feature exists yet for a Broyden-based relaxation.

\end{fdflogicalF}

\subsubsection{Broyden optimization}

This method uses the modified Broyden algorithm to build up the
Jacobian matrix (see D.D. Johnson, PRB 38, 12807 (1988)). Note that
this is not BFGS.

\begin{fdfentry}{MD.Broyden!History.Steps}[integer]<$5$>
  \index{Broyden optimization}

  Number of relaxation steps during which the modified Broyden
  algorithm builds up the Jacobian matrix.

\end{fdfentry}

\begin{fdflogicalT}{MD.Broyden!Cycle.On.Maxit}

  Upon reaching the maximum number of history data sets which are kept
  for Jacobian estimation, throw away the oldest and shift the rest to
  make room for a new data set. The alternative is to re-start the
  Broyden minimization algorithm from a first step of a diagonal
  inverse Jacobian (which might be useful when the minimization is
  stuck).

\end{fdflogicalT}

\begin{fdfentry}{MD.Broyden!Initial.Inverse.Jacobian}[real]<$1$>

  Initial inverse Jacobian for the optimization procedure. (The units
  are those implied by internal \siesta\ usage.) The default value
  seems to work well for most systems.

\end{fdfentry}



\subsubsection{FIRE relaxation}

Implementation of the Fast Inertial Relaxation Engine (FIRE) method
(E. Bitzek et al., PRL 97, 170201 (2006)) in a manner compatible with
the CG and Broyden modes of relaxation. (An older implementation,
activated by the \fdf*{MD.FireQuench} variable, is still available.)

\begin{fdfentry}{MD.FIRE.TimeStep}[time]<\fdfvalue{MD.LengthTimeStep}>
  
  The (fictitious) time-step for FIRE relaxation.  This is the main
  user-variable when the option \fdf*{FIRE} for
  \fdf{MD.TypeOfRun} is active.

  \note users are encouraged to change the default value, as the link
  to \fdf{MD.LengthTimeStep} is misleading.

  There are other low-level options tunable by the user (see the
  routines \texttt{fire\_optim} and \texttt{cell\_fire\_optim} for
  more details).

\end{fdfentry}


\ifdeprecated
% The below options are deprecated in favor of:
% MD.TypeOfRun fire

\subsubsection{Quenched MD}

These methods are really based on molecular dynamics, but are used for
structural relaxation.

Note that the Zmatrix input option (see Sec.~\ref{sec:Zmatrix}) is not
compatible with molecular dynamics. The initial geometry can be
specified using the Zmatrix format, but the Zmatrix generalized
coordinates will not be updated.

Note also that the force and stress tolerances have no effect on
the termination conditions of these methods. They run for the number
of MD steps requested (this is arguably a bug).

\begin{fdflogicalF}{MD.Quench}

  Logical option to perform a power quench during the molecular
  dynamics.  In the power quench, each velocity component is set to
  zero if it is opposite to the corresponding force component. This
  affects atomic velocities, or unit-cell velocities (for cell-shape
  optimizations).

  \note only applicable for \fdf{MD.TypeOfRun:Verlet} or
  \fdf*{ParrinelloRahman}.
  %
  It is incompatible with Nose thermostat options.  

  \note \fdf{MD.Quench} is superseded by \fdf{MD.FireQuench} (see
  below).

\end{fdflogicalF}


\begin{fdflogicalF}{MD.FireQuench}
  
  See the new option \fdf*{FIRE} for \fdf{MD.TypeOfRun}.

  Logical option to perform a FIRE quench during a Verlet molecular
  dynamics run, as described by Bitzek \textit{et al.} in
  Phys. Rev. Lett. \textbf{97}, 170201 (2006). It is a relaxation
  algorithm, and thus the dynamics are of no interest per se: the
  initial time-step can be played with (it uses
  \fdf{MD.LengthTimeStep} as initial $\Delta t$), as well as the
  initial temperature (recommended 0) and the atomic masses
  (recommended equal). Preliminary tests seem to indicate that the
  combination of $\Delta t = 5$ fs and a value of 20 for the atomic
  masses works reasonably. The dynamics stops when the force tolerance
  is reached (\fdf{MD.MaxForceTol}). The other parameters
  controlling the algorithm (initial damping, increase and decrease
  thereof etc.) are hardwired in the code, at the recommended values
  in the cited paper, including $\Delta t_{max} = 10$ fs.

  Only available for \fdf{MD.TypeOfRun:Verlet}.
  It is incompatible with Nose thermostat options. No variable-cell
  option is implemented for this at this stage.
  \fdf{MD.FireQuench} supersedes \fdf{MD.Quench}. This option is
  deprecated. The new option \fdf*{FIRE} for \fdf{MD.TypeOfRun} should be
  used instead.

\end{fdflogicalF}


\fi


\subsection{Target stress options}

Useful for structural optimizations and constant-pressure molecular
dynamics.

\begin{fdfentry}{Target!Pressure}[pressure]<$0\,\mathrm{GPa}$>
  \fdfindex*{MD.TargetPressure}
  \fdfdeprecates{MD.TargetPressure}
  
  Target pressure for Parrinello-Rahman method, variable cell
  optimizations, and annealing options.

  \note this is only compatible with 
  \fdf{MD.TypeOfRun} \fdf*{ParrinelloRahman}, \fdf*{NoseParrinelloRahman},
  \fdf*{CG}, \fdf*{Broyden} or \fdf*{FIRE} (variable cell), or \fdf*{Anneal}
  (if \fdf{MD.AnnealOption} \fdf*{Pressure} or \fdf*{TemperatureandPressure}).
  
\end{fdfentry}
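
  As an illustration, a variable-cell relaxation under an applied
  hydrostatic pressure could combine this flag with the relaxation
  options as follows (the 5 GPa value is merely illustrative):
  \begin{fdfexample}
     MD.TypeOfRun     Broyden
     MD.VariableCell  true
     Target.Pressure  5. GPa
  \end{fdfexample}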


\begin{fdfentry}{Target!Stress.Voigt}[block]<$-1$ $-1$ $-1$ $0$ $0$ $0$>
  \fdfdeprecates{MD.TargetStress}
  
  External or target stress tensor for variable cell optimizations.
  Stress components are given in a line, in the Voigt order \texttt{xx, yy,
      zz, yz, xz, xy}. In units of \fdf{Target!Pressure}, but
  with the opposite sign. For example, a uniaxial compressive stress
  of 2 GPa along the 100 direction would be given by
  \begin{fdfexample}
     Target.Pressure  2. GPa
     %block Target.Stress.Voigt
         -1.0  0.0  0.0  0.0  0.0  0.0
     %endblock
  \end{fdfexample}

  Only used if \fdf{MD.TypeOfRun} is \fdf*{CG}, \fdf*{Broyden} or
  \fdf*{FIRE} and \fdf{MD.VariableCell} is \fdftrue.

\end{fdfentry}

\begin{fdfentry}{MD.TargetStress}[block]<$-1$ $-1$ $-1$ $0$ $0$ $0$>
  \fdfdeprecatedby{Target!Stress.Voigt}

  Same as \fdf{Target!Stress.Voigt}, but with the component order used
  by \siesta\ versions prior to 4.1: \texttt{xx, yy, zz, xy, xz, yz}.

\end{fdfentry}


\begin{fdflogicalF}{MD.RemoveIntramolecularPressure}
  \index{removal of intramolecular pressure}

  If \fdftrue, the contribution to the stress coming from the internal
  degrees of freedom of the molecules will be subtracted from the
  stress tensor used in variable-cell optimization or variable-cell
  molecular dynamics.  This is done in an approximate manner, using
  the virial form of the stress, and assuming that the ``mean force''
  over the coordinates of the molecule represents the
  ``inter-molecular'' stress. The correction term was already computed
  in earlier versions of \siesta\ and used to report the ``molecule
  pressure''. The correction is now computed molecule-by-molecule if
  the Zmatrix format is used.

  If the intra-molecular stress is removed, the corrected static and
  total stresses are printed in addition to the uncorrected items.
  The corrected Voigt form is also printed.

  \note versions prior to 4.1 (also 4.1-beta releases) printed the
  Voigt stress tensor in this format: \shell{[x, y, z, xy, yz,
      xz]}. In 4.1 and later, \siesta\ \emph{only} shows the correct
  Voigt representation: \shell{[x, y, z, yz, xz, xy]}.

\end{fdflogicalF}



\subsection{Molecular dynamics}

In this mode of operation, the program moves the atoms (and optionally
the cell vectors) in response to the forces (and stresses), using the
classical equations of motion.

Note that the \fdf{Zmatrix} input option (see Sec.~\ref{sec:Zmatrix}) is not
compatible with molecular dynamics. The initial geometry can be
specified using the Zmatrix format, but the Zmatrix generalized
coordinates will not be updated.


\begin{fdfentry}{MD.InitialTimeStep}[integer]<$1$>
  
  Initial time step of the MD simulation.  In the current version of
  \siesta\ it must be 1.

  Used only if \fdf{MD.TypeOfRun} is not \fdf*{CG} or \fdf*{Broyden}.

\end{fdfentry}

\begin{fdfentry}{MD.FinalTimeStep}[integer]<\fdfvalue{MD.Steps}>

  Final time step of the MD simulation.

\end{fdfentry}


\begin{fdfentry}{MD.LengthTimeStep}[time]<$1\,\mathrm{fs}$>

  Length of the time step of the MD simulation.

\end{fdfentry}

\begin{fdfentry}{MD.InitialTemperature}[temperature/energy]<$0\,\mathrm K$>
  
  Initial temperature for the MD run. The atoms are assigned random
  velocities drawn from the Maxwell-Boltzmann distribution at the
  corresponding temperature. The constraint of zero center-of-mass
  velocity is imposed.

  \note only used if \fdf{MD.TypeOfRun} is \fdf*{Verlet}, \fdf*{Nose},
  \fdf*{ParrinelloRahman}, \fdf*{NoseParrinelloRahman} or
  \fdf*{Anneal}.

\end{fdfentry}
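
\begin{center}
\end{center}
As an illustration, a simple microcanonical run of 1000 steps of 1~fs
each, starting at 300~K, would combine the flags documented here as
follows (all values merely illustrative):
\begin{fdfexample}
   MD.TypeOfRun           Verlet
   MD.InitialTimeStep     1
   MD.FinalTimeStep       1000
   MD.LengthTimeStep      1. fs
   MD.InitialTemperature  300. K
\end{fdfexample}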

\begin{fdfentry}{MD.TargetTemperature}[temperature/energy]<$0\,\mathrm K$>

  Target temperature for Nose thermostat and annealing options.

  \note only used if \fdf{MD.TypeOfRun} is \fdf*{Nose},
  \fdf*{NoseParrinelloRahman}, or \fdf*{Anneal} (the latter only if
  \fdf{MD.AnnealOption} is \fdf*{Temperature} or
  \fdf*{TemperatureandPressure}).

\end{fdfentry}

\begin{fdfentry}{MD.NoseMass}[moment of inertia]<$100\,\mathrm{Ry\,fs^2}$>
  
  Generalized mass of Nose variable.  This determines the time scale
  of the Nose variable dynamics, and the coupling of the thermal bath
  to the physical system.

  Only used for Nose MD runs.

\end{fdfentry}

\begin{fdfentry}{MD.ParrinelloRahmanMass}[moment of inertia]<$100\,\mathrm{Ry\,fs^2}$>

  Generalized mass of Parrinello-Rahman variable.  This determines the
  time scale of the Parrinello-Rahman variable dynamics, and its
  coupling to the physical system.

  Only used for Parrinello-Rahman MD runs.

\end{fdfentry}
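
For instance, a constant-temperature, constant-pressure
(Nose-Parrinello-Rahman) run might read as follows; all values are
merely illustrative, and the generalized masses are given with the
compound fdf unit syntax:
\begin{fdfexample}
   MD.TypeOfRun             NoseParrinelloRahman
   MD.TargetTemperature     300. K
   Target.Pressure          1. GPa
   MD.NoseMass              100. Ry*fs**2
   MD.ParrinelloRahmanMass  100. Ry*fs**2
\end{fdfexample}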

\begin{fdfentry}{MD.AnnealOption}[string]<TemperatureAndPressure>
  
  Type of annealing MD to perform. The target temperature or pressure
  is achieved by velocity and unit-cell rescaling, over a time
  determined by the variable \fdf{MD.TauRelax} below.
  \begin{fdfoptions}
    \option[Temperature]%
    Reach a target temperature by velocity rescaling

    \option[Pressure]%
    Reach a target pressure by scaling of the unit cell size and shape

    \option[TemperatureandPressure]%
    Reach a target temperature and pressure by velocity rescaling and
    by scaling of the unit cell size and shape
  \end{fdfoptions}

  Only applicable for \fdf{MD.TypeOfRun:Anneal}.

\end{fdfentry}

\begin{fdfentry}{MD.TauRelax}[time]<$100\,\mathrm{fs}$>
  
  Relaxation time to reach target temperature and/or pressure in
  annealing MD. Note that this is a ``relaxation time'', and as such
  it gives a rough estimate of the time needed to achieve the given
  targets. As a normal simulation also exhibits oscillations, the
  actual time needed to reach the \emph{averaged} targets will be
  significantly longer.

  Only applicable for \fdf{MD.TypeOfRun:Anneal}.

\end{fdfentry}

\begin{fdfentry}{MD.BulkModulus}[pressure]<$100\,\mathrm{Ry/Bohr^3}$>  

  Estimate (may be rough) of the bulk modulus of the system.  This is
  needed to set the rate of change of cell shape to reach target
  pressure in annealing MD.

  Only applicable for \fdf{MD.TypeOfRun} \fdf*{Anneal}, when
  \fdf{MD.AnnealOption} is \fdf*{Pressure} or
  \fdf*{TemperatureandPressure}.

\end{fdfentry}
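
Putting the annealing options together, a combined
temperature-and-pressure anneal might be specified as follows (all
values merely illustrative):
\begin{fdfexample}
   MD.TypeOfRun          Anneal
   MD.AnnealOption       TemperatureandPressure
   MD.TargetTemperature  300. K
   Target.Pressure       2. GPa
   MD.TauRelax           200. fs
   MD.BulkModulus        100. GPa
\end{fdfexample}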



\subsection{Output options for dynamics}

Every time the atoms move, either during coordinate relaxation or
molecular dynamics, their positions \textbf{predicted for the next
step} and \textbf{current} velocities are stored in the file
\sysfile{XV}. The shape of the unit cell and its associated
``velocity'' (in Parrinello-Rahman dynamics) are also stored in this
file.

\begin{fdflogicalT}{WriteCoorInitial}
  \index{output!atomic coordinates!initial}

  It determines whether the initial atomic coordinates of the
  simulation are dumped into the main output file. These coordinates
  correspond to the ones actually used in the first step (see the
  section on precedence issues in structural input) and are output in
  Cartesian coordinates in Bohr units.

  It is not affected by the setting of \fdf{LongOutput}.

\end{fdflogicalT}


\begin{fdflogicalF}{WriteCoorStep}
  \index{output!atomic coordinates!in a dynamics step}
  
  If \fdftrue, the atomic coordinates are written to standard
  output at every MD time step or relaxation step. The coordinates are
  always written to the \sysfile{XV} file, but it is overwritten at
  every step. They can also be accumulated in the \sysfile*{MD}
  or \sysfile{MDX} files depending on \fdf{WriteMDHistory}.

\end{fdflogicalF}

\begin{fdflogicalF}{WriteForces}
  \index{output!forces} 

  If \fdftrue, it writes the atomic forces to the output file at every
  MD time step or relaxation step.  Note that the forces of the last
  step can be found in the file \sysfile{FA}. If constraints are used,
  the file \sysfile{FAC} is also written.

\end{fdflogicalF}

\begin{fdflogicalF}{WriteMDHistory}
  \index{output!molecular dynamics!history}

  If \fdftrue, \siesta\ accumulates the molecular dynamics trajectory
  in the following files:
  \begin{itemize}
    \item%
    \sysfile{MD} : atomic coordinates and velocities (and lattice
    vectors and their time derivatives, if the dynamics implies
    variable cell). The information is stored unformatted for
    postprocessing with utility programs to analyze the MD trajectory.

    \item%
    \sysfile{MDE} : shorter description of the run, with energy,
    temperature, etc., per time step.

  \end{itemize}
  These files are accumulative even for different runs.


  The trajectory of a molecular dynamics run (or a conjugate-gradient
  minimization) can be accumulated in different files: \sysfile*{MD},
  \sysfile*{MDE}, and \sysfile*{ANI}. The first file keeps the whole
  trajectory information, meaning positions and velocities at every time
  step, including lattice vectors if the cell varies. Note that the
  positions (and possibly the cell vectors) stored at each time step are
  the \textbf{predicted} values for the next step. Care should be taken if
  joint position-velocity correlations need to be computed from this
  file.  The second file gives global information (energy, temperature,
  etc.), and the third has the coordinates in a form suited for XMol
  animation. See the \fdf{WriteMDXmol} data descriptor for
  information. \siesta\ always appends new information to
  these files, making them accumulative even for different runs.

  The \program{iomd} subroutine can generate either an unformatted file
  \sysfile*{MD} (default) or ASCII-formatted files \sysfile*{MDX} and
  \sysfile*{MDC} containing the atomic and lattice trajectories,
  respectively. Edit the file to change the settings if desired.

\end{fdflogicalF}
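
For reference, a typical combination of the output options documented
above for an MD run might be (the choice is, of course, a matter of
taste):
\begin{fdfexample}
   WriteCoorStep   true
   WriteForces     true
   WriteMDHistory  true
\end{fdfexample}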


\begin{fdflogicalT}{Write.OrbitalIndex}

  If \fdftrue, an extra file named \sysfile{ORB\_INDX} is written,
  containing information on all orbitals used in the calculation.

  Its format is specified at the end of the file itself.
  
\end{fdflogicalT}


\subsection{Restarting geometry optimizations and MD runs}

Every time the atoms move, either during coordinate relaxation or
molecular dynamics, their \textbf{positions predicted for next step} and
\textbf{current velocities} are stored in the file SystemLabel.XV, where
SystemLabel is the value of that \fdflib\ descriptor (or ``siesta'' by
default).  The shape of the unit cell and its associated ``velocity''
(in Parrinello-Rahman dynamics) are also stored in this file. For MD
runs of type Verlet, Parrinello-Rahman, Nose, 
Nose-Parrinello-Rahman, or Anneal, a file named SystemLabel.VERLET\_RESTART,
SystemLabel.PR\_RESTART, SystemLabel.NOSE\_RESTART, 
SystemLabel.NPR\_RESTART, or SystemLabel.ANNEAL\_RESTART, 
respectively, is created to hold the values
of auxiliary variables needed for a completely seamless
continuation. 

If the restart file is not available, a simulation can still make use
of the XV information, and ``restart'' by basically repeating the
last-computed step (the positions are shifted backwards by using a
single Euler-like step with the current velocities as derivatives).
While this feature does not result in seamless continuations, it
allows cross-restarts (those in which a simulation of one kind (e.g.,
Anneal) is followed by another (e.g., Nose)), and permits the re-use
of dynamical information from old runs.

This restart fix is not satisfactory from a fundamental point of view,
so the MD subsystem in \siesta\ will have to be redesigned
eventually. In the meantime, users are reminded that the scripting
hooks being steadily introduced (see Util/Scripting) might be used to
create custom-made MD scripts.


\subsection{Use of general constraints}

\textbf{Note:} The Zmatrix format (see Sec.~\ref{sec:Zmatrix}) provides
an alternative constraint formulation which can be useful for systems
involving molecules.

\begin{fdfentry}{Geometry!Constraints}[block]
  \index{constraints in relaxations}

  Constrains certain atomic coordinates or cell parameters in a
  consistent manner.

  A large number of configurable options may be used to control the
  relaxation of the coordinates.

  \note \siesta\ prints a short summary of how the constraints have
  been interpreted.

  \begin{fdfoptions}
    \option[atom|position]%
    Fix certain atomic coordinates. 

    This option takes a variable number of integers which each
    correspond to the atomic index (or input sequence) in
    \fdf{AtomicCoordinatesAndAtomicSpecies}.

    \fdf*{atom} is now the preferred input option while
    \fdf*{position} still works for backwards compatibility.

    One may also specify ranges of atoms according to:

    \begin{fdfoptions}
      \option[{atom \emph{A} [\emph{B} [\emph{C} [\dots]]]}]%
      A sequence of atomic indices which are constrained. 

      % Generic input (compatible with the <= 4.0)
      \option[{atom from \emph{A} to \emph{B} [step \emph{s}]}]%
      Here atoms \emph{A} up to and including \emph{B} are
      constrained.
      %
      If \fdf*{step <s>} is given, the range
      \emph{A}:\emph{B} will be taken in steps of \emph{s}.

      \begin{fdfexample}
        atom from 3 to 10 step 2
      \end{fdfexample}
      will constrain atoms 3, 5, 7 and 9.

      \option[{atom from \emph{A} plus/minus \emph{B} [step
          \emph{s}]}]%
      Here atoms \emph{A} up to and including $\emph{A}+\emph{B}-1$
      are constrained.  
      %
      If \fdf*{step <s>} is given, the range
      \emph{A}:$\emph{A}+\emph{B}-1$ will be taken in steps of
      \emph{s}.

      % Generic input (compatible with the <= 4.0)
      \option[atom {[\emph{A}, \emph{B} -\mbox{}- \emph{C} [step \emph{s}], \emph{D}]}]%
      Equivalent to \fdf*{from \dots to} specification, however in a
      shorter variant. Note that the list may contain arbitrary number
      of ranges and/or individual indices.

      \begin{fdfexample}
        atom [2, 3 -- 10 step 2, 6]
      \end{fdfexample}
      will constrain atoms 2, 3, 5, 7, 9 and 6.

      \begin{fdfexample}
        atom [2, 3 -- 6, 8]
      \end{fdfexample}
      will constrain atoms 2, 3, 4, 5, 6 and 8.

      \option[atom all]%
      Constrain all atoms. 
      
    \end{fdfoptions}

    \note these specifications are apt for \emph{directional}
    constraints. 

    \option[Z]%
    Equivalent to \fdf*{atom} with all indices of the atoms that
    have atomic number equal to the specified number.

    \note this specification is apt for \emph{directional}
    constraints. 

    \option[species-i]%
    Equivalent to \fdf*{atom} with all indices of the atoms that
    have species according to the \fdf{ChemicalSpeciesLabel} and
    \fdf{AtomicCoordinatesAndAtomicSpecies}.

    \note this specification is apt for \emph{directional}
    constraints. 


    \option[center]%
    One may retain the coordinate center of a
    range of atoms (say molecules or other groups of atoms).

    Atomic indices may be specified according to \fdf*{atom}.

    \note this specification is apt for \emph{directional}
    constraints. 


    \option[rigid|molecule]%
    Move a selection of atoms together as though they were one atom.

    The forces are summed and averaged to get a net force on the
    entire molecule.

    Atomic indices may be specified according to \fdf*{atom}.

    \note this specification is apt for \emph{directional}
    constraints. 


    \option[rigid-max|molecule-max]%
    Move a selection of atoms together as though they were one atom.

    The maximum force acting on one of the atoms in the selection will
    be expanded to act on all atoms specified.

    Atomic indices may be specified according to \fdf*{atom}.


    \option[cell-angle]%
    Control whether the cell angles ($\alpha$, $\beta$, $\gamma$) may
    be altered.

    This takes either one or more of
    \fdf*{alpha}/\fdf*{beta}/\fdf*{gamma} as argument.

    \fdf*{alpha} is the angle between the 2nd and 3rd cell vector.

    \fdf*{beta} is the angle between the 1st and 3rd cell vector.

    \fdf*{gamma} is the angle between the 1st and 2nd cell vector.

    \note currently only one angle can be constrained at a time and it
    forces only the spanning vectors to be relaxed.


    \option[cell-vector]%
    Control whether the cell vectors ($A$, $B$, $C$) may be altered.

    This takes either one or more of \fdf*{A}/\fdf*{B}/\fdf*{C} as
    argument.

    Constraining the cell vectors is only allowed if each has a
    component only along its respective Cartesian direction;
    i.e.\ \fdf*{B} must only have a $y$ component.


    \option[stress]%
    Control which of the 6 stress components are constrained.

    Numbers $1\le i\le6$ where $1$ corresponds
    to the \emph{XX} stress-component, $2$ is \emph{YY}, $3$ is
    \emph{ZZ}, $4$ is \emph{YZ}/\emph{ZY}, $5$ is \emph{XZ}/\emph{ZX}
    and $6$ is \emph{XY}/\emph{YX}.

    Text specifications (\fdf*{XX}, \fdf*{YY}, etc.) are also allowed.


    \option[routine]%
    This calls the \program{constr} routine specified in the file
    \file{constr.f}. Unless that source file has been modified, this
    does nothing.
    See details and comments in the source file.


    \option[clear]%
    Remove constraints on selected atoms from all previously specified
    constraints.

    This may be handy when specifying constraints via \fdf*{Z} or
    \fdf*{species-i}.

    Atomic indices may be specified according to \fdf*{atom}.


    \option[clear-prev]%
    Remove constraints on selected atoms from the \emph{immediately
    preceding} constraint specification.

    This may be handy when specifying constraints via \fdf*{Z} or
    \fdf*{species-i}.

    Atomic indices may be specified according to \fdf*{atom}.

    \note two consecutive \fdf*{clear-prev} may be used in conjunction
    as though the atoms were specified on the same line.

  \end{fdfoptions}

  It is instructive to give an example of the input options presented.

  Consider a benzene molecule ($\mathrm{C}_6\mathrm{H}_6$) in which we
  wish to relax all hydrogen atoms (with no stress in the $x$ and $y$
  directions). This may be accomplished with
  \begin{fdfexample}
    %block Geometry.Constraints
      Z 6
      stress 1 2
    %endblock
  \end{fdfexample}
  Or as in this example
  \begin{fdfexample}
    %block AtomicCoordinatesAndAtomicSpecies
      ... ... ... 1   # C 1
      ... ... ... 2   # H 2
      ... ... ... 1   # C 3
      ... ... ... 2   # H 4
      ... ... ... 1   # C 5
      ... ... ... 2   # H 6
      ... ... ... 1   # C 7
      ... ... ... 2   # H 8
      ... ... ... 1   # C 9
      ... ... ... 2   # H 10
      ... ... ... 1   # C 11
      ... ... ... 2   # H 12
    %endblock
    %block Geometry.Constraints
      atom from 1 to 12 step 2
      stress XX YY
    %endblock
    %block Geometry.Constraints
      atom [1 -- 12 step 2]
      stress XX 2
    %endblock
    %block Geometry.Constraints
      atom all
      clear-prev [2 -- 12 step 2]
      stress 1 YY
    %endblock
  \end{fdfexample}
  where the 3 last blocks all create the same result.

  Finally, the \emph{directional} constraint is an important and often
  useful feature. 
  %
  When relaxing complex structures it may be advantageous to first
  relax along a given direction (where you expect the stress to be
  largest) and subsequently let it fully relax. Another example would
  be to relax the binding distance between a molecule and a surface,
  before relaxing the entire system by forcing the molecule and
  adsorption site to relax together.
  %
  To use a directional constraint one may provide an additional 3
  \emph{reals} after the \fdf*{atom}/\fdf*{rigid} specification.
  For instance in the previous example (benzene) one may first relax
  all Hydrogen atoms along the $y$ and $z$ Cartesian vector by
  constraining the $x$ Cartesian vector
  \begin{fdfexample}
    %block Geometry.Constraints
      Z 6 # constrain Carbon
      Z 1 1. 0. 0. # constrain Hydrogen along x Cartesian vector
    %endblock
  \end{fdfexample}
  Note that you \emph{must} append a ``.'' to mark the number as a
  real. The vector specified need not be normalized. Also, if you want
  the constraint to act along the $x$-$y$ diagonal you may do
  \begin{fdfexample}
    %block Geometry.Constraints
      Z 6
      Z 1 1. 1. 0.
    %endblock
  \end{fdfexample}
  
\end{fdfentry}



\subsection{Phonon calculations}

If \fdf{MD.TypeOfRun} is \fdf*{FC}, \siesta\ sets up a special outer
geometry loop that displaces individual atoms along the coordinate
directions to build the force-constant matrix.
\index{output!molecular dynamics!Force Constants Matrix}

The output (see below) can be analyzed to extract phonon frequencies
and vectors with the VIBRA\index{VIBRA} package in the \program{Util/Vibra}
directory. For computing the Born effective charges together with the
force constants, see \fdf{BornCharge}.

\begin{fdfentry}{MD.FCDispl}[length]<$0.04\,\mathrm{Bohr}$>
  
  Displacement to use for the computation of the force constant
  matrix\index{Force Constants Matrix} for phonon calculations.

\end{fdfentry}

\begin{fdfentry}{MD.FCFirst}[integer]<$1$>
  
  Index of first atom to displace for the computation of the force
  constant matrix\index{Force Constants Matrix} for phonon
  calculations.

\end{fdfentry}

\begin{fdfentry}{MD.FCLast}[integer]<\fdfvalue{MD.FCFirst}>

  Index of last atom to displace for the computation of the force
  constant matrix\index{Force Constants Matrix} for phonon
  calculations.

\end{fdfentry}
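
For example, to build the force-constant matrix by displacing atoms 1
through 8 (illustrative values for the displacement and atom range):
\begin{fdfexample}
   MD.TypeOfRun  FC
   MD.FCDispl    0.02 Bohr
   MD.FCFirst    1
   MD.FCLast     8
\end{fdfexample}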

The force-constants matrix is written in file \sysfile{FC}.  The
format is the following: for the displacement of each atom in each
direction, the forces on each of the other atoms are written (divided by
the value of the displacement), in units of eV/\AA$^2$. Each line has
the forces in the $x$, $y$ and $z$ direction for one of the atoms.

If constraints are used, the file \sysfile{FCC} is also written.


\section{LDA+U}
\label{sec:lda+u}

Important note: Current implementation is based on the simplified
rotationally invariant LDA+U formulation of Dudarev and collaborators
[see, Dudarev \textit{et al.}, Phys. Rev. B \textbf{57}, 1505 (1998)].
Although the input allows one to define independent values of the U and J
parameters for each atomic shell, in the actual calculation the two
parameters are combined to produce an effective Coulomb repulsion
$U_{\mathrm{eff}}=U-J$. $U_{\mathrm{eff}}$ is the parameter actually
used in the calculations for the time being.

For large or intermediate values of $U_{\mathrm{eff}}$ the convergence
is sometimes difficult. A step-by-step increase of the
value of $U_{\mathrm{eff}}$ can be advisable in such cases.

Currently, the LDA+U implementation supports neither non-collinear
spin nor spin-orbit coupling.

\begin{fdfentry}{LDAU.ProjectorGenerationMethod}[integer]<2>
  
  Generation method of the LDA+U projectors. The LDA+U projectors are
  the localized functions used to calculate the local populations used
  in a Hubbard-like term that modifies the LDA Hamiltonian and
  energy. It is important to recall that LDA+U projectors should be
  quite localized functions.  Otherwise the calculated populations
  lose their atomic character and physical meaning. Even more
  importantly, the interaction range can increase so much that it
  jeopardizes the efficiency of the calculation.

  Two methods are currently implemented:
  \begin{fdfoptions}
    \option[1]%
    Projectors are slightly-excited numerical atomic orbitals similar
    to those used as an automatic basis set by \siesta.  The radii of
    these orbitals are controlled using the parameter
    \fdf{LDAU.EnergyShift} and/or the data included in the block
    \fdf{LDAU.Proj} (quite similar to the data block \fdf{PAO.Basis}
    used to specify the basis set, see below).

    \option[2]%
    Projectors are exact solutions of the pseudoatomic problem (and,
    in principle, are not strictly localized) which are cut using a
    Fermi function $1/\{1+\exp[(r-r_c)\omega]\}$.  The values of $r_c$
    and $\omega$ are controlled using the parameter \fdf{LDAU.CutoffNorm}
    and/or the data included in the block \fdf{LDAU.Proj}.

  \end{fdfoptions}

\end{fdfentry}

\begin{fdfentry}{LDAU.EnergyShift}[energy]<$0.05\,\mathrm{Ry}$>
  
  Energy increase used to define the localization radius of the LDA+U
  projectors (similar to the parameter \fdf{PAO.EnergyShift}).

  \note only used when \fdf{LDAU.ProjectorGenerationMethod} is \fdf*{1}.

\end{fdfentry}

\begin{fdfentry}{LDAU.CutoffNorm}[real]<$0.9$>
  
  Parameter used to define the value of $r_c$ used in the Fermi
  distribution to cut the LDA+U projectors generated according to
  generation method 2 (see above). \fdf{LDAU.CutoffNorm} is the norm of the
  original pseudoatomic orbital contained inside a sphere of radius
  equal to $r_c$.

  \note only used when \fdf{LDAU.ProjectorGenerationMethod} is \fdf*{2}.

\end{fdfentry}
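
As an illustration, selecting the second generation method with a
slightly tighter cutoff norm than the default would read (the 0.95
value is merely illustrative):
\begin{fdfexample}
   LDAU.ProjectorGenerationMethod  2
   LDAU.CutoffNorm                 0.95
\end{fdfexample}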


\begin{fdfentry}{LDAU.Proj}[block]
  
  Data block used to specify the LDA+U projectors.

  \begin{itemize}
    \item%
    If \fdf{LDAU.ProjectorGenerationMethod} is \fdf*{1}, the
    syntax is as follows:
    \begin{fdfexample}
%block LDAU.Proj      # Define LDA+U projectors
 Fe    2              # Label, l_shells
  n=3 2  E 50.0 2.5   # n (opt if not using semicore levels),l,Softconf(opt)
      5.00  0.35      # U(eV), J(eV) for this shell
      2.30            # rc (Bohr)
      0.95            # scaleFactor (opt)
      0               #    l
      1.00  0.05      # U(eV), J(eV) for this shell
      0.00            # rc(Bohr) (if 0, automatic r_c from LDAU.EnergyShift)
%endblock LDAU.Proj
   \end{fdfexample}

    \item%
    If \fdf{LDAU.ProjectorGenerationMethod} is \fdf*{2}, the
    syntax is as follows:
    \begin{fdfexample}
%block LDAU.Proj      # Define LDAU projectors
 Fe    2              # Label, l_shells
  n=3 2  E 50.0 2.5   # n (opt if not using semicore levels),l,Softconf(opt)
      5.00  0.35      # U(eV), J(eV) for this shell
      2.30  0.15      # rc (Bohr), \omega(Bohr) (Fermi cutoff function)
      0.95            # scaleFactor (opt)
      0               #    l
      1.00  0.05      # U(eV), J(eV) for this shell
      0.00  0.00      # rc(Bohr), \omega(Bohr) (if 0 r_c from LDAU.CutoffNorm
%endblock LDAU.Proj   #                         and \omega from default value)
    \end{fdfexample}
  \end{itemize}
  
  Some of the quantities have default values:

  \begin{tabular}{cc}
    U & \fdf*{0.0 eV} \\
    J & \fdf*{0.0 eV} \\
    $\omega$ & \fdf*{0.05 Bohr} \\
    Scale factor & \fdf*{1.0}
  \end{tabular}

  $r_c$ depends on \fdf{LDAU.EnergyShift} or \fdf{LDAU.CutoffNorm}
  depending on the generation method.

\end{fdfentry}

\begin{fdflogicalF}{LDAU.FirstIteration}
  
  If \fdftrue, the local populations are calculated and the
  Hubbard-like term is switched on in the first iteration.  This is
  useful if restarting a calculation reading a converged or almost
  converged density matrix from file.

\end{fdflogicalF}

\begin{fdfentry}{LDAU.ThresholdTol}[real]<$0.01$>
  
  Local populations are only calculated and/or updated if the change
  in the density matrix elements (dDmax) is lower than
  \fdf{LDAU.ThresholdTol}.

\end{fdfentry}

\begin{fdfentry}{LDAU.PopTol}[real]<$0.001$>
  
  Convergence criterion for the LDA+U local populations.  In the
  current implementation the Hubbard-like term of the Hamiltonian is
  only updated (except for the last iteration) if the variations of
  the local populations are larger than this value.

\end{fdfentry}

\begin{fdflogicalF}{LDAU.PotentialShift}

  If set to \fdftrue, the value given to the $U$ parameter in the
  input file is interpreted as a local potential shift. Recording the
  change of the local populations as a function of this potential
  shift, one can calculate the appropriate value of U for the system
  under study following the methodology proposed by Cococcioni and
  de Gironcoli in Phys. Rev. B \textbf{71}, 035105 (2005).

\end{fdflogicalF}

\section{RT-TDDFT}\index{RT-TDDFT}\index{TDDFT}\label{sec:tddft}
It is now possible to perform Real-Time Time-Dependent Density
Functional Theory (RT-TDDFT)-based calculations using the
\siesta\ method. This section includes a brief introduction to the
TDDFT method and implementation, shows how to run the TDDFT-based
calculations, and provides a reference guide to the additional input
options.

\subsection{Brief description}
The basic features of the TDDFT have been implemented 
within the \siesta\ code. Most of the 
details can be found in the paper by A. Tsolakidis,
D. S\'anchez-Portal, and Richard M. Martin,
Phys. Rev. B \textbf{66}, 235416 (2002).
However, the practical implementation of the present version is very
different from the initial one. The present implementation of TDDFT
has been programmed with the primary aim of calculating the optical
response of clusters and solids; however, it has also been used
successfully to calculate the electronic stopping power of solids.
 
For the calculation of the optical response of an electronic
system, a perturbation is applied at time step 0, and the system is
allowed to reach a self-consistent solution. The perturbation is then
switched off for all subsequent time steps, and the electrons are
allowed to evolve according to the time-dependent Kohn-Sham
equations. For the case of a cluster the perturbation is a finite
(small) electric field. For the case of a bulk material (not yet fully
implemented) the initial perturbation is different, but the main
strategy is similar.

The present version of the RT-TDDFT implementation is also capable of
performing simultaneous dynamics of electrons and ions, but this is
limited to cases in which the forces on the ions are negligible.

The general working scheme is as follows. First, the system is
allowed to reach a self-consistent solution for some initial
conditions (for example an initial ionic configuration or an applied
external field). The occupied Kohn-Sham orbitals (KSOs) are then selected
and stored in memory.  The occupied KSOs are then made to evolve in
time, and the Hamiltonian is recalculated for each time step.

\subsection{Partial Occupations}

This is a note of caution. This implementation of RT-TDDFT cannot
propagate partially occupied orbitals. While partial occupation of
states is a common occurrence, it must be avoided here. The issue of
partially occupied states becomes particularly tricky when dealing
with metals and k-point sampling at the same time. The code tries to
detect partial occupations and stops during the first run, but this
detection is not guaranteed; undetected partial occupations can lead
to additional or missing charge. Ultimately it is the user's
responsibility to make sure that the system has no partial occupations
and no missing or added charge. There are different ways to avoid
partial occupations depending on the system and simulation parameters,
for example changing the spin polarization and/or adding a shift to
the k-point sampling.

\subsection{Input options for RT-TDDFT}

A TDDFT calculation requires two runs of \siesta. The first run,
with the appropriate flags, calculates the self-consistent initial
state; the occupied initial KSOs are stored in the \sysfile{TDWF}
file. The second run uses this file and the structure
file \sysfile{TDXV} as input and evolves the occupied
KSOs.
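
Schematically, the two runs differ only in a few flags. Assuming
\fdf*{TDED} is the \fdf{MD.TypeOfRun} value selecting the propagation
run (and with a merely illustrative time step), the setup might look
like:
\begin{fdfexample}
   # First run: ground state, save the occupied KSOs
   TDED.WF.Initialize  true

   # Second run: evolve the occupied KSOs
   MD.TypeOfRun        TDED
   TDED.TimeStep       0.001 fs
   TDED.Nsteps         1
\end{fdfexample}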

\begin{fdflogicalF}{TDED!WF.Initialize}

If set to \fdftrue\ in a standard self-consistent \siesta\ calculation, it
makes the program save the KSOs after reaching
self-consistency. This constitutes the first run.

\end{fdflogicalF}

\begin{fdfentry}{TDED!Nsteps}[integer]<1>

Number of electronic time steps between each atomic movement. It cannot be less
than $1$.

\end{fdfentry}
\begin{fdfentry}{TDED!TimeStep}[time]<$0.001\,\mathrm{fs}$>
Length of time for each electronic step. The default value is only indicative. Users must
determine an appropriate value for the electronic time step.

\end{fdfentry}

\begin{fdflogicalF}{TDED!Extrapolate}
  An extrapolated Hamiltonian is applied to evolve the KSOs for \fdf{TDED!Extrapolate.Substeps} substeps within a single electronic step without re-evaluating the Hamiltonian.
\end{fdflogicalF}

\begin{fdfentry}{TDED!Extrapolate.Substeps}[integer]<3>
  Number of electronic substeps when an extrapolated Hamiltonian is applied
  to propagate the KSOs. Effective only when \fdf{TDED!Extrapolate} is set to \fdftrue.
\end{fdfentry}


\begin{fdflogicalT}{TDED!Inverse.Linear}
  
  If \fdftrue\ the inverse of matrix
  \begin{equation}
    \mathbf{S}+\mathrm{i}\mathbf{H}(t)\frac{\mathrm dt}{2}
  \end{equation}
  is calculated by solving a system of linear equations which
  implicitly multiplies the inverted matrix to the right hand side
  matrix. The alternative is explicit inversion and multiplication.
  The two options may differ in performance.
  
\end{fdflogicalT}

\begin{fdflogicalF}{TDED!WF.Save}

  Option to save wavefunctions at the end of a simulation for a
  possible restart or analysis. Wavefunctions are saved in file
  \sysfile{TDWF}. A TDED restart requires \sysfile{TDWF},
  \sysfile{TDXV}, and \sysfile{VERLET\_RESTART} from the previous
  run. The first step of the restart is the same as the last step of
  the previous run.
    
\end{fdflogicalF}

 \begin{fdflogicalT}{TDED!Write.Etot}

If \fdftrue\ the total energy for every time step is stored in the file
\sysfile{TDETOT}.

\end{fdflogicalT}

 \begin{fdflogicalF}{TDED!Write.Dipole}

If \fdftrue\ a file \sysfile{TDDIPOL} is created that can be
further processed to calculate polarizability.

\end{fdflogicalF}

 \begin{fdflogicalF}{TDED!Write.Eig}

If \fdftrue\ the quantities $\langle \phi(t)|H(t)|\phi(t)\rangle$ in every
time step are calculated and stored in the file
\sysfile{TDEIG}. This is a nontrivial computation and can therefore increase the computational time.

\end{fdflogicalF}

\begin{fdflogicalF}{TDED!Saverho}

  If \fdftrue\ the instantaneous time-dependent density is saved to
  \file{<istep>.TDRho} after every \fdf{TDED!Nsaverho} number of
  steps.

\end{fdflogicalF}

\begin{fdfentry}{TDED!Nsaverho}[integer]<100>

Fixes the number of steps of ion-electron dynamics after which the
instantaneous time-dependent density is saved. May require a lot of
disk space.

\end{fdfentry}

\section{External control of \texorpdfstring{\siesta}{SIESTA}}
\label{sec:lua}

Since \siesta\ 4.1 an additional method of controlling the convergence
and MD of \siesta\ is enabled through an external scripting
capability. The external control comes in two variants:
\begin{itemize}
  \item Implicit control of MD through updating/changing parameters
  and optimizing forces. For instance one may use a \fdf*{Verlet} MD
  method but additionally update the forces through some external
  force-field to amend limitations of the \fdf*{Verlet} method for
  your particular case. In implicit control the molecular dynamics
  is controlled by \siesta.

  \item Explicit control of MD. In this mode the molecular dynamics
  \emph{must} be controlled in the external Lua script and the
  convergence of the geometry should also be controlled via this
  script.

\end{itemize}

The implicit control is in use if \fdf{MD.TypeOfRun} is something
other than \fdf*{lua}, while if the option is \fdf*{lua} the explicit
control is in use.

For examples of the usage of the Lua scripting engine and its power,
see the library \program{flos}\footnote{This library is
    implemented by Nick R. Papior to further enhance the
    inter-operability with \siesta\ and external contributions.} at
\url{https://github.com/siesta-project/flos}. At the time of writing
the \program{flos} library already implements new geometry/cell
relaxation schemes and new force-constants algorithms. You are highly
encouraged to use the new relaxation schemes as they may provide
faster convergence of the relaxation.

\begin{fdfentry}{Lua!Script}[file]<\nonvalue{none}>
  
  Specify a Lua script file which may be used to control the internal
  variables in \siesta. Such a script file must contain at least one
  function named \program{siesta\_comm} with no arguments.

  An example file could be this (note this is Lua code):
  \begin{codeexample}
-- This function (siesta_comm) is REQUIRED
function siesta_comm()
   
   -- Define which variables we want to retrieve from SIESTA
   get_tbl = {"geom.xa", "E.total"}

   -- Signal to SIESTA which variables we want to explore
   siesta.receive(get_tbl)

   -- Now we have the required variables,
   -- convert to a simpler variable name (not nested tables)
   -- (note the returned quantities are in SIESTA units (Bohr, Ry))
   xa = siesta.geom.xa
   Etot = siesta.E.total

   -- If we know our energy is wrong by 0.001 Ry we may now
   -- change the total energy
   Etot = Etot - 0.001

   -- Return to SIESTA the total energy such that
   -- it internally has the "correct" energy.

   siesta.E.total = Etot
   ret_tbl = {"E.total"}

   siesta.send(ret_tbl)

end
\end{codeexample}

  Within this function there are certain \emph{states} which define
  different execution points in \siesta:
  \begin{fdfoptions}

    \option[Initialization]%
    This is right after \siesta\ has read the options from the FDF
    file. Here you may query some of the FDF options (and even change
    them) for your particular problem.

    \note \program{siesta.state == siesta.INITIALIZE}.

    \option[Initialize-MD]%
    Right before the SCF step starts. This point is somewhat
    superfluous, but is necessary to communicate the actual meshcutoff
    used\footnote{Remember that the \fdf{Mesh!Cutoff} defined is the
        minimum cutoff used.}.

    \note \program{siesta.state == siesta.INIT\_MD}.

    \option[SCF]%
    Right after \siesta\ has calculated the output density matrix, and
    just after \siesta\ has performed mixing.

    \note \program{siesta.state == siesta.SCF\_LOOP}.

    \option[Forces]%
    This stage is right after \siesta\ has calculated the forces.

    \note \program{siesta.state == siesta.FORCES}.

    \option[Move]%
    This state will \emph{only} be reached if \fdf{MD.TypeOfRun} is
    \fdf*{lua}.

    If one does not return updated atomic coordinates \siesta\ will
    reuse the same geometry as just analyzed.

    \note \program{siesta.state == siesta.MOVE}.
    
    \option[Analysis]% 
    Just before \siesta\ completes and exits. 

    \note \program{siesta.state == siesta.ANALYSIS}.

  \end{fdfoptions}
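
  All of the states above can be handled in a single
  \program{siesta\_comm} function by dispatching on
  \program{siesta.state}; a minimal sketch (the branch bodies are
  merely illustrative placeholders):
  \begin{codeexample}
-- Minimal state dispatch; fill in the branches you need
function siesta_comm()
   if siesta.state == siesta.INITIALIZE then
      -- query/change fdf options here
   elseif siesta.state == siesta.SCF_LOOP then
      -- inspect or modify SCF quantities here
   elseif siesta.state == siesta.FORCES then
      -- amend forces here
   elseif siesta.state == siesta.ANALYSIS then
      -- final analysis here
   end
end
  \end{codeexample}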

  Getting started with Lua scripting may be cumbersome. It
  is recommended to start by using \program{flos}, see
  \url{https://github.com/siesta-project/flos} which contains several
  examples on how to start implementing your own scripts.
  Currently \program{flos} implements a larger variety of relaxation
  schemes, for instance:
  \begin{codeexample}
    local flos = require "flos"
    LBFGS = flos.LBFGS()
    function siesta_comm()
       LBFGS:SIESTA(siesta)
    end
  \end{codeexample}
  which is the most minimal example of using the L-BFGS algorithm for
  geometry relaxation. Note that \program{flos} reads the parameters
  \fdf{MD.MaxDispl} and \fdf{MD.MaxForceTol} through \siesta\
  automatically. 

  \note The number of available variables continues to grow and to
  find which quantities are accessible in Lua you may add this small
  code in your Lua script:
  \begin{codeexample}
    siesta.print_allowed()
  \end{codeexample}
  which prints out a list of all accessible variables (note they are
  not sorted).
  
  If there are any variables you require which are not in the list,
  please contact the developers.

  If you want to stop \siesta\ from Lua you can use the following:
  \begin{codeexample}
    siesta.Stop = true
    siesta.send({"Stop"})
  \end{codeexample}
  which will abort \siesta.

  Remark that since \emph{anything} may be changed via Lua one may
  easily make \siesta\ crash due to inconsistencies in the internal
  logic. This is because \siesta\ does not check what has changed, it
  accepts everything \emph{as is} and continues. Hence, one should be
  careful what is changed.
  
\end{fdfentry}

\begin{fdflogicalF}{Lua!Debug}

  Debug the Lua script mode by printing out (on stdout) information
  every time \siesta\ communicates with Lua.
  
\end{fdflogicalF}

\begin{fdflogicalF}{Lua!Debug.MPI}

  Debug all nodes (if in a parallel run).
  
\end{fdflogicalF}

\begin{fdflogicalF}{Lua!Interactive}

  Start an interactive Lua session at all the states in the program
  and ask for user-input.
  %
  This is primarily intended for debugging purposes. The interactive
  session is executed just \emph{before} the \code{siesta\_comm}
  function call (if the script is used).
  
  For serial runs \code{siesta.send} may be used. For parallel runs do
  \emph{not} use \code{siesta.send} as the code is only
  executed on the first MPI node.

  There are various commands that are caught if they are the only
  content on a line:
  \begin{fdfoptions}
    \option[/debug]%
    Turn on/off debugging information.

    \option[/show]%
    Show the currently collected lines of code.

    \option[/clear]%
    Clears the currently collected lines of code.

    \option[;]%
    Run the currently collected lines of code and continue collecting
    lines.

    \option[/run]%
    Same as \code{;}.

    \option[/cont]%
    Run the currently collected lines of code and continue \siesta.

    \option[/stop]%
    Run the currently collected lines of code and stop all future
    interactive Lua sessions.

  \end{fdfoptions}

  Currently this only works if \fdf{Lua!Script} points to a valid Lua
  file (note the file may be empty).
  
\end{fdflogicalF}



\subsection{Examples of Lua programs}

Please look in the \program{Tests/lua\_*} folders where examples of
basic Lua scripts are found. Below is a description of the \program{*}
examples.


\begin{description}
  \item[h2o] Changes the mixing weight continuously in
  the SCF loop. This will effectively speed up convergence if one
  can attain the best mixing weight per SCF step.

  \item[si111] Change the mixing method based on certain convergence
  criteria, i.e. after reaching a certain convergence one can switch
  to a more aggressive mixing method.
  
\end{description}

A combination of the above two examples may greatly improve
convergence, however, creating a generic method to adaptively change
the mixing parameters may be very difficult to implement. If you do
create such a Lua script, please share it on the mailing list.
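
A hedged sketch of the h2o-style idea follows. The variable name
\program{SCF.mixer.weight} is hypothetical; use
\program{siesta.print\_allowed()} to find the actual name of the
mixing-weight variable in your version:
\begin{codeexample}
function siesta_comm()
   if siesta.state == siesta.SCF_LOOP then
      -- "SCF.mixer.weight" is a hypothetical variable name
      siesta.receive({"SCF.mixer.weight"})
      -- gently increase the mixing weight as the SCF progresses
      siesta.SCF.mixer.weight = math.min(0.5, siesta.SCF.mixer.weight * 1.05)
      siesta.send({"SCF.mixer.weight"})
   end
end
\end{codeexample}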


\subsection{External MD/relaxation methods}

The Lua interface makes it very easy to create
external MD and/or relaxation methods.

A public library (\program{flos},
\url{https://github.com/siesta-project/flos}) already implements a
wider range of relaxation methods than intrinsically enabled in
\siesta. Secondly, by using external scripting mechanisms one can
customize the routines to a much greater extent while simultaneously
creating custom constraints.

You are \emph{highly} encouraged to try out the \program{flos} library
(please note that \program{flook} is required, see installation
instructions above).



\section{TRANSIESTA}
\label{sec:transiesta}

\newcommand\Nelec{N_{\mathfrak{E}}}


\siesta\ includes the possibility of performing calculations of
electronic transport properties using the \tsiesta\ method. This
Section describes how to use these capabilities and provides a
reference guide to the relevant \fdflib\ options. We describe here
only the additional options available for \tsiesta\ calculations;
the rest of the \siesta\ functionalities and variables are described
in the previous sections of this User's Guide.

An accompanying Python toolbox is available which will assist with
\tsiesta\ calculations. Please use (and cite) \sisl\cite{sisl}.


\subsection{Source code structure}

In this implementation, the \tsiesta\ routines have been grouped in a
set of modules whose file names begin with \texttt{m\_ts} or
\texttt{ts}.

\subsection{Compilation}

Prior to \siesta\ 4.1, \tsiesta\ was a separate executable. Now
\tsiesta\ is fully incorporated into \siesta: simply compile
\siesta\ and the full functionality is present. See
Sec.~\ref{sec:compilation} for details on compiling \siesta.

\subsection{Brief description}

The \tsiesta\ method is a procedure to solve the electronic
structure of an open system formed by a finite structure sandwiched
between semi-infinite metallic leads. A finite bias can be applied
between leads, to drive a finite current. The method is described
in detail in \citet{Brandbyge2002,Papior2017}. In practical terms,
calculations using \tsiesta\ involve the solution of the
electronic density from the DFT Hamiltonian using Greens function
techniques, instead of the usual diagonalization procedure. Therefore,
\tsiesta\ calculations involve a \siesta\ run, in which a
set of routines are invoked to solve the Greens functions and the
charge density for the open system. These routines are packed in a set
of modules which we will refer to as the `\tsiesta\ module'
in what follows.

\tsiesta\ was originally developed by Mads Brandbyge, Jos\'e-Luis
Mozos, Pablo Ordej\'on, Jeremy Taylor and Kurt
Stokbro\cite{Brandbyge2002}. It consisted mainly of setting up an
interface between \siesta\ and the (tight-binding) transport codes
developed by M. Brandbyge and K. Stokbro. Initially everything was
written in Fortran-77. As \siesta\ started to be translated to
Fortran-90, so were the \tsiesta\ parts of the code. This was
accomplished by Jos\'e-Luis Mozos, who also worked on the
parallelization of \tsiesta.
%
Subsequently Frederico D. Novaes extended \tsiesta\ to allow $k$-point
sampling for transverse directions. Additional extensions were
added by Nick R. Papior during 2012.

The current \tsiesta\ module has been completely rewritten by Nick
R. Papior and encompasses highly advanced inversion algorithms as well
as allowing $N\geq1$ electrode setups among many new
features. Furthermore, the utility \tbtrans\ has also been fully
re-coded (by Nick R. Papior) to be a generic tight-binding code
capable of analyzing physics from the Greens function perspective in
$N\ge1$ setups\cite{Papior2017}.


\begin{itemize}
  \item%
  Transport calculations involve \emph{electrode} (EL) calculations,
  and subsequently the Scattering Region (SR) calculation. The
  \emph{electrode} calculations are usual \siesta\ calculations, but
  where files \sysfile{TSHS}, and optionally \sysfile{TSDE}, are
  generated. These files contain the information necessary for
  calculation of the self-energies. If any electrodes have identical
  structures (see below) the same files can and should be used to
  describe those. In general, however, electrodes can be different and
  therefore two different \sysfile{TSHS} files must be generated. The
  location of these electrode files must be specified in the \fdflib\
  input file of the SR calculation, see \fdf{TS!Elec.<>!HS}.

  \item %
  For the SR, \tsiesta\ starts with the usual \siesta\ procedure,
  converging a Density Matrix (DM) with the usual Kohn-Sham scheme for
  periodic systems. It uses this solution as an initial input for the
  Greens function self consistent cycle. Effectively you will start a
  \tsiesta\ calculation from a fully periodic calculation. This is why
  the $0\, V$ calculation should be the only calculation where you start
  from \siesta.

  \tsiesta\ stores the SCF DM in a file named \sysfile{TSDE}. In a rerun of
  the same system (meaning the same \fdf{SystemLabel}), if the
  code finds a \sysfile{TSDE} file in the directory, it will take this
  DM as the initial input and this is then considered a continuation
  run. In this case it does not perform an initial \siesta\ run. It
  must be clear that when starting a calculation from scratch, in the
  end one will find both files, \sysfile{DM} and \sysfile{TSDE}.
  %
  The first one stores the \siesta\ density matrix (periodic boundary
  conditions in all directions and no voltage), and the latter the
  \tsiesta\ solution.

  \item %
  When performing several bias calculations, it is strongly advised to
  run different biases in different directories. To drastically improve
  convergence (and throughput) one should copy the \sysfile{TSDE} from
  the closest previously calculated bias to the current bias.
  
  \item %
  The \sysfile{TSDE} may be read in the same way as the
  \sysfile{DM}. Thus, it may be used by e.g. \program{denchar} to
  analyze the non-equilibrium charge density. Alternatively one can
  use \sisl\cite{sisl} to interpolate the DM and EDM to speed up
  convergence.

  \item %
  As in the case of \siesta\ calculations, what \tsiesta\ does is to
  obtain a converged DM, but for open boundary conditions and possibly
  a finite bias applied between electrodes. The corresponding
  Hamiltonian matrix (once self consistency is achieved) of the SR is
  also stored in a \sysfile{TSHS} file. Subsequently, transport
  properties are obtained in a post-processing procedure using the
  \tbtrans\ code (located in the \program{Util/TS/TBtrans}
  directory). We note that the \sysfile{TSHS} files contain all the
  needed structural information (atomic positions, matrix elements,
  \ldots), and so the input (fdf) flags for the geometry and basis
  have no influence on the subsequent \tbtrans\ calculations.

  \item %
  When the non-equilibrium calculation uses different electrodes one
  should use so-called \emph{buffer} atoms behind the electrodes to act
  as additional screening regions when calculating the initial guess
  (using \siesta) for \tsiesta. Essentially they may be used to
  achieve a better ``bulk-like'' environment at the electrodes in the
  SR calculation.

  
  \item%
  An important parameter is the lower bound of the energy contours. It
  is good practice to start with a \siesta\ calculation for the SR
  and look at the eigenvalues of the system. The lower bound of the
  contours must be \emph{well} below the lowest eigenvalue.

  \item%
  Periodic boundary conditions are assumed in 2 cases.

  \begin{enumerate}
    \item For $\Nelec\neq 2$ all lattice vectors are periodic; users
    \emph{must} manually define \fdf{TS!kgrid!MonkhorstPack}.

    \item For $\Nelec=2$ \tsiesta\ will auto-detect whether both electrodes
    are semi-infinite along the same lattice vector. If so, only 1 $k$
    point will be used along that lattice vector.
  \end{enumerate}

  \item%
  The default algorithm for matrix inversion is the BTD method. Before
  starting a \tsiesta\ calculation please run the analysis
  step \fdf{TS!Analyze} (note this is very fast and can be done on any
  desktop computer, regardless of system size).

  \item%
  Importantly(!), the $k$-point sampling typically needs to be much
  higher in a \tbtrans\ calculation to achieve a converged transmission
  function.

  \item%
  Energies from \tsiesta\ are \emph{not} to be trusted since the open
  boundaries complicate the energy calculation. Therefore care needs
  to be taken when comparing energies between different calculations
  and/or different biases.

  \item%
  Always ensure that charges are preserved in the scattering region
  calculation. During the SCF an output like the following will be shown:
  \begin{output}[fontsize=\footnotesize]
ts-q:         D        E1        C1        E2        C2        dQ
ts-q:   436.147   392.146     3.871   392.146     3.871  7.996E-3
  \end{output}
  Always ensure the last column (\code{dQ}) is a very small fraction of
  the total number of electrons. Ideally this should be $0$.
  % 
  For $0$ bias calculations this should be very small, typically less
  than $0.1\,\%$ of the total charge in the system. If this is not the
  case, it probably means that there is not enough screening towards the
  electrodes which can be solved by adding more electrode layers between
  the electrode and the scattering region. This layer thickness is
  \emph{very} important to obtain a correct open boundary calculation.

  \item%
  Do \emph{not} perform \tsiesta\ calculations using semi-conducting
  electrodes. The basic premise of \tsiesta\ calculations is that the
  electrodes \emph{behave like bulk} in the electrode regions of the
  SR. This means that the distance between the electrode and the
  perturbed region must equal the screening length of the electrode.

  This is problematic for semi-conducting systems since they
  intrinsically have a very long screening length.

  In addition, the Fermi level of semi-conductors is not well-defined
  since it may be placed anywhere in the band gap.

\end{itemize}
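
For the $\Nelec\neq2$ case above, a \fdf{TS!kgrid!MonkhorstPack} block
with a single $k$ point along the non-periodic (semi-infinite) lattice
vector might look like this (the in-plane counts are purely
illustrative and must be converged for the system at hand):
\begin{fdfexample}
  %block TS.kgrid.MonkhorstPack
     5  0  0   0.0
     0  5  0   0.0
     0  0  1   0.0
  %endblock TS.kgrid.MonkhorstPack
\end{fdfexample}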


\subsection{Electrodes}

To calculate the electronic structure of a system under external bias,
\tsiesta\ attaches the system to semi-infinite electrodes which extend
to their respective semi-infinite directions. Examples of electrodes
would include surfaces, nanowires, nanotubes or fully infinite
regions. The electrode must be large enough (in the semi-infinite
direction) so that orbitals within the unit cell only interact with a
single nearest neighbor cell in the semi-infinite direction (the size
of the unit cell can thus be derived from the range of support for the
orbital basis functions). \tsiesta\ will stop if this is not
enforced. The electrodes are generated by a separate \tsiesta\ run on
a bulk system. This implies that the proper bulk properties are
obtained by a sufficiently high $k$-point sampling. If in doubt, use
100 $k$-points along the semi-infinite direction. The results are
saved in a file with extension \sysfile{TSHS} which contains a
description of the electrode unit cell, the position of the atoms
within the unit cell, as well as the Hamiltonian and overlap matrices
that describe the electronic structure of the lead. One can generate a
variety of electrodes and the typical use of \tsiesta\ would involve
reusing the same electrode for several setups. At runtime, the
\tsiesta\ coordinates are checked against the electrode coordinates
and the program stops if there is a mismatch to a certain precision
($10^{-4}\,\mathrm{Bohr}$). Note that the atomic coordinates are
compared relatively. Hence the \emph{input} atomic coordinates of the
electrode and the device need not be the same (see e.g. the tests in
the \shell{Tests}\index{Tests} directory).

To run an electrode calculation one should do:
\begin{shellexample}
  siesta --electrode RUN.fdf
\end{shellexample}
or set the options \fdf{TS!HS.Save} and \fdf{TS!DE.Save} to
\fdf*{true} in the electrode fdf files (the above
\code{--electrode} flag is a shorthand that forcefully defines these two options).
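
Equivalently, the two options may be set explicitly in the electrode
fdf file:
\begin{fdfexample}
  TS.HS.Save true
  TS.DE.Save true
\end{fdfexample}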


\subsubsection{Matching coordinates}

Here are some rules required to successfully construct the appropriate
coordinates of the scattering region. Contrary to versions prior to
4.1, the order of atoms is largely irrelevant. One may define all
electrodes, then subsequently the device, or vice versa. Similarly,
buffer atoms are not restricted to be the first/last atoms.

However, atoms in any given electrode \emph{must} be consecutive in
the device file. I.e. if an electrode input option is given by:
\begin{fdfexample}
  %block TS.Elec.<>
    HS ../elec-<>/siesta.TSHS
    bloch 1 3 1
    used-atoms 4
    electrode-position 10
    ...
  %endblock
\end{fdfexample}
then the atoms from $10$ to $10+4\cdot3-1=21$ (the electrode position
plus \fdf*{used-atoms} times the Bloch expansion factor, minus one)
must coincide with the atoms of the calculation performed in the
\program{../elec-<>/} subdirectory. The above options will be
discussed in the following section.

When using the Bloch expansion (highly recommended if your system
allows it) it is advised to follow the \emph{tiling} method, although
both of the sequences below are allowed.

\paragraph{Tile} \fdfindex{TS!Elec.<>!Bloch}%
Here the atoms are copied and displaced by the full
electrode. Generally this expansion should be preferred over the
\emph{repeat} expansion due to much faster execution.
\begin{fdfexample}
  iaD = 10 ! as per the above input option
  do iC = 0 , nC - 1
  do iB = 0 , nB - 1
  do iA = 0 , nA - 1
    do iaE = 1 , na_u
      xyz_device(:, iaD) = xyz_elec(:, iaE) + &
          cell_elec(:, 1) * iA + &
          cell_elec(:, 2) * iB + &
          cell_elec(:, 3) * iC
      iaD = iaD + 1
    end do
  end do
  end do
  end do
\end{fdfexample}

By using \sisl\cite{sisl} one can achieve the tiling scheme
by using the following command-line utility on an input
\program{ELEC.fdf} structure with the minimal electrode:
\begin{codeexample}
  sgeom -tx 1 -ty 3 -tz 1 ELEC.fdf DEVICE_ELEC.fdf
\end{codeexample}

\paragraph{Repeat} \fdfindex{TS!Elec.<>!Bloch}%
Here the atoms are copied individually. Generally
this expansion should \emph{not} be used since it is much slower than
tiling.
\begin{fdfexample}
  iaD = 10 ! as per the above input option
  do iaE = 1 , na_u
    do iC = 0 , nC - 1
    do iB = 0 , nB - 1
    do iA = 0 , nA - 1
      xyz_device(:, iaD) = xyz_elec(:, iaE) + &
          cell_elec(:, 1) * iA + &
          cell_elec(:, 2) * iB + &
          cell_elec(:, 3) * iC
      iaD = iaD + 1
    end do
    end do
    end do
  end do
\end{fdfexample}

By using \sisl\cite{sisl} one can achieve the repeating scheme by
using the following command-line utility on an input
\program{ELEC.fdf} structure with the minimal electrode:
\begin{codeexample}
  sgeom -rz 1 -ry 3 -rx 1 ELEC.fdf DEVICE_ELEC.fdf
\end{codeexample}



\subsubsection{Principal layer interactions} %
\index{transiesta!electrode!principal layer}%

It is \emph{extremely} important that the electrodes only interact
with one neighboring supercell due to the self-energy
calculation\cite{Sancho1985}. \tsiesta\ will print out a block like this
(\shell{<>} is the electrode name):
\begin{verbatim}
 <> principal cell is perfect!
\end{verbatim}
if the electrode is correctly set up and only interacts with its
neighboring supercell.
%
In case the electrode is set up erroneously, something similar to the
following will be shown in the output file.
\begin{verbatim}
 <> principal cell is extending out with 96 elements:
    Atom 1 connects with atom 3
    Orbital 8 connects with orbital 26
    Hamiltonian value: |H(8,6587)|@R=-2 =  0.651E-13 eV
    Overlap          :  S(8,6587)|@R=-2 =   0.00    
\end{verbatim}
It is imperative that you have a \emph{perfect} electrode as otherwise
nonphysical results will occur. This means that you need to add more
layers in your electrode calculation (and hence also in your
scattering region). An example is an ABC stacking electrode. If the
above error is shown one \emph{has} to create an electrode with ABCABC
stacking in order to retain periodicity.

By default \tsiesta\ will stop if there are connections beyond the
principal cell. One may control whether this is allowed by
using \fdf{TS!Elecs!Neglect.Principal}.




\subsection{Convergence of electrodes and scattering regions}

For successful \tsiesta\ calculations it is imperative that the
electrodes and scattering regions are well-converged.
%
The basic principle is equivalent to the \siesta\ convergence, see
Sec.~\ref{sec:scf}.

The steps should be something along the lines of (only done at
$0\, V$):
\begin{enumerate}

  \item%
  Converge electrodes and find optimal \fdf{Mesh!Cutoff},
  \fdf{kgrid!MonkhorstPack} etc.

  Electrode $k$-point sampling should be very high along the
  semi-infinite direction. The default is $100$, but values above $50$
  should easily be reachable.


  \item%
  Use the parameters from the electrodes and also converge the
  same parameters for the scattering region SCF.

  This is an iterative process since the scattering region forces the
  electrodes to use equivalent $k$ points (see
  \fdf{TS!Elec.<>!check-kgrid}).

  Note that $k$ points should be limited in the \tsiesta\ run, see
  \fdf{TS!kgrid!MonkhorstPack}.

  One should always use the same parameters in both the electrode and
  scattering region calculations, except the number of $k$ points for
  the electrode calculations along their respective semi-infinite
  directions.


  \item%
  Once \tsiesta\ is completed one should also converge the
  number of $k$ points for \tbtrans. Note that $k$ point sampling in
  \tbtrans\ should generally be much denser but \emph{always} fulfill
  $N_k^{\tsiesta}\geq N_k^{\tbtrans}$.
  
\end{enumerate}

The converged parameters obtained at $0\,\mathrm V$ should be used for
all subsequent bias calculations. Remember to copy the \sysfile{TSDE}
from the closest previously calculated bias for restart and much
faster convergence.
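
For instance, when starting a $0.5\,\mathrm V$ run from a converged
$0.25\,\mathrm V$ one (the directory names are illustrative and the
file name follows \fdf{SystemLabel}):
\begin{shellexample}
  cd bias_0.50V
  cp ../bias_0.25V/siesta.TSDE .
\end{shellexample}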


\tsiesta\ is also more difficult to converge during the SCF
steps. This may be due to several interrelated problems:
%
\begin{itemize}
  
  \item%
  A too short screening distance between the scattering atoms
  and the electrode layers.

  
  \item%
  In case buffer atoms (\fdf{TS!Atoms.Buffer}) are used with
  vacuum on the backside, there may be too few buffer atoms
  to accurately screen off the vacuum region and provide a sufficiently
  good initial guess. This effect only applies to $0\,\mathrm V$
  calculations.

  
  \item%
  The mixing parameters may need to be smaller than for \siesta,
  see Sec.~\ref{sec:scf:mix}, and convergence is never
  guaranteed. It is \emph{always} a trial-and-error process; there are
  \emph{no} omnipotent mixing parameters.

  
  \item%
  Very high biases may be extremely difficult to
  converge. Generally one can force bias convergence by taking smaller
  bias steps. E.g. if problems arise at $0.5\,\mathrm V$ with an
  initial DM from a $0.25\,\mathrm V$ calculation, one could try
  $0.3\,\mathrm V$ first.

  
  \item%
  If a particular bias point is hard to converge, even after
  the previous step, it may be related to an eigenstate close to the
  chemical potential of either electrode (e.g. a molecular eigenstate
  in the junction). In such cases one could try an even higher bias
  and see if this converges more smoothly.

\end{itemize}



\subsection{\texorpdfstring{\tsiesta\ }{TranSIESTA} Options}

The fdf options shown here are only to be used in the input file for
the scattering region. When using \tsiesta\ for electrode
calculations, only the usual \siesta\ options are relevant.
%
Note that since \tsiesta\ is a generic $\Nelec$-electrode NEGF code, the
input options have changed heavily compared to versions prior to 4.1.

\subsubsection{Quick and dirty}

Since 4.1, \tsiesta\ has been fully re-implemented, and so has
\emph{every} input fdf-flag. To accommodate an easy transition between
previous input files and the new version format, a small utility called
\program{ts2ts} is provided. It may be compiled in \program{Util/TS/ts2ts}. It is
recommended that you use this tool if you are familiar with previous
\tsiesta\ versions.

%
One may input options as in the old \tsiesta\ version and then run
\begin{shellexample}
  ts2ts OLD.fdf > NEW.fdf
\end{shellexample}
which translates all keys to the new, equivalent, input format. If you
are familiar with the old-style flags this is highly recommended
while becoming comfortable with the new input format. Please note that
some defaults have changed to more conservative values in the newer
release.

If one does not know the old flags and wishes to get a basic example of
an input file, a script \program{Util/TS/tselecs.sh} exists that can
create the basic input for $\Nelec$ electrodes. One may call it like:
\begin{shellexample}
  tselecs.sh -2 > TWO_ELECTRODE.fdf
  tselecs.sh -3 > THREE_ELECTRODE.fdf
  tselecs.sh -4 > FOUR_ELECTRODE.fdf
  ...
\end{shellexample}
where the first call creates an input fdf for a 2-electrode setup, the
second for a 3-electrode setup, and so on. See the help (\program{-h})
of the program for additional options.

Before embarking on large-scale calculations you are advised to run
an analysis of the system at hand; you may run your system as
\begin{shellexample}
  siesta -fdf TS.Analyze RUN.fdf > analyze.out
\end{shellexample}
which will analyze the sparsity pattern and print out several
different pivoting schemes. Please see \fdf{TS!Analyze} for more
information.


\subsubsection{General options}

One has to set \fdf{SolutionMethod} to \fdf*{transiesta} to enable
\tsiesta.

\begin{fdfentry}{TS!SolutionMethod}[string]<btd|mumps|full>

  Control the algorithm used for calculating the Green
  function. Generally the BTD method is the fastest and this option
  need not be changed.

  \begin{fdfoptions}
    \option[BTD]%
    \fdfindex*{TS!SolutionMethod:BTD}%
    Use the block-tri-diagonal algorithm for matrix inversion.

    This is generally the recommended method.

    \option[MUMPS]%
    \fdfindex*{TS!SolutionMethod:MUMPS}%
    Use sparse matrix inversion algorithm (MUMPS). This requires
    \tsiesta\ to be compiled with MUMPS.
    \index{MUMPS}%
    \index{External library!MUMPS}%

    \option[full]%
    \fdfindex*{TS!SolutionMethod:full}%
    Use full matrix inversion algorithm (LAPACK). Generally only
    usable for debugging purposes.
    
  \end{fdfoptions}
  
\end{fdfentry}

\begin{fdfentry}{TS!Voltage}[energy]<$0\,\mathrm{eV}$>

  Define the reference applied bias. For $\Nelec=2$ electrode calculations
  this refers to the actual potential drop between the electrodes,
  while for $\Nelec\neq2$ this is a reference bias. In the latter case it
  \emph{must} be equivalent to the maximum difference between the
  chemical potential of any two electrodes.

  \note Specifying \shell{-V}\fdfindex{Command line options:-V} on the
  command-line overwrites the value in the fdf file.
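
  As a minimal sketch (the value and file names are illustrative, and
  the \shell{value:unit} command-line syntax is an assumption), the
  bias may be set in the fdf file
  \begin{fdfexample}
    TS.Voltage 0.25 eV
  \end{fdfexample}
  or on the command line:
  \begin{shellexample}
    siesta -V 0.25:eV RUN.fdf > RUN.out
  \end{shellexample}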
  
\end{fdfentry}

\begin{fdfentry}{TS!kgrid!MonkhorstPack}[block]<\fdfvalue{kgrid!MonkhorstPack}>

  $k$ points used for the \tsiesta\ calculation.

  For $\Nelec\neq2$ this should always be defined. Always take care to
  use only 1 $k$ point along non-periodic lattice vectors. An
  electrode semi-infinite region is considered non-periodic since it
  is integrated out through the self-energies.
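
  As an illustrative sketch (the $4\times4$ in-plane sampling is
  arbitrary), for transport along the third lattice vector one would
  use
  \begin{fdfexample}
    %block TS.kgrid.MonkhorstPack
       4  0  0  0.
       0  4  0  0.
       0  0  1  0.
    %endblock
  \end{fdfexample}
  where the single $k$ point along the third lattice vector reflects
  the semi-infinite, non-periodic direction.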

  This defaults to \fdf{kgrid!MonkhorstPack}.
  
\end{fdfentry}

\begin{fdfentry}{TS!Atoms.Buffer}[block/list]
  \fdfindex{TS.BufferAtomsLeft|see TS!Atoms.Buffer}%
  \fdfindex{TS.BufferAtomsRight|see TS!Atoms.Buffer}%

  Specify atoms that will be removed in the \tsiesta\ SCF. They are
  not considered in the calculation and may be used to improve the
  initial guess for the Hamiltonian.


  An intended use of buffer atoms is to ensure bulk behavior in the
  electrode regions when the electrodes are different. As an example,
  consider a 2 electrode calculation with the left electrode
  consisting of Au atoms and the right electrode of Pt atoms. In such
  calculations one cannot create a periodic geometry along the
  transport direction; one needs to add vacuum between the Au and Pt
  atoms that comprise the electrodes. However, this creates an
  artificial edge in the electrostatic environment of the electrodes,
  since in \siesta\ there is vacuum whereas in \tsiesta\ the effective
  Hamiltonian sees a bulk environment. To ensure that \siesta\ also
  exhibits a bulk environment at the electrodes we add \emph{buffer}
  atoms towards the vacuum region to screen off the electrode
  region. These \emph{buffer} atoms are thus a technicality that has
  no influence on the \tsiesta\ calculation, but they are necessary to
  ensure the electrode bulk properties.

  The above discussion is even more important when doing $\Nelec$-electrode
  calculations.
  
  \note all lines are additive for the buffer atoms and the input
  method is similar to that of \fdf{Geometry!Constraints} for the
  \fdf*{atom} line(s).

  \begin{fdfexample}
    %block TS.Atoms.Buffer
       atom [ 1 -- 5 ]
    %endblock
    # Or equivalently as a list   
    TS.Atoms.Buffer [1 -- 5]
  \end{fdfexample}
  will remove atoms [1--5] from the calculation.

\end{fdfentry}

\begin{fdfentry}{TS!ElectronicTemperature}[energy]<\fdfvalue{ElectronicTemperature}>

  Define the temperature used for the Fermi distributions for the
  chemical potentials. 
  %
  See \fdf{TS!ChemPot.<>!ElectronicTemperature}.

\end{fdfentry}

\begin{fdfentry}{TS!SCF!DM.Tolerance}[real]<\fdfvalue{SCF.DM!Tolerance}>%
  \fdfdepend{SCF.DM!Tolerance}

  The density matrix tolerance for the \tsiesta\ SCF cycle.

\end{fdfentry}

\begin{fdfentry}{TS!SCF!H.Tolerance}[energy]<\fdfvalue{SCF.H!Tolerance}>%
  \fdfdepend{SCF.H!Tolerance}

  The Hamiltonian tolerance for the \tsiesta\ SCF cycle.

\end{fdfentry}

\begin{fdfentry}{TS!Q.Tolerance}[real]<$\mathrm{Q(device)}\cdot 10^{-3}$>%

  Once the SCF is completed this is a final check that the charge is
  within the specified tolerance.

  The charge is not stable in \tsiesta\ calculations and this flag
  ensures that one does not, by accident, post-process files where the
  charge distribution is completely wrong.

  Too high a tolerance may heavily influence the electrostatics of the
  simulation.

\end{fdfentry}

\begin{fdfentry}{TS!SCF.Initialize}[string]<diagon|transiesta>%

  Control which initial guess should be used for \tsiesta. The
  general (and preferred) approach is the \fdf*{diagon} solution
  method; however, one can also start a \tsiesta\ run
  immediately. If you start directly with \tsiesta\ please refer to
  these flags: \fdf{TS!Elecs!DM.Init} and \fdf{TS!Fermi.Initial}.
  
  \note Setting this to \fdf*{transiesta} is highly experimental and
  convergence may be extremely poor.

\end{fdfentry}

\begin{fdfentry}{TS!Fermi.Initial}[energy]<$\sum^{N_E}_iE_F^i/N_E$>

  Manually set the initial Fermi level to a predefined value. 

  \note this may also be used to change the Fermi level when
  restarting calculations. Using this feature is highly experimental.
  
\end{fdfentry}

\begin{fdfentry}{TS!Weight.Method}[string]<orb-orb|[[un]correlated+][sum|tr]-atom-[atom|orb]|mean>
  
  Control how the NEGF weighting scheme is conducted. Generally one
  should only use \fdf*{orb-orb}; the others are present for more
  advanced usage. They refer to how the weighting coefficients of the
  different non-equilibrium contours are calculated. In the following
  the weights are denoted for a two-electrode setup, but they
  generalize to multiple electrodes.

  \def\mypropto{\,\oppropto^{||}\,} %
  \def\mn{{\mu\nu}} %
  Define the normalised geometric mean as $\mypropto$ via
  \begin{equation}
    w\mypropto \langle\cdot^L\rangle\equiv
    \frac{\langle\cdot^L\rangle}{\langle\cdot^L\rangle+\langle\cdot^R\rangle}.
  \end{equation}

  When applying a bias, \tsiesta\ will print out the following during
  the SCF cycle:
\begin{output}[fontsize=\footnotesize]
ts-err-D: ij(  447,   447), M =  1.8275, ew = -.257E-2, em = 0.258E-2. avg_em = 0.542E-06
ts-err-E: ij(  447,   447), M = -6.7845, ew = 0.438E-3, em = -.439E-3. avg_em = -.981E-07
ts-w-q:               qP1       qP2
ts-w-q:           219.150   216.997
ts-q:         D        E1        C1        E2        C2        dQ
ts-q:   436.147   392.146     3.871   392.146     3.871  7.996E-3
  \end{output}
  % 
  The extra output corresponds to fine details in the integration
  scheme.
  \begin{description}[labelindent=3em, leftmargin=4.5em]
    \itemsep 10pt
    \parsep 0pt
    
    \item[\texttt{ts-err-*}] are estimated error outputs from the
    different integrals, for the density matrix (\texttt{D}) and the
    energy density matrix (\texttt{E}), see Eq.~(12) in
    \cite{Papior2017}. All values (except \texttt{avg\_em}) are for
    the given orbital site

    \begin{description}
      \itemsep 4pt
      \parsep 0pt

      \item[\texttt{ij(A,B)}] refers to the matrix element between orbital
      \texttt{A} and \texttt{B}

      \item[\texttt{M}] is the weighted matrix element value,
      $\sum_{\elec}w_\elec\DM^\elec$

      \item[\texttt{ew}] is the maximum difference between
      $\sum_{\elec}w_\elec\DM^\elec-\DM^\elec$ for all $\elec$.

      \item[\texttt{em}] is the maximum difference between
      $\DM^{\elec'}-\DM^\elec$ for all combinations of $\elec$ and
      $\elec'$.

      \item[\texttt{avg\_em}] is the averaged difference of \texttt{em} for all
      orbital sites.

    \end{description}

    \item[\texttt{ts-w-q}] is the Mulliken charge from the different
    integrals: $\Tr[w_\elec\DM^\elec\SO]$
    
  \end{description}

  \begin{fdfoptions}

    \option[orb-orb]%
    \fdfindex*{TS!Weight.Method:orb-orb}%
    Weight each orbital-density matrix element individually.

    \option[tr-atom-atom]%
    \fdfindex*{TS!Weight.Method:tr-atom-atom}%
    Weight according to the trace of the atomic density matrix sub-blocks
    \begin{equation}
      w_{ij}^{\Tr} \mypropto
      \sqrt{
          % First the i'th atom
          \sum_{\in\{i\}}(\Delta\rho_{\mu\mu}^L)^2
          \; % ensure a little space between them
          % second the j'th atom
          \sum_{\in\{j\}}(\Delta\rho_{\mu\mu}^L)^2
      }
    \end{equation}

    \option[tr-atom-orb]%
    \fdfindex*{TS!Weight.Method:tr-atom-orb}%
    
    Weight according to the trace of the atomic density matrix
    sub-block times the orbital weight
    \begin{equation}
      w_{ij,\mn}^{\Tr} \mypropto
      \sqrt{
          w_{ij}^{\Tr} 
          w_{ij,\mn}
      }
    \end{equation}

    \option[sum-atom-atom]%
    \fdfindex*{TS!Weight.Method:sum-atom-atom}%
    
    Weight according to the total sum of the atomic density matrix
    sub-blocks
    \begin{equation}
      w_{ij,\mn}^{\Sigma} \mypropto
      \sqrt{
          % First the i'th atom
          \sum_{\in\{i\}}(\Delta\rho_{\mn}^L)^2
          \; % ensure a little space between them
          % second the j'th atom
          \sum_{\in\{j\}}(\Delta\rho_{\mn}^L)^2
      }
    \end{equation}

    \option[sum-atom-orb]%
    \fdfindex*{TS!Weight.Method:sum-atom-orb}%
    
    Weight according to the total sum of the atomic density matrix
    sub-block times the orbital weight
    \begin{equation}
      w_{ij,\mn}^{\Sigma} \mypropto
      \sqrt{
          w_{ij}^{\Sigma} 
          w_{ij,\mn}
      }
    \end{equation}

    \option[mean]%
    \fdfindex*{TS!Weight.Method:mean}%
    
    A standard average.
    
  \end{fdfoptions}


  Each of the methods (except \fdf*{mean}) comes in a correlated and
  uncorrelated variant where $\sum$ is either outside or inside the
  square, respectively.

\end{fdfentry}

\begin{fdfentry}{TS!Weight.k.Method}[string]<correlated|uncorrelated>

  Control whether the weighting is performed \emph{per} $k$-point or
  on the full sum. I.e.\ if \fdf*{uncorrelated} is used, the weighting
  is performed $n_k$ times if there are $n_k$ $k$-points in the
  Brillouin zone.
  
\end{fdfentry}

\begin{fdflogicalT}{TS!Forces}
  
  Control whether the forces are calculated. If \emph{not}, \tsiesta\
  will use slightly less memory and performance is slightly increased;
  however, the final forces shown are then incorrect.

  If this is \fdftrue\ the file \sysfile{TSFA} (and possibly the
  \sysfile{TSFAC}) will be created. They contain forces for the atoms
  that have updated density-matrix elements
  (\fdf{TS!Elec.<>!DM-update:all}).

  Generally one should not expect good forces close to the
  electrode/device interface since this typically has some
  electrostatic effects that are inherent to the \tsiesta\ method.
  Forces on atoms \emph{far} from the electrode can safely be
  analyzed.

\end{fdflogicalT}

\begin{fdfentry}{TS!dQ}[string]<none|buffer|fermi>
  \fdfindex*{TS!dQ:fermi}

  Any excess/deficiency of charge can be re-adjusted after each
  \tsiesta\ cycle to reduce charge fluctuations in the cell.

  A non-neutral charge in the \tsiesta\ cycles is an expression of
  one of the following:
  \begin{enumerate}
    \item An incorrect screening towards the electrodes. To check
    this, simply add more electrode layers towards the device at each
    electrode and see how the charge evolves. It should tend to zero.

    The best way to check this is to follow these steps:
    \begin{enumerate}
      \item%
      Perform a \siesta-only calculation (the resulting DM
      should be used as the starting point for both following
      calculations)

      \item%
      Perform a \tsiesta\ calculation with the option
      \fdf{TS!Elecs!DM.Init:diagon} (please note that the electrode
      option has precedence, so remove any entry from the
      \fdf{TS!Elec.<>} block)

      \item%
      Perform a \tsiesta\ calculation with the option
      \fdf{TS!Elec.<>!DM-init:bulk} (please note that the electrode
      option has precedence, so remove any entry from the
      \fdf{TS!Elec.<>} block)

    \end{enumerate}

    Now compare the final output and the initial charge distribution,
    e.g.:
    \begin{output}
>>> TS.Elecs.DM.Init diagon
transiesta: Charge distribution, target =    396.00000
Total charge                  [Q]  :   396.00000

>>> TS.Elecs.DM.Init bulk
transiesta: Charge distribution, target =    396.00000
Total charge                  [Q]  :   395.9995
\end{output}

    The above shows that there is very little charge difference
    between the bulk electrode DM and the scattering region. This
    ensures that the charge distributions are similar and that your
    electrode is sufficiently screened.

    Additionally one may compare the final output such as total
    energies, calculated DOS and ADOS (see \tbtrans). If the two
    calculations show different properties, one should carefully
    examine the system setup.

    \item An incorrect reference energy level. In \tsiesta\ the Fermi
    level is calculated from the \siesta\ SCF. However, the \siesta\
    Fermi level corresponds to a periodic calculation and \emph{not}
    an open system calculation such as NEGF.

    If the first step shows good screening towards the electrodes,
    the problem is usually the reference energy level; in that case
    use \fdf{TS!dQ:fermi}.

    \item A combination of the above, this is the typical case.
  \end{enumerate}

  \note we recommend using charge corrections only for
  $0\,\mathrm{V}$ calculations.

  \begin{fdfoptions}

    \option[none]%
    No charge corrections are introduced.

    \option[buffer]%
    Excess/missing electrons are placed in the buffer regions (buffer
    atoms are required to exist)

    \option[fermi] %
    Correct the charge filling by calculating a new reference energy
    level (referred to as the Fermi level). \\
    We approximate the contribution to be constant around the Fermi
    level and find
    \begin{equation}
      \label{eq:fermi-shift}
      \mathrm{d}E_F = \frac{Q'-Q}{Q|_{E_F}},
    \end{equation}
    where $Q'$ is the charge from a \tsiesta\ SCF step, $Q|_{E_F}$ is
    the equilibrium charge at the current Fermi level, and $Q$ is the
    charge that is supposed to reside in the calculation. The Fermi
    correction utilizes Eq.~\eqref{eq:fermi-shift} for the first
    correction; all subsequent corrections are based on a cubic spline
    interpolation to converge faster towards the ``correct'' Fermi
    level.
    
    This method will create a file called \file{TS\_FERMI}.

    \note correcting the reference energy level is a costly
    operation since the SCF cycle typically gets
    \emph{corrupted} resulting in many more SCF cycles.

  \end{fdfoptions}

\end{fdfentry}

\begin{fdfentry}{TS!dQ!Factor}[real]<0.8>

  Any positive value close to $1$; $0$ means no charge correction and
  $1$ means full charge correction. A value below $1$ reduces the
  fluctuations in the SCF, while setting it to $1$ may result in
  convergence difficulties.
  
\end{fdfentry}

\begin{fdfentry}{TS!dQ!Fermi.Tolerance}[real]<0.01>

  The tolerance at which the charge correction will converge. Any
  excess/missing charge ($|Q'-Q|>\mathrm{Tol}$) will result in a
  correction for the Fermi level.
  
\end{fdfentry}

\begin{fdfentry}{TS!dQ!Fermi.Max}[energy]<$1.5\,\mathrm{eV}$>%

  The maximum change of the Fermi level allowed in a single charge
  correction using the Fermi correction method. If the Fermi level
  lies between two bands, a DOS of $0$ at the Fermi level will make
  the Fermi-level change equal to $\infty$. This is not physical and
  the user can thus truncate the correction.

  \note If you know the band gap, setting this to $1/4$ (or less) of
  the band gap is likely a better choice than the rather arbitrary
  default.

\end{fdfentry}

\begin{fdfentry}{TS!dQ!Fermi.Eta}[energy]<$1\,\mathrm{meV}$>%

  The $\eta$ value to which the charge at the poles is extrapolated.
  Usually a smaller $\eta$ value means larger changes in the Fermi
  level. If the charge convergence w.r.t.\ the Fermi level fluctuates
  a lot, one should increase this $\eta$ value.

\end{fdfentry}

\begin{fdflogicalT}{TS!HS.Save}
  \fdfindex*{TS!HS.Save:true}

  Must be \fdftrue\ for saving the Hamiltonian (\sysfile{TSHS}). Can only be set if
  \fdf{SolutionMethod} is not \fdf*{transiesta}.

  The default is \fdffalse\ for \fdf{SolutionMethod} different from
  \fdf*{transiesta} and if \code{--electrode} has not been passed as a
  command line argument.
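
  As an illustration (file names are hypothetical), an electrode
  calculation that must save the \sysfile{TSHS} file is typically
  invoked as
  \begin{shellexample}
    siesta --electrode ELEC.fdf > ELEC.out
  \end{shellexample}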

\end{fdflogicalT}  

\begin{fdflogicalT}{TS!DE.Save}
  \fdfindex*{TS!DE.Save:true}

  Must be \fdftrue\ for saving the density and energy density matrix
  for continuation runs (\sysfile{TSDE}). Can only be set if
  \fdf{SolutionMethod} is not \fdf*{transiesta}.

  The default is \fdffalse\ for \fdf{SolutionMethod} different from
  \fdf*{transiesta} and if \code{--electrode} has not been passed as a
  command line argument.

\end{fdflogicalT}  

\begin{fdflogicalF}{TS!S.Save}

  This is a flag mainly used for the Inelastica code to produce
  overlap matrices for Pulay corrections. This should only be used by
  advanced users.

\end{fdflogicalF}


\begin{fdflogicalF}{TS!SIESTA.Only}

  Stop \tsiesta\ right after the initial diagonalization run in
  \siesta. Upon exit it will also create the \sysfile{TSDE} file which
  may be used for initialization runs later.

  This may be used to start several calculations from the same initial
  density matrix, and it may also be used to rescale the Fermi level
  of electrodes. The rescaling is primarily used for semi-conductors
  where the Fermi levels of the device and electrodes may be
  misaligned. 

\end{fdflogicalF}


\begin{fdflogicalF}{TS!Analyze}

  When using the BTD solution method (\fdf{TS!SolutionMethod}) this
  will analyze the Hamiltonian and print out an analysis of the
  sparsity pattern for an optimal choice of the BTD partitioning
  algorithm.

  This yields information regarding the \fdf{TS!BTD!Pivot} flag.

  \note we advise users to \emph{always} run an analysis step prior
  to the actual calculation and select the \emph{best} BTD format.
  This analysis step is very fast and may be performed on small
  work-station computers, even for systems of $\gg10,000$ orbitals.

  To run the analysis step you may do:
  \begin{shellexample}
    siesta -fdf TS.Analyze RUN.fdf > analyze.out
  \end{shellexample}
  Note that there is little gain in using MPI and it should complete
  within a few minutes, no matter the number of orbitals.

  Choosing the best one may be difficult. Generally one should choose
  the pivoting scheme that uses the least amount of memory. However,
  one should also prefer the method whose largest block is as small as
  possible. As an example:
  \begin{output}[fontsize=\footnotesize]
TS.BTD.Pivot atom+GPS
...
    BTD partitions (7): 
     [ 2984, 2776, 192, 192, 1639, 4050, 105 ]
    BTD matrix block size [max] / [average]: 4050 /   1705.429
    BTD matrix elements in % of full matrix:   47.88707 %

TS.BTD.Pivot atom+GGPS
...
    BTD partitions (6): 
     [ 2880, 2916, 174, 174, 2884, 2910 ]
    BTD matrix block size [max] / [average]: 2916 /   1989.667
    BTD matrix elements in % of full matrix:   48.62867 %

  \end{output}
  Although the GPS method uses the least amount of memory, the GGPS
  will likely perform better as the largest block in GPS is $4050$
  vs. $2916$ for the GGPS method. 

\end{fdflogicalF}

\begin{fdflogicalF}{TS!Analyze.Graphviz}
  \fdfdepend{TS!Analyze}

  If performing the analysis, also create the connectivity graph and
  store it as \file{GRAPHVIZ\_atom.gv} or \file{GRAPHVIZ\_orbital.gv}
  to be post-processed in Graphviz\footnote{\url{www.graphviz.org}}.
  
\end{fdflogicalF}

\subsubsection{\texorpdfstring{$k$}{k}-point sampling}

The options for $k$-point sampling are identical to the \siesta\
options, \fdf{kgrid!MonkhorstPack}, \fdf{kgrid!Cutoff} or
\fdf{kgrid!File}.

One may however use specific \tsiesta\ $k$-points by using these
options:

\begin{fdfentry}{TS.kgrid!MonkhorstPack}[block]<\fdfvalue{kgrid!MonkhorstPack}>%

  See \fdf{kgrid!MonkhorstPack} for details.
  
\end{fdfentry}

\begin{fdfentry}{TS.kgrid!Cutoff}[length]<$0.\,\mathrm{Bohr}$>

  See \fdf{kgrid!Cutoff} for details.
  
\end{fdfentry}

\begin{fdfentry}{TS.kgrid!File}[string]<none>

  See \fdf{kgrid!File} for details.
  
\end{fdfentry}


\subsubsection{Algorithm specific options}

These options pertain to the specific solution methods available for
\tsiesta. For instance the \fdf*{TS.BTD.*} options apply only when
using \fdf{TS!SolutionMethod:BTD}, and similarly for options with
\fdf*{MUMPS}.

\begin{fdfentry}{TS!BTD!Pivot}[string]<\nonvalue{first electrode}>

  Decide on the partitioning of the BTD matrix. One may prepend
  either \fdf*{atom+} or \fdf*{orb+}, which performs the analysis on
  the atomic sparsity pattern or the full orbital sparsity pattern,
  respectively. If neither is used it will default to \fdf*{atom+}.

  Please see \fdf{TS!Analyze}.

  \begin{fdfoptions}

    \option[<elec-name>|CG-<elec-name>]%
    The partitioning will be a connectivity graph starting from the
    electrode denoted by the name. This name \emph{must} be found in
    the \fdf{TS!Elecs} block. One may append additional electrodes to
    start from several electrodes simultaneously. This may be
    necessary for multi-terminal calculations.

    \option[rev-CM] %
    Use the reverse Cuthill-McKee for pivoting the matrix elements to
    reduce bandwidth. One may omit \fdf*{rev-} to use the standard
    Cuthill-McKee algorithm (not recommended).

    This pivoting scheme depends on the initial starting
    electrodes, append \fdf*{+<elec-name>} to start the Cuthill-McKee
    algorithm from the specified electrode(s).

    \option[GPS] %
    Use the Gibbs-Poole-Stockmeyer algorithm for reducing the
    bandwidth.

    \option[GGPS] %
    Use the generalized Gibbs-Poole-Stockmeyer algorithm for reducing
    the bandwidth.

    \note this algorithm does not work on disconnected graphs.

    \option[PCG] %
    Use the peripheral connectivity graph algorithm for reducing the
    bandwidth.

    This pivoting scheme \emph{may} depend on the initial starting
    electrode(s), append \fdf*{+<elec-name>} to initialize the PCG
    algorithm from the specified electrode(s).

  \end{fdfoptions}

  Examples are
  \begin{fdfexample}
    TS.BTD.Pivot atom+GGPS
    TS.BTD.Pivot GGPS
    TS.BTD.Pivot orb+GGPS
    TS.BTD.Pivot orb+PCG+Left
  \end{fdfexample}
  where the first two are equivalent. The 3rd and 4th are heavier on
  analysis and will typically not improve the bandwidth reduction.
  
\end{fdfentry}

\begin{fdfentry}{TS!BTD!Optimize}[string]<speed|memory>

  When selecting the smallest blocks for the BTD matrix there are
  certain criteria that may change the size of each block. For very
  memory-consuming jobs one may choose \fdf*{memory}.

  \note often both methods provide \emph{exactly} the same BTD matrix
  due to constraints on the matrix.
  
\end{fdfentry}

\begin{fdfentry}{TS!BTD!Guess1.Min}[int]<\nonvalue{empirically determined}>
  \fdfdepend{TS!BTD!Guess1.Max}
  
  Constructing the blocks for the BTD starts by \emph{guessing} the
  first block size. One could guess on all different block sizes, but
  to speed up the process one can define a smaller range of guesses by
  defining \fdf{TS!BTD!Guess1.Min} and \fdf{TS!BTD!Guess1.Max}.

  The initial guessed block size will be between the two values.

  By default this is $1/4$ of the minimum bandwidth for a selected
  first set of orbitals.

  \note setting this to 1 may sometimes improve the final BTD matrix
  blocks.

\end{fdfentry}

\begin{fdfentry}{TS!BTD!Guess1.Max}[int]<\nonvalue{empirically determined}>
  \fdfdepend{TS!BTD!Guess1.Min}

  See \fdf{TS!BTD!Guess1.Min}.

  \note for improved initialization performance setting Min/Max flags
  to the first block size for a given pivoting scheme will drastically
  reduce the search space and make initialization much
  faster.
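
  As a sketch (the block size of 24 is purely illustrative), one may
  restrict the search to a single initial block size:
  \begin{fdfexample}
    TS.BTD.Guess1.Min 24
    TS.BTD.Guess1.Max 24
  \end{fdfexample}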

\end{fdfentry}

\begin{fdfentry}{TS!BTD!Spectral}[string]<propagation|column>

  How to compute the spectral function ($G\Gamma G^\dagger$).

  For $\Nelec<4$ this defaults to \fdf*{propagation} which should be the
  fastest.

  For $\Nelec\ge4$ this defaults to \fdf*{column}.

  Check which has the best performance for your system if you plan a
  large number of calculations on the same system.

\end{fdfentry}


\begin{fdfentry}{TS!MUMPS!Ordering}[string]<\nonvalue{read MUMPS
      manual}>

  One may select from a number of different matrix orderings which are
  all described in the MUMPS manual. 

  The following list of orderings are available (without detailing
  their differences): %
  \fdf*{auto}, \fdf*{AMD}, \fdf*{AMF}, \fdf*{SCOTCH}, \fdf*{PORD},
  \fdf*{METIS}, \fdf*{QAMD}.
  
\end{fdfentry}

\begin{fdfentry}{TS!MUMPS!Memory}[integer]<20>

  Specify a factor for the memory consumption in MUMPS. See the
  \fdf*{INFOG(9)} entry in the MUMPS manual. Generally if \tsiesta\
  dies and \fdf*{INFOG(9)=-9} one should increase this number.
  
\end{fdfentry}

\begin{fdfentry}{TS!MUMPS!BlockingFactor}[integer]<112>

  Specify the number of internal block sizes. Larger numbers increase
  performance at the cost of memory.
  
  \note this option may heavily influence performance.

\end{fdfentry}

\subsubsection{Poisson solution for fixed boundary conditions}

\tsiesta\ requires fixed boundary conditions, and enforcing these is
an intricate and important detail.

It is important that these options are exactly the same if one reuses
the \sysfile{TSDE} files.

\begin{fdfentry}{TS!Poisson}[string]<ramp|elec-box|\nonvalue{file}>

  Define how the correction of the Poisson equation is
  superimposed. The default is to apply the linear correction across
  the entire cell (if there are two semi-infinite aligned
  electrodes). Otherwise this defaults to the \emph{box} solution
  which will introduce spurious effects at the electrode
  boundaries. In this case you are encouraged to supply a \fdf*{file}.

  If the input is a \fdf*{file}, it should be a NetCDF file containing
  the grid information which acts as the boundary conditions for the
  SCF cycle.
  The grid information should conform to the grid size of the
  unit-cell in the simulation.
  % 
  \note the file option is only applicable if compiled with CDF4
  compliance.

  \begin{fdfoptions}
    \option[ramp]%
    \fdfindex*{TS!Poisson:ramp}%

    Apply the ramp for the full cell. This is the default for 2
    electrodes.

    \option[<file>]%
    \fdfindex*{TS!Poisson:<file>}%

    Specify an external file used as the boundary conditions for the
    applied bias. Its use is encouraged for $\Nelec>2$ electrode
    calculations, but it may also be used when an \emph{a priori}
    potential profile is known.

    The file should contain something similar to this output
    (\code{ncdump -h}):
    \begin{output}[fontsize=\footnotesize]
netcdf <file> {
dimensions:
	one = 1 ;
	a = 43 ;
	b = 451 ;
	c = 350 ;
variables:
	double Vmin(one) ;
		Vmin:unit = "Ry" ;
	double Vmax(one) ;
		Vmax:unit = "Ry" ;
	double V(c, b, a) ;
		V:unit = "Ry" ;
}
    \end{output}
    Note that the units should be in Ry. \code{Vmax}/\code{Vmin}
    should contain the maximum/minimum fixed boundary conditions of
    the Poisson solution. These are used internally by \tsiesta\ to
    scale the potential to an arbitrary $V$. This enables the Poisson
    equation to be solved only \emph{once}, independently of
    subsequent calculations. For chemical-potential configurations
    where the Poisson solution does not depend linearly on the bias,
    one has to create separate files for each applied bias.


    \option[elec-box]%
    \fdfindex*{TS!Poisson:elec-box}%

    The default potential profile for $\Nelec>2$, or when the
    electrodes are not aligned (in terms of their transport
    direction).
    
    \note usage of this Poisson solution is \emph{highly}
    discouraged. Please see \fdf{TS!Poisson:<file>}.

  \end{fdfoptions}
  
\end{fdfentry}

\begin{fdfentry}{TS!Hartree.Fix}[string]<[-+][ABC]>

  Specify which plane to fix the Hartree potential at. For regular
  setups (2-electrode calculations with a single transport direction)
  this should not be set.
  %
  For $\Nelec\neq2$ electrode systems one \emph{has} to specify one or
  more planes to fix. Users are encouraged to fix the plane where the
  entire plane has the highest/lowest potential.
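
  For instance (the plane choice is illustrative), fixing the plane at
  the lower boundary of the third lattice vector reads:
  \begin{fdfexample}
    TS.Hartree.Fix -C
  \end{fdfexample}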

\end{fdfentry}

\begin{fdfentry}{TS!Hartree.Fix!Frac}[real]<$1.$>

  Fraction of the correction that is applied.

  \note this is an experimental feature!

\end{fdfentry}

\begin{fdfentry}{TS!Hartree.Offset}[energy]<$0\,\mathrm{eV}$>

  An offset in the Hartree potential to match the electrode potential.

  This value may be useful in certain cases where the Hartree
  potentials are very different between the electrode and device
  region calculations.

  This should not be changed between different bias calculations. It
  directly relates to the reference energy level ($E_F$).

\end{fdfentry}

\subsubsection{Electrode description options}

As \tsiesta\ supports $\Nelec$ electrodes one needs to specify all
electrodes in a generic input format.

\begin{fdfentry}{TS!Elecs}[block]

  Each line denotes an electrode which is queried in \fdf{TS!Elec.<>}
  for its setup.
  
\end{fdfentry}
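
As a minimal sketch of a 2-electrode setup (electrode names and the
file path are illustrative, and the chemical potentials are assumed to
be defined in \fdf{TS!ChemPots}):
\begin{fdfexample}
  %block TS.Elecs
    Left
    Right
  %endblock TS.Elecs

  %block TS.Elec.Left
    HS ../elec/ELEC.TSHS
    semi-inf-dir -c
    electrode-position 1
    chemical-potential Left
  %endblock TS.Elec.Left
  # ... and a corresponding TS.Elec.Right block
\end{fdfexample}
The individual settings are described in \fdf{TS!Elec.<>}.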

\begin{fdfentry}{TS!Elec.<>}[block]

  Each line represents a setting for electrode \fdf*{<>}.
  There are a few lines that \emph{must} be present: \fdf*{HS},
  \fdf*{semi-inf-dir}, \fdf*{electrode-pos} and \fdf*{chem-pot}. The
  remaining options are optional.

  \note Options prefixed with \fdf*{tbt} are neglected in \tsiesta\
  calculations. In \tbtrans\ calculations these flags have precedence
  over the other options and \emph{must} be placed at the end of the
  block.

  \begin{fdfoptions}

    \option[HS]%
    \fdfindex*{TS!Elec.<>!HS}%
    The Hamiltonian information from the initial electrode
    calculation. This file retains the geometrical information as well
    as the Hamiltonian, overlap matrix and the Fermi-level of the
    electrode. 
    %
    This is a file-path and the electrode \sysfile{TSHS} need not be
    located in the simulation folder.

    \option[semi-inf-direction|semi-inf-dir|semi-inf]%
    \fdfindex*{TS!Elec.<>!semi-inf-direction}%
    The semi-infinite direction of the electrode with respect to the
    electrode unit-cell.

    It may be one of \fdf*{[-+][abc]}, \fdf*{[-+]A[123]}, \fdf*{ab},
    \fdf*{ac}, \fdf*{bc} or \fdf*{abc}. The latter four all refer to a
    real-space self-energy as described in \cite{Papior2019}.

    \note this direction is \emph{not} with respect to the scattering
    region unit cell. It is with respect to the electrode unit
    cell. \tsiesta\ will figure out the alignment of the electrode
    unit cell and the scattering region unit-cell.

    \option[chemical-potential|chem-pot|mu]%
    \fdfindex*{TS!Elec.<>!chemical-potential}%
    The chemical potential that is associated with this
    electrode. This is a string that should be present in the
    \fdf{TS!ChemPots} block.

    \option[electrode-position|elec-pos]%
    \fdfindex*{TS!Elec.<>!electrode-position}%
    The index of the electrode in the scattering region.
    It may be given as \fdf*{elec-pos <idx>}, which places the first
    atom of the electrode at index \fdf*{<idx>}; alternatively,
    \fdf*{elec-pos end <idx>} places the last atom of the electrode at
    index \fdf*{<idx>}.

    \option[used-atoms]%
    \fdfindex*{TS!Elec.<>!used-atoms}%
    Number of atoms from the electrode calculation that are used in
    the scattering region as electrode. This may be useful when the
    periodicity of the electrodes forces extensive electrodes in the
    semi-infinite direction.

    \note do not set this if you use all atoms in the electrode.

    \option[Bulk]%
    \fdfindex*{TS!Elec.<>!Bulk}%
    \fdfindex*{TS!Elec.<>!Bulk:true}%
    \fdfindex*{TS!Elec.<>!Bulk:false}%
    Control whether the Hamiltonian of the electrode region in the
    scattering region is enforced \emph{bulk} or whether the
    Hamiltonian is taken from the scattering region elements.

    This defaults to \fdftrue. If there are buffer atoms \emph{behind}
    the electrode it may be advantageous to set this to false to
    extend the electrode region.

    \option[DM-update]%
    \fdfindex*{TS!Elec.<>!DM-update}%
    \fdfindex*{TS!Elec.<>!DM-update:none}%
    \fdfindex*{TS!Elec.<>!DM-update:all}%
    \fdfdepend{TS!Elec.<>!Bulk}%
    String of values \fdf*{none}, \fdf*{cross-terms} or \fdf*{all}
    which controls which parts of the electrode density matrix are
    updated. If \fdf*{all}, both the density matrix elements in the
    electrode and the coupling elements between the electrode and
    scattering region are updated. If \fdf*{cross-terms}, only the
    coupling elements between the electrode and the scattering region
    are updated.

    If \fdf{TS!Elec.<>!Bulk:false} this is forced to \fdf*{all} and
    cannot be changed.

    If \fdf{TS!Elec.<>!Bulk:true} this defaults to \fdf*{cross-terms},
    but may be changed.

    \option[DM-init]%
    \fdfindex*{TS!Elec.<>!DM-init}%
    \fdfindex*{TS!Elec.<>!DM-init:diagon}%
    \fdfindex*{TS!Elec.<>!DM-init:bulk}%
    \fdfdepend{TS!Elecs!DM.Init,TS!Elec.<>!Bulk,TS!Voltage}%
    String of values \fdf*{bulk}, \fdf*{diagon} (default) or
    \fdf*{force-bulk} which controls whether the DM is initially
    overwritten by the DM from the bulk electrode calculation. This
    requires the DM file for the electrode to be present. Only
    \fdf*{force-bulk} will have effect if $V\neq0$. Otherwise this
    option only affects $V=0$ calculations.

    The density matrix elements in the electrodes of the scattering
    region may be forcefully set to the bulk values by reading in the
    DM of the corresponding electrode. If one uses
    \fdf{TS!Elec.<>!Bulk:false} it may be disadvantageous to set this
    to \fdf*{bulk}.
    If the system is well set up (good screening towards the
    electrodes), setting this to \fdf*{bulk} may be advantageous.

    This option may be used to check how well the electrodes are
    screened, see \fdf{TS!dQ:fermi}.

    \option[Gf]%
    \fdfindex*{TS!Elec.<>!Gf}%
    String with filename of the surface Green function data
    (\sysfile{TSGF*}). This may be used to place a common surface
    Green function file in a top directory which may then be used in
    all calculations using the same electrode and the same contour.
    %
    If many calculations are performed this will heavily increase
    performance at the cost of disk-space.
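
    As an illustrative sketch (the path is hypothetical), several
    calculations may point at a shared surface Green function file
    like this:
    \begin{fdfexample}
      Gf ../shared/Left.TSGF
    \end{fdfexample}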

    \option[Gf-Reuse]%
    \fdfindex*{TS!Elec.<>!Gf-Reuse}%
    Logical deciding whether the surface Green function file should be
    re-used or deleted.
    %
    If this is \fdffalse\ the surface Green function file is deleted
    and re-created upon start.
    
    \option[Eta]%
    \fdfindex*{TS!Elec.<>!Eta}%
    \fdfdepend{TS!Elecs!Eta}%
    Control the imaginary energy ($\eta$) of the surface Green
    function for this electrode.

    The imaginary part is \emph{only} used on the non-equilibrium
    contours since the equilibrium contours are already lifted into
    the complex plane. Thus this $\eta$ reflects the imaginary part in
    the $G\Gamma G^\dagger$ calculations. Ensure that all imaginary
    values are larger than $0$; otherwise \tsiesta\ may seg-fault.

    \note if this energy is negative the complex value associated with
    the non-equilibrium contour is used. This is particularly useful
    when providing a user-defined contour along the real axis.

    \option[Accuracy]%
    \fdfindex*{TS!Elec.<>!Accuracy}%
    \fdfdepend{TS!Elecs!Accuracy}%
    Control the convergence accuracy required for the self-energy
    calculation when using the L\'opez Sancho, L\'opez Sancho and
    Rubio iterative scheme.

    \note advanced use \emph{only}.
    
    \option[DE]%
    \fdfindex*{TS!Elec.<>!DE}%
    Density and energy density matrix file for the electrode. This may
    be used to initialize the density matrix elements in the electrode
    region by the bulk values. See \fdf{TS!Elec.<>!DM-init:bulk}.

    \note this should only be performed on one \tsiesta\ calculation
    as then the scattering region \sysfile{TSDE} contains the
    electrode density matrix.

    \option[Bloch]%
    \fdfindex*{TS!Elec.<>!Bloch}%
    $3$ integers should be present on this line, each denoting how
    many times larger the scattering-region electrode is than the
    electrode unit-cell along the corresponding lattice
    direction. Remark that these expansion coefficients are with
    regard to the electrode unit-cell. This is denoted ``Bloch''
    because it is an expansion based on Bloch waves.

    \note Using symmetries such as periodicity will greatly increase
    performance.
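
    As an illustrative sketch (the values are hypothetical), a
    scattering-region electrode that is three times larger than the
    electrode unit-cell along the second lattice vector would use:
    \begin{fdfexample}
      Bloch 1 3 1
    \end{fdfexample}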

    \option[Bloch-A/a1|B/a2|C/a3]%
    \fdfindex{TS!Elec.<>!Bloch}%
    Specific Bloch expansions in each of the electrode unit-cell
    direction. See \fdf*{Bloch} for details.

    \option[pre-expand]%
    \fdfindex*{TS!Elec.<>!pre-expand}%
    String denoting how the expansion of the surface Green function
    file will be performed. This only affects the Green function file
    if \fdf*{Bloch} is larger than 1. By default (\fdf*{all}) the
    Green function file will contain the fully expanded surface Green
    function as well as the Hamiltonian and overlap matrices. One may
    reduce the file size by setting this to \fdf*{Green} which only
    expands the surface Green function. Finally \fdf*{none} may be
    passed to reduce the file size to the bare minimum.
    %
    For performance reasons \fdf*{all} is preferred.

    If disk-space is a limited resource and the \sysfile{TSGF*} files
    are really big, try \fdf*{none}.

    \option[out-of-core]%
    \fdfindex*{TS!Elec.<>!Out-of-core}%
    If \fdftrue\ (default) GF files containing the surface Green
    function are created.
    If \fdffalse\ the surface Green function will be calculated when
    needed.
    Setting this to \fdffalse\ will heavily degrade performance and
    is highly discouraged!

    \option[delta-Ef]%
    \fdfindex*{TS!Elec.<>!delta-Ef}%
    Specify an offset for the Fermi-level of the electrode. This will
    directly be added to the Fermi-level found in the electrode file.
    
    \note this option only makes sense for semi-conducting electrodes
    since it shifts the entire electronic structure. This is because
    the Fermi-level may be arbitrarily placed anywhere in the band
    gap. It is the user's responsibility to define a value which does
    not introduce a potential drop between the electrode and device
    region. Please do not use this unless you really know what you are
    doing.

    \option[V-fraction]%
    \fdfindex*{TS!Elec.<>!V-fraction}%

    Specify the fraction of the chemical potential shift in the
    electrode-device coupling region. This corresponds to:
    \begin{equation}
      \mathbf H_{\mathfrak eD} \leftarrow \mathbf H_{\mathfrak eD} +
      \mu_{\mathfrak e}\cdot\text{V-fraction}\cdot
      \mathbf S_{\mathfrak eD}
    \end{equation}
    in the coupling region. Consequently the value \emph{must} be
    between $0$ and $1$.

    \note this option \emph{only} makes sense for
    \fdf{TS!Elec.<>!DM-update:none} since otherwise the electrostatic
    potential will be incorporated in the Hamiltonian.

    \option[check-kgrid]%
    \fdfindex*{TS!Elec.<>!check-kgrid}%
    For $\Nelec$ electrode calculations the $\mathbf k$ mesh will sometimes
    not be equivalent for the electrodes and the device region
    calculations. However, \tsiesta\ requires that the device and
    electrode $\mathbf k$ samplings are commensurate. This flag
    controls whether this check is enforced for a given electrode.

    \note only use if fully aware of the implications!

  \end{fdfoptions}
  
\end{fdfentry}
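
As an illustrative sketch (the file name and positions are
hypothetical), a minimal electrode block containing the four mandatory
lines could look like:
\begin{fdfexample}
  %block TS.Elec.Left
    HS Left.TSHS
    chem-pot Left
    semi-inf-dir -a3
    elec-pos 1
  %endblock TS.Elec.Left
\end{fdfexample}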

There are several flags which are globally controlling the variables
for the electrodes (with \fdf{TS!Elec.<>} taking precedence).

\begin{fdflogicalT}{TS!Elecs!Bulk}

  This globally controls how the Hamiltonian is treated in all
  electrodes. 
  %
  See \fdf{TS!Elec.<>!Bulk}.
  
\end{fdflogicalT}

\begin{fdfentry}{TS!Elecs!Eta}[energy]<$1\,\mathrm{meV}$>
  
  Globally control the imaginary energy ($\eta$) used for the surface
  Green function calculation on the non-equilibrium contour.
  %
  See \fdf{TS!Elec.<>!Eta} for extended details on the usage of this
  flag. 
  
\end{fdfentry}

\begin{fdfentry}{TS!Elecs!Accuracy}[energy]<$10^{-13}\,\mathrm{eV}$>
  
  Globally control the accuracy required for convergence of the self-energy.
  %
  See \fdf{TS!Elec.<>!Accuracy}.
  
\end{fdfentry}

\begin{fdflogicalF}{TS!Elecs!Neglect.Principal}
  
  If this is \fdffalse, \tsiesta\ dies if there are connections beyond
  the principal cell.

  \note set this to \fdftrue\ with care, non-physical results may
  arise. Use at your own risk!

\end{fdflogicalF}  

\begin{fdflogicalT}{TS!Elecs!Gf.Reuse}
  
  Globally control whether the surface Green function files should
  be re-used (\fdftrue) or re-created (\fdffalse).

  See \fdf{TS!Elec.<>!Gf-Reuse}.
  
\end{fdflogicalT}

\begin{fdflogicalT}{TS!Elecs!Out-of-core}

  Globally controls whether GF files containing the surface Green
  function are created (\fdftrue) or the self-energies are calculated
  at each SCF step (\fdffalse). The latter does not require the
  surface Green function files, but at the cost of heavily degraded
  performance.

  See \fdf{TS!Elec.<>!Out-of-core}.
  
\end{fdflogicalT}

\begin{fdfentry}{TS!Elecs!DM.Update}[string]<cross-terms|all|none>

  Globally controls which parts of the electrode density matrix
  get updated.

  See \fdf{TS!Elec.<>!DM-update}.
  
\end{fdfentry}

\begin{fdfentry}{TS!Elecs!DM.Init}[string]<diagon|bulk|force-bulk>
  \fdfindex*{TS!Elecs!DM.Init:bulk}%
  \fdfindex*{TS!Elecs!DM.Init:diagon}%

  Specify how the density matrix elements in the electrode regions of
  the scattering region will be initialized when starting \tsiesta.

  See \fdf{TS!Elec.<>!DM-init}.

\end{fdfentry}

\begin{fdfentry}{TS!Elecs!Coord.EPS}[length]<$0.001\,\mathrm{Ang}$>

  When using Bloch expansion of the self-energies one may experience
  difficulties in obtaining perfectly aligned electrode coordinates.

  This parameter controls how strict the criterion for equivalent
  atomic coordinates is. If \tsiesta\ crashes due to a mismatch
  between the electrode atomic coordinates and the scattering region
  calculation, one may increase this tolerance. This should only be
  done if one is sure that the atomic coordinates are nearly identical
  and that the difference in electronic structure of the two is
  negligible.
  
\end{fdfentry}


\subsubsection{Chemical potentials}
\label{sec:ts:chem-pot}

For $\Nelec$ electrodes there will also be $N_\mu$ chemical
potentials. They are defined via blocks similar to \fdf{TS!Elecs}.

\begin{fdfentry}{TS!ChemPots}[block]
  
  Each line denotes a new chemical potential which is defined in the
  \fdf{TS!ChemPot.<>} block.
  
\end{fdfentry}

\begin{fdfentry}{TS!ChemPot.<>}[block]

  Each line defines a setting for the chemical potential named
  \fdf*{<>}.

  \begin{fdfoptions}
    
    \option[chemical-shift|mu]%
    \fdfindex*{TS!ChemPot.<>!chemical-shift}%
    \fdfindex*{TS!ChemPot.<>!mu}%

    Define the chemical shift (an energy) for this chemical
    potential. One may specify the shift in terms of the applied bias
    using \fdf*{V/<integer>} instead of explicitly typing the energy.
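
    As an illustrative sketch (the values are hypothetical), the
    chemical shift may be given in either form:
    \begin{fdfexample}
      # explicit energy
      mu 0.25 eV
      # or parametrized in the applied bias
      mu V/2
    \end{fdfexample}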

    \option[contour.eq]%
    \fdfindex*{TS!ChemPot.<>!contour.eq}%
    A subblock which defines the integration curves for the
    equilibrium contour of this equilibrium chemical potential. One
    may supply as many contours as needed to create a contour of
    arbitrary shape.
    
    Its format is
    \begin{fdfexample}
      contour.eq
       begin
        <contour-name-1>
        <contour-name-2>
        ...
       end
    \end{fdfexample}

    \note If you do \emph{not} specify \fdf*{contour.eq} in the block,
    the continued fraction method is used and you are encouraged to
    use $50$ or more poles~\cite{Ozaki2010}.

    \option[ElectronicTemperature|Temp|kT]%
    \fdfindex*{TS!ChemPot.<>!ElectronicTemperature}%
    \fdfindex*{TS!ChemPot.<>!Temp}%
    \fdfindex*{TS!ChemPot.<>!kT}%

    Specify the electronic temperature (as an energy or in
    Kelvin). This defaults to \fdf{TS!ElectronicTemperature}.

    One may specify this in units of \fdf{TS!ElectronicTemperature} by
    using the unit \fdf*{kT}.

    \option[contour.eq.pole]%
    \fdfindex*{TS!ChemPot.<>!contour.eq.pole}%

    Define the number of poles used via an energy
    specification. \tsiesta\ will automatically convert the energy to
    the closest number of poles (rounding up).
    
    \note this has precedence over
    \fdf{TS!ChemPot.<>!contour.eq.pole.N} if it is specified \emph{and}
    is a positive energy. Set this to a negative energy to directly
    control the number of poles.
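
    For instance (the value is hypothetical), letting \tsiesta\
    determine the number of poles from an energy:
    \begin{fdfexample}
      contour.eq.pole 2.5 eV
    \end{fdfexample}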

    \option[contour.eq.pole.N]%
    \fdfindex*{TS!ChemPot.<>!contour.eq.pole.N}%

    Define the number of poles via an integer.
    
    \note this will only take effect if
    \fdf{TS!ChemPot.<>!contour.eq.pole} is a negative energy. 

  \end{fdfoptions}

  \note It is important to realize that the parametrization of the
  applied bias into the chemical potentials in version 4.1 enables one
  to have a \emph{single} input file which never needs to be changed,
  even when changing the applied bias (if using the command line
  options for specifying the applied bias).
  %
  This is different from 4.0 and prior versions, where one had to
  manually change \fdf*{TS.biasContour.NumPoints} for each applied
  bias.

\end{fdfentry}

These options complicate the input sequence for regular $2$-electrode
setups, which is unfortunate.

Using \program{tselecs.sh -only-mu} yields this output:
\begin{fdfexample}
  %block TS.ChemPots
    Left
    Right
  %endblock
  %block TS.ChemPot.Left
    mu V/2
    contour.eq
      begin
        C-Left
        T-Left
      end
  %endblock
  %block TS.ChemPot.Right
    mu -V/2
    contour.eq
      begin
        C-Right
        T-Right
      end
  %endblock
\end{fdfexample}

Note that the default is a $2$ electrode setup with chemical
potentials associated directly with the electrode names
``Left''/``Right''. Each chemical potential has two parts of the
equilibrium contour named according to their name.



\subsubsection{Complex contour integration options}

Specifying the contour for $\Nelec$-electrode systems is somewhat
involved due to the possibility of more than 2 chemical
potentials. Please use \program{Util/TS/tselecs.sh} as a means to
create default input blocks.

The contours are split in two parts: the equilibrium contours of each
of the different chemical potentials, and the non-equilibrium
contour. The equilibrium contours are shifted according to their
chemical potentials with respect to a reference energy. Note that for
\tsiesta\ the reference energy is named the Fermi-level, which is
rather unfortunate (for non-equilibrium, though not for
equilibrium). Fortunately the non-equilibrium contours are defined
from the Fermi functions of the different chemical potentials, and as
such this contour is defined in the window between the minimum and
maximum chemical potentials. Because the reference energy is the
periodic Fermi level it is advised to keep the average of the chemical
potentials equal to $0$. Otherwise applying different biases will
shift the transmission curves calculated via \tbtrans\ relative to the
average chemical potential.

In this section the equilibrium contours are defined, and in the next
section the non-equilibrium contours are defined.

\begin{fdfentry}{TS!Contours!Eq.Pole}[energy]<$1.5\,\mathrm{eV}$>

  The imaginary part of the line integral crossing the chemical
  potential. Note that the actual number of poles may differ between
  different calculations where the electronic temperatures are
  different.

  \note if the energy specified is negative,
  \fdf{TS!Contours!Eq.Pole.N} takes effect.
  
\end{fdfentry}

\begin{fdfentry}{TS!Contours!Eq.Pole.N}[integer]<8>

  Manually select the number of poles for the equilibrium contour.

  \note this flag will only take effect if \fdf{TS!Contours!Eq.Pole}
  is a negative energy.
  
\end{fdfentry}

\begin{fdfentry}{TS!Contour.<>}[block]

  Specify a contour named \fdf*{<>} with options within the block.

  The names \fdf*{<>} are taken from the
  \fdf{TS!ChemPot.<>!contour.eq} block in the chemical potentials.

  The format of this block is made up of at least $4$ lines, in the
  following order of appearance.

  \begin{fdfoptions}

    \option[part]%
    \fdfindex*{TS!Contour.<>!part}%

    Specify which part of the equilibrium contour this is:
    \begin{fdfoptions}

      \option[circle]%
      The initial circular part of the contour

      \option[square]%
      The initial square part of the contour

      \option[line]% 
      The straight line of the contour

      \option[tail]%
      The final part of the contour \emph{must} be a tail which
      denotes the Fermi function tail.

    \end{fdfoptions}

    \option[from \emph{a} to \emph{b}]%
    \fdfindex*{TS!Contour.<>!from}%

    Define the integration range on the energy axis.
    Thus \emph{a} and \emph{b} are energies.

    The parameters may also be given the values \fdf*{prev}/\fdf*{next},
    which is equivalent to specifying the same energy as the contour
    it is connected to.

    \note \emph{b} may be supplied as \fdf*{inf} for \fdf*{tail}
    parts.

    \option[points/delta]%
    \fdfindex*{TS!Contour.<>!points}%
    \fdfindex*{TS!Contour.<>!delta}%

    Define the number of integration points/energy separation.
    If specifying the number of points an integer should be supplied.

    If specifying the separation between consecutive points an energy
    should be supplied.

    \option[method]%
    \fdfindex*{TS!Contour.<>!method}%

    Specify the numerical method used to conduct the integration. A
    number of different numerical integration schemes are accessible:

    \begin{fdfoptions}
      \option[mid|mid-rule]%
      Use the mid-rule for integration.

      \option[simpson|simpson-mix]%
      Use the composite Simpson $3/8$ rule (four-point Newton-Cotes).

      \option[boole|boole-mix]%
      Use the composite Boole's rule (five-point Newton-Cotes).
 
      \option[G-legendre]%
      Gauss-Legendre quadrature.

      \note has \fdf*{opt left}
      
      \note has \fdf*{opt right}

      \option[tanh-sinh]%
      Tanh-Sinh quadrature.

      \note has \fdf*{opt precision <>}

      \note has \fdf*{opt left}
      
      \note has \fdf*{opt right}

      \option[G-Fermi]%
      Gauss-Fermi quadrature (only on tails).

    \end{fdfoptions}

    \option[opt]%
    \fdfindex*{TS!Contour.<>!opt}%

    Specify additional options for the \fdf*{method}. Only a selected
    subset of the methods have additional options.

  \end{fdfoptions}
  
\end{fdfentry}

These options complicate the input sequence for regular $2$-electrode
setups, which is unfortunate. However, it allows highly customizable
contours.

Using \program{tselecs.sh -only-c} yields this output:
\begin{fdfexample}
  TS.Contours.Eq.Pole 2.5 eV
  %block TS.Contour.C-Left
    part circle
     from -40. eV + V/2 to -10 kT + V/2
       points 25
        method g-legendre
         opt right
  %endblock
  %block TS.Contour.T-Left
    part tail
     from prev to inf
       points 10
        method g-fermi
  %endblock
  %block TS.Contour.C-Right
    part circle
     from -40. eV -V/2 to -10 kT -V/2
       points 25
        method g-legendre
         opt right
  %endblock
  %block TS.Contour.T-Right
    part tail
     from prev to inf
       points 10
        method g-fermi
  %endblock
\end{fdfexample}
These contour options refer to input options for the chemical
potentials as shown in Sec.~\ref{sec:ts:chem-pot}
(p.~\pageref{sec:ts:chem-pot}). Importantly, note the shift of the
contours corresponding to each chemical potential (the shift
corresponds to the difference from the reference energy used in
\tsiesta).


\subsubsection{Bias contour integration options}

The bias contour is defined similarly to the equilibrium
contours. Please use \program{Util/TS/tselecs.sh} as a means to
create default input blocks.

\begin{fdfentry}{TS!Contours.nEq!Eta}[energy]<$0\,\mathrm{eV}$>

  The imaginary part ($\eta$) of the device states. Generally this is
  not necessary to define as the imaginary part naturally arises from
  the self-energies (where $\eta>0$).

\end{fdfentry}

\begin{fdfentry}{TS!Contours.nEq!Fermi.Cutoff}[energy]<$5\,k_BT$>

  The bias contour is limited by the Fermi function tails. Numerically
  it does not make sense to integrate to infinity.
  % 
  This energy defines where the bias integration window is truncated
  to zero. Thus below $-|V|/2-E$ or above $|V|/2+E$ the contribution
  is defined as exactly zero.

\end{fdfentry}

\begin{fdfentry}{TS!Contours.nEq}[block]

  Each line defines a new contour on the non-equilibrium bias
  window. The contours listed here \emph{must} be defined in
  \fdf{TS!Contour.nEq.<>}.

  These contours must all be \fdf*{part line} or \fdf*{part tail}.
  
\end{fdfentry}

\begin{fdfentry}{TS!Contour.nEq.<>}[block]

  This block is defined \emph{exactly} as \fdf{TS!Contour.<>}. See
  page \pageref{TS!Contour.<>}.
  
\end{fdfentry}

The default options related to the non-equilibrium bias contour are
defined as this:
\begin{fdfexample}
  %block TS.Contours.nEq
    neq
  %endblock TS.Contours.nEq
  %block TS.Contour.nEq.neq
    part line
     from -|V|/2 - 5 kT to |V|/2 + 5 kT
       delta 0.01 eV
        method mid-rule
  %endblock TS.Contour.nEq.neq
\end{fdfexample}
If one chooses a different reference energy than $0$, then the limits
should change accordingly. Note that here \fdf*{kT} refers to
\fdf{TS!ElectronicTemperature}.


\subsection{Output}

\tsiesta\ generates several output files.  
\begin{description}
  \itemsep 10pt
  \parsep 0pt
  
  \item[\sysfile{DM}]: The \siesta\ density matrix. \siesta\ initially
  performs a zero-bias calculation assuming periodic boundary
  conditions in all directions, which is used as a starting point for
  the \tsiesta\ calculation.
  
  \item[\sysfile{TSDE}]: The \tsiesta\ density matrix and energy
  density matrix. During a \tsiesta\ run, the \sysfile{DM} values are
  used for the density matrix in the buffer (if used) and electrode
  regions. The coupling terms may or may not be updated in a \tsiesta\
  run, see \fdf{TS!Elec.<>!DM-update}.
  
  \item[\sysfile{TSHS}]: The Hamiltonian corresponding to
  \sysfile{TSDE}. This file also contains geometry information
  etc. needed by \tsiesta\ and \tbtrans.

  \item[\sysfile{TS.KP}]: The $k$-points used in the \tsiesta\ calculation. See
  \siesta\ \sysfile{KP} file for formatting information.

  \item[\sysfile{TSFA}]: Forces only on atoms in the device
  region. See \fdf{TS!Forces} for details.

  \item[\sysfile{TSCCEQ*}]: The equilibrium complex contour integration paths.

  \item[\sysfile{TSCCNEQ*}]: The non-equilibrium complex contour
  integration paths for \emph{correcting} the equilibrium contours.

  \item[\sysfile{TSGF*}]: Self-energy files containing the used
  self-energies from the leads. These are very large files used in the
  SCF loop. Once completed one can safely delete these files.
  %
  For heavily increased throughput these files may be re-used for the
  same electrode settings in various calculations.

\end{description} 

\subsection{Utilities for analysis:
    \texorpdfstring{\tbtrans}{TBtrans}} 
\index{tbtrans@\tbtrans}

Please see the separate \tbtrans\ manual
(\href{run:tbtrans.pdf}{tbtrans.pdf}). 


\section{ANALYSIS TOOLS}

There are a number of analysis tools and programs in the \texttt{Util}
directory. Some of them have been directly or indirectly mentioned in
this manual. Their documentation is in the appropriate sub-directory
of \texttt{Util}. See \texttt{Util/README}.

In addition to the shipped utilities, \siesta\ is also officially
supported by \sisl\cite{sisl}, a Python library which implements many
of the most common pre- and post-processing tasks.

\section{SCRIPTING}

In the \texttt{Util/Scripting} directory we provide an experimental
python scripting framework built on top of the ``Atomic Simulation
Environment'' (see \texttt{https://wiki.fysik.dtu.dk/ase2}) by the Campos
group at DTU, Denmark.

(NOTE: ``ASE version 2'', not the new version 3, is needed)

There are objects implementing the ``Siesta as server/subroutine'' feature, and
also hooks for file-oriented-communication usage. This interface is
different from the \siesta-specific functionality already
contained in the ASE framework.

Users can create their own scripts to customize the ``outer geometry loop''
in \siesta, or to perform various repetitive calculations in compact form.

Note that the interfaces in this framework are still evolving and are
subject to change.

Suggestions for improvements can be sent to Alberto Garcia
(\href{mailto:albertog@icmab.es}{albertog@icmab.es})

\section{PROBLEM HANDLING}

\subsection{Error and warning messages}

\begin{description}
\itemsep 10pt
\parsep 0pt

\item[\texttt{chkdim: ERROR: In \textit{routine} dimension \textit{parameter} =
\textit{value}. It must be  ...}]

And other similar messages.

\textit{Description:} Some array dimensions which change infrequently,
and do not lead to much memory use, are fixed to oversized
values. This message means that one of these parameters is too small
and needs to be increased.  However, if this occurs and your system is
not very large, or unusual in some sense, you should first suspect
a mistake in the data file (incorrect atomic positions or cell
dimensions, too large cutoff radii, etc).

\textit{Fix:} Check the data file again.  Look for previous warnings
or suspicious values in the output.  If you find nothing unusual, edit
the specified routine and change the corresponding parameter.

\end{description}




\section{REPORTING BUGS}
\index{bug reports} 

Your assistance is essential to help improve the program. If you find
any problem, or would like to offer a suggestion for improvement,
please follow the instructions in the file
\texttt{Docs/REPORTING\_BUGS}. 

Since \siesta\ has moved to Launchpad you are encouraged to follow the
instructions presented at:
\url{https://answers.launchpad.net/siesta/+faq/2779}.



\section{ACKNOWLEDGMENTS}

We want to acknowledge the use of a small number of routines,
written by other authors, in developing the siesta code.
In most cases, these routines were acquired by now-forgotten
routes, and the reported authorships are based on their headings.
If you detect any incorrect or incomplete attribution, or suspect
that other routines may be due to different authors, please
let us know.

\begin{itemize}
  \item%
  The main nonpublic contribution, which we thank thoroughly, is a set
  of modified versions of a number of routines, originally written by
  \textbf{A. R.\ Williams} around 1985, for the solution of the radial
  Schr\"odinger and Poisson equations in the APW code of Soler and
  Williams (PRB \textbf{42}, 9728 (1990)).  Within \siesta, they are
  kept in files arw.f and periodic\_table.f, and they are used for the
  generation of the basis orbitals and the screened pseudopotentials.

  \item%
  The exchange-correlation routines contained in SiestaXC were written
  by J.M.Soler in 1996 and 1997, in collaboration with
  \textbf{C.\ Balb\'as} and \textbf{J. L.\ Martins}.  Routine pzxc,
  which implements the Perdew-Zunger LDA parametrization of xc, is
  based on routine velect, written by \textbf{S.\ Froyen}.

  \item%
  The serial version of the multivariate fast fourier transform used
  to solve Poisson's equation was written by \textbf{Clive Temperton}.

  \item%
  Subroutine iomd.f for writing MD history in files was originally
  written by \textbf{J. Kohanoff}.

\end{itemize}

We want to thank very specially \textbf{O. F.\ Sankey}, \textbf{D. J.\
    Niklewski} and \textbf{D. A.\ Drabold} for making the FIREBALL
code available to P.\ Ordej\'on.  Although we no longer use the
routines in that code, it was essential in the initial development of
the \siesta\ project, which still uses many of the algorithms
developed by them.

We thank \textbf{V. Heine} for his support and encouraging us in this
project.

The \siesta\ project is supported by the Spanish DGES through
several contracts. We also acknowledge past support by the Fundaci\'on
Ram\'on Areces.



\section{APPENDIX: Physical unit names recognized by FDF}
\label{sec:fdf-units}

\begin{center}
\begin{tabular}{llr}
Magnitude & Unit name & MKS value \\
\hline
mass     & kg         & 1.E0 \\
mass     & g          & 1.E-3 \\
mass     & amu        & 1.66054E-27 \\
length   & m          & 1.E0 \\
length   & cm         & 1.E-2 \\
length   & nm         & 1.E-9 \\
length   & Ang        & 1.E-10 \\
length   & Bohr       & 0.529177E-10 \\
time     & s          & 1.E0 \\
time     & fs         & 1.E-15 \\
time     & ps         & 1.E-12 \\
time     & ns         & 1.E-9 \\
time     & mins       & 60.E0 \\
time     & hours      & 3.6E3 \\
time     & days       & 8.64E4 \\
energy   & J          & 1.E0 \\
energy   & erg        & 1.E-7 \\
energy   & eV         & 1.60219E-19 \\
energy   & meV        & 1.60219E-22 \\
energy   & Ry         & 2.17991E-18 \\
energy   & mRy        & 2.17991E-21 \\
energy   & Hartree    & 4.35982E-18 \\
energy   & Ha         & 4.35982E-18 \\
energy   & K          & 1.38066E-23 \\
energy   & kcal/mol   & 6.94780E-21 \\
energy   & mHartree   & 4.35982E-21 \\
energy   & mHa        & 4.35982E-21 \\
energy   & kJ/mol     & 1.6606E-21 \\
energy   & Hz         & 6.6262E-34 \\
energy   & THz        & 6.6262E-22 \\
energy   & cm-1       & 1.986E-23 \\
energy   & cm**-1     & 1.986E-23 \\
energy   & cm\^{}-1     & 1.986E-23 \\
force    & N          & 1.E0 \\
force    & eV/Ang     & 1.60219E-9 \\
force    & Ry/Bohr    & 4.11943E-8 \\
\hline
\end{tabular}

\begin{tabular}{llr}
Magnitude & Unit name & MKS value \\
\hline
pressure & Pa         & 1.E0 \\
pressure & MPa        & 1.E6 \\
pressure & GPa        & 1.E9 \\
pressure & atm        & 1.01325E5 \\
pressure & bar        & 1.E5 \\
pressure & Kbar       & 1.E8 \\
pressure & Mbar       & 1.E11 \\
pressure & Ry/Bohr**3 & 1.47108E13 \\
pressure & eV/Ang**3  & 1.60219E11 \\
charge   & C          & 1.E0 \\
charge   & e          & 1.602177E-19 \\
dipole   & C*m        & 1.E0 \\
dipole   & D          & 3.33564E-30 \\
dipole   & debye      & 3.33564E-30 \\
dipole   & e*Bohr     & 8.47835E-30 \\
dipole   & e*Ang      & 1.602177E-29 \\
MomInert & Kg*m**2    & 1.E0 \\
MomInert & Ry*fs**2   & 2.17991E-48 \\
Efield   & V/m        & 1.E0 \\
Efield   & V/nm       & 1.E9  \\
Efield   & V/Ang      & 1.E10 \\
Efield   & V/Bohr     & 1.8897268E10 \\
Efield   & Ry/Bohr/e  & 2.5711273E11 \\
Efield   & Har/Bohr/e & 5.1422546E11 \\
Efield   & Ha/Bohr/e  & 5.1422546E11 \\
angle    & deg        & 1.d0 \\
angle    & rad        & 5.72957795E1 \\
torque   & eV/deg     & 1.E0 \\
torque   & eV/rad     & 1.745533E-2 \\
torque   & Ry/deg     & 13.6058E0 \\
torque   & Ry/rad     & 0.237466E0 \\
torque   & meV/deg    & 1.E-3 \\
torque   & meV/rad    & 1.745533E-5 \\
torque   & mRy/deg    & 13.6058E-3 \\
torque   & mRy/rad    & 0.237466E-3 \\
\hline
\end{tabular}
\end{center}
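Any of the unit names above may follow a numerical value in the fdf
input file. For instance (the flags below are just illustrations of
the usual fdf syntax):

\begin{verbatim}
LatticeConstant        10.0 Bohr
ElectronicTemperature  300.0 K
MD.MaxForceTol         0.02 eV/Ang
\end{verbatim}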


\newpage
\section{APPENDIX: XML Output}
\index{XML}
\index{CML}

From version 2.0, \siesta\ includes an option to write its output to an
XML file. The XML it produces is in accordance with the CMLComp subset of
version 2.2 of the Chemical Markup Language. Further information
and resources can be found at \url{http://cmlcomp.org/} and tools for working
with the XML file can be found in the \texttt{Util/CMLComp} directory.

The main motivation for standardized XML (CML) output is as a step
towards standardizing formats for uses such as the following.

\begin{itemize}

\item To have \siesta\ communicate with other software, either
for postprocessing or as part of a larger workflow scheme. In such a
scenario, the XML output of one \siesta\ simulation may easily be parsed
to direct further simulations. A detailed discussion of this is
outside the scope of this manual.

\item To generate webpages showing \siesta\ output in a more accessible,
graphically rich fashion. This section explains how to do this.

\end{itemize}
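As a minimal sketch of such parsing, the Python standard library can
extract CML properties by their \texttt{dictRef} labels. The element
names and the \texttt{siesta:Etot} label below follow common CML
conventions, but the exact structure should be checked against an
actual \texttt{SystemLabel.xml}; the inline document is a mock
fragment, not real \siesta\ output:

```python
# Sketch: reading named scalar properties from a CML document using
# only Python's standard library. Element/attribute names (property,
# scalar, dictRef) follow CML conventions; verify them against a real
# SystemLabel.xml before relying on this.
import xml.etree.ElementTree as ET

CML_NS = "{http://www.xml-cml.org/schema}"

# Mock fragment standing in for a SystemLabel.xml file.
doc = """<cml xmlns="http://www.xml-cml.org/schema">
  <property dictRef="siesta:Etot">
    <scalar units="siestaUnits:eV">-466.098</scalar>
  </property>
</cml>"""

root = ET.fromstring(doc)
props = {}
for prop in root.iter(CML_NS + "property"):
    scalar = prop.find(CML_NS + "scalar")
    if scalar is not None:
        props[prop.get("dictRef")] = float(scalar.text)

print(props)  # e.g. {'siesta:Etot': -466.098}
```

For a file on disk one would use \texttt{ET.parse("SystemLabel.xml").getroot()}
instead of \texttt{ET.fromstring}.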

\subsection{Controlling XML output}

\begin{fdflogicalT}{XML!Write}

  Determines whether the main XML file should be created for this run.
  
\end{fdflogicalT}

\subsection{Converting XML to XHTML}

The translation of the \siesta\ XML output to an HTML-based webpage is
done using XSLT technology. The stylesheets conform to XSLT 1.0 plus
EXSLT extensions, so an XSLT processor capable of dealing with these is
necessary. To make the system easy to use, however, a script
called ccViz is provided in \texttt{Util/CMLComp} that works on most Unix or
Mac OS X systems. It is run like so:

\texttt{./ccViz SystemLabel.xml}

A new file will be produced. Point your web-browser at \texttt{SystemLabel.xhtml}
to view the output.

The generated webpages include support for viewing three-dimensional
interactive images of the system. To use this feature you will need
either a local installation of jMol (\url{http://jmol.sourceforge.net})
or access to the internet. As jMol is a Java applet, you will also
need a working Java Runtime Environment and browser plugin;
installation instructions for these are outside the scope of this
manual. The webpages remain useful, however, and may be viewed
without this plugin.

An online version of this tool is available from
\url{http://cmlcomp.org/ccViz/}, as are updated versions of
the ccViz script.

\newpage
\section{APPENDIX: Selection of precision for storage}
\index{Precision selection}

Some of the real arrays used in \siesta\ are by default
single-precision, to save memory. This applies to the array that holds
the values of the basis orbitals on the real-space grid, to the
historical data sets in Broyden mixing, and to the arrays used in the
O(N) routines. Note that the grid functions (charge densities,
potentials, etc.) have been double-precision by default since mid
January 2010.

The following preprocessing symbols control the precision selection at
compile time:

\begin{itemize}

  \item Add \texttt{-DGRID\_SP} to the \texttt{DEFS} variable in
  \file{arch.make} to use single-precision for all the grid
  magnitudes, including the orbitals array, charge densities, and
  potentials.  This will cause some numerical differences with
  respect to the default, but only a negligible reduction in memory
  consumption, since the orbitals array, which is single-precision by
  default, is the main user of memory on the grid. This setting
  recovers the default behavior of versions prior to
  4.0.\index{Grid precision}

  \item Add \texttt{-DGRID\_DP} to the \texttt{DEFS} variable in
  \file{arch.make} to use double-precision for all the grid
  magnitudes, including the orbitals array. This will significantly
  increase the memory used for large problems, with negligible
  differences in accuracy.


  \item Add \texttt{-DBROYDEN\_DP} to the \texttt{DEFS} variable in
  \file{arch.make} to use double-precision arrays for the Broyden
  historical data sets. (Remember that the Broyden mixing for SCF
  convergence acceleration is an experimental feature.)\index{Broyden
      mixing}

  \item Add \texttt{-DON\_DP} to the \texttt{DEFS} variable in
  \file{arch.make} to use double-precision for all the arrays in the
  O(N) routines.

\end{itemize}
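For example, to build with double-precision grid magnitudes and
double-precision O(N) arrays, the corresponding line in
\file{arch.make} might read (keeping whatever other symbols your
existing file already defines):

\begin{verbatim}
DEFS = -DGRID_DP -DON_DP
\end{verbatim}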

\newpage
\section{APPENDIX: Data structures and reference counting}
\index{Reference counting}
\index{Data Structures}

To implement some of its newer features (e.g. charge mixing
and DM extrapolation), \siesta\ uses flexible data structures. These
are defined and handled through a combination and extension of ideas
already present in the Fortran community:
\begin{itemize}
\item Simple templating using the ``include file'' mechanism, as for example in
  the FLIBS project led by Arjen Markus
  (\url{http://flibs.sourceforge.net}).
\item The classic reference-counting mechanism to avoid memory leaks, as
  implemented in the PyF95++ project
  (\url{http://blockit.sourceforge.net}).
\end{itemize}

Reference counting makes it much simpler to store data in container
objects. For example, a circular stack is used in the charge-mixing
module. A number of future enhancements depend on this paradigm.
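The idea can be illustrated schematically (in Python here, rather than
the Fortran templates actually used by \siesta): each data object
carries an explicit reference count, and a container increments it on
insertion and decrements it on removal, so the payload is released
exactly when the last holder lets go. All names below are
illustrative, not \siesta\ code:

```python
# Schematic illustration of reference-counted container storage.
# Not SIESTA code: the real mechanism is Fortran, template-based via
# include files, in the spirit of the PyF95++ project cited above.

class RefData:
    """A payload with an explicit reference count."""
    def __init__(self, payload):
        self.payload = payload
        self.refcount = 0
        self.alive = True

    def retain(self):
        self.refcount += 1

    def release(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.alive = False   # stand-in for deallocating the payload

class CircularStack:
    """Fixed-depth history: pushing onto a full stack drops the oldest item."""
    def __init__(self, depth):
        self.depth = depth
        self.items = []

    def push(self, item):
        item.retain()
        if len(self.items) == self.depth:
            self.items.pop(0).release()   # oldest entry loses its reference
        self.items.append(item)

a, b, c = (RefData(x) for x in ("rho1", "rho2", "rho3"))
hist = CircularStack(depth=2)
for item in (a, b, c):
    hist.push(item)

# 'a' was displaced when 'c' was pushed; with no holders left, it is freed.
print(a.alive, b.alive, c.alive)  # False True True
```

The point of the counting is that the stack need not know whether
anyone else still holds a displaced item; deallocation happens only
when the count reaches zero.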


\clearpage
\addcontentsline{toc}{section}{Bibliography}
\bibliographystyle{plainnat}
\bibliography{siesta}

% Indices
\clearpage
\addcontentsline{toc}{section}{Index}
\printindex

\printindex[sfiles]
\printindex[sfdf]


\end{document}




%%% Local Variables:
%%% mode: latex
%%% ispell-local-dictionary: "american"
%%% fill-column: 70
%%% TeX-master: t
%%% End:
