
% --------------------------------------------------------------------
%
%                            PART 1
%
%   This introductory PETSc information is included in
%   both manual.tex and intro.tex.
%
% --------------------------------------------------------------------

\label{sec_gettingstarted}

The Portable, Extensible Toolkit for Scientific Computation (PETSc)
has successfully demonstrated that the use of modern programming
paradigms can ease the development of large-scale scientific
application codes in Fortran, C, C++, and Python.  Begun over 20 years ago,
the software has evolved into a powerful set of tools for the
numerical solution of partial differential equations and related problems
on high-performance computers. 

PETSc consists of a variety of libraries (similar to classes in C++),
which are discussed in detail in Parts II and III of the users manual.
Each library manipulates a particular family of objects (for instance,
vectors) and the operations one would like to perform on the objects.
The objects and operations in PETSc are derived from our long
experiences with scientific computation. Some of the PETSc modules deal with
\begin{tightitemize}
  \item index sets (\lstinline{IS}), including permutations, for indexing into vectors, renumbering, etc.;
  \item vectors (\lstinline{Vec});
  \item matrices (\lstinline{Mat}) (generally sparse);
  \item managing interactions between mesh data structures and vectors and matrices (\lstinline{DM});
  \item over thirty Krylov subspace methods (\lstinline{KSP});
  \item dozens of preconditioners, including multigrid, block solvers, and sparse direct solvers (\lstinline{PC});
  \item nonlinear solvers (\lstinline{SNES}); and
  \item timesteppers for solving time-dependent (nonlinear) PDEs, including support for differential algebraic equations and the computation of adjoints (sensitivities/gradients of the solutions) (\lstinline{TS}).
\end{tightitemize}
Each library consists of an abstract interface
(simply a set of calling sequences) and one or more implementations
using particular data structures. Thus, PETSc provides clean and
effective codes for the various phases of solving PDEs, with a uniform
approach for each class of problem.  This design
enables easy comparison and use of different algorithms (for example,
to experiment with different Krylov subspace methods, preconditioners,
or truncated Newton methods).
Hence, PETSc provides a rich environment for modeling scientific
applications as well as for rapid algorithm design and prototyping.

The libraries enable easy customization and extension of both algorithms
and implementations.  This approach promotes code reuse and
flexibility, and separates the issues of parallelism from the choice
of algorithms.  The PETSc infrastructure creates a
foundation for building large-scale applications.

It is useful to consider the interrelationships among different
pieces of PETSc.  Figure \ref{fig_library} is a diagram of some
of these pieces.
The figure illustrates the library's hierarchical organization,
which enables users to employ the level of abstraction that is most
appropriate for a particular problem.

\begin{figure}[hbt]
\centering
\begin{tikzpicture}[xscale=0.7,yscale=0.8] 
  \tikzstyle{interface}=[fill=black!5,font=\sffamily\bfseries]
  \tikzstyle{implem}=[font=\sffamily\small,text badly centered]
  \tikzstyle{ext}=[font=\sffamily\small]
  \tikzstyle{every node}=[transform shape]
  \def\levsep{1.85} % separation of levels
  \def\spacer{0.5}
  \def\block{5.25}
  \def\hint{0.6} % height interface
  \def\himp{1} % height implementation
  \draw[thick,->](-\spacer,0) -- (-\spacer,6*\levsep-\spacer) node[rotate=90,anchor=north,xshift=-150,yshift=20] {Increasing Level of Abstraction};
  \draw[ext] (0                 ,-\levsep) rectangle node {BLAS/LAPACK} +(\block,\hint);
  \draw[ext] (\block+\spacer    ,-\levsep) rectangle node {MPI}         +(\block,\hint);
  \draw[ext] (2*\block+2*\spacer,-\levsep) rectangle node {\dots}         +(\block,\hint);
  \draw[thick] (0,-\spacer) -- (3*\block+2*\spacer,-\spacer);
  \draw[interface] (0,\himp) rectangle node {Vec (Vectors)} +(2*\block+\spacer,\hint);
  \draw[implem] (0,0) rectangle node {Standard} ++(0.5*\block-0.25*\spacer,1) ++(0,-1)
                      rectangle node {CUDA} ++(0.5*\block-0.25*\spacer,1) ++(0,-1)
                      rectangle node {CUSP} ++(0.5*\block-0.25*\spacer,1) ++(0,-1)
                      rectangle node {ViennaCL} ++(0.5*\block-0.25*\spacer,1) ++(0,-1)
                      rectangle node {\dots} ++(2*\spacer,1);
  \draw[interface] (2*\block+2*\spacer,\himp) rectangle node {IS (Index Sets)} +(\block,\hint);
  \draw[implem] (2*\block+2*\spacer,0) rectangle node {General} ++(0.33*\block,1) ++(0,-1) 
                rectangle node {Block} ++(0.33*\block,1) ++(0,-1)
                rectangle node {Stride} ++(0.34*\block,1);
  \draw[interface] (0,\levsep+\himp) rectangle node {Mat (Operators)} +(3*\block+2*\spacer,\hint);
  \draw[implem] (0,\levsep) rectangle node[text width=2.5cm] {Compressed Sparse Row} ++(0.5*\block,1) ++(0,-1) 
                rectangle node[text width=1.3cm] {Block CSR} ++(0.3*\block,1) ++(0,-1) 
                rectangle node[text width=2.3cm] {Symmetric Block CSR} ++(0.5*\block,1) ++(0,-1) 
                rectangle node {Dense} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {CUSPARSE} ++(0.4*\block,1) ++(0,-1) 
                rectangle node {CUSP} ++(0.3*\block,1) ++(0,-1) 
                rectangle node {ViennaCL} ++(0.3*\block,1) ++(0,-1) 
                rectangle node {FFT} ++(0.2*\block,1) ++(0,-1) 
                rectangle node {Shell} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {\dots} ++(2*\spacer,1) ++(0,-1);
  \draw[interface] (0,2*\levsep+\himp) rectangle node {PC (Preconditioners)} +(3*\block+2*\spacer,\hint);
  \draw[implem] (0,2*\levsep) rectangle node[text width=1.7cm] {Additive Schwarz} ++(0.4*\block,1) ++(0,-1) 
                rectangle node[text width=1.7cm] {Block Jacobi} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {Jacobi} ++(0.3*\block,1) ++(0,-1) 
                rectangle node {ICC} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {ILU} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {LU} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {SOR} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {MG} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {AMG} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {BDDC} ++(0.3*\block,1) ++(0,-1) 
                rectangle node {Shell} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {\dots} ++(2*\spacer,1);
  \draw[interface] (0,3*\levsep+\himp) rectangle node {KSP (Krylov Subspace Methods)} +(3*\block+2*\spacer,\hint);
  \draw[implem] (0,3*\levsep) rectangle node {GMRES} ++(0.3*\block,1) ++(0,-1) 
                rectangle node {Richardson} ++(0.4*\block,1) ++(0,-1) 
                rectangle node {CG} ++(0.2*\block,1) ++(0,-1) 
                rectangle node {CGS} ++(0.2*\block,1) ++(0,-1) 
                rectangle node {Bi-CGStab} ++(0.35*\block,1) ++(0,-1) 
                rectangle node {TFQMR} ++(0.3*\block,1) ++(0,-1) 
                rectangle node {MINRES} ++(0.3*\block,1) ++(0,-1) 
                rectangle node {GCR} ++(0.25*\block,1) ++(0,-1) 
                rectangle node {Chebyshev} ++(0.35*\block,1) ++(0,-1) 
                rectangle node[text width=1.8cm] {Pipelined CG} ++(0.35*\block,1) ++(0,-1) 
                rectangle node {\dots} ++(2*\spacer,1);
  \draw[interface] (0,4*\levsep+\himp) rectangle node {SNES (Nonlinear Solvers)} +(2*\block+\spacer,\hint);
  \draw[implem] (0,4*\levsep) rectangle node[text width=1.8cm] {Newton Line Search} ++(0.35*\block,1) ++(0,-1) 
                rectangle node[text width=2cm] {Newton Trust Region} ++(0.45*\block,1) ++(0,-1) 
                rectangle node {FAS} ++(0.2*\block,1) ++(0,-1) 
                rectangle node {NGMRES} ++(0.35*\block,1) ++(0,-1) 
                rectangle node {NASM} ++(0.35*\block-\spacer,1) ++(0,-1) 
                rectangle node {ASPIN} ++(0.3*\block,1) ++(0,-1) 
                rectangle node {\dots} ++(2*\spacer,1);
  \draw[interface] (2*\block+2*\spacer,4*\levsep+\himp) rectangle node {TAO (Optimization)} +(\block,\hint);
  \draw[implem] (2*\block+2*\spacer,4*\levsep) rectangle node[text width=1.2cm] {Newton} ++(0.5*\block-\spacer,1) ++(0,-1) 
                rectangle node[text width=1.4cm] {Levenberg-Marquardt} ++(0.5*\block-\spacer,1) ++(0,-1)
                rectangle node {\dots} ++(2*\spacer,1);
  \draw[interface] (0,5*\levsep+\himp) rectangle node {TS (Time Steppers)} +(2*\block+\spacer,\hint);
  \draw[implem] (0,5*\levsep) rectangle node {Euler} ++(0.2*\block,1) ++(0,-1) 
                rectangle node[text width=1.5cm] {Backward Euler} ++(0.35*\block,1) ++(0,-1) 
                rectangle node[text width=1.8cm] {RK} ++(0.2*\block,1) ++(0,-1) 
                rectangle node[text width=1.8cm] {BDF} ++(0.2*\block,1) ++(0,-1) 
                rectangle node[text width=1.8cm] {SSP} ++(0.2*\block,1) ++(0,-1) 
                rectangle node[text width=1.8cm] {ARKIMEX} ++(0.45*\block-\spacer,1) ++(0,-1) 
                rectangle node[text width=1.8cm] {Rosenbrock-W} ++(0.4*\block,1) ++(0,-1) 
                rectangle node {\dots} ++(2*\spacer,1);
  \draw[interface] (2*\block+2*\spacer,5*\levsep+\himp) rectangle node {DM (Domain Management)} +(\block,\hint);
  \draw[implem] (2*\block+2*\spacer,5*\levsep) rectangle node[text width=1.7cm] {Distributed Array} ++(0.5*\block-\spacer,1) ++(0,-1) 
                rectangle node[text width=1.7cm] {Plex (Unstructured)} ++(0.5*\block-\spacer,1) ++(0,-1)
                rectangle node {\dots} ++(2*\spacer,1);
  \draw[thick] (0,6*\levsep) -- (3*\block+2*\spacer,6*\levsep);
  \node[above,text centered,font=\sffamily\bfseries\Large] at (1.5*\block+1.5*\spacer,6*\levsep) {PETSc};
  \draw[ext] (0,6*\levsep+2*\spacer) rectangle node {Application Codes} +(\block,\hint);
  \draw[ext] (\block+\spacer,6*\levsep+2*\spacer) rectangle node {Higher-Level Libraries} +(\block,\hint);
  \draw[ext] (2*\block+2*\spacer,6*\levsep+2*\spacer) rectangle node {\dots} +(\block,\hint);
\end{tikzpicture}
\caption{\label{fig_library}Numerical libraries of PETSc}
\end{figure}


\section{Suggested Reading}

The manual is
divided into three parts:
\begin{tightitemize}
\item Part I - Introduction to PETSc
\item Part II - Programming with PETSc
\item Part III - Additional Information
\end{tightitemize}

Part I describes
the basic procedure for using the PETSc library and presents two
simple examples of solving linear systems with PETSc.  This section
conveys the typical style used throughout the library and enables the
application programmer to begin using the software immediately.
Part I is also distributed separately for individuals interested in an
overview of the PETSc software, excluding the details of library usage.
Readers of this separate distribution of Part I should note that all
references within the text to particular chapters and sections
indicate locations in the complete users manual.

Part II explains in detail the use of the various PETSc libraries,
such as vectors, matrices, index sets, linear and nonlinear
solvers, and graphics.  Part III describes a variety of useful
information, including profiling, the options database, viewers, error
handling, makefiles, and some details of
PETSc design.

\nocite{efficient}

PETSc has evolved to become quite a comprehensive package, and therefore the
{\em PETSc Users Manual} can be rather intimidating for new users. We
recommend that one initially read the entire document before proceeding with
serious use of PETSc, but bear in mind that PETSc can be used efficiently
before one understands all of the material presented here. Furthermore, the
definitive reference for any PETSc function is always the online manual page (``manpage'').

\medskip \medskip

Within the PETSc distribution, the directory \trl{${PETSC_DIR}/docs}
contains all documentation.
Manual pages for all PETSc functions can be
accessed at \href{https://www.mcs.anl.gov/petsc/documentation}{www.mcs.anl.gov/petsc/documentation}.
The manual pages
provide hyperlinked indices (organized by
both concept and routine name) to the tutorial examples and enable
easy movement among related topics.

Emacs and Vi/Vim users may find the
\trl{etags}/\trl{ctags}  option to be extremely useful for exploring the PETSc
source code.  Details of this feature are provided in
Section~\ref{sec_emacs}.

The file \trl{manual.pdf} contains
the complete {\em PETSc Users Manual} in the portable document format (PDF),
while \trl{intro.pdf}
includes only the introductory segment, Part I.  \sindex{installing PETSc}
The complete PETSc distribution, users
manual, manual pages, and additional information are also available via
the PETSc home page at
\href{http://www.mcs.anl.gov/petsc}{www.mcs.anl.gov/petsc}.
The PETSc home page also
contains details regarding installation, new features and changes in recent
versions of PETSc, machines that we currently support, and a FAQ list for frequently asked questions.

\medskip\medskip

\noindent{\bf Note to Fortran Programmers}: In most of the
manual, the examples and calling sequences are given for the C/C++
family of programming languages.  We follow this convention because we
recommend that PETSc applications be coded in C or C++.
However, pure Fortran programmers can use most of the
functionality of PETSc from Fortran, with only minor differences in
the user interface.  Chapter \ref{ch_fortran} provides a discussion of the
differences between using PETSc from Fortran and C, as well as several
complete Fortran examples.  This chapter also introduces some
routines that support direct use of Fortran90 pointers. \\

\noindent{\bf Note to Python Programmers}: To program with PETSc in Python you need to install the PETSc4py package developed by
Lisandro Dalcin. This can be done by configuring PETSc with the option \trl{--download-petsc4py}. See the PETSc installation guide
for more details:\\ \href{http://www.mcs.anl.gov/petsc/documentation/installation.html}{http://www.mcs.anl.gov/petsc/documentation/installation.html}.

%-----------------------------------------------------------------------------
\section{Running PETSc Programs}
\label{sec_running}

Before using PETSc, the user must first set the environment variable
\trl{PETSC_DIR}, \findex{PETSC_DIR} indicating the full path of the PETSc home
directory.  For example, under the UNIX bash shell a command of the form
\begin{bashlisting}
export PETSC_DIR=$HOME/petsc
\end{bashlisting}
 can be placed in the user's \trl{.bashrc} or other startup file.  In addition, the user may need to set the environment
variable {\trl{PETSC_ARCH}} to specify a particular configuration of the PETSc libraries. Note that
{\trl{PETSC_ARCH}} is just a name selected by the installer to refer to
the libraries compiled for a particular set of compiler options and
machine type. Using different values of {\trl{PETSC_ARCH}} allows one to switch between
several different sets (say, debug and optimized) of libraries easily. To determine whether you need to set {\trl{PETSC_ARCH}},
look in the directory indicated by \trl{PETSC_DIR}: if there are subdirectories beginning with \trl{arch}, then those subdirectories give the
possible values for {\trl{PETSC_ARCH}}.

All PETSc programs use the MPI (Message Passing Interface) standard
for message-passing communication \cite{MPI-final}\findex{MPI}.  Thus, to execute
PETSc programs, users must know the procedure for beginning MPI jobs
on their selected computer system(s).  For instance, when using the
MPICH implementation of MPI \cite{mpich-web-page} (and many other implementations), the following
command initiates a program that uses eight processors:
\findex{mpiexec} \sindex{running PETSc programs}
\begin{bashlisting}
mpiexec -n 8 ./petsc_program_name petsc_options
\end{bashlisting}

PETSc also comes with a script
that uses the information set in \trl{${PETSC_DIR}/${PETSC_ARCH}/lib/petsc/conf/petscvariables} to
automatically use the correct \trl{mpiexec} for your configuration.
\begin{bashlisting}
${PETSC_DIR}/lib/petsc/bin/petscmpiexec -n 8 ./petsc_program_name petsc_options
\end{bashlisting}

All PETSc-compliant programs support the use of the \trl{-h}
\findex{-h} or \trl{-help} option as well as the \trl{-v} \findex{-v}
or \trl{-version} option.

Certain options are supported by all PETSc programs.  We list a few
particularly useful ones below; a complete list can be obtained by
running any PETSc program with the option \trl{-help}.
\begin{tightitemize}
\item \trl{-log_view} - summarize the program's performance
\item \trl{-fp_trap} - stop on floating-point exceptions, \findex{-fp_trap}
      for example, divide by zero
\item \trl{-malloc_dump} - enable memory tracing; dump list of unfreed memory
      at conclusion \findex{-malloc_dump} of the run
\item \trl{-malloc_debug} - enable memory tracing (by default this is
      activated for debugging versions)
\item \trl{-start_in_debugger} \trl{[noxterm,gdb,dbx,xxgdb]} \trl{[-display name]}
     - start all processes in debugger \findex{-start_in_debugger} \sindex{debugger}
\item \trl{-on_error_attach_debugger}  \trl{[noxterm,gdb,dbx,xxgdb]}
      \trl{[-display name]} - \findex{-on_error_attach_debugger}start debugger only on encountering an error
\item \trl{-info} - print a great deal of information about what the program is doing as it runs
\item \trl{-options_file <filename>} - read options from a file
\end{tightitemize}
See Section \ref{sec_debugging} for more information on debugging PETSc programs.
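These options can be freely combined on a single command line; for example (the program name here is just a stand-in for any PETSc application):
\begin{bashlisting}
mpiexec -n 4 ./petsc_program_name -log_view -malloc_dump -options_file myoptions.txt
\end{bashlisting}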

%-----------------------------------------------------------------------------

\section{Running PETSc Tests}
\label{sec_runningtests}

\subsection{Quick start with the tests}

For testing builds, the general invocation from the \trl{PETSC_DIR} is:
\begin{bashlisting}
make -f gmakefile.test test PETSC_ARCH=<PETSC_ARCH>
\end{bashlisting}

For testing installs, the general invocation from the installation (prefix)
directory is:
\begin{bashlisting}
make -f share/petsc/examples/gmakefile.test test
\end{bashlisting}

For a full list of options, use
\begin{bashlisting}
make -f gmakefile.test help
\end{bashlisting}

\subsection{Understanding test output and more information}

As discussed in Section~\ref{sec_running}, 
users should set \trl{PETSC_DIR} and \trl{PETSC_ARCH} before running
the tests, or can provide them on the command line as below.

To check if the libraries are working, do:
\begin{bashlisting}
make PETSC_DIR=<PETSC_DIR> PETSC_ARCH=<PETSC_ARCH> test
\end{bashlisting}

A larger set of legacy tests can be run with 
\begin{bashlisting}
make PETSC_DIR=<PETSC_DIR> PETSC_ARCH=<PETSC_ARCH> alltests
\end{bashlisting}

The new testing system is available by running
\begin{bashlisting}
make -f gmakefile.test test PETSC_ARCH=<PETSC_ARCH>
\end{bashlisting}

The test reporting system classifies the tests according to the Test Anything
Protocol (TAP)\footnote{See \url{https://testanything.org/tap-specification.html}}.
In brief, the categories are
\begin{tightitemize}
  \item \lstinline{ok}
  \subitem The test passed.
\item \lstinline{not ok}
  \subitem The test failed.
\item \lstinline{not ok #SKIP}
  \subitem The test was skipped, usually because build requirements were not
  met (for example, the test requires an external solver library that PETSc
  was not configured with).
\item \lstinline{ok #TODO}
  \subitem The test is under development by the developers.
\end{tightitemize}

The tests are a series of shell scripts, generated from information
contained within the test source file, that are invoked by the makefile
system.  The tests are run in
\trl{${PETSC_DIR}/${PETSC_ARCH}/tests},
which mirrors the directory structure of the source tree.
For testing installs, the default location is
\trl{${PREFIX_DIR}/tests}, but this can be changed with the \trl{TESTDIR} variable
(see Section~\ref{sec_directory}).
Each test has a label that denotes where it can be found within the source tree.
For example, test \trl{vec_vec_tutorials-ex6},
which can be run e.g. with
\begin{bashlisting}
make -f gmakefile.test test search='vec_vec_tutorials-ex6'
\end{bashlisting}
(see the discussion of \trl{search} below), denotes the shell
script:
\begin{bashlisting}
${PETSC_DIR}/${PETSC_ARCH}/src/vec/vec/examples/tutorials/runex6.sh
\end{bashlisting}
These shell scripts can be run independently in those directories, and
take arguments to show the commands run, change arguments, etc.  Use the
\trl{-h} option to the shell script to see these options.

Often, you want to run only a subset of tests.  Our makefiles use
\trl{gmake}'s wildcard syntax, in which \trl{%} is a wildcard
character; a pattern is passed in using the \trl{search} argument.
Because two wildcard characters cannot be used in a single search, the
\trl{searchin} argument is used to provide the equivalent of a
\trl{%pattern%} search.
Examples have default arguments, and we often wish
to test them with various other arguments; the \trl{argsearch}
argument handles these searches.  Like \trl{searchin}, it does not use
wildcards, but rather matches whether the string appears within the arguments.

Some examples are:
\begin{bashlisting}
make -f gmakefile.test test search='ts%'                      # Run all TS examples
make -f gmakefile.test test searchin='tutorials'              # Run all tutorials
make -f gmakefile.test test search='ts%' searchin='tutorials' # Run all TS tutorials
make -f gmakefile.test test argsearch='cuda'                  # Run examples with cuda in arguments
\end{bashlisting}

It is useful before invoking the tests to see what targets will
be run.  The \lstinline{print-test} target helps with this:
\begin{bashlisting}
make -f gmakefile.test print-test argsearch='cuda'
\end{bashlisting}
To see all of the test targets that would be run, use:
\begin{bashlisting}
make -f gmakefile.test print-test
\end{bashlisting}

For testing in install directories, some examples are:
\begin{bashlisting}
cd ${PREFIX_DIR}; make -f gmakefile.test test TESTDIR=mytests
\end{bashlisting}
or
\begin{bashlisting}
cd ${PREFIX_DIR}/share/petsc/examples; make -f gmakefile.test test TESTDIR=$PWD/mytests
\end{bashlisting}
where the latter form is needed to have the tests run in the local directory instead of
\trl{$PREFIX_DIR}.

To learn more about the test system details, one can look at the
\href{http://www.mcs.anl.gov/petsc/petsc-current/docs/developers.pdf}{Developer's Guide}.

%-----------------------------------------------------------------------------
\section{Writing PETSc Programs}
\label{sec_writing}

Most PETSc programs begin with a call to
\begin{lstlisting}
PetscInitialize(int *argc,char ***argv,char *file,char *help);
\end{lstlisting}
which initializes PETSc and MPI.  The arguments \lstinline{argc} and
\lstinline{argv} are the command line arguments delivered in all C and C++
programs. \sindex{command line arguments} The argument \lstinline{file}
optionally indicates an alternative name for the PETSc options file,
\trl{.petscrc}, which resides by default in the user's home directory.
Section \ref{sec_options} provides details regarding
this file and the PETSc options database, which can be used for runtime
customization. The final argument, \lstinline{help}, is an optional
character string that will be printed if the program is run with the
\trl{-help} option.  In Fortran the initialization command has the form
\begin{lstlisting}[language=fortran]
call PetscInitialize(character(*) file,integer ierr)
\end{lstlisting}
\lstinline{PetscInitialize()} automatically calls \lstinline{MPI_Init()} if MPI
has not been previously initialized. In certain \findex{MPI_Init()}
circumstances in which MPI needs to be initialized directly (or is
initialized by some other library), the user can first call
\lstinline{MPI_Init()} (or have the other library do it), and then call
\lstinline{PetscInitialize()}.
By default, \lstinline{PetscInitialize()} sets the PETSc ``world''
communicator, given by \lstinline{PETSC_COMM_WORLD}, to \lstinline{MPI_COMM_WORLD}.

For those not familiar with MPI, a {\em communicator} is a way of
indicating a collection of processes that will be involved together
in a calculation or communication. Communicators have the variable type
\lstinline{MPI_Comm}. In most cases users can employ the communicator
\lstinline{PETSC_COMM_WORLD} to indicate all processes in a given run and
\lstinline{PETSC_COMM_SELF} to indicate a single process.
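As a small sketch (assuming \lstinline{PetscInitialize()} has been called), the two communicators can be contrasted with \lstinline{PetscPrintf()}, which prints from only the first process of the given communicator:
\begin{lstlisting}
PetscPrintf(PETSC_COMM_WORLD,"printed once per run\n");    /* first process of the world communicator */
PetscPrintf(PETSC_COMM_SELF,"printed by every process\n"); /* each process is first on its own communicator */
\end{lstlisting}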

MPI provides routines
for generating new communicators consisting of subsets of processors,
though most users rarely need to use these. The book {\em Using MPI},
by Gropp, Lusk, and Skjellum \cite{using-mpi}, provides an excellent
introduction to the concepts in MPI. See also the MPI homepage
\href{http://www.mcs.anl.gov/mpi/}{http://www.mcs.anl.gov/mpi/}.
Note that PETSc users need not program much message passing directly
with MPI, but they must be familiar with the basic concepts of message
passing and distributed memory computing.

All PETSc routines return a \lstinline{PetscErrorCode}, which is an integer indicating whether an error has
occurred during the call.  The error code is set to be nonzero if an
error has been detected; otherwise, it is zero.  For the C/C++
interface, the error variable is the routine's return value, while for
the Fortran version, each PETSc routine has as its final argument an
integer error variable.  Error tracebacks are discussed in the following
section.

All PETSc programs should call \lstinline{PetscFinalize()}
as their final (or nearly final) statement, as given below in the C/C++
and Fortran formats, respectively:
\begin{lstlisting}
PetscFinalize();
call PetscFinalize(ierr)
\end{lstlisting}
This routine handles options to be called at the conclusion of
the program, and calls \lstinline{MPI_Finalize()} \findex{MPI_Finalize()}
if \lstinline{PetscInitialize()}
began MPI. If MPI was initialized externally to PETSc (by either
the user or another software package), the user is
responsible for calling \lstinline{MPI_Finalize()}.

\section{Simple PETSc Examples}

\label{sec_simple}

To help the user start using PETSc immediately, we begin with a simple
uniprocessor example in Figure~\ref{fig_example1} that solves the
one-dimensional Laplacian problem with finite differences.  This
sequential code, which can be found in
\trl{$PETSC_DIR/src/ksp/ksp/examples/tutorials/ex1.c},
illustrates the solution of a linear system with \lstinline{KSP}, the
interface to the preconditioners, Krylov subspace methods, and direct
linear solvers of PETSc.  Following the code we highlight a few of the most important
parts of this example.

\begin{figure}[H]
{
  \input{listing_kspex1tmp.tex}
}
\caption{Example of Uniprocessor PETSc Code}
\label{fig_example1}
\end{figure}

\subsection*{Include Files}

The C/C++ include files for PETSc should be used via statements such as
\begin{lstlisting}
#include <petscksp.h>
\end{lstlisting}
where \lstinline{petscksp.h} is the include file for the linear solver library.
Each PETSc program must specify an
include file that corresponds to the highest level PETSc objects
needed within the program; all of the required lower level include
files are automatically included within the higher level files.  For
example, \trl{petscksp.h} includes \trl{petscmat.h} (matrices),
\trl{petscvec.h} (vectors), and \trl{petscsys.h} (base PETSc file).
The PETSc include files are located in the directory
\trl{${PETSC_DIR}/include}.  See Section \ref{sec_fortran_includes}
for a discussion of PETSc include files in Fortran programs.

\subsection*{The Options Database}

As shown in Figure~\ref{fig_example1}, the user can input control data
at run time using the options database. In this example the command
\lstinline{PetscOptionsGetInt(NULL,NULL,"-n",&n,&flg);} checks whether the user has
provided a command line option to set the value of \lstinline{n}, the
problem dimension.  If so, the variable \lstinline{n} is set accordingly;
otherwise, \lstinline{n} remains unchanged. A complete description of the
options database may be found in Section \ref{sec_options}.

\subsection*{Vectors}
\label{sec_vecintro}

One creates a new parallel or
sequential vector, \lstinline{x}, of global dimension \lstinline{M} with the
commands  \sindex{vectors}
\begin{lstlisting}
VecCreate(MPI_Comm comm,Vec *x);
VecSetSizes(Vec x, PetscInt m, PetscInt M);
\end{lstlisting}
where \lstinline{comm} denotes the MPI communicator and \lstinline{m} is the optional local size
which may be \lstinline{PETSC_DECIDE}. The type of storage
for the vector may be set with either calls to
\lstinline{VecSetType()} or \lstinline{VecSetFromOptions()}.
Additional vectors of the same type can be formed with
\begin{lstlisting}
VecDuplicate(Vec old,Vec *new);
\end{lstlisting}
The commands
\begin{lstlisting}
VecSet(Vec x,PetscScalar value);
VecSetValues(Vec x,PetscInt n,PetscInt *indices,PetscScalar *values,INSERT_VALUES);
\end{lstlisting}
respectively set all the components of a vector to a particular scalar
value and assign a different value to each component.  More
detailed information about PETSc vectors, including their basic
operations, scattering/gathering, index sets, and distributed arrays, is
discussed in Chapter~\ref{chapter_vectors}.
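
Putting these calls together, a typical vector life cycle looks roughly like the following sketch (error checking omitted for brevity); note that after \lstinline{VecSetValues()} a vector must be assembled with \lstinline{VecAssemblyBegin()}/\lstinline{VecAssemblyEnd()} before it is used:
\begin{lstlisting}
Vec         x;
PetscInt    ix[2]   = {0,1};
PetscScalar vals[2] = {1.0,2.0};

VecCreate(PETSC_COMM_WORLD,&x);
VecSetSizes(x,PETSC_DECIDE,100);         /* global size 100 */
VecSetFromOptions(x);                    /* type from the options database */
VecSetValues(x,2,ix,vals,INSERT_VALUES); /* set two entries */
VecAssemblyBegin(x);                     /* communicate any off-process values */
VecAssemblyEnd(x);
VecDestroy(&x);
\end{lstlisting}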

\sindex{complex numbers}
Note the use of the PETSc variable type \lstinline{PetscScalar} in this example.
The \lstinline{PetscScalar} is simply defined to be \lstinline{double} in C/C++
(or correspondingly \lstinline{double precision} in Fortran) for versions of
PETSc that have {\em not} been compiled for use with complex numbers.
The \lstinline{PetscScalar} data type enables
identical code to be used when the PETSc libraries have been compiled
for use with complex numbers.  Section~\ref{sec_complex} discusses the
use of complex numbers in PETSc programs.

\subsection*{Matrices}
\label{sec_matintro}
Usage of PETSc matrices and vectors is similar. \sindex{matrices}
The user can create a new parallel or sequential matrix, \lstinline{A}, which
has \lstinline{M} global rows and \lstinline{N} global columns, with the routines
\begin{lstlisting}
MatCreate(MPI_Comm comm,Mat *A);
MatSetSizes(Mat A,PETSC_DECIDE,PETSC_DECIDE,PetscInt M,PetscInt N);
\end{lstlisting}
where the matrix format can be specified at runtime via the options database.  The user could
alternatively specify each process's number of local rows and columns
using \trl{m} and \trl{n}:
\begin{lstlisting}
MatSetSizes(Mat A,PetscInt m,PetscInt n,PETSC_DETERMINE,PETSC_DETERMINE);
\end{lstlisting}
Generally one then sets the ``type'' of the matrix, with, for example,
\begin{lstlisting}
MatSetType(A,MATAIJ);
\end{lstlisting}
This causes the matrix \trl{A} to use the compressed sparse row storage format to store the
matrix entries. See \lstinline{MatType} for a list of all matrix types.
Values can then be set with the command
\begin{lstlisting}
MatSetValues(Mat A,PetscInt m,PetscInt *im,PetscInt n,PetscInt *in,PetscScalar *values,INSERT_VALUES);
\end{lstlisting}
After all elements have been inserted into the
matrix, it must be processed with the pair of commands
\begin{lstlisting}
MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);
\end{lstlisting}
Chapter~\ref{chapter_matrices} discusses various matrix formats as
well as the details of some basic matrix manipulation routines.
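
As a standalone illustration of the compressed sparse row idea behind \lstinline{MATAIJ} (plain C; this mirrors the concept, not PETSc's actual internal data structures), a CSR matrix-vector product for a tridiagonal matrix such as the one in the introductory example can be written as:
\begin{lstlisting}
#include <stdio.h>

/* y = A*x for a CSR matrix: rowptr[i]..rowptr[i+1] indexes the nonzeros
   of row i; col[k] and val[k] give their column indices and values. */
static void csr_matvec(int nrows, const int *rowptr, const int *col,
                       const double *val, const double *x, double *y)
{
  for (int i = 0; i < nrows; i++) {
    double sum = 0.0;
    for (int k = rowptr[i]; k < rowptr[i+1]; k++) sum += val[k] * x[col[k]];
    y[i] = sum;
  }
}

int main(void)
{
  /* 3x3 tridiagonal matrix with 2 on the diagonal and -1 off-diagonal */
  int    rowptr[] = {0, 2, 5, 7};
  int    col[]    = {0, 1, 0, 1, 2, 1, 2};
  double val[]    = {2, -1, -1, 2, -1, -1, 2};
  double x[]      = {1, 1, 1};
  double y[3];

  csr_matvec(3, rowptr, col, val, x, y);
  printf("%g %g %g\n", y[0], y[1], y[2]); /* prints: 1 0 1 */
  return 0;
}
\end{lstlisting}
Only the nonzero values and their column indices are stored, which is why sparse formats like this dominate for PDE-derived matrices.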

\subsection*{Linear Solvers}

After creating the matrix and vectors that define a linear system,
\lstinline{Ax} $=$ \lstinline{b}, the user can then use \lstinline{KSP} to solve the system
with the following sequence of commands:
\begin{lstlisting}
KSPCreate(MPI_Comm comm,KSP *ksp); 
KSPSetOperators(KSP ksp,Mat Amat,Mat Pmat);
KSPSetFromOptions(KSP ksp);
KSPSolve(KSP ksp,Vec b,Vec x);
KSPDestroy(KSP ksp);
\end{lstlisting}
The user first creates the \lstinline{KSP} context and sets the operators
associated with the system (the matrix that defines the linear system, \lstinline{Amat}, and the matrix from which the
preconditioner is constructed, \lstinline{Pmat}).  The user then sets various options for
customized solution, solves the linear system, and finally destroys
the \lstinline{KSP} context.  We emphasize the command \lstinline{KSPSetFromOptions()},
which enables the user to customize the linear solution
method at runtime by using the options database, which is discussed in
Section~\ref{sec_options}. Through this database, the user not only
can select an iterative method and preconditioner, but also can prescribe
the convergence tolerance, set various monitoring routines, etc.
(see, e.g., Figure~\ref{fig_exprof}).

Chapter~\ref{ch_ksp} describes in detail the \trl{KSP} package, including
its \lstinline{PC} and \lstinline{KSP} components for preconditioners and Krylov subspace methods.
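To make the division of labor between the iterative method and the preconditioner concrete, the following plain-C sketch (ours, independent of PETSc) implements the simplest such pairing: a Richardson iteration with a Jacobi preconditioner, i.e. $x \leftarrow x + M^{-1}(b - Ax)$ with $M = \mathrm{diag}(A)$. PETSc realizes this same scheme, among far more capable combinations, through its \lstinline{KSP} and \lstinline{PC} objects.

```c
/* Plain-C sketch (not PETSc code) of the idea behind a KSP/PC pair:
 * Richardson iteration x <- x + M^{-1}(b - A x) with the Jacobi
 * preconditioner M = diag(A).  Returns the iteration count on
 * convergence, or -1 if maxit iterations did not suffice. */
#define N 2

static int solve_richardson_jacobi(double A[N][N], const double b[N],
                                   double x[N], double rtol, int maxit)
{
    double bnorm2 = 0.0;
    for (int i = 0; i < N; i++) { x[i] = 0.0; bnorm2 += b[i] * b[i]; }
    for (int it = 0; it < maxit; it++) {
        double r[N], rnorm2 = 0.0;
        for (int i = 0; i < N; i++) {                    /* r = b - A x   */
            r[i] = b[i];
            for (int j = 0; j < N; j++) r[i] -= A[i][j] * x[j];
            rnorm2 += r[i] * r[i];
        }
        if (rnorm2 <= rtol * rtol * bnorm2) return it;   /* converged     */
        for (int i = 0; i < N; i++) x[i] += r[i] / A[i][i]; /* x += M^{-1} r */
    }
    return -1;
}
```

In PETSc, switching from this scheme to, say, GMRES with ILU changes only runtime options, not the calling structure shown above.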

\subsection*{Nonlinear Solvers}
Most PDE problems of interest are inherently nonlinear. PETSc provides
an interface, called \lstinline{SNES}, for tackling nonlinear problems directly. Chapter
\ref{chapter_snes} describes the nonlinear solvers in detail. We recommend
that most PETSc users work directly with \lstinline{SNES}, rather than using PETSc
only for the linear problems within a nonlinear solver.
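\lstinline{SNES} centers on Newton-type methods; the core update can be sketched in plain C for a scalar equation $f(x)=0$ (the sketch and the demo problem are ours, not PETSc code; for systems of equations the division below becomes a linear solve with the Jacobian, which \lstinline{SNES} delegates to \lstinline{KSP}):

```c
/* Scalar Newton iteration x <- x - f(x)/f'(x): a minimal sketch of the
 * algorithm family behind SNES.  Stops when |f(x)| < tol or after maxit
 * iterations. */
static double newton(double (*f)(double), double (*fprime)(double),
                     double x, double tol, int maxit)
{
    for (int it = 0; it < maxit; it++) {
        double fx = f(x);
        if (fx * fx < tol * tol) break;   /* |f(x)| < tol: converged */
        x -= fx / fprime(x);
    }
    return x;
}

/* Demo problem (ours, for illustration): f(x) = x^2 - 2, root sqrt(2). */
static double f_demo(double x)  { return x * x - 2.0; }
static double fp_demo(double x) { return 2.0 * x; }
```

Much of what \lstinline{SNES} adds beyond this kernel, such as line searches, trust regions, and matrix-free Jacobians, exists to make the iteration robust on realistic problems.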

\subsection*{Error Checking}

All PETSc routines return an integer indicating whether an error
has occurred during the call.  The PETSc macro \lstinline{CHKERRQ(ierr)}
checks the value of \lstinline{ierr} and calls the PETSc error handler
upon error detection.  \lstinline{CHKERRQ(ierr)} should be used in all
subroutines to enable a complete error traceback.
In Figure~\ref{fig_traceback} we indicate a
traceback generated by error detection within a sample PETSc
program. The error occurred on line 3618 of the file \trl{
${PETSC_DIR}/src/mat/impls/aij/seq/aij.c} and was caused by trying to allocate too
large an array in memory. The routine was called in the program
\trl{ex3.c} on line 66.  See Section \ref{sec_fortran_errors} for
details regarding error checking when using the PETSc Fortran interface.
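The traceback mechanism can be illustrated with a stripped-down, PETSc-independent version of the pattern (the macro and function names below are ours): each routine returns an error code, and a macro checks it, reports the location, and returns early, so the failure propagates up the call stack, printing one line per level.

```c
#include <stdio.h>

/* Stripped-down illustration of the CHKERRQ pattern (not PETSc's actual
 * macro): check a returned error code, report where it passed through,
 * and propagate it to the caller, producing a traceback. */
#define CHK(ierr) do { int _e = (ierr); if (_e) { \
    fprintf(stderr, "error %d at %s:%d\n", _e, __FILE__, __LINE__); \
    return _e; } } while (0)

static int allocate_workspace(void)
{
    return 55;                  /* pretend failure, e.g. out of memory */
}

static int build_matrix(void)
{
    CHK(allocate_workspace());  /* failure detected and reported here...  */
    return 0;
}

static int run_solver(void)
{
    CHK(build_matrix());        /* ...and again here as it propagates up */
    return 0;
}
```

Because every level checks and re-reports the code, the output lists the full chain of callers, much like the numbered stack entries in the PETSc error message.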

\begin{figure}[H]
  {
    \begin{outputlisting}
 $ mpiexec -n 1 ./ex3 -m 100000
 [0]PETSC ERROR: --------------------- Error Message --------------------------------
 [0]PETSC ERROR: Out of memory. This could be due to allocating
 [0]PETSC ERROR: too large an object or bleeding by not properly
 [0]PETSC ERROR: destroying unneeded objects.
 [0]PETSC ERROR: Memory allocated 11282182704 Memory used by process 7075897344
 [0]PETSC ERROR: Try running with -malloc_dump or -malloc_log for info.
 [0]PETSC ERROR: Memory requested 18446744072169447424
 [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
 [0]PETSC ERROR: Petsc Development GIT revision: v3.7.1-224-g9c9a9c5  GIT Date: 2016-05-18 22:43:00 -0500
 [0]PETSC ERROR: ./ex3 on a arch-darwin-double-debug named Patricks-MacBook-Pro-2.local by patrick Mon Jun 27 18:04:03 2016
 [0]PETSC ERROR: Configure options PETSC_DIR=/Users/patrick/petsc PETSC_ARCH=arch-darwin-double-debug --download-mpich --download-f2cblaslapack --with-cc=clang --with-cxx=clang++ --with-fc=gfortran --with-debugging=1 --with-precision=double --with-scalar-type=real --with-viennacl=0 --download-c2html -download-sowing
 [0]PETSC ERROR: #1 MatSeqAIJSetPreallocation_SeqAIJ() line 3618 in /Users/patrick/petsc/src/mat/impls/aij/seq/aij.c
 [0]PETSC ERROR: #2 PetscTrMallocDefault() line 188 in /Users/patrick/petsc/src/sys/memory/mtr.c
 [0]PETSC ERROR: #3 MatSeqAIJSetPreallocation_SeqAIJ() line 3618 in /Users/patrick/petsc/src/mat/impls/aij/seq/aij.c
 [0]PETSC ERROR: #4 MatSeqAIJSetPreallocation() line 3562 in /Users/patrick/petsc/src/mat/impls/aij/seq/aij.c
 [0]PETSC ERROR: #5 main() line 66 in /Users/patrick/petsc/src/ksp/ksp/examples/tutorials/ex3.c
 [0]PETSC ERROR: PETSc Option Table entries:
 [0]PETSC ERROR: -m 100000
 [0]PETSC ERROR: ----------------End of Error Message ------- send entire error message to petsc-maint@mcs.anl.gov----------
\end{outputlisting}
}
\nobreak
\caption{Example of Error Traceback}
\label{fig_traceback}
\end{figure}

The debug version of the PETSc libraries performs a great deal of checking for
memory corruption (e.g., writing outside of
array bounds). The macro \lstinline{CHKMEMQ} can be called
anywhere in the code to check the current status of the memory for corruption.
By putting several (or many) of these macros into your code, you can usually
track down the small segment of your code in which the corruption has occurred.
One can also use Valgrind to track down memory errors; see the FAQ at
\href{http://www.mcs.anl.gov/petsc/documentation/faq.html}{www.mcs.anl.gov/petsc/documentation/faq.html}.
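The idea behind such checks can be sketched in plain C with a toy guard-value scheme (this is an illustration, not PETSc's implementation; the names are ours): a sentinel is stored just past the end of each allocation, and a check routine verifies it on demand.

```c
#include <stdlib.h>

/* Toy guard-value scheme (not PETSc's implementation) illustrating the
 * idea behind CHKMEMQ-style checks: a sentinel is stored just past the
 * end of the user's data, and check_guard() detects writes beyond the
 * bounds. */
#define GUARD 0xDEADBEEFu

static unsigned *guarded_alloc(size_t n)      /* n slots plus a sentinel  */
{
    unsigned *p = malloc((n + 1) * sizeof(unsigned));
    if (p) p[n] = GUARD;
    return p;
}

static int check_guard(const unsigned *p, size_t n)
{
    return p[n] == GUARD;                     /* 1 = intact, 0 = corrupted */
}
```

Sprinkling such checks through a code narrows any corruption to the region between the last passing and the first failing check, which is exactly how \lstinline{CHKMEMQ} is used in practice.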

\subsection*{Parallel Programming}
Since PETSc uses the message-passing model for
parallel programming and employs MPI for all interprocessor
communication, the user is free to employ MPI routines as needed
throughout an application code.  However, by default the user is
shielded from many of the details of message passing within PETSc,
since these are hidden within parallel objects, such as vectors,
matrices, and solvers.  In addition, PETSc provides tools such as
generalized vector scatters/gathers and distributed arrays to assist
in the management of parallel data.

\sindex{collective operations}
Recall that the user must specify a communicator upon creation of any
PETSc object (such as a vector, matrix, or solver) to indicate the
processors over which the object is to be distributed.  For example,
as mentioned above, some commands for matrix, vector, and linear solver
creation are:
\begin{lstlisting}
MatCreate(MPI_Comm comm,Mat *A);
VecCreate(MPI_Comm comm,Vec *x);
KSPCreate(MPI_Comm comm,KSP *ksp);
\end{lstlisting}
The creation routines are collective over all processors in the
communicator; thus, all processors in the communicator {\em must}
call the creation routine.  In addition, if a sequence of
collective routines is being used, they {\em must} be called
in the same order on each processor.

The next example, given in Figure~\ref{fig_example2}, illustrates the
solution of a linear system in parallel.  This code, corresponding to
\href{http://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex2.c.html}{\trl{$PETSC_DIR/src/ksp/ksp/examples/tutorials/ex2.c}}, handles the
two-dimensional Laplacian discretized with finite differences, where
the linear system is again solved with KSP.  The code performs the
same tasks as the sequential version within Figure~\ref{fig_example1}.
Note that the user interface for initiating the program, creating
vectors and matrices, and solving the linear system is {\em exactly}
the same for the uniprocessor and multiprocessor examples.  The
primary difference between the examples in Figures \ref{fig_example1}
and \ref{fig_example2} is that each processor forms only its local
part of the matrix and vectors in the parallel case.

\begin{figure}[H]
{
  \input{listing_kspex2tmp.tex}
}
\nobreak
  \caption{Example of Multiprocessor PETSc Code}
\label{fig_example2}
\end{figure}

\subsection*{Compiling and Running Programs}

Figure~\ref{fig_exrun} illustrates compiling and running a PETSc program
using MPICH on an OS X laptop.  Note that different machines will have
different compilation commands, as determined by the configuration process.  See Chapter \ref{ch_makefiles}
for a discussion about compiling PETSc programs.
Users who are experiencing difficulties linking PETSc programs should
refer to the FAQ on the PETSc website
\href{http://www.mcs.anl.gov/petsc}{http://www.mcs.anl.gov/petsc} or
given in the file \trl{$PETSC_DIR/docs/faq.html}.

\begin{figure}[H]
{
 \begin{outputlisting}
$ make ex2
/Users/patrick/petsc/arch-darwin-double-debug/bin/mpicc -o ex2.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Qunused-arguments -fvisibility=hidden -g3   -I/Users/patrick/petsc/include -I/Users/patrick/petsc/arch-darwin-double-debug/include -I/opt/X11/include -I/opt/local/include    `pwd`/ex2.c
/Users/patrick/petsc/arch-darwin-double-debug/bin/mpicc -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first -Wl,-multiply_defined,suppress -Wl,-multiply_defined -Wl,suppress -Wl,-commons,use_dylibs -Wl,-search_paths_first    -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Qunused-arguments -fvisibility=hidden -g3  -o ex2 ex2.o  -Wl,-rpath,/Users/patrick/petsc/arch-darwin-double-debug/lib -L/Users/patrick/petsc/arch-darwin-double-debug/lib  -lpetsc -Wl,-rpath,/Users/patrick/petsc/arch-darwin-double-debug/lib -lf2clapack -lf2cblas -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 -lssl -lcrypto -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.0.2/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/7.0.2/lib/darwin -lmpifort -lgfortran -Wl,-rpath,/opt/local/lib/gcc5/gcc/x86_64-apple-darwin14/5.3.0 -L/opt/local/lib/gcc5/gcc/x86_64-apple-darwin14/5.3.0 -Wl,-rpath,/opt/local/lib/gcc5 -L/opt/local/lib/gcc5 -lgfortran -lgcc_ext.10.5 -lquadmath -lm -lclang_rt.osx -lmpicxx -lc++ -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.2/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.2/lib/darwin -lclang_rt.osx -Wl,-rpath,/Users/patrick/petsc/arch-darwin-double-debug/lib -L/Users/patrick/petsc/arch-darwin-double-debug/lib -ldl -lmpi -lpmpi -lSystem -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.2/lib/darwin -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.2/lib/darwin -lclang_rt.osx -ldl
/bin/rm -f ex2.o
$ $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1 ./ex2
Norm of error 0.000156044 iterations 6
$ $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 2 ./ex2
Norm of error 0.000411674 iterations 7
\end{outputlisting}
}
\nobreak
\caption{Running a PETSc Program}
\label{fig_exrun}
\end{figure}

As shown in Figure \ref{fig_exprof}, the option \trl{-log_view} activates printing of a performance summary, including
times, floating point operation (flop) rates, and message-passing
activity.  Chapter~\ref{ch_profiling}
provides details about profiling, including interpretation of the
output data within Figure~\ref{fig_exprof}.  This particular example involves the solution of a linear
system on one processor using GMRES and ILU.  The low flop
rates in this example arise because the
code solved a tiny system.  We include this example merely to
demonstrate the ease of extracting performance information.

\begin{figure}[H]
{
  \begin{outputlisting}[\fontsize{7.5pt}{8pt}\ttfamily]
$ $PETSC_DIR/lib/petsc/bin/petscmpiexec -n 1 ./ex1 -n 1000 -pc_type ilu -ksp_type gmres -ksp_rtol 1.e-7 -log_view
...
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecMDot                1 1.0 3.2830e-06 1.0 2.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   0  5  0  0  0   609
VecNorm                3 1.0 4.4550e-06 1.0 6.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00  0 14  0  0  0   0 14  0  0  0  1346
VecScale               2 1.0 4.0110e-06 1.0 2.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00  0  5  0  0  0   0  5  0  0  0   499
VecCopy                1 1.0 3.2280e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                11 1.0 2.5537e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
VecAXPY                2 1.0 2.0930e-06 1.0 4.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00  0 10  0  0  0   0 10  0  0  0  1911
VecMAXPY               2 1.0 1.1280e-06 1.0 4.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00  0 10  0  0  0   0 10  0  0  0  3546
VecNormalize           2 1.0 9.3970e-06 1.0 6.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00  1 14  0  0  0   1 14  0  0  0   638
MatMult                2 1.0 1.1177e-05 1.0 9.99e+03 1.0 0.0e+00 0.0e+00 0.0e+00  1 24  0  0  0   1 24  0  0  0   894
MatSolve               2 1.0 1.9933e-05 1.0 9.99e+03 1.0 0.0e+00 0.0e+00 0.0e+00  1 24  0  0  0   1 24  0  0  0   501
MatLUFactorNum         1 1.0 3.5081e-05 1.0 4.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00  2 10  0  0  0   2 10  0  0  0   114
MatILUFactorSym        1 1.0 4.4259e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
MatAssemblyBegin       1 1.0 8.2015e-08 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         1 1.0 3.3536e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
MatGetRowIJ            1 1.0 1.5960e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 3.9791e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
MatView                2 1.0 6.7909e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  5  0  0  0  0   5  0  0  0  0     0
KSPGMRESOrthog         1 1.0 7.5970e-06 1.0 4.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00  1 10  0  0  0   1 10  0  0  0   526
KSPSetUp               1 1.0 3.4424e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
KSPSolve               1 1.0 2.7264e-04 1.0 3.30e+04 1.0 0.0e+00 0.0e+00 0.0e+00 19 79  0  0  0  19 79  0  0  0   121
PCSetUp                1 1.0 1.5234e-04 1.0 4.00e+03 1.0 0.0e+00 0.0e+00 0.0e+00 11 10  0  0  0  11 10  0  0  0    26
PCApply                2 1.0 2.1022e-05 1.0 9.99e+03 1.0 0.0e+00 0.0e+00 0.0e+00  1 24  0  0  0   1 24  0  0  0   475
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector     8              8        76224     0.
              Matrix     2              2       134212     0.
       Krylov Solver     1              1        18400     0.
      Preconditioner     1              1         1032     0.
           Index Set     3              3        10328     0.
              Viewer     1              0            0     0.
========================================================================================================================
...
\end{outputlisting}
}
\nobreak
\caption{Running a PETSc Program with Profiling (partial output)}
\label{fig_exprof}
\end{figure}

\subsection*{Writing Application Codes with PETSc}

The examples throughout the library demonstrate the software usage
and can serve as templates for developing
custom applications.  We suggest that new PETSc
users examine programs in the directories
\trl{${PETSC_DIR}/src/<library>/examples/tutorials}
where \trl{<library>}
denotes any of the PETSc libraries (listed in the following
section), such as \trl{SNES} or \trl{KSP}.
The manual pages located at \trl{${PETSC_DIR}/docs/index.htm} or 
\href{http://www.mcs.anl.gov/petsc/documentation}{http://www.mcs.anl.gov/petsc/documentation}
provide links (organized by both routine names and concepts) to the tutorial examples.

To write a new application program using PETSc, we suggest the
following procedure:
\begin{tightenumerate}
\item Install and test PETSc according to the instructions at the PETSc web site.
\item Copy one of the many PETSc examples in the directory
      that corresponds to the class of problem of interest (e.g.,
      for linear solvers, see \trl{${PETSC_DIR}/src/ksp/ksp/examples/tutorials}).
\item Copy the corresponding makefile within the example directory;
      compile and run the example program.
\item Use the example program as a starting point for developing a custom code.
\end{tightenumerate}

%---------------------------------------------------------------------

\section{Citing PETSc}

When citing PETSc in a publication please cite the following:
\begin{verbatim}
@Misc{petsc-web-page,
   Author = "Satish Balay and Shrirang Abhyankar and Mark~F. Adams and Jed Brown
   and Peter Brune and Kris Buschelman and Lisandro Dalcin and Victor Eijkhout
   and William~D. Gropp and Dinesh Kaushik and Matthew~G. Knepley and Dave~A. May
   and Lois Curfman McInnes and Karl Rupp and Barry~F. Smith and Stefano Zampini
   and Hong Zhang and Hong Zhang",
   Title  = "{PETS}c {W}eb page",
   Note   = "http://www.mcs.anl.gov/petsc",
   Year   = "2017"}

@TechReport{petsc-user-ref,
   Author = "Satish Balay and Shrirang Abhyankar and Mark~F. Adams and Jed Brown
   and Peter Brune and Kris Buschelman and Lisandro Dalcin and Victor Eijkhout
   and Dinesh Kaushik and Matthew~G. Knepley and Dave~A. May and Lois Curfman McInnes
   and William~D. Gropp and Karl Rupp and Patrick Sanan and Barry~F. Smith and
   Stefano Zampini and Hong Zhang and Hong Zhang",
   Title       = "{PETS}c Users Manual",
   Number      = "ANL-95/11 - Revision 3.8",
   Institution = "Argonne National Laboratory",
   Year        = "2017"}

@InProceedings{petsc-efficient,
   Author    = "Satish Balay and William D. Gropp and Lois C. McInnes and Barry F. Smith",
   Title     = "Efficient Management of Parallelism in Object Oriented
                Numerical Software Libraries",
   Booktitle = "Modern Software Tools in Scientific Computing",
   Editor    = "E. Arge and A. M. Bruaset and H. P. Langtangen",
   Pages     = "163--202",
   Publisher = "Birkhauser Press",
   Year      = "1997"}
\end{verbatim}

%---------------------------------------------------------------------

\section{Directory Structure}
\label{sec_directory}

We conclude this introduction with an overview of the
organization of the PETSc software.
The root directory of PETSc contains the following directories:

\begin{tightitemize}
\item \trl{docs} - All documentation for PETSc. The file \trl{manual.pdf}
                   contains the hyperlinked users manual, suitable for printing
                   or on-screen viewing. Includes the subdirectory
 \subitem - \trl{manualpages} (on-line manual pages).
\item \trl{bin} - Utilities and short scripts for use with PETSc, including
 \begin{tightitemize}
 \item \trl{petscmpiexec} (utility for running MPI jobs),
 \end{tightitemize}

\item \trl{conf} - Base PETSc configuration files that define the standard make variables and rules used by PETSc
\item \trl{include} - All include files for PETSc that are visible to the user.
\item \trl{include/petsc/finclude}    - PETSc include files for Fortran programmers using
                                  the .F suffix (recommended).
\item \trl{include/petsc/private}    - Private PETSc include files that should {\em not}
                                 need to be used by application programmers.
\item \trl{share} - Some small test matrices in data files
\item \trl{src} - The source code for all PETSc libraries, which
                  currently includes
 \begin{tightitemize}
 \item \trl{vec} - vectors,
   \begin{tightitemize}
     \item \trl{is} - index sets,
   \end{tightitemize}
 \item \trl{mat} - matrices,
 \item \trl{dm} - data management between meshes and vectors and matrices,
 \item \trl{ksp} - complete linear equations solvers,
 \begin{tightitemize}
   \item \trl{ksp} - Krylov subspace accelerators,
   \item \trl{pc} - preconditioners,
 \end{tightitemize}
 \item \trl{snes} - nonlinear solvers,
 \item \trl{ts} - ODE solvers and timestepping,
 \item \trl{sys} - general system-related routines,
 \begin{tightitemize}
   \item \trl{logging} - PETSc logging and profiling routines,
   \item \trl{classes} - low-level classes
   \begin{tightitemize}
     \item \trl{draw} - simple graphics,
     \item \trl{viewer}
     \item \trl{bag}
     \item \trl{random} - random number generators.
   \end{tightitemize}
\end{tightitemize}
 \item \trl{contrib} - contributed modules that use PETSc but are not
    part of the official PETSc package.  Users who have developed such
    code and wish to share it with others are encouraged to let us
    know by writing to petsc-maint@mcs.anl.gov.
 \end{tightitemize}
\end{tightitemize}

Each PETSc source code library directory has the following subdirectories:
\begin{tightitemize}
\item  \trl{examples} - Example programs for the component, including
  \begin{tightitemize}
  \item \trl{tutorials} - Programs designed to teach users about PETSc.  These
          codes can serve as templates for the design of custom applications.
  \item \trl{tests} - Programs designed for thorough testing of PETSc.  As such,
          these codes are not intended for examination by users.
  \end{tightitemize}
\item  \trl{interface} - The calling sequences for the abstract interface
        to the component.
        Code here does not know about particular implementations.
\item  \trl{impls} - Source code for one or more implementations.
\item  \trl{utils} - Utility routines.  Source here may know about the
          implementations, but ideally will not know about implementations
          for other components.
\end{tightitemize}

% ------------------------------------------------------------------
%   End of introductory information
% ------------------------------------------------------------------
