\documentclass[fleqn,10pt]{article}
\usepackage{graphicx} % Required for inserting images
\usepackage{xeCJK} 
\usepackage{amsmath}
\usepackage{pgfplots}
\usepackage{tikz}
\usepackage{subcaption}
\usepackage{overpic}
\usepackage{listings}
\usepackage{xcolor} 
\usepackage{pdfpages}

\title{Project Report\\{An Overview of deal.II step-3}}
\author{3220101611 韩耀霆}
\date{\today}

\begin{document}
\flushbottom
\maketitle
\tableofcontents



\section{Introduction}

\subsection{The basic set up of finite element methods}

This is the first example where we actually use finite elements to compute something. 
We will solve a simple version of Poisson's equation with zero boundary values, but a nonzero right hand side:
\begin{eqnarray*} 
   -\Delta u &=& f \qquad \qquad \text{in } \Omega,  \\
   u &=& 0 \qquad \qquad \text{on } \partial \Omega. 
\end{eqnarray*}

We will solve this equation on the square, $ \Omega  =  [-1,1] ^{2} $.
In this program, we will also only consider the particular case $ f(x) =1 $.

If you've learned about the basics of the finite element method, 
you will remember the steps we need to take to approximate the solution $u$ by a finite dimensional approximation. 
Specifically, we first need to derive the weak form of the equation above, 
which we obtain by multiplying the equation by a test function $\varphi$ from the left 
(we will come back to the reason for multiplying from the left and not from the right below) and integrating over the domain $\Omega$:
\begin{eqnarray*} 
    -\int_{\Omega}\varphi \, \Delta u = \int_{\Omega}\varphi f.
\end{eqnarray*}

This can be integrated by parts:
\begin{eqnarray*} 
    \int_{\Omega} \nabla \varphi \cdot \nabla u -
    \int_{\partial \Omega} \varphi \, \mathbf{n} \cdot \nabla u =
    \int_{\Omega}\varphi f.
\end{eqnarray*}

The test function $\varphi$ has to satisfy the same kind of boundary conditions 
(in mathematical terms: it needs to come from the tangent space of the set in which we seek the solution), 
so on the boundary $\varphi=0$ and consequently the weak form we are looking for reads:
\begin{eqnarray*} 
    (\nabla \varphi,\nabla u)=(\varphi,f),
\end{eqnarray*}

where we have used the common notation $(a,b)=\int_{\Omega}ab$. 
The problem then asks for a function $u$ for which this statement is true for all test functions $\varphi$ from the appropriate space.

Of course we can't find such a function on a computer in the general case, 
and instead we seek an approximation $u_h(x) = \sum_j U_j \varphi_j(x)$, 
where the $U_j$ are unknown expansion coefficients we need to determine 
(the "degrees of freedom" of this problem), and $\varphi_j(x)$ are the finite element shape functions we will use. 
To define these shape functions, we need the following:

\begin{itemize}
    \item A mesh on which to define shape functions. 
    \item A finite element that describes the shape functions we want to use on the reference cell 
    (which in deal.II is always the unit interval $[0,1]$, the unit square $[0,1]^2$ or the unit cube $[0,1]^3$, depending on which space dimension you work in). 
    \item A \verb|DoFHandler| object that enumerates all the degrees of freedom on the mesh, 
    taking the reference cell description the finite element object provides as the basis. 
    \item A mapping that tells how the shape functions on the real cell are obtained from the shape functions defined by the finite element class on the reference cell. 
    By default, unless you explicitly say otherwise, deal.II will use a (bi-, tri-)linear mapping for this, so in most cases you don't have to worry about this step.
\end{itemize}

Through these steps, we now have a set of functions $\varphi_i$, and we can define the weak form of the discrete problem: 
Find a function $u_h$, i.e., find the expansion coefficients $U_j$ mentioned above, so that
\begin{eqnarray*}
    (\nabla \varphi_i,\nabla u_h)=
    (\varphi_i,f), \qquad \qquad i=0,\ldots,N-1.
\end{eqnarray*}

Note that we here follow the convention that everything is counted starting at zero, as common in C and C++. 
This equation can be rewritten as a linear system if you insert the representation $u_h(x)=\sum_j U_j \varphi_j(x)$ and then observe that
\begin{eqnarray*}
    (\nabla \varphi_i,\nabla u_h)&=&(\nabla \varphi_i,\nabla [\textstyle\sum_j U_j \varphi_j])\\
    &=&\textstyle\sum_j(\nabla \varphi_i,\nabla [U_j \varphi_j])\\
    &=&\textstyle\sum_j(\nabla \varphi_i,\nabla \varphi_j)U_j.
\end{eqnarray*}

With this, the problem reads: Find a vector $U$ so that
\begin{eqnarray*}
    AU=F,
\end{eqnarray*}

where the matrix $A$ and the right hand side $F$ are defined as
\begin{eqnarray*}
    A_{ij}=(\nabla \varphi_i,\nabla \varphi_j),\\
    F_i=(\varphi_i,f).
\end{eqnarray*}
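
To make these formulas concrete, here is a small self-contained C++ sketch (it is not part of step-3 and uses no deal.II; the function names are invented for this illustration) that assembles $A$ and $F$ for the one-dimensional analogue $-u''=f$ on $(0,1)$ with zero boundary values and piecewise linear shape functions, where the integrals can be worked out by hand:

\begin{lstlisting}[language=C++]
#include <cassert>
#include <cmath>
#include <vector>

// Entries of the 1D stiffness matrix and load vector for hat functions on
// a uniform mesh with n interior nodes (h = 1/(n+1)). The only nonzero
// integrals are:
//   (phi_i', phi_i') = 2/h,  (phi_i', phi_{i+1}') = -1/h,  (phi_i, f) = h*f.
std::vector<std::vector<double>> stiffness_matrix(int n)
{
  const double h = 1.0 / (n + 1);
  std::vector<std::vector<double>> A(n, std::vector<double>(n, 0.0));
  for (int i = 0; i < n; ++i)
    {
      A[i][i] = 2.0 / h;
      if (i + 1 < n)
        A[i][i + 1] = A[i + 1][i] = -1.0 / h;
    }
  return A;
}

std::vector<double> load_vector(int n, double f)
{
  const double h = 1.0 / (n + 1);
  return std::vector<double>(n, h * f);
}

int main()
{
  // n = 3 interior nodes, f = 1 as in this program, so h = 1/4.
  const auto A = stiffness_matrix(3);
  const auto F = load_vector(3, 1.0);
  assert(std::abs(A[0][0] - 8.0) < 1e-12); //  2/h =  8
  assert(std::abs(A[0][1] + 4.0) < 1e-12); // -1/h = -4
  assert(std::abs(A[0][2]) < 1e-12);       // non-neighbours do not couple
  assert(std::abs(F[0] - 0.25) < 1e-12);   // h*f = 1/4
  return 0;
}
\end{lstlisting}

The two-dimensional matrix assembled in step-3 has the same character: sparse, symmetric, and with nonzero entries only between shape functions whose supports overlap.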


\subsection{Should we multiply by a test function from the left or from the right?}

Before we move on with describing how these quantities can be computed, 
note that if we had multiplied the original equation from the right by a test function rather than from the left, 
then we would have obtained a linear system of the form
\begin{eqnarray*}
    U^T A = F^T
\end{eqnarray*}

with a row vector $F^T$. By transposing this system, this is of course equivalent to solving
\begin{eqnarray*}
    A^T U=F
\end{eqnarray*}

which here is the same as above since $A=A^T$. But in general it is not, and in order to avoid any sort of confusion, 
experience has shown that simply getting into the habit of multiplying the equation from the left rather than from the right (as is often done in the mathematical literature) 
avoids a common class of errors as the matrix is automatically correct and does not need to be transposed when comparing theory and implementation. 


\subsection{Assembling the matrix and right hand side vector}

Now we know what we need (namely: objects that hold the matrix and vectors, as well as ways to compute $A_{ij}$,$F_i$), 
and we can look at what it takes to make that happen:
\begin{itemize}
    \item The object for $A$ is of type \verb|SparseMatrix| while those for $U$ and $F$ are of type \verb|Vector|. 
    We will see in the program below what classes are used to solve linear systems.
    \item We need a way to form the integrals. In the finite element method, this is most commonly done using quadrature, 
    i.e. the integrals are replaced by a weighted sum over a set of quadrature points on each cell. 
    That is, we first split the integral over $\Omega$ into integrals over all cells,
    \begin{eqnarray*}
        &&A_{ij}=(\nabla \varphi_i,\nabla \varphi_j)=
        \sum_{K \in \mathbb{T}}\int_K \nabla \varphi_i \cdot \nabla \varphi_j,\\
        &&F_i=(\varphi_i,f)=
        \sum_{K \in \mathbb{T}}\int_K \varphi_i f.
    \end{eqnarray*}
    \item and then approximate each cell's contribution by quadrature:
    \begin{eqnarray*}
        &&A_{ij}^K =
        \int_K \nabla \varphi_i \cdot \nabla \varphi_j \approx
        \sum_q \nabla \varphi_i (x_q^K) \cdot \nabla \varphi_j (x_q^K) \, w_q^K,\\
        &&F_i^K = 
        \int_K \varphi_i f \approx
        \sum_q \varphi_i (x_q^K) \, f (x_q^K) \, w_q^K,
    \end{eqnarray*}
    \item First, we need a way to describe the location $x_q^K$ of quadrature points and their weights $w_q^K$. 
    They are usually mapped from the reference cell in the same way as shape functions, 
    i.e., implicitly using the \verb|MappingQ1| class or, if you explicitly say so, through one of the other classes derived from \verb|Mapping|. 
    The locations and weights on the reference cell are described by objects derived from the \verb|Quadrature| base class. 
    Typically, one chooses a quadrature formula (i.e. a set of points and weights) 
    so that the quadrature exactly equals the integral in the matrix; this can be achieved because all factors in the integral are polynomial, 
    and is done by Gaussian quadrature formulas, implemented in the \verb|QGauss| class.
    \item We then need something that can help us evaluate $\varphi_i(x_q^K)$ on cell $K$. 
    This is what the \verb|FEValues| class does: 
    it takes a finite element object to describe $\varphi$ on the reference cell, 
    a quadrature object to describe the quadrature points and weights, and a mapping object 
    (or implicitly takes the \verb|MappingQ1| class) 
    and provides values and derivatives of the shape functions on the real cell $K$ as well as all sorts of other information needed for integration, 
    at the quadrature points located on $K$.
\end{itemize}
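
As a small illustration of the mapped quadrature points $x_q^K$ and weights $w_q^K$, the following deal.II-free sketch (the function names are invented; in deal.II itself, \verb|QGauss| provides the points and weights and a \verb|Mapping| does the mapping) maps a two-point Gauss rule from the reference cell $[0,1]$ to a real cell and reproduces the integral of a cubic polynomial exactly:

\begin{lstlisting}[language=C++]
#include <cassert>
#include <cmath>

// Two-point Gauss quadrature, exact for polynomials up to degree 3.
// The points/weights are the Gauss-Legendre values shifted from [-1,1]
// to the reference cell [0,1], then mapped to the real cell [a,b].
double gauss2(double (*g)(double), double a, double b)
{
  const double xhat[2] = {0.5 - 0.5 / std::sqrt(3.0),
                          0.5 + 0.5 / std::sqrt(3.0)};
  const double what[2] = {0.5, 0.5};

  double sum = 0.0;
  for (int q = 0; q < 2; ++q)
    {
      const double xq = a + (b - a) * xhat[q]; // mapped point x_q^K
      const double wq = (b - a) * what[q];     // mapped weight w_q^K
      sum += g(xq) * wq;
    }
  return sum;
}

double cubic(double x) { return x * x * x; }

int main()
{
  // On K=[0,2] the integral of x^3 is 2^4/4 = 4, reproduced exactly
  // (up to roundoff) because the integrand is a polynomial of degree 3.
  assert(std::abs(gauss2(cubic, 0.0, 2.0) - 4.0) < 1e-12);
  return 0;
}
\end{lstlisting}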

\verb|FEValues| really is the central class in the assembly process. One way you can view it is as follows: 
The \verb|FiniteElement| and derived classes describe shape functions, i.e., infinite dimensional objects: 
functions have values at every point. We need this for theoretical reasons because we want to perform our analysis with integrals over functions. However, 
for a computer, this is a very difficult concept, since they can in general only deal with a finite amount of information, 
and so we replace integrals by sums over quadrature points that we obtain by mapping (the \verb|Mapping| object) using points defined on a reference cell 
(the \verb|Quadrature| object) onto points on the real cell. In essence, we reduce the problem to one where we only need a finite amount of information, 
namely shape function values and derivatives, quadrature weights, normal vectors, etc, exclusively at a finite set of points. 
The \verb|FEValues| class is the one that brings the three components together and provides this finite set of information on a particular cell $K$. 
You will see it in action when we assemble the linear system below.

The final piece of this introduction is to mention that after a linear system is obtained, 
it is solved using an iterative solver and then postprocessed: 
we create an output file using the \verb|DataOut| class that can then be visualized using one of the common visualization programs.


\subsection{Solving the linear system}

For a finite element program, the linear system we end up with here is relatively small: 
The matrix has size $1089\times 1089$, owing to the fact that the mesh we use is $32\times 32$ and so there are $33^2=1089$ vertices in the mesh. 
In many of the later tutorial programs, matrix sizes in the range of tens of thousands to hundreds of thousands will not be uncommon, 
and with codes such as ASPECT that build on deal.II, we regularly solve problems with more than a hundred million equations (albeit using parallel computers). 
In any case, even for the small system here, the matrix is much larger than what one typically encounters in an undergraduate or most graduate courses, 
and so the question arises how we can solve such linear systems.

The first method one typically learns for solving linear systems is Gaussian elimination. 
The problem with this method is that it requires a number of operations that is proportional to $N^3$, 
where $N$ is the number of equations or unknowns in the linear system – more specifically, the number of operations is $\frac{2}{3}N^3$, 
give or take a few. With $N=1089$, this means that we would have to do around $861$ million operations. 
This is a number that is quite feasible and it would take modern processors less than $0.1$ seconds to do this. But it is clear that this isn't going to scale.
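
The numbers quoted above can be checked with a few lines of arithmetic (a standalone sketch, not step-3 code):

\begin{lstlisting}[language=C++]
#include <cassert>
#include <cmath>

int main()
{
  // One square cell refined globally five times: each refinement of a
  // 2d cell produces four children, so the mesh is 32x32 cells.
  const unsigned int n_refinements  = 5;
  const unsigned int cells_per_side = 1u << n_refinements; // 2^5 = 32
  const unsigned int n_cells        = cells_per_side * cells_per_side;
  // Q1 elements have one degree of freedom per vertex: (32+1)^2 of them.
  const unsigned int n_dofs = (cells_per_side + 1) * (cells_per_side + 1);

  assert(n_cells == 1024u);
  assert(n_dofs == 1089u);

  // Rough Gaussian elimination cost, (2/3) N^3 operations for N = 1089:
  const double ge_ops = 2.0 / 3.0 * std::pow(static_cast<double>(n_dofs), 3);
  assert(ge_ops > 8.0e8 && ge_ops < 9.0e8); // around 861 million
  return 0;
}
\end{lstlisting}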

Instead, what we will do here is take up an idea from 1952: the Conjugate Gradient method, or in short "CG". 
CG is an "iterative" solver in that it forms a sequence of vectors that converge to the exact solution; in fact, 
after $N$ such iterations in the absence of roundoff errors it finds the exact solution if the matrix is symmetric and positive definite. 
The method was originally developed as another way to solve a linear system exactly, like Gaussian elimination, 
but as such it had few advantages and was largely forgotten for a few decades. 
But, when computers became powerful enough to solve problems of a size where Gaussian elimination doesn't work well any more (sometime in the 1980s), 
CG was rediscovered as people realized that it is well suited for large and sparse systems like the ones we get from the finite element method. This is because 
\begin{itemize}
    \item the vectors it computes converge to the exact solution, 
    and consequently we do not actually have to do all $N$ iterations to find the exact solution as long as we're happy with reasonably good approximations; and
    \item it only ever requires matrix-vector products, which is very useful for sparse matrices because a sparse matrix has, 
    by definition, only $\mathcal{O}(N)$ entries and so a matrix-vector product can be done with $\mathcal{O}(N)$ effort whereas it costs $N^2$ operations to do the same for dense matrices. 
    As a consequence, we can hope to solve linear systems with at most $\mathcal{O}(N^2)$ operations, and in many cases substantially fewer.
\end{itemize} 

Finite element codes therefore almost always use iterative solvers such as CG for the solution of the linear systems, and we will do so in this code as well. 
(We note that the CG method is only usable for matrices that are symmetric and positive definite; for other equations, 
the matrix may not have these properties and we will have to use other variations of iterative solvers such as \verb|BiCGStab| or \verb|GMRES| that are applicable to more general matrices.)

An important component of these iterative solvers is that we specify the tolerance with which we want to solve the linear system – in essence, 
a statement about the error we are willing to accept in our approximate solution. 
The error in an approximate solution $\tilde{x}$ to a linear system $Ax=b$ with exact solution $x$ is defined as $\| x-\tilde{x} \|$, 
but this is a quantity we cannot compute because we don't know the exact solution $x$. Instead, we typically consider the residual, 
defined as $\| b-A\tilde{x} \| = \| A(x-\tilde{x}) \|$, as a computable measure. 
We then let the iterative solver compute more and more accurate solutions $\tilde{x}$, until $\| b-A\tilde{x}\| \le \tau$. 
A practical question is what value $\tau$ should have. In most applications, setting
\begin{eqnarray*}
    \tau = 10^{-6} \| b \|
\end{eqnarray*}

is a reasonable choice. 
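
To illustrate both the method and this stopping criterion, here is a minimal, deal.II-free conjugate gradient implementation (the names \verb|cg| and \verb|multiply| and the $2\times 2$ test system are invented for this sketch; in step-3 itself the \verb|SolverCG| class does this work):

\begin{lstlisting}[language=C++]
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;

double dot(const Vec &a, const Vec &b)
{
  double s = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i)
    s += a[i] * b[i];
  return s;
}

Vec multiply(const std::vector<Vec> &A, const Vec &x)
{
  Vec y(x.size(), 0.0);
  for (std::size_t i = 0; i < A.size(); ++i)
    for (std::size_t j = 0; j < x.size(); ++j)
      y[i] += A[i][j] * x[j];
  return y;
}

// Unpreconditioned CG for a symmetric positive definite matrix, stopping
// when ||b - A x|| <= 1e-6 ||b||, the tolerance suggested above.
Vec cg(const std::vector<Vec> &A, const Vec &b)
{
  const double tau = 1e-6 * std::sqrt(dot(b, b));
  Vec x(b.size(), 0.0), r = b, p = r; // start from x = 0, so r = b
  double rr = dot(r, r);
  for (unsigned int iter = 0; iter < 1000 && std::sqrt(rr) > tau; ++iter)
    {
      const Vec    Ap    = multiply(A, p);
      const double alpha = rr / dot(p, Ap);
      for (std::size_t i = 0; i < x.size(); ++i)
        {
          x[i] += alpha * p[i];  // update the iterate
          r[i] -= alpha * Ap[i]; // update the residual
        }
      const double rr_new = dot(r, r);
      const double beta   = rr_new / rr;
      for (std::size_t i = 0; i < p.size(); ++i)
        p[i] = r[i] + beta * p[i]; // next search direction
      rr = rr_new;
    }
  return x;
}

int main()
{
  // Small SPD test system; the exact solution is (1/11, 7/11).
  const std::vector<Vec> A = {{4.0, 1.0}, {1.0, 3.0}};
  const Vec              b = {1.0, 2.0};
  const Vec              x = cg(A, b);
  assert(std::abs(x[0] - 1.0 / 11.0) < 1e-5);
  assert(std::abs(x[1] - 7.0 / 11.0) < 1e-5);
  return 0;
}
\end{lstlisting}

Note that the iteration needs nothing from the matrix beyond products $Ap$, which is exactly what makes the method attractive for sparse matrices.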


\subsection{About the implementation}

Although this is the simplest possible equation you can solve using the finite element method, 
this program shows the basic structure of most finite element programs and also serves as the template that almost all of the following programs will essentially follow. Specifically, 
the main class of this program looks like this:
\begin{lstlisting}[language=C++]
class Step3
{
  public:
    Step3 ();
    void run ();
 
  private:
    void make_grid ();
    void setup_system ();
    void assemble_system ();
    void solve ();
    void output_results () const;
 
    Triangulation<2>     triangulation;
    FE_Q<2>              fe;
    DoFHandler<2>        dof_handler;
 
    SparsityPattern      sparsity_pattern;
    SparseMatrix<double> system_matrix;
    Vector<double>       solution;
    Vector<double>       system_rhs;
};
\end{lstlisting}

This follows the object oriented programming mantra of data encapsulation, 
i.e. we do our best to hide almost all internal details of this class in private members that are not accessible to the outside.

Let's start with the member variables: These follow the building blocks we have outlined above in the bullet points, 
namely we need a \verb|Triangulation| and a \verb|DoFHandler| object, and a finite element object that describes the kinds of shape functions we want to use. 
The second group of objects relate to the linear algebra: the system matrix and right hand side as well as the solution vector, 
and an object that describes the sparsity pattern of the matrix. This is all this class needs 
(and the essentials that any solver for a stationary PDE requires) and that needs to survive throughout the entire program. 
In contrast to this, the \verb|FEValues| object we need for assembly is only required throughout assembly, 
and so we create it as a local object in the function that does that and destroy it again at its end.

Secondly, let's look at the member functions. These, as well, already form the common structure that almost all following tutorial programs will use:

\begin{itemize}
    \item \verb|make_grid()|: This is what one could call a preprocessing function. As its name suggests, 
    it sets up the object that stores the triangulation. In later examples, it could also deal with boundary conditions, geometries, etc.
    \item \verb|setup_system()|: This then is the function in which all the other data structures are set up that are needed to solve the problem. 
    In particular, it will initialize the DoFHandler object and correctly size the various objects that have to do with the linear algebra. 
    This function is often separated from the preprocessing function above because, 
    in a time dependent program, it may be called at least every few time steps whenever the mesh is adaptively refined. 
    On the other hand, setting up the mesh itself in the preprocessing function above is done only once at the beginning of the program and is, 
    therefore, separated into its own function.
    \item \verb|assemble_system()|: This, then, is where the contents of the matrix and right hand side are computed, 
    as discussed at length in the introduction above. Since doing something with this linear system is conceptually very different from computing its entries, 
    we separate it from the following function.
    \item \verb|solve()|: This then is the function in which we compute the solution $U$ of the linear system $AU=F$. 
    In the current program, this is a simple task since the matrix is so simple, 
    but it will become a significant part of a program's size whenever the problem is not so trivial any more.
    \item \verb|output_results()|: Finally, when you have computed a solution, you probably want to do something with it. 
    For example, you may want to output it in a format that can be visualized, or you may want to compute quantities you are interested in: 
    say, heat fluxes in a heat exchanger, air friction coefficients of a wing, maximum bridge loads, or simply the value of the numerical solution at a point. 
    This function is therefore the place for postprocessing your solution.
\end{itemize}

All of this is held together by the single public function (other than the constructor), namely the \verb|run()| function. 
It is the one that is called from the place where an object of this type is created, 
and it is the one that calls all the other functions in their proper order. Encapsulating this operation into the \verb|run()| function, 
rather than calling all the other functions from \verb|main()|, makes sure that you can change how the separation of concerns within this class is implemented. 
For example, if one of the functions becomes too big, you can split it up into two, 
and the only places you have to be concerned about changing as a consequence are within this very same class, and not anywhere else.
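
As a hypothetical, deal.II-free sketch of this design (the class name \verb|Step3Skeleton| and its \verb|log| member are invented for this illustration), the following skeleton records the order in which \verb|run()| drives the private stages:

\begin{lstlisting}[language=C++]
#include <cassert>
#include <string>
#include <vector>

// Stripped-down skeleton mirroring how Step3::run() sequences the
// private member functions; the bodies only record the call order.
class Step3Skeleton
{
public:
  void run()
  {
    make_grid();
    setup_system();
    assemble_system();
    solve();
    output_results();
  }
  std::vector<std::string> log; // records the order of the stages

private:
  void make_grid()       { log.push_back("make_grid"); }
  void setup_system()    { log.push_back("setup_system"); }
  void assemble_system() { log.push_back("assemble_system"); }
  void solve()           { log.push_back("solve"); }
  void output_results()  { log.push_back("output_results"); }
};

int main()
{
  Step3Skeleton program;
  program.run(); // the only call site, as in main() of step-3
  assert(program.log.size() == 5);
  assert(program.log.front() == "make_grid");
  assert(program.log.back() == "output_results");
  return 0;
}
\end{lstlisting}

Splitting \verb|solve()| into two functions later, say, would only change this class; no caller would need to know.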


\subsection{A note on types}

deal.II defines a number of integral types via aliases in namespace \verb|types|. 
In particular, in this program you will see \verb|types::global_dof_index| in a couple of places: 
an integer type that is used to denote the global index of a degree of freedom, 
i.e., the index of a particular degree of freedom within the \verb|DoFHandler| object that is defined on top of a \verb|triangulation| 
(as opposed to the index of a particular degree of freedom within a particular cell). 



\section{The Purpose, Method and Results of step-3}

\subsection{Purpose}

In summary, step-3 is the first Laplace solver in deal.II. This routine introduces the general structure of finite element programs, 
demonstrating how to assemble linear systems, solve the linear equations, and generate graphical output source files (used for post-processing).

Specifically in this example, it primarily introduces the solution method of the Poisson equation in deal.II and obtains the results.


\subsection{Method}

According to the requirements of the project assignment, this section focuses on describing the mathematical calculation part of the program and provides analysis. 
The following are the relevant functions and their code (without detailed analysis of header files and classes):

\begin{itemize}
    \item \verb|Step3::Step3|: The constructor. It does little more than first specify that we want bi-linear elements 
    (denoted by the parameter to the finite element object, which indicates the polynomial degree), and then associate the \verb|dof_handler| variable with the \verb|triangulation| we use. 
    
    \item \verb|Step3::make_grid|: Generate the triangulation on which to do the computation and number each vertex with a degree of freedom. 
    We have seen these two steps in step-1 and step-2 before, respectively.\\
    This function does the first part, creating the mesh. We create the grid and refine all cells five times. 
    Since the initial grid (which is the square $[-1,1]\times[-1,1]$) consists of only one cell, the final grid has $32\times 32$ cells, for a total of 1024.
    
    \item \verb|Step3::setup_system|: Enumerate all the degrees of freedom and set up matrix and vector objects to hold the system data. 
    Enumerating is done by using \verb|DoFHandler::distribute_dofs()|, as we have seen in the step-2 example. 
    Since we use the \verb|FE_Q| class and have set the polynomial degree to 1 in the constructor, i.e. bilinear elements, this associates one degree of freedom with each vertex.

    There should be one DoF for each vertex. Since we have a $32\times 32$ grid, the number of DoFs should be $33\times 33$, or 1089.

    As we have seen in the previous example, we set up a sparsity pattern by first creating a temporary structure, tagging those entries that might be nonzero, 
    and then copying the data over to the \verb|SparsityPattern| object that can then be used by the system matrix.

    Note that the \verb|SparsityPattern| object does not hold the values of the matrix, it only stores the places where entries are. 
    The entries themselves are stored in objects of type \verb|SparseMatrix|, of which our variable \verb|system_matrix| is one.

    The distinction between sparsity pattern and matrix was made to allow several matrices to use the same sparsity pattern. 
    This may not seem relevant here, but when you consider the size which matrices can have, and that it may take some time to build the sparsity pattern, 
    this becomes important in large-scale problems if you have to store several matrices in your program.

    The last thing to do in this function is to set the sizes of the right hand side vector and the solution vector to the right values.
    
    \item \verb|Step3::assemble_system|: The next step is to compute the entries of the matrix and right hand side that form the linear system from which we compute the solution. 
    This is the central function of each finite element program and we have discussed the primary steps in the introduction already.

    The general approach to assemble matrices and vectors is to loop over all cells, 
    and on each cell compute the contribution of that cell to the global matrix and right hand side by quadrature. 
    The point to realize now is that we need the values of the shape functions at the locations of quadrature points on the real cell. 
    However, both the finite element shape functions as well as the quadrature points are only defined on the reference cell. 
    They are therefore of little help to us, and we will in fact hardly ever query information about finite element shape functions or quadrature points from these objects directly.

    Rather, what is required is a way to map this data from the reference cell to the real cell. 
    Classes that can do that are derived from the \verb|Mapping| class, though one again often does not have to deal with them directly: 
    many functions in the library can take a mapping object as argument, but when it is omitted they simply resort to the standard bilinear Q1 mapping.

    So what we now have is a collection of three classes to deal with: finite element, quadrature, and mapping objects. 
    That's too much, so there is one type of class that orchestrates information exchange between these three: the \verb|FEValues| class. 
    If given one instance of each of these three objects (or two, and an implicit linear mapping), 
    it will be able to provide you with information about values and gradients of shape functions at quadrature points on a real cell.

    Using all this, we will assemble the linear system for this problem in the function.
    
    \item \verb|Step3::solve|: The following function solves the discretized equation. 
    As discussed in the introduction, we want to use an iterative solver to do this, specifically the Conjugate Gradient (CG) method.

    The way to do this in deal.II is a three-step process:
    \begin{itemize}
        \item First, we need to have an object that knows how to tell the CG algorithm when to stop. This is done by using a \verb|SolverControl| object, 
        and as stopping criterion we say: stop after a maximum of 1000 iterations 
        (which is far more than is needed for 1089 variables; see the results section to find out how many were really used), 
        and stop if the norm of the residual is below $\tau = 10^{-6} \| b \|$, where $b$ is the right hand side vector. In practice, 
        this latter criterion will be the one which stops the iteration.

        \item Then we need the solver itself. The template parameter to the \verb|SolverCG| class is the type of the vectors we are using.

        \item The last step is to actually solve the system of equations. The CG solver takes as arguments the components of the linear system $Ax=b$ 
        (in the order in which they appear in this equation), and a preconditioner as the fourth argument. We don't feel ready to delve into preconditioners yet, 
        so we tell it to use the identity operation as preconditioner. Later tutorial programs will spend a significant amount of time and space on constructing better preconditioners.
    \end{itemize}
    At the end of this process, the solution variable contains the nodal values of the solution function.

    \item \verb|Step3::output_results|: The last part of a typical finite element program is to output the results and maybe do some postprocessing 
    (for example compute the maximal stress values at the boundary, or the average flux across the outflow, etc). We have no such postprocessing here, 
    but we would like to write the solution to a file.
    
    \item \verb|Step3::run|: The last function of this class is the main function which calls all the other functions of the Step3 class. 
    The order in which this is done resembles the order in which most finite element programs work. Since the names are mostly self-explanatory, 
    there is not much to comment about.

    \item The \verb|main| function: This is the main function of the program. 
    Since the concept of a main function is mostly a remnant of the pre-object-oriented era of C programming, 
    it often does not do much more than creating an object of the top-level class and calling its principal function.

    Finally, the first line of the function is used to enable output of some diagnostics that deal.II can generate. 
    The \verb|deallog| variable (which stands for deal-log, not de-allog) represents a stream to which some parts of the library write output. 
    For example, iterative solvers will generate diagnostics (starting residual, number of solver steps, final residual) as can be seen when running this tutorial program.

    The output of \verb|deallog| can be written to the console, to a file, or both. 
    Both are disabled by default since over the years we have learned that a program should only generate output when a user explicitly asks for it. 
    But this can be changed, and to explain how this can be done, we need to explain how \verb|deallog| works: When individual parts of the library want to log output, 
    they open a "\verb|context|" or "\verb|section|" into which this output will be placed. At the end of the part that wants to write output, one exits this section again. 
    Since a function may call another one from within the scope where this output section is open, output may in fact be nested hierarchically into these sections. 
    The \verb|LogStream| class of which \verb|deallog| is a variable calls each of these sections a "\verb|prefix|" because all output is printed with this prefix at the left end of the line, 
    with prefixes separated by colons. There is always a default prefix called "DEAL" 
    (a hint at deal.II's history as the successor of a previous library called "DEAL" and from which the \verb|LogStream| class is one of the few pieces of code that were taken into deal.II).

    By default, \verb|deallog| only outputs lines with zero prefixes – i.e., all output is disabled because the default "DEAL" prefix is always there. 
    But one can set a different maximal number of prefixes for lines that should be output to something larger, 
    and indeed here we set it to two by calling \verb|LogStream::depth_console()|. This means that for all screen output, 
    a context that has pushed one additional prefix beyond the default "DEAL" is allowed to print its output to the screen ("console"), 
    whereas all further nested sections that would have three or more prefixes active would write to \verb|deallog|, 
    but \verb|deallog| does not forward this output to the screen. Thus, running this example, you will see the solver statistics prefixed with "\verb|DEAL:cg|", 
    which is two prefixes. This is sufficient for the context of the current program, 
    but you will see examples later on where solvers are nested more deeply and where you may get useful information by setting the depth even higher.
\end{itemize}
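
The separation between sparsity pattern and matrix values described under \verb|Step3::setup_system| above can be sketched without deal.II as follows (the names \verb|Pattern| and \verb|build_tridiagonal_pattern| are invented for this illustration; in deal.II, a temporary dynamic pattern and the \verb|SparsityPattern|/\verb|SparseMatrix| classes play these roles):

\begin{lstlisting}[language=C++]
#include <cassert>
#include <set>
#include <utility>
#include <vector>

// Miniature of the idea: tag possible nonzero entries of a tridiagonal
// n x n matrix in a cheap, growable structure, then compress them into
// fixed CRS arrays on which value arrays can be built.
struct Pattern
{
  std::vector<int> row_ptr; // row i owns col_idx[row_ptr[i]..row_ptr[i+1])
  std::vector<int> col_idx;
};

Pattern build_tridiagonal_pattern(int n)
{
  std::set<std::pair<int, int>> tagged; // step 1: temporary structure
  for (int i = 0; i < n; ++i)
    for (int j = i - 1; j <= i + 1; ++j)
      if (j >= 0 && j < n)
        tagged.insert({i, j});

  Pattern p; // step 2: compress into fixed CRS arrays
  p.row_ptr.assign(n + 1, 0);
  for (const auto &e : tagged)
    ++p.row_ptr[e.first + 1];
  for (int i = 0; i < n; ++i)
    p.row_ptr[i + 1] += p.row_ptr[i];
  for (const auto &e : tagged) // std::set iterates in row-major order
    p.col_idx.push_back(e.second);
  return p;
}

int main()
{
  const Pattern p = build_tridiagonal_pattern(4); // rows of 2,3,3,2 entries
  assert(p.col_idx.size() == 10u);
  assert(p.row_ptr[4] == 10);

  // Step 3: the values live in a separate array of the same length;
  // several matrices can share one pattern by each owning such an array.
  std::vector<double> values(p.col_idx.size(), 0.0);
  assert(values.size() == 10u);
  return 0;
}
\end{lstlisting}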


\subsection{Results}

The output of the program looks as follows:
\begin{lstlisting}[language=C++]
Number of active cells: 1024
Number of degrees of freedom: 1089
DEAL:cg::Starting value 0.121094
DEAL:cg::Convergence step 36 value 7.07761e-08
\end{lstlisting}

The first two lines are what we wrote to \verb|std::cout|. The last two lines were generated by the CG solver because we called \verb|deallog.depth_console(2)|.

Apart from the output shown above, the program generated the file \verb|solution.vtk|, 
which is in the VTK format that is widely used by many visualization programs today – including the two heavy-weights \verb|VisIt| and 
\verb|Paraview| that are the most commonly used programs for this purpose today.

Using VisIt, it is not very difficult to generate a picture of the solution like this (Figure~\ref{fig:image}):

\begin{figure}[htbp]
  \centering
  \includegraphics[width=0.7\textwidth]{solution.pdf}
  \caption{Output of step-3}
  \label{fig:image}
\end{figure}

It shows both the solution and the mesh, elevated above the $x$-$y$ plane based on the value of the solution at each point. 
Of course the solution here is not particularly exciting, but that is a result of both what the Laplace equation represents and 
the right hand side $f(x)=1$ we have chosen for this program: The Laplace equation describes 
(among many other uses) the vertical deformation of a membrane subject to an external (also vertical) force. 
In the current example, the membrane's borders are clamped to a square frame with no vertical variation; 
a constant force density will therefore intuitively lead to a membrane that simply bulges upward – like the one shown above.


\end{document}