\documentclass[twoside,a4paper]{article}
\usepackage{geometry}
\geometry{margin=1.5cm, vmargin={0pt,1cm}}
\setlength{\topmargin}{-1cm}
\setlength{\paperheight}{29.7cm}
\setlength{\textheight}{25.3cm}

% useful packages.
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{enumerate}
\usepackage{graphicx}
\usepackage{multicol}
\usepackage{fancyhdr}
\usepackage{layout}

% some common commands
\newcommand{\dif}{\mathrm{d}}
\newcommand{\avg}[1]{\left\langle #1 \right\rangle}
\newcommand{\difFrac}[2]{\frac{\dif #1}{\dif #2}}
\newcommand{\pdfFrac}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\OFL}{\mathrm{OFL}}
\newcommand{\UFL}{\mathrm{UFL}}
\newcommand{\fl}{\mathrm{fl}}
\newcommand{\op}{\odot}
\newcommand{\Eabs}{E_{\mathrm{abs}}}
\newcommand{\Erel}{E_{\mathrm{rel}}}

\begin{document}

\pagestyle{fancy}
\fancyhead{}
\lhead{Chenyue (22035026)}
\chead{math story one}
\rhead{2020/10/26}


\section*{I. Why do we learn multigrid? }

\subsection*{I-a Two-point boundary value problem} 

Many mathematical concepts grow out of physical problems, and multigrid is no exception.
Multigrid methods were originally applied to simple boundary value problems that arise in many physical applications.
As an example, consider the two-point boundary value problem that describes the steady-state temperature distribution
in a long uniform rod.

It is given by the second-order boundary value problem:
\begin{equation}
\begin{aligned}
-u^{\prime \prime}(x)+\sigma u(x) &=f(x), \qquad 0 \leq x \leq 1, \quad \sigma \geq 0,\\
u(0)=u(1) &=0.
\end{aligned}
\end{equation}

While this problem can be handled analytically, our present aim is to consider numerical methods. The domain of the problem
$\{x : 0 \leq x \leq 1\}$ is partitioned into $n$ subintervals by introducing the grid points $x_j = jh$, where $h = 1/n$ is the constant
width of the subintervals. In making this replacement, we also introduce $v_j$ as an approximation to the exact solution $u(x_j)$.
This approximate solution may now be represented by a vector $\mathbf{v} = (v_1,\ldots,v_{n-1})^T$, whose components satisfy the $n-1$
linear equations:

\begin{equation}
\begin{aligned}
\frac{-v_{j-1}+2 v_{j}-v_{j+1}}{h^{2}}+\sigma v_{j} &=f\left(x_{j}\right), \quad 1 \leq j \leq n-1, \\
v_{0}=v_{n} &=0
\end{aligned}
\end{equation}

Defining $\mathbf{f} = (f_1,\ldots,f_{n-1})^T$, the vector of right-side values, we may also represent this system of linear equations
in matrix form as $A\mathbf{v}=\mathbf{f}$. The matrix $A$ is $(n-1)\times (n-1)$, tridiagonal, symmetric, and positive definite. Analogously,
it is possible to formulate a two-dimensional version of this problem, which we will not repeat here.

We now turn to relaxation methods for our first model problem (2) with $\sigma = 0$. Multiplying that equation by $h^2$ for convenience, the
discrete problem becomes:

\begin{equation}
\begin{aligned}
-u_{j-1}+2 u_{j}-u_{j+1} &=h^{2} f_{j}, \quad 1 \leq j \leq n-1 \\
u_{0}=u_{n} &=0
\end{aligned}
\end{equation}
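As a quick numerical check of this discretization (a NumPy sketch, not part of the original development), we can assemble the tridiagonal system for the manufactured solution $u(x)=\sin(\pi x)$, for which $f(x)=\pi^2\sin(\pi x)$, and confirm the expected $O(h^2)$ accuracy:

```python
import numpy as np

n = 64
h = 1.0 / n
x = np.linspace(0, 1, n + 1)[1:-1]           # interior grid points x_1 .. x_{n-1}

# A = (1/h^2) tridiag(-1, 2, -1): tridiagonal, symmetric, positive definite.
A = (np.diag(2.0 * np.ones(n - 1))
     + np.diag(-np.ones(n - 2), 1)
     + np.diag(-np.ones(n - 2), -1)) / h**2

f = np.pi**2 * np.sin(np.pi * x)             # manufactured right-hand side
v = np.linalg.solve(A, f)

# Discretization error against the exact solution u(x) = sin(pi x): O(h^2).
err = np.max(np.abs(v - np.sin(np.pi * x)))
```

With $n = 64$ the maximum error is on the order of $10^{-4}$, consistent with second-order accuracy.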

\subsection*{I-b Basic Iterative Methods}
There are many methods for solving model problem (3), such as the Jacobi (or simultaneous displacement) method, the weighted Jacobi method, and the
Gauss--Seidel method. Why, then, don't we simply use one of these methods to solve the problem directly? We will explain this later in this section.

First of all, we define several quantities.

Suppose that the system $A\mathbf{u}=\mathbf{f}$ has a unique solution $\mathbf{u}$ and that $\mathbf{v}$ is a computed approximation to $\mathbf{u}$. There
are two important measures of $\mathbf{v}$ as an approximation to $\mathbf{u}$. One is the error (or algebraic error), given simply by:

\begin{equation}
  \mathbf{e}= \mathbf{u}- \mathbf{v}
\end{equation}

The error is difficult to compute when the exact solution is not known. So we introduce the residual, which is easy
to compute from an approximation:

\begin{equation}
  \mathbf{r}=\mathbf{f}-A\mathbf{v}
\end{equation}

Identifying the current approximation with $\mathbf{v}^{(0)}$ and the new approximation with $\mathbf{v}^{(1)}$, an iteration may
be formed by taking:

\begin{equation}
\begin{aligned}
\mathbf{v}^{(1)}=\mathbf{v}^{(0)}+B \mathbf{r}^{(0)} &=\mathbf{v}^{(0)}+B\left(\mathbf{f}-A \mathbf{v}^{(0)}\right) \\
&=(I-B A) \mathbf{v}^{(0)}+B \mathbf{f} \\
& \equiv R \mathbf{v}^{(0)}+B \mathbf{f}
\end{aligned}
\end{equation}

where $B$ is an approximation to $A^{-1}$ and $R=I-BA$ is the general iteration matrix. It can also be shown that $m$ sweeps of this iteration result in:

\begin{equation}
\mathbf{e}^{(m)}=R^{m} \mathbf{e}^{(0)}
\end{equation}

Recall that the spectral radius of a matrix is given by:

\begin{equation}
\rho(A)=\max |\lambda(A)|
\end{equation}

It is easy to prove that the iteration associated with the matrix $R$ converges for all initial guesses if and only if $\rho(R)<1$.
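This convergence criterion is easy to observe numerically. A NumPy sketch (illustrative, not part of the original text): for the Jacobi iteration on the model problem, $B = D^{-1}$ with $D = \operatorname{diag}(A)$, and $\rho(R) = \cos(\pi/n) < 1$, so the error contracts, if slowly:

```python
import numpy as np

n = 16
A = (np.diag(2.0 * np.ones(n - 1))
     + np.diag(-np.ones(n - 2), 1)
     + np.diag(-np.ones(n - 2), -1))          # h^2-scaled model matrix

# Jacobi: B = D^{-1}, so R = I - D^{-1} A; its spectral radius is cos(pi/n).
R = np.eye(n - 1) - np.diag(1.0 / np.diag(A)) @ A
rho = np.max(np.abs(np.linalg.eigvals(R)))

# Iterate on A u = 0 from a random guess, so the iterate IS the error e.
rng = np.random.default_rng(0)
e = rng.standard_normal(n - 1)
for _ in range(200):
    e = R @ e                                 # e^(m) = R^m e^(0)
```

After 200 sweeps the error norm has shrunk by roughly $\rho^{200}$, illustrating that $\rho(R)<1$ guarantees (slow) convergence.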




Then let's look at the weighted Jacobi method.

There is a simple but important modification that can be made to the Jacobi iteration. As before, we compute the new Jacobi iterates using:

\begin{equation}
v_{j}^{*}=\frac{1}{2}\left(v_{j-1}^{(0)}+v_{j+1}^{(0)}+h^{2} f_{j}\right), \quad 1 \leq j \leq n-1
\end{equation}

However, $v_j^{*}$ is now only an intermediate value. The new iterate is given by the weighted average:

\begin{equation}
v_{j}^{(1)}=(1-\omega) v_{j}^{(0)}+\omega v_{j}^{*}=v_{j}^{(0)}+\omega\left(v_{j}^{*}-v_{j}^{(0)}\right), \quad 1 \leq j \leq n-1
\end{equation}

where $\omega \in \mathbb{R}$ is a weighting factor that may be chosen. This generates an entire family of iterations called the weighted or damped Jacobi method.
Notice that $\omega=1$ yields the original Jacobi iteration.

In matrix form, the weighted Jacobi method is given by:

$$
\mathbf{v}^{(1)}=\left[(1-\omega) I+\omega R_{J}\right] \mathbf{v}^{(0)}+\omega D^{-1} \mathbf{f}
$$

Accordingly, we define the weighted Jacobi iteration matrix by:

$$
R_{\omega}=(1-\omega) I+\omega R_{J}
$$

In all that follows, we let $w_{k,j}$ be the $j$th component of the $k$th eigenvector $\mathbf{w}_k$. A direct calculation gives the eigenvectors and eigenvalues of $R_{\omega}$ as:

\begin{equation}
  w_{k, j}=\sin \left(\frac{j k \pi}{n}\right), \quad 1 \leq k \leq n-1, \quad 1 \leq j \leq n-1
\end{equation}
\begin{equation}
\lambda_{k}\left(R_{\omega}\right)=1-2 \omega \sin ^{2}\left(\frac{k \pi}{2 n}\right), \quad 1 \leq k \leq n-1
\end{equation}
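These formulas can be verified numerically. A NumPy sketch (illustrative): build $R_{\omega}$ for $\omega = 2/3$ and compare its computed spectrum, and its action on one Fourier mode, against the formulas above:

```python
import numpy as np

n = 32
omega = 2.0 / 3.0
A = (np.diag(2.0 * np.ones(n - 1))
     + np.diag(-np.ones(n - 2), 1)
     + np.diag(-np.ones(n - 2), -1))
R_J = np.eye(n - 1) - np.diag(1.0 / np.diag(A)) @ A
R_w = (1 - omega) * np.eye(n - 1) + omega * R_J

# Eigenvalue formula: lambda_k = 1 - 2 omega sin^2(k pi / 2n).
k = np.arange(1, n)
lam_formula = 1 - 2 * omega * np.sin(k * np.pi / (2 * n))**2
lam_numeric = np.sort(np.linalg.eigvals(R_w).real)
eig_gap = np.max(np.abs(np.sort(lam_formula) - lam_numeric))

# Eigenvector check: R_w applied to the k = 3 mode just scales it.
j = np.arange(1, n)
w3 = np.sin(3 * j * np.pi / n)
mode_gap = np.max(np.abs(R_w @ w3 - lam_formula[2] * w3))
```

Both gaps are at roundoff level, confirming the eigenpair formulas.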

The modes in the lower half of the spectrum, with wavenumbers in the range $1 \leq k < \frac{n}{2}$, are called low-frequency or smooth
modes. The modes in the upper half of the spectrum, with $\frac{n}{2} \leq k \leq n-1$, are called high-frequency or oscillatory modes. That is,
the $\mathbf{w}_k$ with $1 \leq k < \frac{n}{2}$ are the low-frequency or smooth modes, and the $\mathbf{w}_k$ with $\frac{n}{2} \leq k \leq n-1$ are the high-frequency
or oscillatory modes. Note that the eigenvectors of $R_{\omega}$ are the same as the eigenvectors of $A$. It is important to note that if $0< \omega \leq 1$,
then $|\lambda_{k} (R_{\omega} )| <1$ and the weighted Jacobi iteration converges. Since the matrix $A$ is $(n-1)\times (n-1)$, tridiagonal,
symmetric, and positive definite, its eigenvectors are linearly independent and form a basis. It is therefore possible to represent
$\mathbf{e}^{(0)}$ using the eigenvectors of $A$ in the form:

\begin{equation}
\mathbf{e}^{(0)}=\sum_{k=1}^{n-1} c_{k} \mathbf{w}_{k}
\end{equation}

\begin{equation}
\mathbf{e}^{(m)}=R_{\omega}^{m} \mathbf{e}^{(0)}
\end{equation}

\begin{equation}
\mathbf{e}^{(m)}=R_{\omega}^{m} \mathbf{e}^{(0)}=\sum_{k=1}^{n-1} c_{k} R_{\omega}^{m} \mathbf{w}_{k}=\sum_{k=1}^{n-1} c_{k} \lambda_{k}^{m}\left(R_{\omega}\right) \mathbf{w}_{k}
\end{equation}
where the coefficients $c_{k} \in \mathbb{R}$ give the ``amount'' of each mode in the error. This expansion for $\mathbf{e}^{(m)}$ shows that after $m$ iterations, the $k$th mode of
the initial error has been reduced by a factor of $\lambda_{k}^{m}\left(R_{\omega}\right)$. It should also be noted that the weighted Jacobi method does not mix
modes: when applied to a single mode, the iteration can change the amplitude of that mode, but it cannot convert that mode into other modes. We established that
the eigenvalues of the iteration matrix are given by:

\begin{equation}
\lambda_{k}\left(R_{\omega}\right)=1-2 \omega \sin ^{2}\left(\frac{k \pi}{2 n}\right), \quad 1 \leq k \leq n-1
\end{equation}

Obviously, we would like to find the value of $\omega$ that makes $|\lambda_{k}\left(R_{\omega}\right)|$ as small as possible for all $1 \leq k \leq n-1$.
Unfortunately, no matter how we choose $\omega$, we cannot drive the coefficients of all the modes toward zero quickly. For example,
notice that for all values of $\omega$ satisfying $0< \omega \leq 1$,

\begin{equation}
\lambda_{1}=1-2 \omega \sin ^{2}\left(\frac{\pi}{2 n}\right)=1-2 \omega \sin ^{2}\left(\frac{\pi h}{2}\right) \approx 1-\frac{\omega \pi^{2} h^{2}}{2}
\end{equation}

This fact implies that $\lambda_{1}$, the eigenvalue associated with the smoothest mode, will always be close to 1.
Therefore, no value of $\omega$ will reduce the smooth components of the error effectively. Furthermore, the smaller
the grid spacing $h$, the closer $\lambda_{1}$ is to 1. Any attempt to improve the accuracy of the solution (by decreasing the grid
spacing) will only worsen the convergence of the smooth components of the error. Most basic relaxation schemes share this ironic limitation.

Similar problems occur with the other iterative methods mentioned above: these schemes work very well for the first several iterations. Inevitably,
however, convergence slows and the entire scheme appears to stall. We now have a simple explanation for this phenomenon: the rapid decrease in error
during the early iterations is due to the efficient elimination of the oscillatory modes of the error; but once the oscillatory modes have been removed,
the iteration is much less effective in reducing the remaining smooth components.
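This smoothing behavior is easy to demonstrate. A NumPy sketch (illustrative): apply 50 sweeps of weighted Jacobi (componentwise, with zero right-hand side so the iterate is the error) to a pure smooth mode and a pure oscillatory mode, and compare the damping factors:

```python
import numpy as np

n = 64
omega = 2.0 / 3.0
j = np.arange(1, n)

def wjacobi_sweeps(e, m):
    # m sweeps of weighted Jacobi on A e = 0; the iterate IS the error.
    for _ in range(m):
        e_star = 0.5 * (np.r_[0.0, e[:-1]] + np.r_[e[1:], 0.0])
        e = (1 - omega) * e + omega * e_star
    return e

smooth = np.sin(1 * j * np.pi / n)            # k = 1: smooth mode
osc = np.sin(48 * j * np.pi / n)              # k = 48: oscillatory mode

# Amplitude remaining after 50 sweeps, relative to the initial amplitude.
rs = np.max(np.abs(wjacobi_sweeps(smooth, 50))) / np.max(np.abs(smooth))
ro = np.max(np.abs(wjacobi_sweeps(osc, 50))) / np.max(np.abs(osc))
```

The oscillatory mode is annihilated ($|\lambda_{48}|^{50}$ is negligible) while roughly 96\% of the smooth mode survives, exactly the stalling described above.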

This is why we do not use these methods to solve the problem directly. To overcome this limitation, we introduce multigrid. The most effective multigrid
techniques are built upon the simple relaxation schemes presented above; we now develop these few basic schemes into far more
powerful methods.



\section*{II. What is Multigrid? }

\subsection*{II-a Two ways to solve the smoothing property}

So far we have established that many standard iterative methods possess the smoothing property. This property makes these methods
very effective at eliminating the high-frequency or oscillatory components of the error, while leaving the low-frequency or smooth
components relatively unchanged. The immediate issue is whether these methods can be modified in some way to make them effective on all error components.

One way is to select an initial guess that makes the smooth components of the initial error as small as possible.
A well-known technique for obtaining an improved initial guess is to perform some preliminary iterations on a coarse grid. We now explain why this works.

First of all, we introduce $\Omega^{h}$ as the grid whose spacing is $h=\frac{1}{n}$. Consider the $k$th mode on the fine grid evaluated at the even-numbered grid points.
If $1 \leq k<\frac{n}{2},$ its components may be written as

\begin{equation}
w_{k, 2 j}^{h}=\sin \left(\frac{2 j k \pi}{n}\right)=\sin \left(\frac{j k \pi}{n / 2}\right)=w_{k, j}^{2 h}, \quad 1 \leq k<\frac{n}{2}
\end{equation}

Notice that superscripts have been used to indicate the grids on which the vectors are defined. From this identity, we see that the $k$th mode on $\Omega^{h}$ becomes the $k$th
mode on $\Omega^{2h}$. The important point is that smooth modes on a fine grid look less smooth on a coarse grid. This suggests that when relaxation begins to stall, signaling the
predominance of smooth error modes, it is advisable to move to a coarser grid; there, the smooth error modes appear more oscillatory and relaxation will be more effective.
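The aliasing identity above can be checked directly. A small NumPy sketch (illustrative): sampling a smooth fine-grid mode at the even-numbered points reproduces the mode with the same wavenumber on the coarse grid, where it is relatively more oscillatory:

```python
import numpy as np

n = 16
k = 3                                          # a smooth fine-grid mode, k < n/2

j_fine = np.arange(0, n + 1)
w_h = np.sin(j_fine * k * np.pi / n)           # mode on the fine grid Omega^h

j_coarse = np.arange(0, n // 2 + 1)
w_2h = np.sin(j_coarse * k * np.pi / (n // 2)) # the k-th mode on Omega^{2h}

# Sampling the fine-grid mode at even points gives the coarse-grid mode.
gap = np.max(np.abs(w_h[::2] - w_2h))
```

The gap is zero to roundoff; note that on $\Omega^{2h}$ the wavenumber $k$ now sits relatively higher in the (shorter) spectrum.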

The other way is to use the residual equation to relax on the error. First, an approximation is computed on the fine grid and the corresponding
residual is calculated. Then, the error is approximated by solving the residual equation on the coarse grid. Finally, a more accurate approximation is obtained by adding
this error estimate to the original fine-grid approximation. The justification of this method is similar to the above, so we will not repeat it.

\subsection*{II-b Nested iteration and the correction scheme}
In the last section, we described two promising ideas; we now put them into practice.

We begin by proposing a strategy that uses coarse grids to obtain better initial guesses.

- Relax on $A \mathbf{u}=\mathbf{f}$ on a very coarse grid to obtain an initial guess for the next finer grid.

- ......

- Relax on $A \mathbf{u}=\mathbf{f}$ on $\Omega^{4 h}$ to obtain an initial guess for $\Omega^{2 h}$.

- Relax on $A \mathbf{u}=\mathbf{f}$ on $\Omega^{2 h}$ to obtain an initial guess for $\Omega^{h}$.

- Relax on $A \mathbf{u}=\mathbf{f}$ on $\Omega^{h}$ to obtain a final approximation to the solution.

This idea of using coarser grids to generate improved initial guesses is the basis of a strategy called nested iteration.
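Nested iteration can be sketched in a few lines of NumPy (an illustration under simplifying assumptions: weighted Jacobi as the relaxation, a fixed sweep count per level, and the model problem with $u(x)=\sin(\pi x)$). The coarse-to-fine sequence produces a far better result than the same relaxation started cold on the fine grid:

```python
import numpy as np

def wjacobi(v, f, h, sweeps, omega=2/3):
    # Weighted Jacobi sweeps for the model problem -u'' = f.
    for _ in range(sweeps):
        v_star = 0.5 * (np.r_[0.0, v[:-1]] + np.r_[v[1:], 0.0] + h**2 * f)
        v = (1 - omega) * v + omega * v_star
    return v

def interpolate(v):
    # Linear interpolation of interior values to the next finer grid.
    vp = np.r_[0.0, v, 0.0]
    out = np.zeros(2 * vp.size - 1)
    out[0::2] = vp
    out[1::2] = 0.5 * (vp[:-1] + vp[1:])
    return out[1:-1]

f_of = lambda x: np.pi**2 * np.sin(np.pi * x)  # exact solution u = sin(pi x)

# Nested iteration: relax on n = 8, 16, 32, interpolating each result up
# as the initial guess for the next finer grid, finishing on n = 64.
n, v = 8, np.zeros(7)
while True:
    x = np.arange(1, n) / n
    v = wjacobi(v, f_of(x), 1.0 / n, 200)
    if n == 64:
        break
    v, n = interpolate(v), 2 * n
err_nested = np.max(np.abs(v - np.sin(np.pi * np.arange(1, 64) / 64)))

# Cold start: the same 200 fine-grid sweeps from a zero initial guess.
x64 = np.arange(1, 64) / 64
v_cold = wjacobi(np.zeros(63), f_of(x64), 1.0 / 64, 200)
err_cold = np.max(np.abs(v_cold - np.sin(np.pi * x64)))
```

The cold start stalls with most of the smooth error intact, while nested iteration reaches roughly the discretization-error level.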

Then we introduce the other strategy: having relaxed on the fine grid until convergence deteriorates, we relax on the residual equation on
a coarser grid to obtain an approximation to the error itself. We then return to the fine grid to correct the approximation first obtained there. It can be represented by the following procedure:

- Relax on $A \mathbf{u}=\mathbf{f}$ on $\Omega^{h}$ to obtain an approximation $\mathbf{v}^{h}$.

- Compute the residual $\mathbf{r}=\mathbf{f}-A \mathbf{v}^{h}$.

- Relax on the residual equation $A \mathbf{e}=\mathbf{r}$ on $\Omega^{2 h}$ to obtain an approximation to the error $\mathbf{e}^{2 h}$.

- Correct the approximation obtained on $\Omega^{h}$ with the error estimate obtained on $\Omega^{2 h}$: $\mathbf{v}^{h} \leftarrow \mathbf{v}^{h}+\mathbf{e}^{2 h}$.

This procedure is the basis of what is called the correction scheme.
The remaining question is: how do we transfer a vector between the coarse grid and the fine grid, i.e., how do we define $I_{2 h}^{h}$ and $I_{h}^{2h}$? We now introduce two common operators
(the multidimensional case is similar, and only the one-dimensional case is given here):

The linear interpolation operator will be denoted $I_{2 h}^{h} $. It takes coarse-grid vectors and produces fine-grid vectors according to the rule $I_{2 h}^{h} \mathbf{v}^{2 h}=\mathbf{v}^{h},$ where
\begin{equation}
\begin{aligned}
v_{2 j}^{h} &=v_{j}^{2 h} \\
v_{2 j+1}^{h} &=\frac{1}{2}\left(v_{j}^{2 h}+v_{j+1}^{2 h}\right), \quad 0 \leq j \leq \frac{n}{2}-1
\end{aligned}
\end{equation}

The restriction operator, called full weighting, is defined by $I_{h}^{2 h} \mathbf{v}^{h}=\mathbf{v}^{2 h}$, where
\begin{equation}
v_{j}^{2 h}=\frac{1}{4}\left(v_{2 j-1}^{h}+2 v_{2 j}^{h}+v_{2 j+1}^{h}\right), \quad 1 \leq j \leq \frac{n}{2}-1
\end{equation}
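Both transfer operators are a few lines of NumPy (an illustrative sketch; interior values only, with zero boundary values assumed). Interpolating a piecewise-linear "tent" is exact, while full weighting smooths its kink:

```python
import numpy as np

def interpolate(v2h):
    # Linear interpolation I_{2h}^h: coarse interior values -> fine interior values.
    vp = np.r_[0.0, v2h, 0.0]                  # append zero boundary values
    vh = np.zeros(2 * vp.size - 1)
    vh[0::2] = vp                              # v^h_{2j}   = v^{2h}_j
    vh[1::2] = 0.5 * (vp[:-1] + vp[1:])        # v^h_{2j+1} = average of neighbours
    return vh[1:-1]

def restrict(vh):
    # Full weighting I_h^{2h}: fine interior values -> coarse interior values.
    vp = np.r_[0.0, vh, 0.0]
    idx = np.arange(2, vp.size - 2, 2)         # even fine points 2j, j = 1..n/2-1
    return 0.25 * (vp[idx - 1] + 2 * vp[idx] + vp[idx + 1])

# Example on n = 8: a tent function sampled on the coarse grid.
tent_2h = np.array([0.5, 1.0, 0.5])
tent_h = interpolate(tent_2h)                  # exact: the tent is piecewise linear
back = restrict(tent_h)                        # full weighting rounds off the peak
```

Here `tent_h` is $(0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25)$ and `back` is $(0.5, 0.875, 0.5)$.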

\subsection*{II-c V-cycle Scheme and Full Multigrid V-Cycle}

Now let me introduce our first true multigrid method.

V-Cycle Scheme

\begin{equation}
\mathbf{v}^{h} \leftarrow V^{h}\left(\mathbf{v}^{h}, \mathbf{f}^{h}\right)
\end{equation}

- Relax on $A^{h} \mathbf{u}^{h}=\mathbf{f}^{h}$ $\nu_{1}$ times with initial guess $\mathbf{v}^{h}$.

- Compute $\mathbf{f}^{2 h}=I_{h}^{2 h} \mathbf{r}^{h}$.

- Relax on $A^{2 h} \mathbf{u}^{2 h}=\mathbf{f}^{2 h}$ $\nu_{1}$ times with initial guess $\mathbf{v}^{2 h}=\mathbf{0}$.

- Compute $\mathbf{f}^{4 h}=I_{2 h}^{4 h} \mathbf{r}^{2 h}$.

- Relax on $A^{4 h} \mathbf{u}^{4 h}=\mathbf{f}^{4 h}$ $\nu_{1}$ times with initial guess $\mathbf{v}^{4 h}=\mathbf{0}$.

- Compute $\mathbf{f}^{8 h}=I_{4 h}^{8 h} \mathbf{r}^{4 h}$.

......

- Solve $A^{L h} \mathbf{u}^{L h}=\mathbf{f}^{L h}$.

......

- Correct $\mathbf{v}^{4 h} \leftarrow \mathbf{v}^{4 h}+I_{8 h}^{4 h} \mathbf{v}^{8 h}$.

- Relax on $A^{4 h} \mathbf{u}^{4 h}=\mathbf{f}^{4 h}$ $\nu_{2}$ times with initial guess $\mathbf{v}^{4 h}$.

- Correct $\mathbf{v}^{2 h} \leftarrow \mathbf{v}^{2 h}+I_{4 h}^{2 h} \mathbf{v}^{4 h}$.

- Relax on $A^{2 h} \mathbf{u}^{2 h}=\mathbf{f}^{2 h}$ $\nu_{2}$ times with initial guess $\mathbf{v}^{2 h}$.

- Correct $\mathbf{v}^{h} \leftarrow \mathbf{v}^{h}+I_{2 h}^{h} \mathbf{v}^{2 h}$.

- Relax on $A^{h} \mathbf{u}^{h}=\mathbf{f}^{h}$ $\nu_{2}$ times with initial guess $\mathbf{v}^{h}$.
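The V-cycle is naturally recursive. A NumPy sketch (an illustration under simplifying assumptions: weighted Jacobi relaxation with $\nu_1=\nu_2=2$, full weighting, linear interpolation, exact solve on the coarsest grid with one interior point):

```python
import numpy as np

def relax(v, f, h, sweeps, omega=2/3):
    # Weighted Jacobi sweeps on A^h u = f.
    for _ in range(sweeps):
        v_star = 0.5 * (np.r_[0.0, v[:-1]] + np.r_[v[1:], 0.0] + h**2 * f)
        v = (1 - omega) * v + omega * v_star
    return v

def residual(v, f, h):
    # r = f - A v for the tridiagonal model operator.
    Av = (-np.r_[0.0, v[:-1]] + 2 * v - np.r_[v[1:], 0.0]) / h**2
    return f - Av

def restrict(r):
    # Full weighting I_h^{2h}.
    rp = np.r_[0.0, r, 0.0]
    idx = np.arange(2, rp.size - 2, 2)
    return 0.25 * (rp[idx - 1] + 2 * rp[idx] + rp[idx + 1])

def interpolate(e):
    # Linear interpolation I_{2h}^h.
    ep = np.r_[0.0, e, 0.0]
    out = np.zeros(2 * ep.size - 1)
    out[0::2] = ep
    out[1::2] = 0.5 * (ep[:-1] + ep[1:])
    return out[1:-1]

def v_cycle(v, f, h, nu1=2, nu2=2):
    if v.size == 1:                            # coarsest grid: solve 2v/h^2 = f exactly
        return np.array([0.5 * h**2 * f[0]])
    v = relax(v, f, h, nu1)                    # pre-smoothing
    e2h = v_cycle(np.zeros(v.size // 2),       # correct with the coarse-grid error
                  restrict(residual(v, f, h)), 2 * h, nu1, nu2)
    v = v + interpolate(e2h)
    return relax(v, f, h, nu2)                 # post-smoothing

n = 64
h = 1.0 / n
x = np.arange(1, n) * h
f = np.pi**2 * np.sin(np.pi * x)               # so u(x) = sin(pi x)
v = np.zeros(n - 1)
for _ in range(10):
    v = v_cycle(v, f, h)
res_norm = np.max(np.abs(residual(v, f, h)))
```

A handful of V-cycles drives the residual to near machine level, while the solution error settles at the $O(h^2)$ discretization level.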

The algorithm that joins nested iteration with the V-cycle is called the full multigrid V-cycle (FMG) . Given first in explicit terms, it appears as follows:

Full Multigrid V-Cycle

\begin{equation}
\mathbf{v}^{h} \leftarrow F M G^{h}\left(\mathbf{f}^{h}\right)
\end{equation}

- Initialize $\mathbf{f}^{2 h} \leftarrow I_{h}^{2 h} \mathbf{f}^{h}, \quad \mathbf{f}^{4 h} \leftarrow I_{2 h}^{4 h} \mathbf{f}^{2 h}, \ldots$

- Solve or relax on the coarsest grid.

......

- $\mathbf{v}^{4 h} \leftarrow I_{8 h}^{4 h} \mathbf{v}^{8 h}$

- $\mathbf{v}^{4 h} \leftarrow V^{4 h}\left(\mathbf{v}^{4 h}, \mathbf{f}^{4 h}\right)$ $\nu_{0}$ times.

- $\mathbf{v}^{2 h} \leftarrow I_{4 h}^{2 h} \mathbf{v}^{4 h}$

- $\mathbf{v}^{2 h} \leftarrow V^{2 h}\left(\mathbf{v}^{2 h}, \mathbf{f}^{2 h}\right)$ $\nu_{0}$ times.

- $\mathbf{v}^{h} \leftarrow I_{2 h}^{h} \mathbf{v}^{2 h}$

- $\mathbf{v}^{h} \leftarrow V^{h}\left(\mathbf{v}^{h}, \mathbf{f}^{h}\right)$ $\nu_{0}$ times.

\section*{III. How to prove the effectiveness of multigrid?}

\subsection*{III-a The result of the full weighting operator and the interpolation operator acting on modes. }

Now let us consider the effectiveness of multigrid. It is easy to see that the V-cycle is just a nested application of the correction scheme, and that the FMG method is just
repeated applications of the V-cycle on various grids. Therefore, an understanding of the correction scheme is essential for a complete explanation of the basic
multigrid methods.

If we want to know the effect of the correction scheme on the modes, we should begin with a detailed look at the intergrid transfer operators. Recall that the modes of $A^{h}$
for the one-dimensional model problem are given by:

\begin{equation}
w_{k, j}^{h}=\sin \left(\frac{j k \pi}{n}\right), \quad 1 \leq k \leq n-1, \quad 0 \leq j \leq n
\end{equation}

The full weighting operator may be applied directly to these vectors. The result of $I_{h}^{2 h} $  acting on the modes is:
  
\begin{equation}
I_{h}^{2 h} \mathbf{w}_{k}^{h}=\cos ^{2}\left(\frac{k \pi}{2 n}\right) \mathbf{w}_{k}^{2 h}, \quad 1 \leq k \leq \frac{n}{2}
\end{equation}

\begin{equation}
I_{h}^{2 h} \mathbf{w}_{k^{\prime}}^{h}=-\sin ^{2}\left(\frac{k \pi}{2 n}\right) \mathbf{w}_{k}^{2 h}, \quad 1 \leq k<\frac{n}{2}
\end{equation}

where $k^{\prime}=n-k$. This says that $I_{h}^{2 h}$ acting on the $k$th (smooth) mode of $A^{h}$ produces a constant times the $k$th mode of $A^{2 h}$ when $1 \leq k \leq \frac{n}{2}$,
and that $I_{h}^{2 h}$ acting on the $(n-k)$th mode of $A^{h}$ produces a constant multiple of the $k$th mode of $A^{2 h}$. The oscillatory modes on $\Omega^{h}$ cannot be represented
on $\Omega^{2 h}$; as a result, the full weighting operator transforms these modes into relatively smooth modes on $\Omega^{2 h}$.

Similarly, letting

\begin{equation}
w_{k, j}^{2 h}=\sin \left(\frac{j k \pi}{n / 2}\right), \quad 1 \leq k<\frac{n}{2}, \quad 0 \leq j \leq \frac{n}{2}
\end{equation}

be the $\Omega^{2 h}$ modes, a direct calculation shows that

\begin{equation}
I_{2 h}^{h} \mathbf{w}_{k}^{2 h}=c_{k} \mathbf{w}_{k}^{h}-s_{k} \mathbf{w}_{k^{\prime}}^{h}, \quad 1 \leq k<\frac{n}{2}, \quad k^{\prime}=n-k
\end{equation}

where $c_{k}=\cos ^{2}\left(\frac{k \pi}{2 n}\right)$ and $s_{k}=\sin ^{2}\left(\frac{k \pi}{2 n}\right) .$ We see that $I_{2 h}^{h}$ acting on the $k$ th mode of $\Omega^{2 h}$ produces
not only the $k$ th mode of $\Omega^{h}$ but also the complementary mode $\mathbf{w}_{k^{\prime}}^{h} .$ This fact exposes the interesting property that interpolation of smooth modes on
$\Omega^{2 h}$ excites (to some degree) oscillatory modes on $\Omega^{h}$.
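All three mode identities in this subsection can be verified numerically. A NumPy sketch (illustrative; the operators are applied componentwise to the interior values, with zero boundary values assumed):

```python
import numpy as np

n = 16
j = np.arange(1, n)                            # fine-grid interior indices
jc = np.arange(1, n // 2)                      # coarse-grid interior indices

def restrict(v):
    # Full weighting I_h^{2h}.
    vp = np.r_[0.0, v, 0.0]
    idx = np.arange(2, vp.size - 2, 2)
    return 0.25 * (vp[idx - 1] + 2 * vp[idx] + vp[idx + 1])

def interpolate(v):
    # Linear interpolation I_{2h}^h.
    vp = np.r_[0.0, v, 0.0]
    out = np.zeros(2 * vp.size - 1)
    out[0::2] = vp
    out[1::2] = 0.5 * (vp[:-1] + vp[1:])
    return out[1:-1]

k = 3
kp = n - k                                     # complementary wavenumber k' = n - k
ck = np.cos(k * np.pi / (2 * n))**2
sk = np.sin(k * np.pi / (2 * n))**2
w_k_h = np.sin(j * k * np.pi / n)
w_kp_h = np.sin(j * kp * np.pi / n)
w_k_2h = np.sin(jc * k * np.pi / (n // 2))

gap_smooth = np.max(np.abs(restrict(w_k_h) - ck * w_k_2h))   # smooth-mode result
gap_osc = np.max(np.abs(restrict(w_kp_h) + sk * w_k_2h))     # complementary-mode result
gap_interp = np.max(np.abs(interpolate(w_k_2h)               # interpolation excites w_{k'}
                           - (ck * w_k_h - sk * w_kp_h)))
```

All three gaps vanish to roundoff, confirming the $c_k$ and $s_k$ factors above.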

\subsection*{III-b The validity proof of the correction scheme. }

We can now turn to the correction scheme. The steps of this scheme, with an exact solution on the coarse grid, are given by the following procedure:

- Relax $\nu$ times on $\Omega^{h}$ with scheme $R: \mathbf{v}^{h} \leftarrow R^{\nu} \mathbf{v}^{h}+C(\mathbf{f})$.

- Full weight $\mathbf{r}^{h}$ to $\Omega^{2 h}: \mathbf{f}^{2 h} \leftarrow I_{h}^{2 h}\left(\mathbf{f}^{h}-A^{h} \mathbf{v}^{h}\right)$.

- Solve the residual equation exactly: $\mathbf{v}^{2 h}=\left(A^{2 h}\right)^{-1} \mathbf{f}^{2 h}$

- Correct the approximation on $\Omega^{h}: \mathbf{v}^{h} \leftarrow \mathbf{v}^{h}+I_{2 h}^{h} \mathbf{v}^{2 h}$.

If we now take this process one step at a time, it may be represented in terms of a single replacement operation:

\begin{equation}
\mathbf{v}^{h} \leftarrow R^{\nu} \mathbf{v}^{h}+C(\mathbf{f})+I_{2 h}^{h}\left(A^{2 h}\right)^{-1} I_{h}^{2 h}\left(\mathbf{f}^{h}-A^{h}\left(R^{\nu} \mathbf{v}^{h}+C(\mathbf{f})\right)\right)
\end{equation}

The exact solution $\mathbf{u}^{h}$ is unchanged by the correction scheme. Therefore,

\begin{equation}
\mathbf{u}^{h}=R^{\nu} \mathbf{u}^{h}+C(\mathbf{f})+I_{2 h}^{h}\left(A^{2 h}\right)^{-1} I_{h}^{2 h}\left(\mathbf{f}^{h}-A^{h}\left(R^{\nu} \mathbf{u}^{h}+C(\mathbf{f})\right)\right)
\end{equation}

By subtracting these last two expressions, we can see how the correction operator, which we now denote $T G,$ acts upon the error, $\mathbf{e}^{h}=\mathbf{u}^{h}-\mathbf{v}^{h} .$ We find that

\begin{equation}
\mathbf{e}^{h} \leftarrow\left[I-I_{2 h}^{h}\left(A^{2 h}\right)^{-1} I_{h}^{2 h} A^{h}\right] R^{\nu} \mathbf{e}^{h} \equiv T G \mathbf{e}^{h}
\end{equation}

Now let's see how the modes change after the TG operation:

\begin{equation}
\begin{aligned}
T G \mathbf{w}_{k} &=\lambda_{k}^{\nu} s_{k} \mathbf{w}_{k}+\lambda_{k}^{\nu} s_{k} \mathbf{w}_{k^{\prime}} \\
T G \mathbf{w}_{k^{\prime}} &=\lambda_{k^{\prime}}^{\nu} c_{k} \mathbf{w}_{k}+\lambda_{k^{\prime}}^{\nu} c_{k} \mathbf{w}_{k^{\prime}}, \quad 1 \leq k \leq \frac{n}{2}, k^{\prime}=n-k
\end{aligned}
\end{equation}

where $\lambda_{k}$ is the eigenvalue of $R$ associated with the $k$th mode $\mathbf{w}_{k}$.
We know that the smoothing property of relaxation has the strongest effect on the oscillatory modes; this is reflected in the factor $\lambda_{k^{\prime}}^{\nu}$, which is small.
At the same time, the correction scheme alone (without relaxation, i.e.\ $\nu=0$) eliminates the smooth modes; this is reflected in the $s_{k}$ terms. This analysis explains
how the two-grid correction process eliminates both the smooth and oscillatory components of the error.
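These two-grid relations can themselves be checked numerically. A NumPy sketch (illustrative; full weighting and linear interpolation assembled as matrices, weighted Jacobi with $\omega = 2/3$ playing the role of $R$, and $\nu = 2$ pre-smoothing sweeps):

```python
import numpy as np

def model_matrix(m, h):
    # (1/h^2) tridiag(-1, 2, -1) on m interior points.
    return (np.diag(2.0 * np.ones(m)) + np.diag(-np.ones(m - 1), 1)
            + np.diag(-np.ones(m - 1), -1)) / h**2

n = 16
h = 1.0 / n
Ah = model_matrix(n - 1, h)
A2h = model_matrix(n // 2 - 1, 2 * h)

# Full weighting and linear interpolation as explicit matrices.
Rst = np.zeros((n // 2 - 1, n - 1))
for j in range(1, n // 2):
    Rst[j - 1, 2 * j - 2:2 * j + 1] = [0.25, 0.5, 0.25]
P = 2.0 * Rst.T                                # I_{2h}^h = 2 (I_h^{2h})^T

omega, nu = 2.0 / 3.0, 2
Rw = np.eye(n - 1) - omega * np.diag(1.0 / np.diag(Ah)) @ Ah
TG = (np.eye(n - 1)
      - P @ np.linalg.inv(A2h) @ Rst @ Ah) @ np.linalg.matrix_power(Rw, nu)

k = 3
kp = n - k
i = np.arange(1, n)
wk = np.sin(i * k * np.pi / n)
wkp = np.sin(i * kp * np.pi / n)
sk = np.sin(k * np.pi / (2 * n))**2
ck = np.cos(k * np.pi / (2 * n))**2
lam_k = 1 - 2 * omega * sk
lam_kp = 1 - 2 * omega * ck

# TG acting on a smooth mode and its complementary oscillatory mode.
gap_smooth = np.max(np.abs(TG @ wk - lam_k**nu * sk * (wk + wkp)))
gap_osc = np.max(np.abs(TG @ wkp - lam_kp**nu * ck * (wk + wkp)))
```

Both gaps vanish to roundoff: the smooth mode is damped by the small factor $s_k$ (from coarse-grid correction), the oscillatory mode by the small factor $\lambda_{k'}^{\nu}$ (from relaxation).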

To sum up, we have demonstrated the effectiveness of multigrid.


\end{document}

