\documentclass{article}
\usepackage{amsmath,amssymb,amsrefs}
\usepackage{amscd}
\usepackage{latexsym}
\usepackage{graphics}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{placeins}
\begin{document}

\section{Oct. 22, 2009}

This is the code that calculates only the diagonal elements of the
Green's function, not the nearest off-diagonal blocks.

I tested the code using the potential \verb|V = ones(N,N) + rand(N,N)|.
The domain is $[0,1]\times [0,1]$, discretized into an $N\times N$ grid.
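For reference, the brute-force computation of these diagonal elements can be sketched as follows (a minimal Python/NumPy sketch, assuming a standard five-point finite-difference discretization of $-\Delta + V$ with Dirichlet boundary conditions; the actual code and discretization may differ):

```python
import numpy as np

def hamiltonian_2d(V, h):
    """Discrete Hamiltonian H = -Laplacian + diag(V) on an N x N grid:
    five-point stencil, Dirichlet boundary, grid spacing h."""
    N = V.shape[0]
    I = np.eye(N)
    # 1D second-difference matrix
    T = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
    # 2D Laplacian as a Kronecker sum, plus the potential on the diagonal
    return np.kron(I, T) + np.kron(T, I) + np.diag(V.ravel())

N = 8                                   # small grid so the dense inverse is cheap
h = 1.0 / (N + 1)                       # spacing for the [0,1] x [0,1] domain
V = np.ones((N, N)) + np.random.rand(N, N)
H = hamiltonian_2d(V, h)
G = np.linalg.inv(H)                    # brute-force Green's function
g_diag = np.diag(G)                     # diagonal elements to compare against
```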

\begin{table}[h]
  \centering
  \begin{tabular}{c|c}
    \hline 
    Level & relative $L^1$ error  \\
          & of diagonal elements  \\
    \hline
    1 & 1.1e-5 \\
    2 & 4.4e-5 \\
    3 & 2.1e-4 \\
    \hline
  \end{tabular}
  \caption{N=64, SVDCut = 1e-6 uniformly for all levels. This SVD cut
  follows an absolute criterion.}
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c}
    \hline 
    Level & relative $L^1$ error  \\
          & of diagonal elements  \\
    \hline
    1 & 1.1e-5 \\
    2 & 6.0e-5 \\
    3 & 2.6e-4 \\
    4 & 4.8e-4 \\
    \hline
  \end{tabular}
  \caption{N=128, SVDCut = 1e-6 uniformly for all levels. This SVD cut
  follows an absolute criterion.}
\end{table}


\section{Nov. 27, 2009}

I have written the uniformized version of the H-matrix construction, and
it is relatively well documented now. However, I still cannot understand
why the $L^1$ error grows as the number of levels increases while the
$L^2$ error does not. Here are some experiments run on fine210c. The
potential is \verb|V = ones(N,N) + rand(N,N)|, and the domain is
$[0,1]\times [0,1]$.
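For concreteness, the two error metrics as I compute them (a sketch; I assume each relative error is normalized by the corresponding norm of the exact diagonal vector):

```python
import numpy as np

def diag_errors(g_approx, g_exact):
    """Relative L^1 and L^2 errors of the vector of diagonal elements."""
    diff = g_approx - g_exact
    rel_l1 = np.sum(np.abs(diff)) / np.sum(np.abs(g_exact))
    rel_l2 = np.linalg.norm(diff) / np.linalg.norm(g_exact)
    return rel_l1, rel_l2

# toy check: a smooth exact diagonal perturbed by small sign-alternating noise
g_exact = np.linspace(1.0, 2.0, 1000)
g_approx = g_exact + 1e-6 * (-1.0) ** np.arange(1000)
rel_l1, rel_l2 = diag_errors(g_approx, g_exact)
```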

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c}
    \hline 
    Level & relative $L^1$ error & $L^2$ error & Time & MatVec number \\
          & of diagonal elements &             &      & \\
    \hline
    2 & 1.5e-6 & 5.6e-8 & 5.7s & 2920 \\
    3 & 3.9e-6 & 6.4e-8 & 15.1s & 3440 \\
    4 & 5.3e-6 & 5.5e-8 & 48.9s & 4520\\ 
    \hline
  \end{tabular}
  \caption{N=64, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c}
    \hline 
    Level & relative $L^1$ error & $L^2$ error & Time & MatVec number \\
          & of diagonal elements &             &      & \\
    \hline
    2 & 1.2e-6 & 4.3e-8 & 68.2s & 5960 \\
    3 & 2.9e-6 & 5.5e-8 & 41.3s & 4240 \\
    4 & 6.4e-6 & 4.7e-8 & 83.1s & 4720\\
    \hline
  \end{tabular}
  \caption{N=128, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c}
    \hline 
    Level & relative $L^1$ error & $L^2$ error & Time & MatVec number \\
          & of diagonal elements &             &      & \\
    \hline
    4 & 5.4e-6 & 4.8e-8 & 229s & 5520\\
    5 & 1.0e-5 & 3.8e-8 & 405s & 6000\\
    \hline
  \end{tabular}
  \caption{N=256, SVDCut = 1e-6 uniformly for all levels}
\end{table}


\section{Nov. 30, 2009}

The $L^2$ norm error estimation might be problematic: an $L^2$ norm
estimated with random vectors may mostly reflect error cancellation
within the matrix. To get a better idea of the true error, the power
method should be used instead. So I am repeating the tables from
Nov.\ 27, adding a column with the $L^2$ norm computed by the power
method.
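A sketch of the power-method estimator (my own naming and tolerances; it iterates on $E^{T}E$ so it also applies when the error operator is not symmetric, and it needs only matrix--vector products):

```python
import numpy as np

def spectral_norm(matvec, rmatvec, n, tol=1e-4, maxit=1000):
    """Estimate the L^2 (spectral) norm of an operator E given only
    matvec (v -> E v) and rmatvec (v -> E^T v), via power iteration
    on E^T E."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    sigma = 0.0
    for it in range(1, maxit + 1):
        y = rmatvec(matvec(x))          # y = E^T E x
        sigma_new = np.sqrt(x @ y)      # Rayleigh quotient gives sigma^2
        if abs(sigma_new - sigma) <= tol * max(sigma_new, 1e-300):
            return sigma_new, it
        sigma = sigma_new
        x = y / np.linalg.norm(y)
    return sigma, maxit

# sanity check on a diagonal matrix with known spectral norm 3
E = np.diag([3.0, 1.0, 0.5])
sigma, iters = spectral_norm(lambda v: E @ v, lambda v: E.T @ v, 3)
```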


\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c}
    \hline 
    Level & relative $L^1$ error & $L^2$ error & $L^2$ error & iteration number\\
          & of diagonal elements & random estimate ($100$ samples)  &  power method  & of power method\\
    \hline
    2 & 1.5e-6 & 5.6e-8 & 2.5e-7 & 180\\
    3 & 3.9e-6 & 6.4e-8 & 2.4e-7 & 146\\
    4 & 5.3e-6 & 5.5e-8 & 2.8e-7 & 42\\ 
    \hline
  \end{tabular}
  \caption{N=64, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c}
    \hline 
    Level & relative $L^1$ error & $L^2$ error & $L^2$ error & iteration number\\
          & of diagonal elements & random estimate ($100$ samples)  &  power method  & of power method\\
    \hline
    3 & 2.9e-6 & 5.5e-8 & 2.5e-7 & 83\\
    4 & 6.4e-6 & 4.7e-8 & 2.5e-7 & 54\\
    \hline
  \end{tabular}
  \caption{N=128, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c}
    \hline 
    Level & relative $L^1$ error & $L^2$ error & $L^2$ error & iteration number\\
          & of diagonal elements & random estimate ($100$ samples)  &  power method  & of power method\\
    \hline
    3 & 2.6e-6 & 5.3e-8 & 2.4e-7 & 160\\
    4 & 5.4e-6 & 4.8e-8 & 2.1e-7 & 282\\
    \hline
  \end{tabular}
  \caption{N=256, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\section{Dec. 01, 2009}

Today Lexing argued that the maximum entry of the error matrix should
increase exponentially as the level $L$ grows. So I ran the experiments
below, comparing against the Green's function computed in a brute-force
way:
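The metric, as I compute it (a sketch; I assume the relative maximum is normalized by the largest entry of the exact Green's function, which may not be exactly what the code does):

```python
import numpy as np

def max_errors(G_approx, G_exact):
    """Entrywise maximum of the error matrix, absolute and relative."""
    E = np.abs(G_approx - G_exact)
    abs_max = E.max()
    rel_max = abs_max / np.abs(G_exact).max()
    return abs_max, rel_max

# toy check on a 2 x 2 example with known errors
G_exact = np.array([[2.0, 0.5], [0.5, 2.0]])
G_approx = G_exact + np.array([[1e-8, 0.0], [0.0, -2e-8]])
abs_max, rel_max = max_errors(G_approx, G_exact)
```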


\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c}
    \hline 
    Level & absolute Max error & relative Max error \\
    \hline
    1 & 2.5e-8 & 3.7e-5 \\
    2 & 2.7e-8 & 3.7e-5 \\
    3 & 2.0e-8 & 3.0e-5 \\
    \hline
  \end{tabular}
  \caption{N=32, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c}
    \hline 
    Level & absolute Max error & relative Max error \\
    \hline
    1 & 1.3e-8 & 7.7e-5 \\
    2 & 1.3e-8 & 7.1e-5 \\
    3 & 1.3e-8 & 6.8e-5 \\
    4 & 1.0e-8 & 4.7e-5 \\
    \hline
  \end{tabular}
  \caption{N=64, SVDCut = 1e-6 uniformly for all levels}
\end{table}

Actually this result looks mysterious, since we know for certain that
the $L^1$ norm increases as $L$ grows. Based on the results measured on
Nov.\ 30, we compute the absolute and relative $L^{\infty}$ errors for
the diagonal vector.

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c}
    \hline 
    Level & relative $L^1$ error & absolute $L^{\infty}$ error &
    relative $L^{\infty}$ error \\
          & of diagonal elements & of diagonal elements  &  of diagonal
	  elements \\
    \hline
    2 & 1.9e-6 & 1.6e-8 & 1.3e-5 \\
    3 & 2.7e-6 & 2.7e-8 & 2.2e-5 \\
    4 & 4.9e-6 & 5.3e-8 & 4.3e-5 \\ 
    \hline
  \end{tabular}
  \caption{N=32, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c}
    \hline 
    Level & relative $L^1$ error & absolute $L^{\infty}$ error &
    relative $L^{\infty}$ error \\
          & of diagonal elements & of diagonal elements  &  of diagonal
	  elements \\
    \hline
    3 & 3.4e-6 & 1.4e-8 & 4.2e-5 \\
    4 & 5.4e-6 & 2.6e-8 & 8.0e-5 \\
    \hline
  \end{tabular}
  \caption{N=64, SVDCut = 1e-6 uniformly for all levels}
\end{table}

The error in diagonal elements clearly increases!

\section{Dec. 02, 2009}

Lexing suggested that the maximum criterion might not be indicative
enough to measure the error in the off-diagonal blocks, and that it
might be better to use the average absolute value of the error matrix.
Here are the results, together with the max-error figures produced
yesterday:
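The averaged metric, in the same sketch form (again assuming the relative value is normalized by the average magnitude of the exact entries):

```python
import numpy as np

def avg_errors(G_approx, G_exact):
    """Mean absolute entry of the error matrix, absolute and relative."""
    E = np.abs(G_approx - G_exact)
    abs_avg = E.mean()
    rel_avg = abs_avg / np.abs(G_exact).mean()
    return abs_avg, rel_avg

# toy check: uniform exact entries of magnitude 2, errors of magnitude 1e-8
G_exact = 2.0 * np.ones((4, 4))
G_approx = G_exact + 1e-8
abs_avg, rel_avg = avg_errors(G_approx, G_exact)
```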


\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c}
    \hline 
    Level & absolute Max error & relative Max error & absolute average
    error & relative average error \\
    \hline
    1 & 2.5e-8 & 3.7e-5 & 1.3e-9 & 2.1e-6\\
    2 & 2.7e-8 & 3.7e-5 & 1.8e-9 & 2.7e-6\\
    3 & 2.0e-8 & 3.0e-5 & 5.0e-9 & 6.5e-6\\
    \hline
  \end{tabular}
  \caption{N=32, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c}
    \hline 
    Level & absolute Max error & relative Max error & absolute average
    error & relative average error \\
    \hline
    1 & 1.3e-8 & 7.7e-5 & 2.9e-10 & 1.8e-6\\
    2 & 1.3e-8 & 7.1e-5 & 5.3e-10 & 3.1e-6\\
    3 & 1.3e-8 & 6.8e-5 & 1.4e-9 & 8.5e-6\\
    4 & 1.0e-8 & 4.7e-5 & 5.1e-9 & 2.3e-5\\
    \hline
  \end{tabular}
  \caption{N=64, SVDCut = 1e-6 uniformly for all levels}
\end{table}

\section{Dec. 03, 2009}

Today Jianfeng and Lexing proposed that we state, in a numerical way,
that the $L^2$ error in the blocks, including the diagonal blocks, does
not increase. The trace norm will be swept under the rug for the moment.
But in order to do this, we still need to verify that the errors for the
diagonal (and off-diagonal) blocks indeed do not increase as $L$ grows.
Here are the results.
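As a sketch of the per-block measurement (my own reconstruction: sweep the diagonal and nearest off-diagonal blocks and record the worst spectral-norm error, absolute and relative; the actual code may aggregate differently):

```python
import numpy as np

def block_errors(G_approx, G_exact, nb):
    """Worst absolute and relative spectral-norm error over the diagonal
    and nearest off-diagonal blocks of size nb."""
    n = G_exact.shape[0] // nb
    E = G_approx - G_exact
    worst_abs = worst_rel = 0.0
    for i in range(n):
        for j in range(max(0, i - 1), min(n, i + 2)):  # j in {i-1, i, i+1}
            sl_i = slice(i * nb, (i + 1) * nb)
            sl_j = slice(j * nb, (j + 1) * nb)
            a = np.linalg.norm(E[sl_i, sl_j], 2)       # spectral norm of block
            worst_abs = max(worst_abs, a)
            worst_rel = max(worst_rel,
                            a / np.linalg.norm(G_exact[sl_i, sl_j], 2))
    return worst_abs, worst_rel

# toy check with 2 x 2 blocks and a single perturbed entry
G_exact = 2.0 * np.eye(4) + 0.1 * np.ones((4, 4))
G_approx = G_exact.copy()
G_approx[0, 0] += 1e-7
worst_abs, worst_rel = block_errors(G_approx, G_exact, 2)
```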

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c|c|c}
    \hline 
    Level & absolute  & relative  & absolute & relative & absolute  & relative 
    \\
     &  $L^2$ & $L^2$ & Max & Max & average $L^1$ & average $L^1$ \\
    \hline
    2 & 8.3e-8 & 2.2e-6 & 3.0e-8 & 2.4e-5 & 1.7e-9 & 2.1e-6\\
    3 & 8.5e-8 & 8.3e-6 & 6.6e-8 & 5.3e-5 & 5.0e-9 & 5.9e-6\\
    \hline
  \end{tabular}
  \caption{Diagonal and nearest off-diagonal blocks error. N=32, SVDCut
  = 1e-6 uniformly for all levels} 
\end{table}

\begin{table}[h]
  \centering
  \begin{tabular}{c|c|c|c|c|c|c}
    \hline 
    Level & absolute  & relative  & absolute & relative & absolute  & relative 
    \\
     &  $L^2$ & $L^2$ & Max & Max & average $L^1$ & average $L^1$ \\
    \hline
    2 & 1.0e-7 & 2.7e-6 & 2.2e-8 & 6.4e-5 & 4.0e-10 & 2.0e-6\\
    3 & 1.0e-7 & 9.7e-6 & 5.0e-8 & 1.5e-4 & 1.6e-9 & 7.7e-6\\
    4 & 8.8e-8 & 3.0e-5 & 5.2e-8 & 1.5e-4 & 5.6e-9 & 2.3e-5\\
    \hline
  \end{tabular}
  \caption{Diagonal and nearest off-diagonal blocks error. N=64, SVDCut
  = 1e-6 uniformly for all levels} 
\end{table}

\end{document}
