\documentclass[12pt]{article}

\usepackage{palatino}
\usepackage{a4wide}
\usepackage{float}
\usepackage{amsmath,amsfonts,array,theorem}
\usepackage{graphicx,url}
\usepackage{version}
\usepackage{textcomp}
\usepackage[latin1]{inputenc}
\usepackage{amssymb}
\usepackage{multirow}
\usepackage{subfig}
\usepackage{fancyhdr}
%\usepackage{tabularx}

%\pagestyle{fancy}

%\renewcommand{\headrulewidth}{0.4pt}

\begin{document}
\pagestyle{fancy}
\input{dec}

\tableofcontents
\thispagestyle{empty}
\clearpage

\setcounter{page}{1}
\begin{abstract}
%\pagestyle{fancy}
Total variation denoising algorithms are based on gradient-dependent energy functionals and drive the image towards a piecewise constant function. Starting with the basic concepts and foundations of the original work in \cite{bib:ori}, this paper gives a general overview of total variation denoising. Many details of this approach are introduced and discussed, including the implementation of the algorithm. Finally, image examples are shown in the implementation and examples sections to give a visual impression.
\end{abstract}

\section{Introduction}
Variational methods have become increasingly important in image processing in recent years. In the classical application of denoising, noise removal is achieved by minimizing an energy functional, in this case the total variation of the image. Total variation (TV) denoising, also called total variation regularization, is a technique that was originally developed for additive white Gaussian noise (AWGN) image denoising by Rudin, Osher and Fatemi \cite{bib:ori}. Since then, TV methods have been applied to a multitude of other imaging problems.

In general, total variation denoising has proved quite efficient for regularizing images without smoothing the boundaries of objects. However, the original algorithm removes textures and fine-scale details along with the noise. Modern techniques improve the quality of the results by preserving more textures and details, with considerable success.

\paragraph{Outline}
The remainder of this article is organized as follows. Section~\ref{concept} introduces the basic concepts of variational methods. Section~\ref{denoising} gives a mathematical account of TV denoising, along with a brief overview of modern work. Implementation details are described in Section~\ref{results}, and example results are shown in Section~\ref{exampless}. Finally, Section~\ref{conclusions} draws conclusions.

\section{Basic Concepts of Variational Methods}\label{concept}
In variational approaches, problems are formulated as the minimization of an energy functional $E$ over some admissible set $V$,
\begin{equation*}
\operatorname*{arg\,min}_{u \in V} E(u).
\end{equation*}
Energy functionals can be classified into two categories, convex and non-convex, as shown in Fig.~\ref{fig:convex}. Convex energies can be minimized globally, but for non-convex energies this is usually impossible. Most of the time, total variation is used as a regularization term that allows selecting the most reasonable solution among several competing ones.

\begin{figure}[!h]
\centering
\subfloat[Convex Energy]{\includegraphics[scale=1.2]{fig/convex.jpg}}
\hspace{4mm}
\subfloat[Non-convex Energy]{\includegraphics[scale=1.2]{fig/nonconvex.jpg}}
\caption{Convex vs. non-convex energies. Source: \cite{bib:tum}.}
\label{fig:convex}
\end{figure}

Images are represented by discrete functions. A greyscale image is a real-valued function, and a color image is a vector-valued function which maps, e.g., into the RGB color space.

To simplify, images are discretized as two-dimensional matrices. The total variation of an image $f$ is defined by
\begin{equation*}
    |f|_{TV(\Omega)} = \sum_{1\leq i,j \leq N} |(\nabla f)_{i,j}|,
\end{equation*}
\newcommand{\coloneqq}{\mathrel{\mathop:}=}
where $N \times N$ is the size of the image and $|x| \coloneqq \sqrt{x_{1}^{2} + x_{2}^{2}}$ for every $x = (x_{1}, x_{2})$.
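As a minimal illustration, this discrete TV can be computed with forward differences. The following Python/NumPy sketch is our own (the zero difference on the last row and column is one common border convention, not prescribed by the definition):

```python
import numpy as np

def total_variation(f):
    """Discrete (isotropic) total variation of a 2-D image.

    Forward differences approximate the gradient; the last row/column
    gets a zero difference, which amounts to replicating the border.
    """
    dx = np.zeros_like(f, dtype=float)
    dy = np.zeros_like(f, dtype=float)
    dx[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal forward difference
    dy[:-1, :] = f[1:, :] - f[:-1, :]   # vertical forward difference
    return np.sum(np.sqrt(dx**2 + dy**2))
```

For a constant image the value is zero; for a unit step edge it equals the number of rows the edge crosses.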

\section{Total Variation Denoising}\label{denoising}
\subsection{Denoising}
Denoising is the problem of removing noise from an image. The observed noisy image $f$ is related to the clean image $f_{0}$ by
\begin{equation*}
f = f_{0} + n,
\end{equation*}
where $n$ is additive noise. In real applications the noise is random, which makes its removal difficult.
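For experiments, this observation model can be simulated directly. A small sketch (the function name and parameters are illustrative, not from the original work):

```python
import numpy as np

def add_gaussian_noise(f0, sigma, seed=0):
    """Simulate the observation model f = f0 + n with n ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    return f0 + rng.normal(0.0, sigma, size=f0.shape)
```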

\subsection{ROF Denoising}
In 1992, Rudin, Osher, and Fatemi proposed to remove noise from images using the total variation \cite{bib:ori}. They estimated the denoised image $f_{0}$ as the solution of the minimization problem
\begin{equation}
\operatorname*{arg\,min}_{f_{0}} |f_{0}|_{TV(\Omega)} + \frac{\lambda}{2}\int_\Omega (f(x) - f_{0}(x))^2\,dx,
\label{equ:ori}
\end{equation}
where $\lambda$ is a positive parameter. This problem is referred to as the Rudin-Osher-Fatemi or ROF problem. It is based on the principle that signals with excessive and possibly spurious details have high total variation. A closely matching signal can therefore be obtained by reducing the integral of the absolute gradient of the signal. Unwanted details are removed while important details are preserved, which is quite remarkable compared with simple denoising techniques such as linear smoothing or median filtering \cite{bib:wiki}.

In the ROF model, the first (TV) term of the minimization functional is defined as
\begin{equation}
|f|_{TV(\Omega)} = \sum_{i, j}\sqrt{|f_{i+1,j} - f_{i,j}|^{2} + |f_{i,j+1}-f_{i,j}|^{2}}.
\label{equ:tvTerm}
\end{equation}
It discourages the solution from having oscillations. The second term encourages the solution to be close to the observed image $f$ by measuring the integral of squared errors. Through this combination, the minimization finds a denoised image. $\lambda$ is called the regularization parameter. With a small $\lambda$, more noise is removed at the expense of the result being less similar to the input. As $\lambda$ grows, the result stays closer to the input image but is less denoised.
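To make this trade-off concrete, the discretized objective of \ref{equ:ori} can be evaluated for a candidate image. The following Python sketch (our own; the TV term uses forward differences cropped to a common grid as an approximation of \ref{equ:tvTerm}) shows how the two terms combine:

```python
import numpy as np

def rof_energy(f0, f, lam):
    """Discrete ROF objective: |f0|_TV + (lam/2) * sum (f - f0)^2.

    Small lam -> the TV term dominates (stronger smoothing);
    large lam -> the fidelity term dominates (result stays near f).
    """
    dx = np.diff(f0, axis=1)   # horizontal forward differences
    dy = np.diff(f0, axis=0)   # vertical forward differences
    # crop both difference fields to a common (m-1, n-1) grid
    tv = np.sum(np.sqrt(dx[:-1, :]**2 + dy[:, :-1]**2))
    fidelity = 0.5 * lam * np.sum((f - f0)**2)
    return tv + fidelity
```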

Besides the definition in \ref{equ:tvTerm}, there are other ways to define discrete TVs by means of finite differences, for example with more symmetric schemes (with 3, 4 or 8 neighbors) or with absolute values. However, the pixel size cannot be chosen, which means the sampling frequency is fixed. By the Shannon sampling theorem, a correctly sampled band-limited signal can only be represented if the sampling rate is at least twice its bandwidth. As a consequence, the resulting image will be aliased whenever a TV term of this kind is used in a minimization process \cite{bib:dis}. Fig.~\ref{fig:alias} shows an example of aliasing in the one-dimensional case.

\begin{figure}[!h]
\centering
\subfloat{\includegraphics[scale=1.0]{fig/alias.jpg}}
\caption{(a) Results with aliasing. (b) Results without aliasing. Source: \cite{bib:ali}.}
\label{fig:alias}
\end{figure}

\subsection{Noise Models}
As mentioned before, in the original version of the ROF problem the noise is modeled as AWGN. For salt-and-pepper noise, where pixels are randomly set to full black or full white, ROF denoising is ill-suited. One possible remedy is the $TV$-$L^{1}$ model:
\begin{equation*}
\operatorname*{arg\,min}_{f_{0}} |f_{0}|_{TV(\Omega)} + \lambda \int_\Omega |f(x) - f_{0}(x)|\,dx.
\end{equation*}
Through its connection with maximum likelihood estimation, TV denoising can be extended to other noise models, such as Poisson noise \cite{bib:poi},
\begin{equation*}
\operatorname*{arg\,min}_{f_{0}} |f_{0}|_{TV(\Omega)} + \lambda \int_\Omega (f_{0}(x)-f(x)\log f_{0}(x))\,dx,
\end{equation*}
multiplicative noise \cite{bib:mul}, and so on. Furthermore, the model can be extended with a spatially varying $\lambda$ to impose different regularization strengths at different points in space \cite{bib:var}.

The choice of noise model can significantly affect the denoising results. For best results, the noise model should match the actual noise distribution in the image.
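The salt-and-pepper noise mentioned above can likewise be simulated for such comparisons. A sketch (the density parameter and function name are illustrative; the pixel range is assumed to be $[0, 1]$):

```python
import numpy as np

def add_salt_and_pepper(f0, density, seed=0):
    """Set a fraction `density` of pixels to full black (0) or full white (1)."""
    rng = np.random.default_rng(seed)
    f = f0.copy()
    mask = rng.random(f.shape) < density            # pixels to corrupt
    values = rng.integers(0, 2, size=f.shape).astype(float)  # 0.0 or 1.0
    f[mask] = values[mask]
    return f
```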


\section{Implementation}\label{results}
\subsection{Solution Approach}
Solving equation \ref{equ:ori} numerically requires two steps. First, a discrete version of the problem has to be defined for images. Second, an algorithm is applied to solve the resulting discrete minimization problem. Once the discretization model is chosen, the solution depends only on the convergence of the algorithm.

\subsection{Discretization}
Discrete gradient operators are used to define the discrete total variation.

Let $\nabla_{x}^{+}$, $\nabla_{x}^{-}$, $\nabla_{y}^{+}$, and $\nabla_{y}^{-}$ denote the forward (+) and backward (-) finite differences in the $x$ and $y$ directions, and let $m(a, b)$ denote the minmod operator
\begin{equation*}
m(a,b) = \left(\frac{\operatorname{sign} a + \operatorname{sign} b}{2}\right)\min(|a|, |b|).
\end{equation*}

Here are several gradient operators \cite{bib:spl}:
\begin{itemize}
  \item One-sided difference\hspace{8mm}$(\nabla_{x}f_{0})^2 = (\nabla_{x}^{+}f_{0})^2$
  \item Central difference\hspace{8mm}$(\nabla_{x}f_{0})^2 = (\frac{(\nabla_{x}^{+}f_{0} + \nabla_{x}^{-}f_{0})}{2})^2$
  \item Geometric average\hspace{8mm}$(\nabla_{x}f_{0})^2 = \frac{(\nabla_{x}^{+}f_{0})^2+(\nabla_{x}^{-}f_{0})^2}{2}$
  \item Minmod\hspace{8mm}$(\nabla_{x}f_{0})^2 = m(\nabla_{x}^{+}f_{0},\nabla_{x}^{-}f_{0})^2$
\end{itemize}
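These four schemes can be stated compactly in code. The following sketch (function names are our own) evaluates the squared $x$-derivative at an interior pixel:

```python
import numpy as np

def minmod(a, b):
    """Minmod operator m(a, b) = ((sign a + sign b)/2) * min(|a|, |b|)."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def grad_x_squared(f, i, j, scheme="minmod"):
    """Squared x-derivative at interior pixel (i, j) for the four schemes above."""
    fwd = f[i, j + 1] - f[i, j]   # forward difference
    bwd = f[i, j] - f[i, j - 1]   # backward difference
    if scheme == "one-sided":
        return fwd**2
    if scheme == "central":
        return ((fwd + bwd) / 2)**2
    if scheme == "geometric":
        return (fwd**2 + bwd**2) / 2
    if scheme == "minmod":
        return minmod(fwd, bwd)**2
    raise ValueError(scheme)
```

At a one-pixel spike the central scheme returns zero while the one-sided and geometric schemes do not, which is exactly the issue discussed next.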

Central differences are undesirable for TV discretization because they miss thin structures. Notice that the central difference at $(i, j)$ does not depend on $f_{0}(i,j)$:
\begin{eqnarray*}
\nabla_{x}^{+}f_{0}(i,j) + \nabla_{x}^{-}f_{0}(i,j) & = & (f_{0}(i+1, j) - f_{0}(i, j)) + (f_{0}(i, j) - f_{0}(i-1, j)) \\
& = & f_{0}(i+1, j) - f_{0}(i-1, j).
\end{eqnarray*}
Therefore, if $f_{0}$ has a one-sample-wide structure such as $f_{0}(i, j) = 1$ and $f_{0}(k, j) = 0$ for all $k \neq i$, the TV estimated at $(i, j)$ by central differences is zero. To avoid this problem, one-sided differences can be used; however, they are not symmetric. The geometric average and minmod operators above regain symmetry by combining the forward and backward one-sided differences.

For the numerical solution of the minimization problem, several approaches for implementing the TV seminorm have been proposed in the literature. A difficulty with TV is that it has a derivative singularity where $f_{0}$ is locally constant. To avoid this, some algorithms regularize TV by including a small parameter $\epsilon > 0$ within the square root,
\begin{equation*}
\sum_{i, j} \sqrt{\epsilon^2 + (\nabla_{x}f_{0})^2_{i,j} + (\nabla_{y}f_{0})^2_{i,j}},
\end{equation*}
where $\nabla_{x}$ and $\nabla_{y}$ are discretizations of the horizontal and vertical derivatives.
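The regularized seminorm can be sketched the same way (our own sketch, with the same forward-difference border convention as before); note that for a constant $N \times N$ image its value is $N^2\epsilon$ rather than zero:

```python
import numpy as np

def tv_regularized(f0, eps):
    """epsilon-regularized TV: sum over pixels of sqrt(eps^2 + |grad f0|^2).

    The eps term removes the derivative singularity where f0 is
    locally constant (the gradient of sqrt is undefined at 0).
    """
    dx = np.zeros_like(f0, dtype=float)
    dy = np.zeros_like(f0, dtype=float)
    dx[:, :-1] = np.diff(f0, axis=1)
    dy[:-1, :] = np.diff(f0, axis=0)
    return np.sum(np.sqrt(eps**2 + dx**2 + dy**2))
```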

However, the challenge in using these more sophisticated TV discretizations is how to solve the resulting discretized minimization problem.

\subsection{Minimization}
Now comes back to the first original proposal in \cite{bib:ori} where an alternative to discretizing the minimization problem is given by directly discreting its gradient descent PDE. Through calculus of variations, the gradient descent PDE of the minimization is
\begin{equation*}
    \begin{cases}
       \partial_t f_{0} = div\frac{\nabla f_{0}}{|\nabla f_{0}|} + \lambda(f - f_{0})\\
       \nabla f_{0} = 0
    \end{cases}
\end{equation*}

Since the problem is convex \cite{bib:con, bib:con2, bib:con3}, the steady-state solution of the gradient descent is the minimizer of the problem. Gradient descent is performed by iterating
\begin{equation*}
    \begin{aligned}
        f_{0}^{n+1}(i, j) = {}&f_{0}^{n}(i, j)\\
        & + dt\Big[\nabla_{x}^{-}\frac{\nabla_{x}^{+}f_{0}^{n}(i, j)}{\sqrt{(\nabla_{x}^{+}f_{0}^{n}(i, j))^2+(m(\nabla_{y}^{+}f_{0}^{n}(i, j), \nabla_{y}^{-}f_{0}^{n}(i, j)))^2}}\\
        & + \nabla_{y}^{-}\frac{\nabla_{y}^{+}f_{0}^{n}(i, j)}{\sqrt{(\nabla_{y}^{+}f_{0}^{n}(i, j))^2+(m(\nabla_{x}^{+}f_{0}^{n}(i, j), \nabla_{x}^{-}f_{0}^{n}(i, j)))^2}}\Big]\\
        & + dt\,\lambda(f(i,j) - f_{0}^{n}(i, j)).
    \end{aligned}
\end{equation*}
Here $dt$ is a small positive timestep parameter. The discretization is symmetric through a balance of forward and backward differences. In the divisions, notice that the numerator is never larger in magnitude than the denominator.
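One step of this update can be sketched in Python as follows. The tiny \texttt{eps} inside the square roots (a safeguard against division by zero) and the replicated-border handling are our own additions, not part of the formula above:

```python
import numpy as np

def minmod(a, b):
    """Minmod operator m(a, b)."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def tv_descent_step(f0, f, lam, dt, eps=1e-8):
    """One explicit gradient-descent step for the ROF problem using
    forward/backward differences and the minmod operator."""
    # pad with replicated borders so all differences are defined
    p = np.pad(f0, 1, mode="edge")
    fxp = p[1:-1, 2:] - p[1:-1, 1:-1]   # forward x difference
    fxm = p[1:-1, 1:-1] - p[1:-1, :-2]  # backward x difference
    fyp = p[2:, 1:-1] - p[1:-1, 1:-1]   # forward y difference
    fym = p[1:-1, 1:-1] - p[:-2, 1:-1]  # backward y difference
    # flux terms: forward difference over symmetrized gradient magnitude
    qx = fxp / np.sqrt(eps + fxp**2 + minmod(fyp, fym)**2)
    qy = fyp / np.sqrt(eps + fyp**2 + minmod(fxp, fxm)**2)
    # backward differences of the fluxes give the divergence term
    div = np.zeros_like(f0, dtype=float)
    div[:, 1:] += qx[:, 1:] - qx[:, :-1]
    div[:, 0] += qx[:, 0]
    div[1:, :] += qy[1:, :] - qy[:-1, :]
    div[0, :] += qy[0, :]
    return f0 + dt * (div + lam * (f - f0))
```

A constant image is a fixed point of the pure TV flow, while an isolated bright pixel is flattened.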

Here is an example implementation in MATLAB \cite{bib:code}, which uses central differences and an $\epsilon^2$ regularization (\texttt{ep2}) in the expanded curvature term:
\begin{verbatim}
for i = 1:iter
    I_x = (I(:,[2:nx nx]) - I(:,[1 1:nx-1]))/2;
    I_y = (I([2:ny ny],:) - I([1 1:ny-1],:))/2;
    I_xx = I(:,[2:nx nx]) + I(:,[1 1:nx-1]) - 2*I;
    I_yy = I([2:ny ny],:) + I([1 1:ny-1],:) - 2*I;
    Dp = I([2:ny ny],[2:nx nx]) + I([1 1:ny-1],[1 1:nx-1]);
    Dm = I([1 1:ny-1],[2:nx nx]) + I([2:ny ny],[1 1:nx-1]);
    I_xy = (Dp - Dm)/4;
    Num = I_xx.*(ep2 + I_y.^2) - 2*I_x.*I_y.*I_xy + I_yy.*(ep2 + I_x.^2);
    Den = (ep2 + I_x.^2 + I_y.^2).^(3/2);
    I_t = Num./Den + lam.*(I0 - I + C);
    I = I + dt*I_t;
end
\end{verbatim}

Instead of using gradient descent, other approaches solve the steady-state equation directly:
\begin{equation*}
	0 = \operatorname{div} \frac{\nabla f_{0}}{|\nabla f_{0}|} + \lambda(f - f_{0}).
\end{equation*}
Typical algorithms include duality-based approaches \cite{bib:dua} and operator-splitting methods \cite{bib:spl1, bib:spl2}.
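As an illustration of the duality-based family, below is a rough sketch of a Chambolle-style projection iteration, rewritten for this paper's convention that $\lambda$ weights the fidelity term (so the dual weight is $1/\lambda$); the step size and iteration count are untuned assumptions, not recommendations:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with zero difference at the far border."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def chambolle_rof(f, lam, tau=0.125, iters=100):
    """Fixed-point iteration on the dual variable p for the ROF problem
    arg min_u |u|_TV + (lam/2)||f - u||^2; returns u = f - div(p)/lam."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - lam * f)
        norm = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / norm   # projection keeps |p| <= 1
        py = (py + tau * gy) / norm
    return f - div(px, py) / lam
```

Because the divergence sums to zero over the domain, the mean of the image is preserved exactly, while the total variation of the result is reduced.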

\section{Examples}\label{exampless}
The first example in Fig.~\ref{fig:lamb} demonstrates how the value of $\lambda$ influences the result of TV-regularized Gaussian denoising. A smaller value of $\lambda$ implies stronger denoising. When $\lambda$ is very small, the image becomes cartoon-like, with sharp jumps between nearly flat regions. The parameter $\lambda$ therefore needs to be balanced to remove noise without losing too much signal content.

\begin{figure}[!h]
\centering
\subfloat[Input f]{\includegraphics[scale=0.6]{fig/k19-f.png}}
\hspace{4mm}
\subfloat[$\lambda = 5$]{\includegraphics[scale=0.6]{fig/k19-l5.png}}
\hspace{4mm}
\subfloat[$\lambda = 20$]{\includegraphics[scale=0.6]{fig/k19-l20.png}}
\hspace{4mm}
\subfloat[$\lambda = 40$]{\includegraphics[scale=0.6]{fig/k19-l40.png}}
\caption{TV-regularized denoising with increasing value of $\lambda$.}
\label{fig:lamb}
\end{figure}

To illustrate the importance of the noise model, the image in Fig. \ref{fig:lap} has been corrupted with impulsive noise. The Gaussian noise model works poorly on impulsive noise: $\lambda$ must be very small to remove all the noise, but this also removes much of the signal content. Better results are obtained with the Laplace noise model, which better approximates the distribution of impulsive noise.

\begin{figure}[!h]
\centering
\subfloat[Input f]{\includegraphics[scale=0.6]{fig/imp-f.png}}
\hspace{4mm}
\subfloat[Gaussian,$\lambda=4$]{\includegraphics[scale=0.6]{fig/imp-g-l4.png}}
\hspace{4mm}
\subfloat[Gaussian,$\lambda=8$]{\includegraphics[scale=0.6]{fig/imp-g-l8.png}}
\hspace{4mm}
\subfloat[Laplace,$\lambda=1.25$]{\includegraphics[scale=0.6]{fig/imp-l.png}}
\caption{The Laplace model is more effective for removing impulsive noise.}
\label{fig:lap}
\end{figure}

The last example in Fig. \ref{fig:col} demonstrates Gaussian denoising on a color image.
\begin{figure}[!h]
\centering
\subfloat[Exact]{\includegraphics[scale=0.4]{fig/hummingbird.jpg}}
\hspace{4mm}
\subfloat[Input f]{\includegraphics[scale=0.4]{fig/hummingbird-f.jpg}}
\hspace{4mm}
\subfloat[Denoised $f_{0}$ with $\lambda=7$]{\includegraphics[scale=0.4]{fig/hummingbird-u.jpg}}
\caption{TV-regularized Gaussian denoising of a color image.}
\label{fig:col}
\end{figure}

\section{Conclusions}\label{conclusions}
This paper has focused on the original TV denoising approach of \cite{bib:ori}, explaining the meaning of its mathematical formulation and the implementation details. The implementation is of particular interest because it shows how the continuous formulation is turned into concrete computations. Furthermore, the paper has also briefly introduced modern directions of TV denoising.

\clearpage

\bibliographystyle{abbrv}
\bibliography{simple}

\end{document}

