\chapter{Generalized Gaussian Markov Random Field Image Restoration Using Variational Distribution Approximation}
\label{ch:GGMRF}

In this chapter we present novel algorithms for image restoration and parameter estimation with a \acf{Generalized Gaussian Markov Random Field}{GGMRF} \cite{Bouman93Generalized,Lopez04} prior, utilizing variational distribution approximation. The restored image and the unknown hyperparameters of both the image prior and the image degradation noise are simultaneously estimated within a hierarchical Bayesian framework. Two algorithms are developed from this formulation that jointly provide estimates of the posterior distributions of the restored image and the hyperparameters.

We utilize a \ac{GGMRF} as the image prior. In addition to the unknown image and noise, the hyperparameters are also cast into the Bayesian framework and simultaneously estimated. This is in contrast to the methods in the literature utilizing \ac{GGMRF} priors. For instance, in \cite{Bouman93Generalized} and \cite{Lopez04}, point estimates of the unknown image are found and the hyperparameters are not estimated explicitly but are instead marginalized out (the evidence approach). In addition, \cite{Lopez04} utilizes a Poisson noise model.

This chapter is organized as follows. The hierarchical Bayesian model is presented in Sec.~\ref{ch4:sec:bayesian}. Section~\ref{ch4:sec:inference} describes the variational approach to distribution approximation and the derivation of our algorithms. We present the experimental results in Sec.~\ref{ch4:sec:Exp} and conclude in Sec.~\ref{ch4:sec:Conclusions}.

\section{Bayesian Modeling}
\label{ch4:sec:bayesian}

The Bayesian modeling of the GGMRF restoration  problem requires first the definition of a joint distribution $\p(\alpha,\beta,\bx,\by)$ of the observation, $\by$, the unknown image, $\bx$, and the hyperparameters $\alpha$ (to be defined below) and $\beta$. We utilize the hierarchical Bayesian paradigm where in the first stage we form the prior distributions $\p(\by|\bx,\beta)$ and $\p(\bx|\alpha)$ for the unknowns, and in the second stage we define hyperpriors on the hyperparameters. The joint probability model is shown in graphical form in Fig.~\ref{ch4:fig:directions}(a) using a directed acyclic graph.


\subsection{First stage: prior models on image and observation}

The probability distribution corresponding to the observation model in Eq.~(\ref{eq:ch2:noisemodel1}) is given by
\begin{equation}
\p(\by|\bx,\beta)\propto \beta^{N/2}\exp\left[-\frac{\beta}{2}\parallel \by-\bH\bx\parallel^2\right]\label{ch4:eq:observation_eq}
\end{equation}

As the image model we use the GGMRF prior, given by

\begin{equation}
\p(\bx|\alpha)\propto \frac{1}{Z_{\bGG}(\alpha)}\exp\left[-\alpha \bGG(\bx)\right], 
\label{ch4:eq:prior1}
\end{equation}
where $Z_{\bGG}(\alpha)$ is the partition function and

\begin{equation*}
\bGG(\bx)=\sum_i\sum_{d=1}^4\left[|\Delta_i^d(\bx)|^p\right],
\end{equation*}
where the first summation is over all pixels $i$, $p \in [1,2]$, and $\Delta_i^d(\bx)$ denotes the first order difference in the $d$ direction, that is,
\[
\Delta_i^d(\bx)=x_i-x_{i:d},\ \ d=1,\ldots,4,
\]
where $x_{i:d}$ denotes the neighbor of pixel $i$ in direction $d$.
Figure~\ref{ch4:fig:directions}(b) shows the directions $d=1,\ldots,4$ along which the first order differences are taken.
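The prior energy $\bGG(\bx)$ is straightforward to evaluate. The following Python sketch computes it; the circular (wrap-around) boundary handling and the particular set of four neighbor shifts are assumptions of the sketch and should be matched to Fig.~\ref{ch4:fig:directions}(b) in an actual implementation.

```python
import numpy as np

def ggmrf_energy(x, p=1.5):
    # Prior energy G(x): sum of |first-order difference|^p over all pixels
    # and four directions. Circular boundaries and this particular set of
    # four neighbor shifts are assumptions for the sketch.
    shifts = [(0, 1), (1, 0), (1, 1), (1, -1)]
    energy = 0.0
    for dy, dx in shifts:
        neighbor = np.roll(np.roll(x, -dy, axis=0), -dx, axis=1)
        energy += np.sum(np.abs(x - neighbor) ** p)
    return energy

rng = np.random.default_rng(0)
assert ggmrf_energy(np.ones((8, 8))) == 0.0     # constant image: zero energy
assert ggmrf_energy(rng.standard_normal((8, 8))) > 0.0
```

Note that smaller $p$ penalizes large differences less severely, which is what makes the prior edge-preserving.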


\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.4]{ch4_dependency_graph_GGMRF2}&
\includegraphics[scale=0.4]{ch4_directions2}\\ (a) & (b)
\end{tabular}
\vspace{-.03in} \caption{(a) Graphical model showing relationships between variables, (b) the directions for the first order differences around the pixel $i$.} \label{ch4:fig:directions}
\end{figure}

Using the change of variable $v=u^p$ and taking into account that
\begin{equation*}
\int_0^\infty \exp\left[-{\alpha}u^p \right]du = \frac{1}{p} \int_0^\infty \exp\left[-{\alpha}v \right]v^{\frac{1-p}{p}}dv\propto\alpha^{-\frac{1}{p}},
\end{equation*}
we approximate the partition function as $Z_{\bGG}(\alpha)\propto\alpha^{-N/p}$ to obtain
\begin{equation}
\p(\bx|\alpha) \propto  \alpha^{N/p}\exp\left[-\alpha
\bGG(\bx)\right].
\end{equation}
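The $\alpha^{-1/p}$ scaling behind this partition-function approximation can be checked numerically; a minimal Python sketch with illustrative values (midpoint quadrature on a truncated domain):

```python
import numpy as np

def tail_integral(alpha, p, umax=50.0, n=200000):
    # midpoint-rule approximation of  int_0^inf exp(-alpha * u^p) du
    du = umax / n
    u = (np.arange(n) + 0.5) * du
    return float(np.sum(np.exp(-alpha * u ** p)) * du)

# the integral is proportional to alpha^(-1/p), so doubling alpha
# should divide it by 2^(1/p)
p = 1.5
ratio = tail_integral(1.0, p) / tail_integral(2.0, p)
assert abs(ratio - 2.0 ** (1.0 / p)) < 1e-3
```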

\subsection{Second stage: hyperprior on the hyperparameters}

We use Gamma distributions as our model for the hyperparameters $\omega \in \{\alpha,\beta\}$, given by 

\begin{equation}
\p(\omega) = \Gamma(\omega|a_{\omega}^o,b_{\omega}^o) =
\frac{{(b_{\omega}^o)}^{a_{\omega}^o}}{\Gamma(a_{\omega}^o)}
\omega^{a_{\omega}^o-1}\exp\left[-{\omega b_{\omega}^o}\right].
\label{eq:general_hyperprior}
\end{equation}

Combining the first and second stage, the joint distribution can be written as
\begin{equation}
\p(\alpha,\beta,\bx,\by)=\p(\alpha)\p(\beta)\p(\bx|\alpha)\p(\by|\bx,\beta).
\label{ch4:eq:very_global}
\end{equation}

\section{Inference and Variational Approximation}
\label{ch4:sec:inference}

The Bayesian inference on $(\alpha,\beta,\bx)$ should be based on 
\begin{equation}
\p(\alpha,\beta,\bx \mid \by)=\frac{\p(
\alpha,\beta,\bx,\by)}{\p(\by)}.
\end{equation}

However, since the posterior $\p(\alpha,\beta,\bx \mid \by)$ cannot be found in closed form, we approximate it by a simpler factorized form $\q(\alpha,\beta,\bx)=\q(\alpha,\beta)\q(\bx)$. This distribution can be found in a variational framework by minimizing the \acf{Kullback-Leibler}{KL} distance, that is,

\begin{align}
C_{KL}(\q(\alpha,\beta)\q(\bx)\parallel \p(\alpha,\beta,\bx|\by)) &= \int_{\alpha}\int_{\beta}\int_{\bx}
\q(\alpha,\beta)\q(\bx)\log\left(\frac{\q(\alpha,\beta)\q(\bx)}{\p(\alpha,\beta,\bx|\by)}\right)d\alpha
d\beta d\bx \nonumber \\
&= \int_{\alpha}\int_{\beta}\int_{\bx} \q(\alpha,\beta)\q(\bx)\log\left(\frac{\q(\alpha,\beta)\q(\bx)}{\p(\alpha,\beta,\bx,\by)}\right)d\alpha
d\beta d\bx +\mbox{const}, \label{ch4:eq:KL}
\end{align}
which is always nonnegative and equal to zero only when $\q(\alpha,\beta)\q(\bx)=\p(\alpha,\beta,\bx|\by)$.

Due to the form of our image prior, the \ac{KL} distance cannot be minimized directly. We therefore define the functional $\M(\alpha,\bx,\bv)$ for $\alpha$, $\bx$, and $\bv \in (\mathbb{R}_+^{4})^{N}$ with components $(v_{i,1},\ldots,v_{i,4})$, $i=1,\ldots,N$, as
\begin{equation*}
\M(\alpha,\bx,\bv)= \alpha^{N/p} \exp \left[-\frac{\alpha p}{2}\sum_i\sum_{d=1}^4\left[\frac{(\Delta_i^d(\bx))^2+\frac{2-p}{p}v_{i,d}} {v_{i,d}^{1-p/2}}\right]\right].
\end{equation*}

Next, using the following inequality for $w\ge0$, $z>0$, and $p \in [1,2]$
\begin{equation}
w^{p/2}\le z^{p/2}+\frac{p}{2z^{1-p/2}}(w-z)=\frac{p}{2}\frac{(w+\frac{2-p}{p}z)}{{z}^{1-p/2}},
\end{equation}
we find a lower bound for the image prior, given by
\begin{eqnarray*}
\p(\bx|\alpha)&\ge& \mbox{c} \cdot \, \M(\alpha,\bx,\bv),
\end{eqnarray*}
where $\mbox{c}$ is a constant. This inequality can be used to find a lower bound for the joint probability distribution
\begin{eqnarray}
\p(\alpha,\beta,\bx,\by) &\ge& \mbox{c} \cdot \, \p(\alpha)\p(\beta)\M(\alpha,\bx,\bv)\p(\by|\bx,\beta) \nonumber \\ &=& \F(\alpha,\beta,\bx,\bv,\by). \label{ch4:eq::F}
\end{eqnarray}
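The tangent-line inequality above holds because $w \mapsto w^{p/2}$ is concave for $p\in[1,2]$, so its tangent at $w=z$ lies above it, with equality at $w=z$. A quick numerical check (illustrative sample values):

```python
import numpy as np

# check  w^(p/2) <= (p/2) * (w + (2-p)/p * z) / z^(1-p/2)  for w >= 0, z > 0,
# with equality at w = z, for several p in [1, 2]
rng = np.random.default_rng(1)
for p in (1.0, 1.3, 1.7, 2.0):
    w = rng.uniform(0.0, 10.0, 1000)
    z = rng.uniform(1e-3, 10.0, 1000)
    bound = 0.5 * p * (w + (2.0 - p) / p * z) / z ** (1.0 - p / 2.0)
    assert np.all(w ** (p / 2.0) <= bound + 1e-12)
    # tightness at w = z
    assert np.allclose(z ** (p / 2.0),
                       0.5 * p * (z + (2.0 - p) / p * z) / z ** (1.0 - p / 2.0))
```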
Using these lower bounds in Eq.~(\ref{ch4:eq:KL}), we can find an upper bound for the \ac{KL} distance as follows:

\begin{multline}
C_{KL}(\q(\alpha,\beta)\q(\bx)\parallel \p(\alpha,\beta,\bx|\by))
\\
\le \min_{\bv} \int_{\alpha}\int_{\beta}\int_{\bx}
\q(\alpha,\beta)\q(\bx)\log\left(\frac{\q(\alpha,\beta)\q(\bx)}{\F(\alpha,\beta,\bx,\bv,\by)}\right)d\alpha
d\beta d\bx . \label{ch4:eq:upperbound}
\end{multline}

Finally, we minimize the right-hand side of Eq.~(\ref{ch4:eq:upperbound}) alternately with respect to $\q(\bx)$, $\bv$, and $\q(\alpha,\beta)$, and obtain the following iterative procedure to estimate the unknowns:

%
\begin{alg}\label{algg} Estimation of the posterior parameter and image distributions by approximating $\p(\alpha,\beta,\bx \mid \by)$ by $\q(\alpha,\beta)\q(\bx)$.\ \\
\noindent Given $\bv^1 \in (\mathbb{R}_+^{4})^{N}$ and $\q^1(\alpha,\beta)$,
\\%
\noindent For $k=1,2,\ldots$ until convergence:
\begin{enumerate}
\item Find 
\begin{equation}
\q^{k}(\bx) = \argmin_{\q(\bx)}
\int_{\bx}\int_{\alpha}\int_{\beta}
\q^k(\alpha,\beta)\q(\bx) \times \log\left(\frac{\q^k(\alpha,\beta)\q(\bx)}{\F(\alpha,\beta,\bx,\bv^k,\by)}\right)d\alpha
d\beta d\bx \label{ch4:eq:costx_it}
\end{equation}
\item  Find
\begin{equation}
\bv^{k+1} = \argmin_{\bv} \int_{\alpha}\int_{\beta}\int_{\bx} \q^{k}(\alpha,\beta)\q^k(\bx) \log\left(\frac{\q^{k}(\alpha,\beta)\q^k(\bx)}{\F(\alpha,\beta,\bx,\bv,\by)}\right)d\alpha
d\beta d\bx \label{ch4:eq:costitv}
\end{equation}
\item  Find
\begin{equation}
\q^{k+1}(\alpha,\beta) = \argmin_{\q(\alpha,\beta)} \int_{\alpha}\int_{\beta}\int_{\bx} 
\q(\alpha,\beta)\q^k(\bx) \log\left(\frac{\q(\alpha,\beta)\q^k(\bx)}{\F(\alpha,\beta,\bx,\bv^{k+1},\by)}\right)d\alpha
d\beta d\bx \label{ch4:eq:costgamma_itab}
\end{equation}
\end{enumerate}
\end{alg}

Now we proceed to give the explicit solutions at each step of the algorithm. Note that in the first step we have
\begin{equation}
\q^{k}(\bx)\propto \exp \left\{ \bE_{\q^{k}(\alpha,\beta)}[\ln \F(\alpha,\beta,\bx,\bv^k,\by)] \right\},
\end{equation}
which corresponds to a multivariate Gaussian distribution with the mean and the covariance given by
\begin{equation}
\mbox{E}_{\q^{k}(\bx)}[\bx]=\mbox{cov}_{\q^{k}(\bx)}[\bx]\bE_{\q^{k}(\beta)}[\beta]\bH^t\by, \label{ch4:eq:meanx}
\end{equation}

\begin{equation}
\mbox{cov}_{\q^{k}(\bx)}[\bx] = \big[\bE_{\q^{k}(\beta)}[\beta]\bH^t\bH + p\bE_{\q^{k}(\alpha)}[{\alpha}]\sum_{d=1}^4{(\Delta^d)}^tW_d(\bv^{k}){(\Delta^d)} \big]^{-1} = [\bC^k(\bv^{k})]^{-1}, \label{ch4:eq:covx}
\end{equation}
where
\[
W_d(\bv^k)=\mbox{diag}\left(\frac{1}{v_{i,d}^{1-p/2}}\right), \ \ d=1,\ldots,4,\ \
i=1,\ldots,N.
\]
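A small 1-D analogue of the mean and covariance computations in Eqs.~(\ref{ch4:eq:meanx})-(\ref{ch4:eq:covx}) can be sketched as follows, with a single difference direction instead of four, $\bH$ the identity (pure denoising), and illustrative point values for the expectations; this is a sketch of the structure, not the 2-D implementation used in the experiments.

```python
import numpy as np

# 1-D analogue of the x-step: one difference direction, H = identity,
# illustrative values for E[alpha], E[beta] and the auxiliary variables v
N, p = 32, 1.5
alpha_mean, beta_mean = 0.1, 4.0
rng = np.random.default_rng(2)
y = np.cumsum(rng.standard_normal(N))            # rough 1-D "observation"

H = np.eye(N)
D = np.eye(N) - np.roll(np.eye(N), -1, axis=1)   # circular first difference
v = np.full(N, 1.0)                              # current auxiliary variables
W = np.diag(1.0 / v ** (1.0 - p / 2.0))

C = beta_mean * H.T @ H + p * alpha_mean * D.T @ W @ D   # precision matrix
mean_x = np.linalg.solve(C, beta_mean * H.T @ y)         # posterior mean E[x]
cov_x = np.linalg.inv(C)                                 # posterior covariance

assert np.all(np.linalg.eigvalsh(C) > 0)   # C is symmetric positive definite
```

The dense solve is only feasible at toy sizes; for images, the text below discusses how this inversion is avoided.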

In the second step, we have
\[
\bv_d^{k+1} = \arg \min_{\bv_d}
\sum_i\frac{\bE_{\q^k(\bx)}[(\Delta_i^d(\bx))^2]+\frac{2-p}{p}v_{i,d}}{{v_{i,d}^{1-p/2}}},\ \ d=1,\ldots,4,
\]
and therefore
\begin{equation}
\bv^{k+1}_{i,d}=\bE_{\q^k(\bx)}[(\Delta_i^d(\bx))^2],\
\ i=1,\ldots,N\ \ d=1,\ldots,4 \label{ch4:eq:v}
\end{equation}
where
\begin{equation*}
\bE_{\q^k(\bx)}[(\Delta_i^d(\bx))^2 ]=(\Delta_i^d(\mbox{E}_{\q^k(\bx)}[\bx]))^2 + \frac{1}{N}\mbox{trace} \left[\mbox{cov}_{\q^k(\bx)}[\bx] \times \left({(\Delta^d)}^t{(\Delta^d)} \right)\right]. \label{ch4:eq:expandv}
\end{equation*}
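The $\bv$-update with the trace correction can be sketched as follows (1-D analogue with one direction; the difference operator, posterior mean, and covariance are illustrative stand-ins for the quantities produced by the $\bx$-step):

```python
import numpy as np

# sketch of the v-update, Eq. (v), with the trace term of Eq. (expandv)
N = 16
rng = np.random.default_rng(3)
D = np.eye(N) - np.roll(np.eye(N), -1, axis=1)   # circular first difference
mean_x = rng.standard_normal(N)                  # stand-in for E[x]
A = rng.standard_normal((N, N))
cov_x = A @ A.T / N + np.eye(N)                  # stand-in SPD covariance

# v_{i} = (Delta_i E[x])^2 + trace(cov[x] * D^T D) / N
v_new = (D @ mean_x) ** 2 + np.trace(cov_x @ D.T @ D) / N
assert v_new.shape == (N,) and np.all(v_new > 0)
```

The trace term keeps every $v_{i,d}$ strictly positive, so the weights $v_{i,d}^{-(1-p/2)}$ in the next $\bx$-step remain well defined.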

Finally to find $\q^{k+1}(\alpha,\beta)$ we differentiate the
integral on the right hand side of Eq.~(\ref{ch4:eq:costgamma_itab}) with
respect to $\q(\alpha,\beta)$ and set it equal to zero to obtain 
\[ \q^{k+1}(\alpha,\beta)\propto\exp \left\{ \bE_{\q^k(\bx)}[\ln
\F(\alpha,\beta,\bx,\bv^{k+1},\by)] \right\}.
\]
Therefore, $\q^{k+1}(\alpha)$ and $\q^{k+1}(\beta)$ are both Gamma distributions, given by
\[
\q^{k+1}(\alpha)\propto
\alpha^{N/p+a_\alpha^o-1}\exp\left[-\alpha\left(\sum_i\sum_{d=1}^4([v_{i,d}^{k+1}]^{p/2})+b_\alpha^o\right)\right],
\]
\[
\q^{k+1}(\beta)\propto
\beta^{N/2+a_\beta^o-1}\exp\left[-\beta\left(\frac{\bE_{\q^{k}(\bx)}\parallel
\by-\bH\bx\parallel^2}{2}+b_\beta^o\right)\right].
\]

As estimates of these hyperparameters, we use the means of these distributions, given by
\begin{equation}
(E_{q^{k+1}(\alpha)}[\alpha])^{-1} = \gamma_{\alpha}
\frac{1}{\overline{\alpha}^o}+ (1-\gamma_{\alpha})
\frac{p\sum_{d=1}^4\sum_i[v_{i,d}^{k+1}]^{p/2}}{N},
\label{ch4:eq:inv_hyper_update1}
\end{equation}
\begin{equation}
(E_{q^{k+1}(\beta)}[\beta])^{-1} = \gamma_{\beta}
\frac{1}{\overline{\beta}^o}
 + (1-\gamma_{\beta}) \frac{\mbox{E}_{\q^k(\bx)}\left[\parallel
\by-\bH\bx\parallel^2\right]}{N},\label{ch4:eq:inv_hyper_update2}
\end{equation}
where $\overline{\alpha}^o= a_{\alpha}^o/ b_{\alpha}^o$, $\overline{\beta}^o=a_\beta^o/ b_\beta^o$, $\gamma_{\alpha} = \frac{a_{\alpha}^o}{a_{\alpha}^o + \frac{N}{p}}$, and $\gamma_{\beta} = \frac{a_{\beta}^o}{a_{\beta}^o + \frac{N}{2}}$. The parameters $\gamma_{\alpha}$ and $\gamma_{\beta}$, both taking values in the interval $[0,1)$, can be understood as normalized confidence parameters. According to Eqs.~(\ref{ch4:eq:inv_hyper_update1}) and (\ref{ch4:eq:inv_hyper_update2}), when they are equal to zero, no confidence is placed on the inverse of the mean of the corresponding hyperprior; when they approach one, the prior knowledge of the mean is fully enforced, i.e., no estimation of the hyperparameters is performed.
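The convex-combination structure of Eq.~(\ref{ch4:eq:inv_hyper_update2}) can be made concrete with a minimal sketch; the function name is illustrative, and the flat-hyperprior limit $a_\beta^o, b_\beta^o \to 0$ is taken as an example of the fully data-driven case.

```python
# sketch of the beta-update, Eq. (inv_hyper_update2): the inverse posterior
# mean is a convex combination of the prior inverse mean b/a and a
# data-driven noise-variance estimate; names and values are illustrative
def beta_inv_update(a_o, b_o, expected_residual_sq, N):
    gamma = a_o / (a_o + N / 2.0)                 # confidence in the prior
    prior_inv_mean = b_o / a_o if a_o > 0 else 0.0
    return gamma * prior_inv_mean + (1.0 - gamma) * expected_residual_sq / N

# flat hyperprior (a_o = b_o = 0): gamma = 0, so the estimate of 1/beta is
# purely the average squared residual, E||y - Hx||^2 / N
assert abs(beta_inv_update(0.0, 0.0, expected_residual_sq=50.0, N=100) - 0.5) < 1e-12
```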
 
The only remaining task is the calculation of $\mbox{E}_{\q^k(\bx)}\left[\parallel
\by-\bH\bx\parallel^2\right]$ which can be given as
\begin{equation*}
\mbox{E}_{\q^k(\bx)}\left[\parallel
\by-\bH\bx\parallel^2\right] = \parallel
\by-\bH\mbox{E}_{\q^k(\bx)}[\bx]\parallel^2 + \mbox{trace}\left(\mbox{cov}_{\q^k(\bx)}[\bx]\bH^t\bH\right). \label{ch4:eq:errorE}
\end{equation*}


The estimate $\q^k(\bx)$ in Algorithm 1 is the best approximation to the posterior in the KL divergence sense. However, we can also consider a suboptimal case where we assume a degenerate distribution for $\q(\bx)$, that is, $\q(\bx)$ takes one value, $\bx^k$, with probability one and all other values with probability zero. This approach leads to an alternative algorithm, referred to as Algorithm 2, where the expectations with respect to $\q^k(\bx)$ reduce to point evaluations at $\bx^k$. Thus, the covariances in Eqs.~(\ref{ch4:eq:v}), (\ref{ch4:eq:inv_hyper_update1}), and (\ref{ch4:eq:inv_hyper_update2}) are set equal to zero.


As the estimate of the unknown image $\bx$, both algorithms use the mean of $\q^k(\bx)$ given in Eq.~(\ref{ch4:eq:meanx}), which requires the inversion of the very large matrix $\bC^k(\bv^{k})$. This poses a significant computational challenge, since the last terms in Eq.~(\ref{ch4:eq:covx}) cannot be represented as block-circulant matrices with circulant blocks (BCCB), and therefore the inverse cannot be computed in the Fourier domain. We therefore employ a gradient descent approach to compute the image estimates without explicitly calculating the image covariance.

Note, however, that the explicit form of $\mbox{cov}_{\q^k(\bx)}[\bx]$ is needed in Eqs.~(\ref{ch4:eq:inv_hyper_update1})-(\ref{ch4:eq:inv_hyper_update2}) in Algorithm 1. To overcome this computational difficulty, we use the following approximation

\begin{equation*}
\mbox{cov}_{\q^k(\bx)}[\bx] \approx \big[\bE_{\q^{k}(\beta)}[\beta]\bH^t\bH + p {\bE_{\q^{k}(\alpha)}[\alpha]} \sum_{d=1}^4z_d(\bv^{k}) {(\Delta^d)}^t{(\Delta^d)} \big]^{-1} =\bB^{-1},
\end{equation*}
where $W_d(\bv^k)\approx z_d(\bv^k)\bI$ and $z_d(\bv^{k})= \frac{1}{N}\sum_i\frac{1}{[v_{i,d}^k]^{1-p/2}}.$ Note that in this approximation, matrix $\bB$ is BCCB, and therefore its inversion can be carried out very efficiently in the Fourier domain. 
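Under this approximation every term of $\bB$ is a circular 2-D convolution, so the 2-D DFT diagonalizes $\bB$ and applying $\bB^{-1}$ reduces to a pointwise division in the Fourier domain. A Python sketch on a small grid (the blur kernel, the averaged weights $z_d$, the parameter values, and the four direction choices are all illustrative):

```python
import numpy as np

M = 16
rng = np.random.default_rng(4)

h = np.zeros((M, M)); h[:3, :3] = 1.0 / 9.0      # illustrative 3x3 blur
H_f = np.fft.fft2(h)

diff_f = []                                      # first-difference kernels
for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:
    d = np.zeros((M, M))
    d[0, 0] = 1.0
    d[dy % M, dx % M] -= 1.0
    diff_f.append(np.fft.fft2(d))

beta_mean, alpha_mean, p = 4.0, 0.1, 1.5
z = [0.8, 0.9, 1.1, 1.2]                         # z_d(v): averaged weights

# eigenvalues of B on the Fourier grid
B_f = beta_mean * np.abs(H_f) ** 2 + p * alpha_mean * sum(
    zd * np.abs(Df) ** 2 for zd, Df in zip(z, diff_f))

r = rng.standard_normal((M, M))
x = np.real(np.fft.ifft2(np.fft.fft2(r) / B_f))  # x = B^{-1} r, pointwise

Bx = np.real(np.fft.ifft2(B_f * np.fft.fft2(x)))  # applying B recovers r
assert np.allclose(Bx, r)
```

The cost per application is two FFTs instead of a large linear solve, which is what makes the hyperparameter updates of Algorithm 1 practical.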

\section{Experimental Results}
\label{ch4:sec:Exp}

We performed a number of experiments with the proposed algorithms using several images and several types of blurring functions; representative results are presented here. Since our framework yields two different algorithms, we present results for both of them.

For the experiments presented here, the ``Lena'' image (shown in Fig.~\ref{ch4:fig:lena}(a)) is blurred with a Gaussian-shaped blur with variance 9 and with a $9\times 9$ uniform blur. Gaussian noise is added to the blurred images to obtain degraded images with blurred signal-to-noise ratios (BSNR) of 20 and 40dB. An example degraded image is shown in Fig.~\ref{ch4:fig:lena}(b), where the blur is Gaussian-shaped with variance 9 and BSNR = 40dB.

The parameters of both algorithms are initialized as follows: The observed image is used as initial estimate for the unknown image $\bx$. The initial values of the hyperparameters and $\bv$ are determined using this initial $\bx$ in Eqs.~(\ref{ch4:eq:v})-(\ref{ch4:eq:inv_hyper_update2}). Note that all parameters of the algorithms are initialized using the observation $\by$ so that no manual input is needed, i.e., both algorithms are initialized and run automatically. For all experiments, the criterion $\parallel \bx^k - \bx^{k-1}\parallel^2 / \parallel \bx^{k-1} \parallel^2 < 10^{-4}$ is used to terminate the iterative procedure, where $\bx^k$ is the mean of $\q^k(\bx)$ in Algorithm 1 and the point estimate in Algorithm 2. 

The restoration results for the Lena image in the case of Gaussian blur with 40dB BSNR are shown in Fig.~\ref{ch4:fig:lena}(c) for Algorithm 1 and Fig.~\ref{ch4:fig:lena}(d) for Algorithm 2. Note that Algorithm 1 is more successful at removing the blur, whereas Algorithm 2 produces less pronounced ringing artifacts. In both cases the restoration quality is good considering that the parameters of both algorithms are estimated using only the degraded observation, without any prior knowledge about the noise. Also, in all cases the estimated value of $\beta^{-1}$ was very close to the original noise variance.

Figure~\ref{ch4:fig:p} shows the ISNR evolution for Gaussian and uniform blurs with Algorithm 1 at BSNR = 40dB and 20dB for varying $p$-values, where ISNR is defined as $10\log_{10} (\parallel \bx - \by \parallel^2 / \parallel \bx - \hat \bx\parallel^2)$ with $\hat \bx$ the estimated image. We experimented with two cases: in the first, we initialize and estimate the hyperparameters from the observation; in the second, we compute them using the original unknown image and noise. As can be seen from Fig.~\ref{ch4:fig:p}, the highest ISNR values are achieved at different $p$-values for different noise levels and blur functions, and the ISNR values are comparable for both cases. It can also be seen that with fixed parameters the performance of the algorithms as a function of $p$ is in agreement with the results reported in \cite{Bouman93Generalized} and \cite{Lopez04}.
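The ISNR figure of merit used above is, in code (toy vectors for illustration):

```python
import numpy as np

def isnr(x_true, y_degraded, x_hat):
    # improvement in SNR (dB) of the estimate over the degraded observation
    return 10.0 * np.log10(np.sum((x_true - y_degraded) ** 2)
                           / np.sum((x_true - x_hat) ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = x + 1.0                       # degraded observation
x_hat = x + 0.1                   # an estimate closer to the truth
# ||x-y||^2 = 4, ||x-x_hat||^2 = 0.04, ratio 100 -> 20 dB
assert abs(isnr(x, y, x_hat) - 20.0) < 1e-9
```

A positive ISNR means the restoration improved on the degraded observation; an estimate equal to $\by$ gives 0 dB.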

\begin{figure}
\centering
\begin{tabular}{cccc}
%\includegraphics[width=7.35pc]{lena40}&
\includegraphics[width=9pc]{ch4_lena256}&
\includegraphics[width=9pc]{ch4_lena256_40} \\ (a) & (b) \\
\includegraphics[width=9pc]{ch4_ALG1_BSNR40_p18} &
\includegraphics[width=9pc]{ch4_ALG2_BSNR40_p16} \\
 (c) & (d)
\end{tabular}
\caption{(a) Original Lena image, (b) image degraded by a Gaussian-shaped PSF
with variance $9$ and Gaussian noise of variance $0.16$ (BSNR = 40dB),
(c) restored image using Algorithm 1 with $p = 1.8$ (ISNR = 4.15dB), (d) restored
image using Algorithm 2 with $p = 1.6$ (ISNR = 3.78dB).} \label{ch4:fig:lena} 
\end{figure}

\begin{figure}
\centering
\begin{tabular}{cc}
%\includegraphics[width=7.35pc]{lena40}&
(a)&\includegraphics[width=16pc]{ch4_gaussian_p_new3}\\
(b)&\includegraphics[width=16pc]{ch4_uniform_p_new3} 
\end{tabular}
%\end{center}
\caption{ISNR values obtained with different $p$ values for the Lena image degraded by (a) a Gaussian blur with variance 9 and (b) a $9\times 9$ uniform blur, with Gaussian noise (BSNR = 40dB and 20dB).} 
\label{ch4:fig:p} 
\end{figure}

\section{Conclusions}
\label{ch4:sec:Conclusions}

A novel GGMRF-based image restoration methodology has been presented that simultaneously estimates the restored image and the hyperparameters of the Bayesian formulation. We have adopted a variational approach to approximate the posterior distributions of the unknowns, so that the uncertainty of the estimates can be evaluated and different values from these distributions can be used in the restoration process. Two algorithms resulting from this approach are provided. We have shown that the unknown parameters of the Bayesian formulation can be calculated automatically using only the observation, or that initial knowledge can be incorporated with different confidence values. Experimental results demonstrated the performance of the proposed algorithms.

