\chapter{Background}
\label{ch:prior}

A general degradation model is \cite{katsaggelos:book:91}
\begin{equation}
y(i,j) = S \left\{ \sum_m \sum_n h(i,j,m,n) x(m,n) \right\} \odot n(i,j), \label{eq:ch2:general1}
\end{equation}
where $S\{\cdot\}$ represents a nonlinear function, $h(i,j,m,n)$ is the impulse response of the blurring system at location $(i,j)$, $n(i,j)$ is additive noise, and $\odot$ represents a pointwise operation. This is the most general form of a degradation model, since it includes pointwise nonlinearities such as saturation, nonstationary aberrations, spatial distortions, inter-channel effects, multiplicative noise, noise from several sources, etc. However, most of the work in the literature does not take nonlinearity and signal-dependent noise into account (see, however, \cite{TekalpPavlovic91,KuanSawchuk85,Pearlman84}), so that Eq.~(\ref{eq:ch2:general1}) reduces to

\begin{equation}
y(i,j) = \sum_m \sum_n h(i,j,m,n) x(m,n) + n(i,j). \label{eq:ch2:general2}
\end{equation}

The continuous version of this equation is called a Fredholm integral equation of the first kind. This degradation model has found limited use in the literature, primarily due to the difficulty of estimating the spatially varying blur $h(i,j,m,n)$. In most practical applications, the blur can be assumed to be space-invariant, so that the following degradation model can be used:

\begin{equation}
y(i,j) = \sum_m \sum_n h(i-m,j-n) x(m,n) + n(i,j). \label{eq:ch2:general3}
\end{equation}

We note that Eqs.~(\ref{eq:ch2:general2}) and (\ref{eq:ch2:general3}) can be written in matrix-vector form by lexicographically ordering the arrays $y(i,j)$, $x(i,j)$ and $n(i,j)$. Assuming that the images are of size $m \times n = N$, the degradation model can be written as

\begin{equation}
\by = \bH\bx + \bn, \label{eq:ch2:noisemodel1}
\end{equation}
where $\by$, $\bx$ and $\bn$ are $N \times 1$ vectors obtained by some ordering of the corresponding images, and $\bH$ is an $N \times N$ matrix. Note that Eq.~(\ref{eq:ch2:noisemodel1}) can also be written as
\begin{equation}
\by = \bX\bh + \bn, \label{eq:ch2:noisemodel2}
\end{equation}
where $\bX$ is a matrix constructed from the image $\bx$, and $\bh$ is the vector of blur coefficients. The ordering used to obtain the matrix-vector forms is typically lexicographic, although other orderings, such as raster-scan or interlaced, are also possible. In the case of space-invariant degradation systems, the matrix $\bH$ in Eq.~(\ref{eq:ch2:noisemodel1}) is block-Toeplitz with Toeplitz blocks (BTTB), and it can be approximated by a block-circulant matrix with circulant blocks (BCCB). BCCB matrices have the very useful property that their eigenvalues are the 2D discrete Fourier transform coefficients of their defining sequences, and their eigenvectors are the corresponding Fourier basis vectors. Using this property, Eq.~(\ref{eq:ch2:noisemodel1}) can be written as follows

\begin{equation}
\bY(k,l) = \bH(k,l) \bX(k,l) + \bN(k,l), \label{eq:ch2:noisemodelFFT}
\end{equation}
where $\bY(k,l)$, $\bH(k,l)$, $\bX(k,l)$ and $\bN(k,l)$ are the 2D Fourier transforms of the sequences $y(i,j)$, $h(i,j)$, $x(i,j)$, and $n(i,j)$, respectively. Note that this equation is valid under the assumption of circular convolution, although linear convolution can always be achieved by appropriately zero-padding the 2D arrays.
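As a concrete check of the two properties just stated, the following Python sketch (with an illustrative $3 \times 3$ uniform kernel on a small grid; none of these sizes come from the text) builds the explicit BCCB matrix of a blur kernel, verifies that a 2D Fourier vector is one of its eigenvectors with the corresponding DFT coefficient of the kernel as eigenvalue, and confirms that the spatial model of Eq.~(\ref{eq:ch2:noisemodel1}) becomes the pointwise product of Eq.~(\ref{eq:ch2:noisemodelFFT}) in the DFT domain.

```python
import numpy as np

rng = np.random.default_rng(0)
M = N = 8                                   # illustrative image size
h = np.zeros((M, N))
h[:3, :3] = 1.0 / 9.0                       # illustrative 3x3 uniform blur kernel

# Explicit BCCB matrix: H @ vec(x) equals circular convolution of h with x
# (row-major / lexicographic ordering of the pixels).
H = np.zeros((M * N, M * N))
for i in range(M):
    for j in range(N):
        for m in range(M):
            for n in range(N):
                H[i * N + j, m * N + n] = h[(i - m) % M, (j - n) % N]

# Eigenvector property: the (k,l)-th 2D Fourier vector is an eigenvector of H,
# with eigenvalue equal to the (k,l)-th DFT coefficient of the defining kernel.
k, l = 1, 2
mm, nn = np.mgrid[0:M, 0:N]
v = np.exp(2j * np.pi * (k * mm / M + l * nn / N)).ravel()
eigval = np.fft.fft2(h)[k, l]
print(np.allclose(H @ v, eigval * v))       # True

# Frequency-domain model: y = H x + n becomes Y = H X + N elementwise.
x = rng.random((M, N))
noise = 0.01 * rng.standard_normal((M, N))
y = (H @ x.ravel()).reshape(M, N) + noise
Y = np.fft.fft2(h) * np.fft.fft2(x) + np.fft.fft2(noise)
print(np.allclose(np.fft.fft2(y), Y))       # True
```

This is why FFT-based implementations replace the $N \times N$ matrix product by $N$ independent scalar multiplications per frequency.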

The image restoration problem calls for finding an estimate of $\bx$ given $\by$, $\bH$, and knowledge about $\bn$ and possibly $\bx$ \cite{katsaggelos:book:91}. The literature on image restoration is rich (reviews and classifications of the major approaches can be found in \cite{katsaggelos:book:91}, \cite{Banham:97}, \cite{chan:05}, and references therein). Methods based on Bayesian formulations are among the most commonly used in the image restoration literature. Since this work is also based on a Bayesian formulation, we will focus on the literature on Bayesian methods. It should be noted, however, that most other methods can also be derived within a Bayesian framework; examples of such methods and their Bayesian formulations will be given in what follows.

\section{Bayesian Framework for Image Restoration}

The fundamental principle of the Bayesian formulation is to treat all parameters and observable variables as unknown stochastic quantities, and to assign probability distributions to these unknowns. Therefore, the original image $\bx$, the blur $\bh$, and the noise $\bn$ in Eq.~(\ref{eq:ch2:noisemodel1}) are treated as samples of random fields, with corresponding \emph{prior} probability density functions (PDFs) that model our knowledge about the original image and the degradation process. Additionally, the PDFs of these unknowns depend on parameters $\Omega$, which are termed \emph{hyperparameters}. The goal in the Bayesian formulation is to form a joint distribution over all unknowns, and to perform inference using this distribution.

The hyperparameters $\Omega$ can be assumed known, estimated separately from $\bx$ and $\bh$, or estimated simultaneously by adopting a \emph{hierarchical} Bayesian framework where they are also assumed unknown and their PDFs are formed. The PDFs of the hyperparameters are called \emph{hyperprior} distributions. The hierarchical model allows us to write the joint global distribution 

\begin{equation}
\p(\bx,\bh,\by, \Omega)=\p(\Omega)\p(\bx, \bh|\Omega)\p(\by|\Omega,\bx, \bh). \label{eq:ch2:joint}
\end{equation}

Typically, we assume that $\bx$ and $\bh$ are \emph{a priori} conditionally independent given $\Omega$, i.e., $\p(\bx, \bh|\Omega) = \p(\bx|\Omega) \p(\bh |\Omega)$. Inference is performed using the posterior
\begin{equation}
\p(\bx, \bh,\Omega| \by) = \frac{\p(\by | \bx, \bh, \Omega) \p(\bx|\Omega) \p(\bh|\Omega) \p(\Omega)}{\p(\by)}. \label{ch2:eq:bayesian}
\end{equation}

In the following sections we first review various prior models for the image, the blur, and the hyperparameters that have appeared in the literature. We then proceed to analyze inference methods for their estimation.


\section{Bayesian Modelling of Blind Image Deconvolution}
\label{sec:Bay_mod}
\subsection{Observation Model}
The observation model follows from Eq.~(\ref{eq:ch2:noisemodel1}) and is determined by the PDF of the noise $\bn$. A typically used model is stationary, zero-mean, independent white Gaussian noise with distribution $\mathcal{N}(\bn | \vc 0, \beta^{-1} \mat I)$, that is,
\begin{equation}
\p(\bn) = \p(\by|\bx,\bh, \beta) = \left(\frac{\beta}{2\pi}\right)^{N/2}\exp\left[-\frac{\beta}{2}\parallel \by-\bH\bx\parallel^2\right],
\end{equation}
where $\beta^{-1}$ denotes the variance. 
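The log of this observation density is easy to evaluate numerically. A minimal sketch follows, where `gaussian_loglikelihood` is a hypothetical helper name and $\bH\bx$ is implemented by circular convolution for simplicity (both are assumptions for illustration):

```python
import numpy as np

def gaussian_loglikelihood(y, x, h, beta):
    """log p(y | x, h, beta) for white Gaussian noise of precision beta.
    H x is applied as circular convolution (an illustrative assumption)."""
    Hx = np.real(np.fft.ifft2(np.fft.fft2(h, s=x.shape) * np.fft.fft2(x)))
    N = y.size
    return 0.5 * N * np.log(beta / (2.0 * np.pi)) - 0.5 * beta * np.sum((y - Hx) ** 2)
```

Increasing $\beta$ (i.e., assuming lower noise variance) penalizes the residual $\parallel \by - \bH\bx \parallel^2$ more heavily, as the exponent above shows.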

Alternative noise models, such as Poisson noise arising in low-intensity imaging or nonstationary noise, are also assumed in certain models.

\subsection{Parametric Prior Blur Models}

Prior blur models can be chosen to be parametric, in which case $\p(\bh)$ is usually a uniform distribution over the parametric family. The unknown parameters of the parametric form may be computed experimentally, or estimated using, for example, \emph{Maximum Likelihood} methods (see \cite{Lagendijk:90,Katsaggelos:91b}). In the following subsections we focus on the most popular parametric blur models in the literature.

Note that any blur model satisfies three constraints:
\begin{itemize}
\item \emph{Positivity}: $h(i,j) \ge 0$.
\item \emph{Realness}: the blur PSF is real valued when the images are real.
\item \emph{Energy conservation}: $\sum_i \sum_j h(i,j) = 1$.
\end{itemize}

\subsubsection*{Linear Motion Blur}

In general, relative motion between the camera and the scene during the exposure period results in a temporal integration. If the motion is fast relative to the exposure period and approximately linear, the blur can be approximated as a linear motion blur, which is a 1D averaging filter.

An example of a horizontal motion blur model is given by (with $L$ an even integer)

\begin{equation}
    h(i,j) = \begin{cases}
        \dfrac{1}{L+1}, & -\dfrac{L}{2} \leq i \leq \dfrac{L}{2},\ j = 0, \\
        0, & \mbox{otherwise.}
    \end{cases}
\label{ch2:eq:motionblur}
\end{equation}
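A direct discretization of Eq.~(\ref{ch2:eq:motionblur}) can be sketched as follows; the function name and the choice to center the PSF in a square array are illustrative:

```python
import numpy as np

def horizontal_motion_psf(L, size):
    """Horizontal motion blur PSF: 1/(L+1) on the L+1 horizontal samples
    through the origin, 0 elsewhere. The PSF is centered in a size x size
    array; L is even and L < size."""
    h = np.zeros((size, size))
    c = size // 2                                   # PSF origin at the array center
    h[c, c - L // 2 : c + L // 2 + 1] = 1.0 / (L + 1)
    return h
```

The PSF sums to one by construction, and its support is a single row of $L+1$ pixels.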

\subsubsection*{Atmospheric Turbulence Blur}

This type of blur is common in remote sensing and aerial imaging applications. For long-term exposure through the atmosphere, a Gaussian PSF model is a reasonably good approximation:
%
\begin{equation}
h\left( i,j \right) = K\,e^{ - \frac{i^2 + j^2} {{2\sigma ^2 }}},
\end{equation}
where $K$ is a normalizing constant and $\sigma ^2$ is the variance that determines the severity of the blur. Alternative atmospheric blur models have been suggested in \cite{Moffat:69,Molina:89}. In these works the PSF is approximated by the function
\begin{equation}
h(i,j) \propto \left(1 + \frac{i^2+j^2}{R^2}\right)^{-\delta}, \label{ch2:eq:atmospsf}
\end{equation}
where $\delta$ and $R$ are unknown parameters.
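Both atmospheric models can be discretized as below; the function names, grid size, and the normalization of both PSFs to unit sum are illustrative choices, not prescribed by the text:

```python
import numpy as np

def gaussian_psf(sigma, size):
    """Gaussian turbulence PSF exp(-(i^2+j^2)/(2 sigma^2)), with the
    constant K chosen so that the PSF sums to one."""
    c = size // 2
    i, j = np.mgrid[-c:size - c, -c:size - c]
    h = np.exp(-(i ** 2 + j ** 2) / (2.0 * sigma ** 2))
    return h / h.sum()

def moffat_psf(R, delta, size):
    """PSF (1 + (i^2+j^2)/R^2)^(-delta) of Eq. (atmospsf), unit-sum
    normalized over the discrete grid."""
    c = size // 2
    i, j = np.mgrid[-c:size - c, -c:size - c]
    h = (1.0 + (i ** 2 + j ** 2) / R ** 2) ** (-delta)
    return h / h.sum()
```

Both PSFs are radially symmetric and peak at the origin; the second model decays polynomially rather than exponentially, giving heavier tails.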

\subsubsection*{Out-of-Focus Blur}

Photographic defocusing is a very common type of blurring, caused primarily by the finite aperture of the camera. Although a complete model of defocus blur depends on many parameters, such as the focal length, the aperture number of the lens, and the distance between the objects and the camera, a uniform circular PSF model is generally used as an approximation, that is,

\begin{equation}
    h\left( i,j \right) = \begin{cases}
        \dfrac{1}{\pi r^2}, & \sqrt{i^2+j^2} \leq r, \\
        0, & \mbox{otherwise.}
    \end{cases}
\label{ch2:eq:outoffocus}
\end{equation}

The uniform 2D blur is sometimes used as a cruder approximation to the out-of-focus blur, and it is also used as a model for sensor pixel integration in super-resolution restoration. This model is defined (with \(L\) an even integer) as

\begin{equation}
    h\left( i,j \right) = \begin{cases}
        \dfrac{1}{(L+1)^2}, & -\dfrac{L}{2} \leq i,j \leq \dfrac{L}{2}, \\
        0, & \mbox{otherwise.}
    \end{cases}
\end{equation}
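The two defocus approximations can be discretized as follows; the function names and the renormalization of the circular PSF to unit sum over the discrete grid are illustrative:

```python
import numpy as np

def out_of_focus_psf(r, size):
    """Uniform circular PSF of Eq. (outoffocus): constant inside a disc of
    radius r, zero outside; renormalized over the discrete grid."""
    c = size // 2
    i, j = np.mgrid[-c:size - c, -c:size - c]
    h = (np.sqrt(i ** 2 + j ** 2) <= r).astype(float)
    return h / h.sum()

def uniform_psf(L, size):
    """(L+1) x (L+1) uniform 2D blur (L even), the cruder defocus model."""
    h = np.zeros((size, size))
    c = size // 2
    s = slice(c - L // 2, c + L // 2 + 1)
    h[s, s] = 1.0 / (L + 1) ** 2
    return h
```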


\subsection{Prior Image and Blur Models}
\label{ch2:sec:imagemodels}

The prior distributions $\p(\bx|\Omega)$ and $\p(\bh|\Omega)$ should reflect our beliefs about the structure of $\bx$ and $\bh$, and also constrain the space of possible solutions. This is necessary due to the ill-posed nature of the image restoration and blind deconvolution problems, and it can also be interpreted from a regularization perspective. Several types of constraints can be imposed on the image and the blur, such as smoothness, piecewise smoothness, or texture. These descriptions can be modeled in a stochastic sense by forming prior distributions. A general exponential model is given by

\begin{subequations}
    \begin{align}
            \p(\bx|\Omega) &= \frac{1}{Z_x(\Omega)} \exp \left[- U_x(\bx,\Omega) \right]
            \label{ch2:eq:generalpriorsf} \\
            \p(\bh|\Omega) &= \frac{1}{Z_h(\Omega)} \exp \left[- U_h(\bh,\Omega) \right]
            \label{ch2:eq:generalpriorsh}
    \end{align}\label{ch2:eq:generalpriors}
\end{subequations}
where $U_x(\cdot)$ and $U_h(\cdot)$ are called the energy functions, and $Z_x$ and $Z_h$ are normalizing terms. The latter may be treated as constants if the hyperparameters are known; otherwise they must be calculated from $\int \exp \left [- U_x(\bx,\Omega) \right] \, \mbox{d}\bx$ and $\int \exp \left [- U_h(\bh,\Omega) \right] \, \mbox{d}\bh$, respectively. Many different image and blur models in the literature can be put in the form of these exponential models. In the following subsections we give details of some particular cases.

\subsubsection{Stationary Gaussian Models}
The most common model is the class of Gaussian models provided by $U_x = \frac{1}{2}\alpha \parallel \mat L \bx \parallel^2$. Then, if $\det |\mat L|\neq0$, the term $Z_x$ in \Eqref{ch2:eq:generalpriors} becomes simply $(2\pi)^{\frac{N}{2}} \alpha^{-\frac{N}{2}}\det |\mat L|^{-1}$, which is simple to calculate if a fixed stationary form is used for $\mat L$. These models are often termed \acf{Simultaneous Autoregression}{sar} or \acf{Conditional Autoregression}{car} models \cite{Ripley:81}.

In the most basic case, where $\mat L = \mat I$, constraints are imposed on the magnitude of the intensity distribution of $\bx$. A more common choice is $\mat L = \mat C$, where $\mat C$ is the discrete Laplacian operator. Note that this selection for $\mat L$ imposes constraints on the derivatives of the image. For instance, Molina \emph{et al.} \cite{Molina:06} used this model for both image and blur, giving
\begin{subequations}
\begin{eqnarray}
    && \p(\vc x|\alpha_\im)   \propto \alpha_{\im}^{N/2} \exp \left [ -\frac{1}{2}\alpha_\im\, \parallel \mat C \vc x \parallel^2 \right ] \label{ch2:eq:imsarprior}\\
    && \p(\vc h|\alpha_{\bl}) \propto \alpha_{\bl}^{M/2} \exp \left [ -\frac{1}{2}\alpha_\bl\, \parallel \mat C\vc h\parallel^2 \right ].
    \label{ch2:eq:blsarprior}
\end{eqnarray}\label{ch2:eq:sarprior}
\end{subequations}

This \ac{sar} model is suitable for $\vc x$ and $\vc h$ if it is assumed that the luminosity distribution is smooth on the image domain, and that the blur is a partially smooth function.
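The image-prior energy in Eq.~(\ref{ch2:eq:imsarprior}) can be evaluated as sketched below, using a 5-point discrete Laplacian $\mat C$; the circular boundary handling and the function name are illustrative assumptions:

```python
import numpy as np

def sar_energy(x, alpha):
    """Energy (alpha/2) ||C x||^2 of the SAR prior, with C the 5-point
    discrete Laplacian applied under circular boundary conditions."""
    Cx = (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)
          + np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1) - 4.0 * x)
    return 0.5 * alpha * np.sum(Cx ** 2)
```

A constant (perfectly smooth) image has zero energy and therefore receives the highest prior probability, while sharp isolated features are penalized heavily.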

\subsubsection{Autoregressive Models}
\label{ch2:sec:armamodel}
A class of algorithms (see, e.g., \cite{Lagendijk:90,Katsaggelos:91b}) models the observation $\by$ as an \acf{Autoregressive Moving Average}{arma} process. The observation equation Eq.~(\ref{eq:ch2:general2}) forms the \acf{Moving Average}{ma} part of the model, whereas the original image is modeled as a 2-D \acf{Autoregressive}{ar} process:

\begin{equation}
\vc{x}  = \mat A\vc{x} + \vc{v},\label{ch2:eq:AR1}	
\end{equation}
where $\mat A$ has a \ac{bttb} form, and $\vc{v}$ is the excitation noise driving the \ac{ar} process. Assuming that $\vc{v}$ is independent of the image, $\p(\bx)$ will be of the form of \Eqref{ch2:eq:generalpriors}, with $U_x = \frac{1}{2}\parallel(\mat I - \mat A)\vc x\parallel^2_{\Lambda_v^{-1}}$ and $Z_x = (2\pi)^{\frac{N}{2}} \det |\Lambda_v|^{\frac{1}{2}} \det |\mat I-\mat A|^{-1}$, where $\Lambda_v$ is the covariance matrix of $\vc v$. Note that, unlike in the \ac{sar} model, the \ac{ar} coefficients also have to be estimated.

A related formulation to the stationary \ac{arma} model is also considered by Katsaggelos and Lay in \cite{Katsaggelos:91b,Katsaggelos:90,Katsaggelos:91}.  In these works, the \ac{ar} model parameters are not estimated directly, but rather the defining sequence of the matrix ${\Lambda_x}$ is found in the discrete frequency domain, along with the other parameters, under the assumption that the image model is stationary.  

        
\subsubsection{Markov Random Field Models}\label{ch2:sec:MRF}

A class of models encountered extensively in image segmentation \cite{Derin87Modelling}, classical image restoration
\cite{Geman84Stochastic}, and also in super-resolution restoration \cite{Schultz96Extraction} and \ac{bd} \cite{Zhang:93,Chipman:99} are the \acf{Markov Random Field}{mrf} models \cite{Won04Stochastic}. They are usually
derived using local spatial dependencies.

We define the Gibbs distribution by setting $U = \sum_{c \in \mathcal{C}} V_c(\vc{x})$ in \Eqref{ch2:eq:generalpriors}, where $V_c(\vc{x})$ is a \textit{potential function} defined over \textit{cliques} $c$ in the image \cite{Won04Stochastic}, and $Z$ is termed the partition function. If quadratic potential functions are used, i.e., $V_c(\vc{x}) = \left({\vc{d}_c^T\vc{x}}\right)^2$, we obtain the \acf{Gaussian Markov Random Field}{gmrf}
\cite{Bouman93Generalized} or \ac{car} \cite{Ripley:81} model, and the Gibbs distribution becomes a Gaussian:

\begin{align}
    p(\vc{x}) = \frac{1}{Z} \exp{\left[-\vc{x}^T \mat B\vc{x}\right]} = \frac{1}{Z} \exp{\left[-\sum_{c \in \mathcal{C}}\vc{x}^T \mat B_c\vc{x}\right]},
\end{align}
where $\mat B_c$ is obtained from $\vc{d}_c$, and its entries $[{\mat B_c}]_{\vc s_1,\vc s_2}$ are nonzero only when pixels $\vc s_1$ and $\vc s_2$ are neighbors. Typically the vectors $\vc{d}_c$ represent finite difference operators. The partition function is now equal to $\pi^{\frac{N}{2}}\det|\mat B|^{-\frac{1}{2}}$.

\acfp{Generalised Gaussian \acs{mrf}}{ggmrf} can also be obtained from this formulation with arbitrary non-quadratic potentials of a similar functional form: $V_c(\vc{x}) = \rho(\vc{d}_c^T\vc{x})$, where $\rho$ is some (usually convex) function, such as the \emph{Huber function} \cite{Bouman93Generalized} or a $p$-norm based function (with $p\ge 1$), $\rho(u)=|u|^p$. This is similar to the potential functions used in anisotropic diffusion methods, the motivation being edge preservation in the reconstructed image. Other extensions of the model consider hierarchical models, or \acfp{Compound \acs{gmrf}}{cgmrf}, also with the goal of avoiding over-smoothing of edges \cite{Jeng91Compound,Geman84Stochastic}.

\subsubsection{Anisotropic Diffusion and Total Variation Type Models}
\label{ch2:sec:TV}

This class of priors incorporates non-quadratic functions of the image gradient, with the aim of preserving edges by not over-penalizing discontinuities, i.e., outliers in the image gradient distribution; see \cite{hamza:02,chan:05} for a unifying view of the probabilistic and variational approaches. Methods using this type of prior usually begin with a regularization formulation in the continuous image domain, resulting in a \acf{Partial Differential Equation}{pde} to be solved. However, these approaches can also be represented in a Bayesian formulation and reformulated in the discrete domain.

The generalized regularization approach using anisotropic diffusion
was proposed by You and Kaveh \cite{You:99}.  In this
formulation, convex functions \(\kappa(\cdot)\) and
\(\upsilon(\cdot)\) of the image gradient $|\nabla x(\vc s)|$ and the
\ac{psf} gradient $|\nabla h(\vc s)|$, respectively, are used to define the
regularization functionals:
\begin{subequations}\label{eq:YouKaveh99}
    \begin{align}
            \mathcal{E}(x) & =  \int_{S_x}  \kappa \left(\abs{\nabla x(\vc s)}\right) \, \ud \vc s   \label{ch2:eq:YouKaveh99_1}  \\
            \mathcal{E}(h) & = \int_{S_h} \upsilon \left( \abs{\nabla h(\vc s)}\right) \, \ud \vc s .  \label{ch2:eq:YouKaveh99_2}
    \end{align}
\end{subequations}
This is in analogy with standard regularization procedures. Variational calculus is used to minimize Eqs.~(\ref{ch2:eq:YouKaveh99_1})-(\ref{ch2:eq:YouKaveh99_2}), which results in a \ac{pde} for each variable. For instance, the optimality condition for $x$ in \Eqref{ch2:eq:YouKaveh99_1} is given by:
\begin{align}
    & \del_{x}\mathcal{E}(x) = \del\cdot\left( \frac{\kappa'(\abs{\del{x}})} {\abs{\del{x}}} \del{x} \right) = 0.
\end{align}
Using a time evolution steepest descent method, we obtain the following \ac{pde}
\begin{equation}
    \frac{\partial{{\hat x}}}{\partial t} = -\del_{x} {\mathcal E(\hat{x})}, \label{ch2:eq:anisodiff}
\end{equation}
which clearly represents an anisotropic diffusion process. Thus, as time $t$ progresses, directional smoothing occurs depending on the local image gradient. The strength and type of smoothing depends on the \textit{diffusion coefficient} or \textit{flux variable}, $c$, which is given by
\begin{align}
    c(\abs{\del{x}}) & = \frac{ \kappa'(\abs{\del{x}}) }{ \abs{\del{x}} }.
\end{align}

Appropriate choices of $c$ (or $\kappa$) result in various types of restorations. For instance, choosing \(\kappa(x) = \frac{1}{2}x^2\) gives $c(\abs{\del{x}}) = 1$ and $\del_{x}\mathcal{E}(x) = \del^2 x$, i.e., a Laplacian operator \cite{You96Ringing}, which results in standard spatially-invariant isotropic regularization, or a \ac{car} model. Another choice is $\kappa(x) = x$, which gives $c(\abs{\del{x}}) = \frac{1}{\abs{\del{x}}}$ and results in the \textit{\acf{Total Variation}{tv}} norm \cite{Chan_TV98}. In this case, smoothing is only performed in the direction parallel to the edges, and smoothing orthogonal to the edges is completely suppressed.

For the two cases represented above, the corresponding image priors can be written as

 \begin{equation}
            \p(\vc{x}) \propto \exp\left[-\alpha_{\im}\sum_i\left((\Delta^h_i\vc x)^2+(\Delta^v_i\vc x)^2\right)\right]
    \end{equation}
for the Laplacian; and
    \begin{equation}
            \p(\vc{x}) \propto \exp\left[-\alpha_{\im}\sum_i\sqrt{(\Delta^h_i\vc x)^2+(\Delta^v_i\vc x)^2}\right]
    \end{equation}
for the \ac{tv} norm, where \(\Delta^h_i\) and \(\Delta^v_i\) are linear operators corresponding to horizontal and vertical first-order differences at pixel \(i\), respectively.
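The difference between the two priors can be seen numerically: for a step edge, the quadratic energy grows with the square of the edge height, while the \ac{tv} energy grows only linearly, which is why \ac{tv} penalizes sharp edges less severely. A sketch, using first-order differences with circular boundary conditions (an illustrative discretization):

```python
import numpy as np

def prior_energies(x):
    """Quadratic and TV energies built from horizontal/vertical first-order
    differences (circular boundary conditions)."""
    dh = np.roll(x, -1, axis=1) - x       # horizontal first difference
    dv = np.roll(x, -1, axis=0) - x       # vertical first difference
    quad = np.sum(dh ** 2 + dv ** 2)      # quadratic (Laplacian-type) energy
    tv = np.sum(np.sqrt(dh ** 2 + dv ** 2))  # total-variation energy
    return quad, tv
```

Doubling an edge height quadruples the quadratic energy but only doubles the \ac{tv} energy.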

Many other diffusion coefficients are proposed in the literature, including very complex structural operators (see \cite{Weickert97review} for a review). An interesting method is \cite{You:99}, where a combination of the Laplacian and \ac{tv} is used. The smoothing strength is increased using the Laplacian in smooth areas with low gradient magnitude, and decreased using the \ac{tv} norm in areas where large intensity transitions occur in order to preserve edges while still removing noise. 

\v{S}roubek and Flusser \cite{Sroubek:05} use a similar scheme to those already mentioned, but they write the anisotropic diffusion model in the form of \Eqref{ch2:eq:generalpriors} using the following discretization of \Eqref{ch2:eq:YouKaveh99_1}
\begin{align}
    \p(\vc x,c(\vc x)) &= \frac{1}{Z_x} \exp \left[-\frac{1}{2} \vc x^T \mat{B}\left( c \right) \vc x \right].
\end{align}
The diffusion coefficient is set equal to the edge strengths between neighboring pixels in a hidden line process, so that a spatially-varying weight matrix $\mat B$ can be formed from local image gradients. Similar formulations are also proposed in \cite{You:96,LagendijkBiemondBoekee88,Katsaggelos91regularized}. Note that a very general formulation has also been proposed in the regularization context in \cite{KatsaggelosKang95}.

\subsection{Hyperprior Models}\label{ch2:sec:hyperparams}

The estimation of the hyperparameters $\Omega$ is an important problem, since their values determine the performance of the algorithms; they therefore play an important role in Bayesian image restoration, blind deconvolution, and super-resolution. This estimation problem is introduced in the hierarchical Bayesian paradigm as a second stage, where, as explained before, the first stage consists of the formulation of $\p(\vc x|\Omega)$, $\p(\vc h|\Omega)$, and $\p(\vc y|\vc x,\vc h,\Omega)$.

A large part of the Bayesian literature is devoted to finding hyperprior distributions $\p(\Omega)$ for which $\p(\Omega,\vc x,\vc h|\vc y)$ can be calculated in a straightforward way or approximated. These are the so-called conjugate priors \cite{Berger:85}, which were developed extensively by Raiffa and Schlaifer \cite{Raiffa:61}.

Besides allowing easy calculation or approximation of $\p(\Omega, \vc x,\vc h|\vc y)$, conjugate priors have, as we will see later, the intuitive feature of allowing one to begin with a certain functional form for the prior and end up with a posterior of the same functional form, but with the parameters updated by the sample information.

The \emph{a priori} models for the hyperparameters depend on the type of the unknown parameters, and different models have been proposed in the literature. For parameters corresponding to inverses of variances (precisions), the gamma distribution is used, given by

\begin{equation}
\p(\omega) = \Gamma(\omega|a_{\omega}^o,b_{\omega}^o) =
\frac{{(b_{\omega}^o)}^{a_{\omega}^o}}{\Gamma(a_{\omega}^o)}
\omega^{a_{\omega}^o-1}\exp[-b_{\omega}^o\,\omega],
\label{ch2:eq:general_hyperprior}
\end{equation}
where $\omega>0$ denotes a hyperparameter, $b_{\omega}^o>0$ is the scale parameter, and $a_{\omega}^o>0$ is the shape parameter. These parameters are assumed known.  The gamma distribution has the following mean, variance and mode:
\begin{equation}
E[\omega] = \frac{a_{\omega}^o}{b_{\omega}^o} \ , \qquad Var[\omega]
= \frac{a_{\omega}^o}{{(b_{\omega}^o)}^2}\ , \qquad
\mbox{Mode}[\omega]=\frac{a_{\omega}^o-1}{b_{\omega}^o}.
\label{ch2:eq:gammaproperties}
\end{equation}
Note that the mode does not exist when $a_{\omega}^o\le 1$, and that the mean and the mode do not coincide.
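The moments in Eq.~(\ref{ch2:eq:gammaproperties}) are easy to check against Monte Carlo samples; the shape and scale values below are illustrative, and note that NumPy parameterizes the gamma distribution by shape $a_\omega^o$ and scale $1/b_\omega^o$:

```python
import numpy as np

rng = np.random.default_rng(1)
a_o, b_o = 3.0, 2.0                       # illustrative hyperprior parameters
omega = rng.gamma(shape=a_o, scale=1.0 / b_o, size=200_000)

mean = a_o / b_o                          # E[omega]     = 1.5
var = a_o / b_o ** 2                      # Var[omega]   = 0.75
mode = (a_o - 1.0) / b_o                  # Mode[omega]  = 1.0 (requires a_o > 1)
```

The sample mean and variance of `omega` agree with the closed-form expressions up to Monte Carlo error.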

There are several important reasons for selecting gamma distributions as hyperpriors. First, the gamma distribution is the conjugate prior for the precision (inverse variance) of a Gaussian, and therefore the corresponding posteriors will also be gamma distributions in the Bayesian formulation. Second, as will be shown later, the resulting update equations exhibit interesting similarities to some previously derived results in the literature.

For components of mean vectors the corresponding conjugate prior is a normal distribution. Additionally, for covariance matrices the hyperprior is given by an inverse Wishart distribution (see \cite{gelman:03}).
 
We observe, however, that in general most of the methods proposed in the literature use the \textit{uninformative} prior model

\begin{equation}
\p(\Omega) = \mbox{constant}. \label{ch2:eq:improper}
\end{equation}

\section{Bayesian Inference Methods}\label{ch2:sec:Bay_inf}

There are a number of different ways to estimate the image and the blur using \Eqref{ch2:eq:bayesian}. Many methods in the literature provide point estimates of $\bx$ and $\bh$, which reduces the problem to one of optimization. Other methodologies instead estimate the distributions of these unknowns \cite{gelman:03,Neal93Probabilistic,Jordan:98}, which has some advantages over point estimation; the distributions can either be approximated or simulated. In this section we present the different inference methods.

\subsection{Maximum a Posteriori and Maximum Likelihood}\label{ch2:sec:mapml}
 
The \textit{\acf{Maximum {A Posteriori}}{map}} solution is obtained by maximizing the posterior probability density, that is,

\begin{equation}
     \{\hvc x,\hvc h, \hat \Omega\}_{\mathrm{MAP}} = \argmax_{\vc x,\vc h, \Omega} \,\, \cpdf{\vc y}{\vc x, \vc h, \Omega}\cpdf{\vc x}{\Omega} \cpdf{\vc h}{\Omega} \pdf{\Omega}.
    \label{ch2:eq:map}
\end{equation}

On the other hand, the \emph{\acf{Maximum Likelihood}{ml}} solution attempts to maximize the likelihood $\cpdf{\vc y}{\vc x,\vc h,\Omega}$ with respect to the parameters:
\begin{equation}
    \{\hvc x,\hvc h, \hat \Omega\}_{\mathrm{ML}} = \argmax_{\vc x,\vc h, \Omega} \,\, \cpdf{\vc y}{\vc x, \vc h, \Omega}.
    \label{ch2:eq:ml}
\end{equation}
Note that in this case the parameters present only in $\p(\vc x, \vc h | \Omega)$ cannot be estimated. The \ac{ml} method is widely used, and it can also be seen as a non-Bayesian method. It should be noted that the \ac{ml} solution is identical to the \ac{map} solution when uninformative (flat) priors for $\bx$ and $\bh$ are used in Eq.~(\ref{ch2:eq:map}). Some approaches utilize flat priors for some parameters but not for others. Assuming known values for the hyperparameters is equivalent to forming degenerate distributions (impulse functions) for the hyperpriors. For instance, by assuming

\begin{equation}
\p(\Omega)=\delta\left(\Omega,\Omega_0 \right) =
\left\{\begin{array}{ll} 1,& \; \mbox{if } \Omega=\Omega_0\\
                                    0,& \; \mbox{otherwise,}
                    \end{array}
            \right.
\end{equation}
the \textit{\ac{map}} and \emph{\ac{ml}} solutions become, respectively,
\begin{equation}
    \{\hvc x,\hvc h\}_{\mathrm{MAP}} = \argmax_{\vc x,\vc h} \,\, \cpdf{\vc y}{\vc x, \vc h, \Omega_0}\cpdf{\vc x}{\Omega_0}
    \cpdf{\vc h}{\Omega_0}
    \label{ch2:eq:map_deg}
\end{equation}
\begin{equation}
    \{\hvc x,\hvc h\}_{\mathrm{ML}}  = \argmax_{\vc x,\vc h} \,\, \cpdf{\vc y}{\vc x, \vc h, \Omega_0}.
    \label{ch2:eq:ml_deg}
\end{equation}

Many deconvolution methods fit into these formulations through different choices of likelihood functions; image, blur, and hyperparameter priors; and optimization methods. It should be noted that many regularization-based approaches can also be derived using this formulation. In regularization approaches, the inverse problem is formulated as a constrained optimization problem, where the data-fidelity term of the cost function is $\norm{\vc y - \mat H\vc x }_W^2$. Additionally, constraints on the solutions are imposed by regularization terms. Generally, these constraints enforce smoothness of the image and the blur, that is, the high-frequency energy of the image and the blur is minimized. The influence of the regularization terms is controlled by the regularization parameters, which represent the trade-off between fidelity to the data and desirable properties (smoothness) of the solutions.

For example, the classical regularized image restoration formulation used in \cite{KatsaggelosPhD:85,Katsaggelos91regularized,LagendijkBiemondBoekee88}
can be derived using Eqs.~(\ref{ch2:eq:map_deg}) and (\ref{ch2:eq:ml_deg}). This formulation is extended to the blind deconvolution problem in \cite{You:96}, and it can be given in relaxed minimization form as follows:
\begin{align}
    \hvc x,\hvc h =  \argmin_{\vc x,\vc h} \,\, \left[ \norm{ \vc y - \mat H\vc x } _W^2 + \lambda_1 \norm{\mat L_x\vc x}^2 + \lambda_2 \norm{\mat L_h\vc h}^2 \right], \label{ch2:eq:reg}
\end{align}
where $\lambda_1 $ and $\lambda_2 $ are the Lagrange multipliers for each constraint, and \(\mat L_x\) and \(\mat L_h\) are the regularization operators. The regularization operators are chosen to be Laplacians multiplied by spatially-varying weights, calculated from the local image variance as in \cite{KatsaggelosPhD:85,Katsaggelos91regularized,efstratiadis:90,KatsaggelosKang95}, in order to provide spatial adaptivity and avoid oversmoothing edges.
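With the blur known and fixed (the non-blind case, $W = \mat I$), the minimizer of Eq.~(\ref{ch2:eq:reg}) over $\vc x$ has a closed form in the DFT domain for circulant $\mat H$ and $\mat L_x$. A sketch with a Laplacian regularization operator and circular boundary conditions; these choices, and the function name, are illustrative:

```python
import numpy as np

def tikhonov_restore(y, h, lam):
    """argmin_x ||y - H x||^2 + lam ||C x||^2, solved per DFT coefficient as
    X = conj(H) Y / (|H|^2 + lam |C|^2); C is a circular discrete Laplacian."""
    H = np.fft.fft2(h, s=y.shape)
    lap = np.zeros_like(y, dtype=float)
    lap[0, 0] = -4.0                       # circularly-shifted 5-point Laplacian
    lap[0, 1] = lap[0, -1] = lap[1, 0] = lap[-1, 0] = 1.0
    C = np.fft.fft2(lap)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(X))
```

A larger `lam` trades fidelity to the data for smoothness of the restoration, which is exactly the role of the regularization parameter described above.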

\subsubsection{Iterated Conditional Modes}

A major problem in the solution of \Eqref{ch2:eq:map_deg} is the simultaneous estimation of the variables $\bx$ and $\bh$. A widely used approach is \emph{\acf{Alternating Minimization}{am}}, which minimizes with respect to one unknown while holding the others constant. The main advantage of this algorithm is its simplicity, due to the resulting linearization of the objective function (see \Eqref{ch2:eq:reg}). This optimization procedure corresponds to the \acf{Iterated Conditional Modes}{icm} method proposed by Besag \cite{Besag86On}. \ac{am} has been applied to standard regularization approaches \cite{You:96,Chen05Soft}, and to the anisotropic diffusion and \ac{tv} type models.
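An \ac{am} iteration for a simplified version of Eq.~(\ref{ch2:eq:reg}) can be sketched as follows. To keep the example short, it assumes circular convolution and identity regularization operators ($\mat L_x = \mat L_h = \mat I$), so that each half-step is an exact per-frequency quadratic minimization; practical \ac{am} schemes use the operators and constraints discussed above.

```python
import numpy as np

def alternating_minimization(y, h0, lam1, lam2, iters=10):
    """Minimize ||y - H x||^2 + lam1 ||x||^2 + lam2 ||h||^2 by alternating
    exact quadratic solves for x (h fixed) and for h (x fixed), per DFT bin."""
    Y = np.fft.fft2(y)
    H = np.fft.fft2(h0, s=y.shape)
    X = np.zeros_like(Y)
    for _ in range(iters):
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam1)   # x-step: h held fixed
        H = np.conj(X) * Y / (np.abs(X) ** 2 + lam2)   # h-step: x held fixed
    return np.real(np.fft.ifft2(X)), np.real(np.fft.ifft2(H))
```

Since each half-step exactly minimizes the cost in one variable, the objective is non-increasing over the iterations, although, as with \ac{icm}, only convergence to a local minimum is guaranteed.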

There are various numerical methods to solve the \acp{pde} associated with \ac{am}. These include the classical Euler, Newton, or Runge-Kutta methods, as well as more recently developed approaches such as time-marching \cite{RudinOsherFatemi92}, primal-dual methods \cite{ChanGolubMulet96}, lagged diffusivity fixed-point schemes \cite{VogelOman96}, and half-quadratic regularization \cite{Chambolle97Image} (similar to the discrete schemes in \cite{Geman95Nonlinear,Geman92Constrained}). All of these methods discretize and linearize the \acp{pde} to approximate the solution.

\subsection{Minimum Mean-Squared Error}

The \ac{map} estimate does not take into account the whole posterior \ac{pdf}. If the posterior is sharply peaked about its maximum, the \ac{map} estimate gives the best possible solution. However, there are cases where the posterior can be broad (heavy-tailed) or even multimodal. Moreover, as mentioned in \cite{Molina:99}, for a Gaussian in high dimensions most of the probability \textit{mass} is concentrated away from the peak of the probability \textit{density}.

The \acf{Minimum Mean-Squared Error}{mmse} estimate instead minimizes the expected mean-squared error between the estimates and the true values, and therefore corresponds to the mean of $\p(\vc x,\vc h,\Omega|\vc y)$. In practice, finding \ac{mmse} estimates analytically is generally difficult, though it is possible with sampling-based methods (\sref{ch2:sec:sampling}), and they can be approximated using variational Bayesian methods (\sref{ch2:sec:var}).

\subsection{Marginalizing Hidden Variables}

Another approach to the problem is to marginalize out some unknowns and perform inference on the others. For instance, in the Evidence analysis \cite{Molina:99,Molina:94}, we first calculate

\begin{equation}
\hat {\vc  h},\hat \Omega=\argmax_{\vc h,\Omega}\int_{\vc
x}\p(\Omega)\p(\vc x, \vc h|\Omega)\p(\vc y|\Omega,\vc x, \vc h)\ud
\vc{x} \label{ch2:eq:rms:marginal}
\end{equation}
and then finding the restoration as
\begin{equation}
    \left. \hat {\vc  x}\right|_{\hat{\vc h},\hat\Omega}=\argmax_{\vc x}\p(\vc
    x|\hat \Omega)\p(\vc y|\hat \Omega,\vc x, \hat{\vc h}).
\end{equation}

Another way is to marginalize $\bh$ and $\Omega$ to directly obtain 
\begin{equation}
\hat {\vc  x}=\argmax_{\vc x}\int_{\vc
h,\Omega}\p(\Omega)\p(\vc x, \vc h|\Omega)\p(\vc y|\Omega,\vc x, \vc
h)\ud \vc{h}\cdot  \ud\Omega,
\end{equation}
which is called the Empirical analysis \cite{Molina:94}. The marginalized variables are also referred to as hidden variables.

The \acf{expectation-maximization}{EM} algorithm, first described in \cite{Dempster:77}, is a very popular technique in signal processing for iteratively solving \ac{ML} and \ac{MAP} problems. Its convergence to a \textit{local} maximum of the likelihood or the posterior distribution is guaranteed. It is particularly well suited to providing solutions to inverse problems in image restoration, blind deconvolution, and super-resolution.




\subsection{Variational Bayesian Approach}\label{ch2:sec:var}

\subsection{Sampling Methods}\label{ch2:sec:sampling}