\chapter{Refining subpixel displacements - an EM-approach}
\label{chapter:subpix}
In a ptychography experiment, the object is moved by a piezo element to realise the different illumination positions. The preceding treatment did not account for the experimental error of this motor movement, which leads to a subpixel shift of the displacement vector.
In this section we extend the ML model of ptychography to account for subpixel displacements of the probe positions $\vec{r}_j$. To optimise over the subpixel displacements, we split the probe displacement vector
$\vec{r}_j$ into the assumed position $\vec{\tilde r}_j$ and the subpixel displacement vector $\vec{\hat r}_j$: $\vec{r}_j=\vec{\tilde r}_j+\vec{\hat r}_j$. 
The probability of measuring $n_{j\vec{q}}$ photons therefore depends on this additional parameter:
\beq
 p(n_{j\vec{q}}|P_{\vec{r}},O_{\vec{r}},\vec{\hat r}_j) = \frac{(I_{j\vec{q}}(\vec{\hat r}_j))^{n_{j\vec{q}}}}{n_{j\vec{q}}!}e^{-I_{j\vec{q}}(\vec{\hat r}_j)}
\eeq
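As an illustration, the negative log-likelihood corresponding to this Poisson model, with the parameter-independent $\ln n_{j\vec{q}}!$ term dropped, can be evaluated numerically as follows (a minimal sketch; the function and variable names are illustrative and not part of the reconstruction code):

```python
import numpy as np

def poisson_neg_log_likelihood(I, n):
    """Negative log-likelihood of photon counts n under Poisson rates I.

    The n-dependent constant log(n!) is dropped, since it does not affect
    optimisation over the probe, object, or displacement parameters.
    """
    I = np.asarray(I, dtype=float)
    n = np.asarray(n, dtype=float)
    return np.sum(I - n * np.log(I))

# For fixed counts n, the minimum over the rates I is attained at I = n:
counts = np.array([3.0, 7.0, 1.0])
best = poisson_neg_log_likelihood(counts, counts)
worse = poisson_neg_log_likelihood(counts + 0.5, counts)
```

Minimising this quantity over the model parameters is equivalent to maximising the Poisson likelihood above.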
In machine learning, the problem of maximising the likelihood $p(\mathbf{X}|\vec{\theta})$ of an observed quantity $\mathbf{X}$ that depends on a set of latent variables $\mathbf{Z}$ and a set of unknown parameters $\vec{\theta}$ is usually solved with
some variant of the Expectation-Maximisation \index{Expectation-Maximisation} (EM) algorithm. In our case the observed quantities are the measured photon counts $n_{j\vec{q}}$, the latent variables are the probe and object function $(P_{\vec{r}},O_{\vec{r}})$
and the unknown parameters are the subpixel displacements $\vec{\hat r}$.
The general version of the EM algorithm can be stated as follows \cite{bishop}:
\begin{enumerate}
 \item choose initial $\vec{\theta}^{old}$
 \item E - step: evaluate the posterior distribution $p(\mathbf{Z}|\mathbf{X},\vec{\theta}^{old})$ \\
 and calculate $\mathcal{Q}(\vec{\theta},\vec{\theta}^{old})=\sum_{\mathbf{Z}} p(\mathbf{Z}|\mathbf{X},\vec{\theta}^{old}) \ln p(\mathbf{X},\mathbf{Z}|\vec{\theta})$
 \item M - step: calculate $\vec{\theta}^{new} = \underset{\vec \theta}{\mathrm{argmax}} \mathcal{Q}(\vec{\theta},\vec{\theta}^{old})$  
 \item check for convergence of the log-likelihood or the parameter values. If the convergence criterion is not satisfied, set $\vec{\theta^{old}}$ to $\vec{\theta^{new}}$ and go to step 2.
\end{enumerate}
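As a concrete toy instance of these four steps, consider estimating the mixing weight of a two-component Gaussian mixture with known means, where the latent variable of each sample is its component label. The following sketch (all names and the specific toy model are illustrative, not related to the ptychographic problem) shows how the E-step evaluates the posterior over $\mathbf{Z}$ and the M-step maximises $\mathcal{Q}$, here in closed form:

```python
import numpy as np

def em_mixture_weight(x, mu0=0.0, mu1=5.0, sigma=1.0, n_iter=50):
    """EM for the mixing weight pi of a two-Gaussian mixture with known
    means; the latent variables Z are the per-sample component labels."""
    def gauss(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    pi = 0.5                                  # step 1: initial theta_old
    for _ in range(n_iter):
        # E-step: posterior responsibilities p(z = 1 | x, pi_old)
        w1 = pi * gauss(x, mu1)
        w0 = (1.0 - pi) * gauss(x, mu0)
        resp = w1 / (w0 + w1)
        # M-step: argmax of Q(pi, pi_old), which here is the mean responsibility
        pi = resp.mean()
    return pi

rng = np.random.default_rng(0)
# draw 70% of the samples from the mu1 component
labels = rng.random(2000) < 0.7
data = np.where(labels, rng.normal(5.0, 1.0, 2000), rng.normal(0.0, 1.0, 2000))
pi_hat = em_mixture_weight(data)
```

For well-separated components the estimated weight converges to the true mixing fraction within a few iterations.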
In the case of ptychography, executing the M - step of the general version would require a sum over the whole parameter space of probes and objects to determine the expectation value of the complete-data
log-likelihood. This is not feasible in a reasonable amount of computation time, and therefore a version usually called hard EM \index{hard EM} (citation missing) is used here.
In hard EM\index{hard EM}, the evaluation of the expectation value is replaced by a simple maximisation of the log-likelihood over the latent variables $\mathbf{Z}$. The hard EM algorithm looks as follows:
\begin{enumerate}
 \item choose initial $\vec{\theta}^{old}$
 \item E - step: $\mathbf{Z'} = \underset{\mathbf{Z}}{\mathrm{argmax}} \, p(\mathbf{X},\mathbf{Z}|\vec{\theta}^{old})$
 \item M - step: $\vec{\theta}^{new} = \underset{\vec{\theta}}{\mathrm{argmax}} \, p(\mathbf{X},\mathbf{Z'}|\vec{\theta})$
 \item check for convergence of the log-likelihood or the parameter values. If the convergence criterion is not satisfied, set $\vec{\theta}^{old}$ to $\vec{\theta}^{new}$ and go to step 2.
\end{enumerate}
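The hard-EM loop can be written as a generic driver into which the two maximisations are plugged; in the ptychographic setting the E-step is the conjugate-gradient probe/object reconstruction and the M-step the Newton refinement of the subpixel displacements described below. A minimal sketch, demonstrated on a toy problem (hard EM for two cluster means, which reduces to 1D $k$-means; all names are illustrative):

```python
import numpy as np

def hard_em(x, theta0, e_step, m_step, log_lik, tol=1e-8, max_iter=100):
    """Generic hard-EM driver: e_step returns the maximising latent
    variables Z' for fixed theta, m_step the maximising theta for
    fixed Z'; iteration stops when the log-likelihood stalls."""
    theta, ll_old = theta0, -np.inf
    for _ in range(max_iter):
        z = e_step(x, theta)        # E-step: Z' = argmax_Z p(X, Z | theta_old)
        theta = m_step(x, z)        # M-step: theta = argmax_theta p(X, Z' | theta)
        ll = log_lik(x, z, theta)
        if abs(ll - ll_old) < tol:  # step 4: convergence check
            break
        ll_old = ll
    return theta, z

# toy instance: hard EM for two cluster means in 1D (i.e. k-means)
data = np.array([0.0, 0.1, -0.1, 5.0, 5.1, 4.9])
assign = lambda x, th: np.argmin(np.abs(x[None, :] - th[:, None]), axis=0)
update = lambda x, z: np.array([x[z == k].mean() for k in (0, 1)])
loglik = lambda x, z, th: -np.sum((x - th[z]) ** 2)
centres, labels = hard_em(data, np.array([0.5, 4.0]), assign, update, loglik)
```

The hard assignment in the E-step plays the role of the single maximising configuration $\mathbf{Z'}$ that replaces the full posterior expectation.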

\subsection{E-step}
The maximisation of the likelihood with respect to the latent variables $(P_{\vec r},O_{\vec{r}})$ is carried out using conjugate gradients and is described in detail in \cite{thibault2012}. Only a short 
summary of the method shall be given here. \citeauthor{thibault2012} compute the gradient of the log-likelihood \refequ{equ:logll} as follows:
\beq
g_{O_{\vec{r}}} = \pardevF{\mathcal{L}}{O_{\vec{r}}} = \sum_{j} P_{\vec{r}-\vec{r_j}} \chi^{*}_{j\vec{r}}
\eeq
\beq
g_{P_{\vec{r}}} = \pardevF{\mathcal{L}}{P_{\vec{r}}} = \sum_{j} O_{\vec{r}+\vec{r}_j} \chi^{*}_{j\vec{r}+\vec{r}_j}
\eeq
with the function $\chi$ defined in Fourier space:
\beq
\tilde{\chi}_{j\vec{q}} = \pardevF{\mathcal{L}}{I_{j\vec{q}}}\tilde{\psi}_{j\vec{q}} = w_{j\vec{q}} \left( \frac{I_{j\vec{q}}}{n_{j\vec{q}}}-1 \right)  \tilde{\psi}_{j\vec{q}}
\eeq
At the $n$th iteration step, a new gradient is computed as 
\beq
g^{(n)} = \left( \sum_j P^{(n)}_{\vec{r}-\vec{r}_j} \chi^{*(n)}_{j\vec{r}} , \sum_j O^{(n)}_{\vec{r}+\vec{r}_j} \chi^{*(n)}_{j\vec{r}+\vec{r}_j} \right)
\eeq
The new conjugate search direction is then computed by 
\beq
 \Delta^{(n)} = -g^{(n)} + \beta^{(n)}\Delta^{(n-1)}
\eeq
with $\beta^{(n)}$ given by the Polak--Ribi\`ere formula \cite{Press2007}:
\beq
  \beta^{(n)} = \frac{\langle g^{(n)}, g^{(n)}\rangle - \langle g^{(n)},g^{(n-1)}\rangle}{\langle g^{(n-1)},g^{(n-1)}\rangle}
\eeq
A new point in the conjugate direction is found by line search. For detailed calculations refer to \cite{thibault2012}.
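Schematically, the nonlinear conjugate-gradient update described above (gradient, Polak--Ribière $\beta$, conjugate direction, line search) can be sketched as follows. The quadratic test function, the backtracking line search, and the steepest-descent restart safeguard are illustrative stand-ins for the actual probe/object update in \cite{thibault2012}:

```python
import numpy as np

def cg_polak_ribiere(f, grad, x0, n_iter=50):
    """Nonlinear conjugate gradients with the Polak-Ribiere formula
    beta = (<g,g> - <g,g_old>) / <g_old,g_old> and a backtracking
    (Armijo) line search along the conjugate direction."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(n_iter):
        fx, gTd = f(x), g @ d
        alpha = 1.0
        # backtracking line search for a step length with sufficient decrease
        while f(x + alpha * d) > fx + 1e-4 * alpha * gTd and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        if g_new @ g_new < 1e-24:           # gradient vanished: converged
            break
        beta = (g_new @ g_new - g_new @ g) / (g @ g)
        d = -g_new + beta * d               # new conjugate direction
        if g_new @ d >= 0:                  # safeguard: restart if not a
            d = -g_new                      # descent direction
        g = g_new
    return x

# quadratic bowl with minimum at (1, -2)
f = lambda x: (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 2.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 6.0 * (x[1] + 2.0)])
x_min = cg_polak_ribiere(f, grad, [0.0, 0.0])
```

In the actual reconstruction the argument of $f$ is the pair of probe and object arrays and the gradient is the $g^{(n)}$ given above.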
\subsection{M-step}
In the maximisation step, a new set of displacement vectors $\vec{\hat r}_j=(\hat x_j, \hat y_j)$ is calculated by minimising the negative log-likelihood obtained in the E-step with respect to the old displacement 
vectors $\vec{\hat r}_j^{old}$. Because the search space has only two dimensions, namely the $x$- and $y$-coordinates of the displacement vector, a simple iterative method involving the Hessian
of the log-likelihood with respect to $\hat x$ and $\hat y$ can be used. 
Newton's method approximates the function to minimize with a second-order multidimensional Taylor series.
\begin{align}
  \mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)} + \Delta\vec{\hat r}_j^{(n)}) &= \mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)}) + \{D\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)})\}^T\Delta\vec{\hat r}_j^{(n)} \notag  \\ 
      &+\frac{1}{2!} (\Delta\vec{\hat r}_j^{(n)})^T \{D^2\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)})\}\Delta\vec{\hat r}_j^{(n)}\notag\\
      &+\mathcal{O}(\|\Delta\vec{\hat r}_j^{(n)}\|^3) \label{equ:taylor}
\end{align}
Here, $D\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j)$ is the gradient of $\mathcal{L}_{j\vec{q}}$ evaluated at $\vec{\hat r}_j$ and $D^2\mathcal{L}_{j\vec{q}}$ is the Hessian matrix.
Setting the derivative of \refequ{equ:taylor} to 0 and solving for $\Delta\vec{\hat r}_j^{(n)}$ yields 
\beq
  \Delta\vec{\hat r}_j^{(n)} = - \left[ D^2\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)}) \right]^{-1}\left[D\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)})\right]
\eeq
This can now be used to define an iterative scheme which converges to the minimum in the following way:
\beq
  \vec{\hat r}_j^{(n+1)} = \vec{\hat r}_j^{(n)} - \gamma \left[ D^2\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)}) \right]^{-1}\left[D\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)})\right] := \vec{\hat r}_j^{(n)} - \gamma \vec{p}_j^{(n)}
\eeq
A $\gamma$ value of \num{0.5} was used throughout the reconstructions to satisfy the Wolfe conditions. The Hessian matrix was computed using central differences on a 9-point grid with an axis length 
of \num{0.5} pixels around the current displacement vector.
\beq
  D^2\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)}) = \begin{pmatrix}
 4\,\delta_{(h,\hat x)}^2[\mathcal{L}_{j\vec{q}}](\vec{\hat r}_j^{(n)}) & \delta_{(h,\hat x)}\delta_{(h,\hat y)}[\mathcal{L}_{j\vec{q}}](\vec{\hat r}_j^{(n)}) \\
 \delta_{(h,\hat y)}\delta_{(h,\hat x)}[\mathcal{L}_{j\vec{q}}](\vec{\hat r}_j^{(n)}) & 4\,\delta_{(h,\hat y)}^2[\mathcal{L}_{j\vec{q}}](\vec{\hat r}_j^{(n)})
\end{pmatrix}/4h^2
\eeq
with the central differences defined in \refsec{sec:diff}. This implementation was found to produce non-positive-semidefinite Hessian matrices at some iterations. Newton minimisation with a 
non-positive-semidefinite Hessian matrix can approach a saddle point rather than a minimum (cite). In these cases the Newton method was replaced by simple steepest descent (cite) with
the update rule 
\beq
  \vec{\hat r}_j^{(n+1)} =  \vec{\hat r}_j^{(n)} - \gamma \left[D\mathcal{L}_{j\vec{q}}(\vec{\hat r}_j^{(n)})\right]
\eeq
with a $\gamma$ value of \num{0.5}.
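The damped Newton update with its finite-difference Hessian and the steepest-descent fallback can be sketched as follows. The quadratic test function is an illustrative stand-in for $\mathcal{L}_{j\vec{q}}$, and all function names are assumptions, not the actual reconstruction code:

```python
import numpy as np

def fd_gradient(f, r, h=0.5):
    """Central-difference gradient on the same 9-point stencil."""
    x, y = r
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return np.array([gx, gy])

def fd_hessian(f, r, h=0.5):
    """Central-difference Hessian on a 9-point grid of axis length h."""
    x, y = r
    dxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    dyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    dxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return np.array([[dxx, dxy], [dxy, dyy]])

def refine_displacement(f, r0, gamma=0.5, n_iter=30):
    """Damped Newton iteration on the 2D displacement; falls back to
    steepest descent whenever the Hessian is not positive definite."""
    r = np.asarray(r0, dtype=float)
    for _ in range(n_iter):
        g = fd_gradient(f, r)
        H = fd_hessian(f, r)
        # positive-definiteness check via the eigenvalues of the 2x2 Hessian
        if np.all(np.linalg.eigvalsh(H) > 0):
            p = np.linalg.solve(H, g)       # Newton direction
        else:
            p = g                           # steepest-descent fallback
        r = r - gamma * p
    return r

# toy quadratic surrogate for the negative log-likelihood L_jq
L = lambda x, y: (x - 0.3) ** 2 + 2.0 * (y + 0.2) ** 2 + 0.5 * x * y
r_hat = refine_displacement(L, [0.0, 0.0])
```

Because the surrogate is quadratic, the central differences reproduce its gradient and Hessian exactly and the damped iteration contracts the error by $\gamma$ per step.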
