\documentclass[12pt]{article}
\usepackage{layout,pdfsync,latexsym, array, enumerate, amsmath, amsthm,amssymb, amsfonts,natbib, subfigure}
\usepackage[mathscr]{eucal}
\usepackage{epsf,epsfig}

\bibliographystyle{apalike}

\textwidth 6.5in \textheight 9.00in \oddsidemargin -0.15in
\evensidemargin -0.15in \topmargin -0.25in
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}{Corollary}[theorem]
\newtheorem{example}{Example}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{defn}{Definition}[section]
\newcommand{\lowtilde}[1]{\mathop{#1}\limits_{\textstyle\tilde{}}}
%\renewcommand{\baselinestretch}{1.4}


\newcommand{\off}{\mathcal{O}}
\newcommand{\poly}{{\cal P}(u-x;\mathbf{a})}
\newcommand{\lik}{\ensuremath{\mathcal{L}}}
\newcommand{\map}{\ensuremath{\mathcal{M}}}
\newcommand{\area}[1]{\ensuremath{|\!|#1|\!|}}
\newcommand{\dee}[1]{\ensuremath{\,\mathrm{d}#1}}
\newcommand{\Steve}[1]{{\tt (#1)}}

\begin{document}
\title{Local Likelihood and the EMS Algorithm}
\author{Chun-Po Steve Fan\footnote{Chun-Po Steve Fan is a doctoral student in the Department of Public Health Sciences at the University of Toronto}, Jamie Stafford\footnote{University of Toronto} and Patrick E. Brown\footnote{University of Toronto and Cancer Care Ontario} }
\thispagestyle{empty}
\date{}
\maketitle
%\renewcommand{\baselinestretch}{1.4}
\begin{abstract}
The use of local likelihood methods (Hastie and Tibshirani 1987, Loader 1999) in the presence of data that are either interval or areal censored leads naturally to the
consideration of EM-type strategies, or rather local-EM algorithms.
In this paper we consider a class of local-EM algorithms suitable for density or intensity estimation in the temporal or spatial context. We demonstrate that using a piecewise constant density function at the E-step results in the algorithm collapsing
explicitly into an EMS algorithm of the type considered by \cite{silvermanems}.

The advantage of identifying a relationship between local
likelihood and the EMS algorithm is that the former provides a
natural context for the latter, which is often referred to as ad hoc
in the literature, while the latter provides a set of tools to guide
the use, and implementation, of local-EM algorithms. For example, we
expose a previously unknown connection between local-EM algorithms
and penalized likelihood that is analogous to the more familiar
pairing of EM and likelihood. Examples include exploring the spatial structure of the disease lupus in the City of Toronto.
\end{abstract}

\vspace{10pt} \small \it Keywords: density estimation; intensity
estimation; interval/areal censoring; local-EM; panel counts; penalized
likelihood; self-consistency \rm \normalsize

\section{Introduction}
In this paper, we consider the use of local likelihood for density and intensity estimation when data are only partially observed. The data may be interval censored,
they may be temporal and come in the form of panel counts, or they may
be spatial and areal censored. Whatever form the censoring takes, the
use of local likelihood techniques naturally leads to the
consideration of local-EM algorithms.

What we propose here may be thought of as an extension of the methods of \cite{braun2005lld} for density estimation. However,
despite the interesting developments given in that paper, convergence of the local-EM
algorithm was difficult to demonstrate and it was not clear whether
its fixed point maximized any particular criterion. We address these concerns by identifying a relationship between local-EM and the EMS algorithm.

In this paper we consider simplifying the E-step of a local-EM
algorithm by approximating conditional expectations using a
piecewise constant density function. This results in the local-EM
algorithm collapsing explicitly into an EMS algorithm of the type proposed in \cite{silvermanems}. An EMS algorithm results when a smoothing step is added to the expectation and maximization steps of
the usual EM algorithm. \cite{silvermanems} refer to the method as being ad hoc. Identifying the relationship between local-EM and EMS has two
advantages. First, it embeds the EMS algorithm in the local
likelihood context where it is seen to arise naturally as an implementation of a local-EM algorithm; thus it is not \emph{ad hoc}\footnote[1]{\cite{nychka1990spa} demonstrates that a
modified EMS algorithm is related to penalized likelihood. As a
result he also suggests that the EMS algorithm is not ad hoc.}.
Secondly, the EMS algorithm has been extensively studied, and much
is known about its convergence \cite[]{latham1996ems} and its
relationship to penalized likelihood \cite[]{nychka1990spa}. The latter
suggests a previously unknown connection between local-EM algorithms
and penalized likelihood, which is analogous to the more familiar pairing of
EM and likelihood.

The paper has the following structure. In \S 2 we summarize notation for three contexts, namely failure time processes and non-homogeneous Poisson processes in time and, separately, in space. We return to each context in the examples given in \S 5. Here we consider density estimation for failure time data in one and two dimensions, intensity estimation for panel count data where a simulation study is performed, and the final example concerns the spatial structure of the disease lupus in the City of Toronto. One point of interest in \S 5 is that the local-EM algorithm is seen to explicitly extend the self-consistency algorithms of \cite{turnbull1976edf}, \cite{hu2008gls} and \cite{vardi1985pet}.

In \S 3 \& 4 we explore the relationship between local-EM and EMS in a general way that captures each of the above contexts. We describe local-EM in \S 3 and demonstrate that implementing the E-step with a piecewise constant density function results in local-EM collapsing explicitly into an EMS algorithm. In \S 4 we exploit the relationship between local-EM and the EMS algorithm to gain insight into convergence issues and to expose the role of local-EM. In particular, in \S 4.2 we summarize results meant to strengthen the suggestion that,
for the contexts considered in this paper, local-EM and penalized
likelihood may be paired in a manner analogous to the pairing of EM
and likelihood. Details are presented in an appendix where in \S A.1 we demonstrate that the use of an equivalent
kernel in a local-EM algorithm leads to the modification necessary
to maximize a penalized likelihood \cite[]{nychka1990spa}. In \S A.2 we prove
the ${\cal L}^1$
convergence of the EMS iterate to its local-EM counterpart. This,
and the developments of \S 3, suggest local-EM and EMS
techniques may be thought of synonymously. Finally, in \S A.3 we study
the penalty of \S A.1 under the conditions of \S A.2 and conclude that local-EM penalizes the usual nonparametric likelihood for departures of the target function from the class of eigenfunctions for the kernel.
\section{Some initial details}

In the context of this paper we assume a study consists of $n$
independent subjects where the time or location of events follows
a non-homogeneous Poisson process. Events are assumed to be partially observed.
That is, they are only known to have fallen into a particular interval of time or region in space. In this section we introduce notation $\ldots$

\subsection{Processes in time}
For processes in time each subject is observed at a set of points
${\cal T}_i=\{\tau_{ij}\mid j = 1 \ldots J_i\}$ that are either prearranged or determined by a random visit
process, which is assumed to be independent of the event process. 

When the event process for each subject is a failure time process, the failure time, which we denote by
$X_i$, either falls between two adjacent elements of ${\cal T}_i$
or is right censored. In either case, $X_i$ is interval censored.
We denote the relevant interval as $S_i=[L_i, R_i]$ where
$L_i,R_i\in{\cal T}_i\cup \{\infty\}$ and, of course, $X_i\in S_i$.
Note that we set $R_i=\infty$ if the event time for the $i$th subject is right censored. The observed data consist of a sequence of
independent intervals $S_1 \ldots S_n$, some of which may overlap.
Let ${\cal Q}=\{Q_j \mid j=1, \ldots, J \}$ denote the partition of the data
defined by the collection of endpoints $\{L_i,R_i \mid i=1,\ldots,n\}$.
For example, if $n=2$ and $S_1=[0,3],~S_2=[1,2]$ then we would have
${\cal Q}=\{[0,1],[1,2],[2,3]\}$. In this setting, the density, $\lambda(x)$, of the failure time process is the object of central interest.
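The construction of $\cal Q$ from the interval endpoints can be sketched in a few lines; the following minimal Python sketch (function name is ours) drops infinite right endpoints arising from right censoring.

```python
def partition(intervals):
    """Build the partition Q from the endpoints {L_i, R_i},
    dropping any infinite right endpoints (right censoring)."""
    pts = sorted({p for L, R in intervals for p in (L, R)
                  if p != float("inf")})
    # adjacent pairs of sorted distinct endpoints form the cells Q_j
    return [(pts[j], pts[j + 1]) for j in range(len(pts) - 1)]
```

For the example above, `partition([(0, 3), (1, 2)])` returns `[(0, 1), (1, 2), (2, 3)]`.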

When the event process for each subject is an inhomogeneous Poisson process, 
we denote the collection of events for the $i$th subject by $X_i=\{X_{ik} \mid k=1 \ldots N_i\}$, where $N_i$ denotes the number of events observed
for the $i$th subject. Here 
event times are still interval
censored but there may be multiple events in each interval. In this setting,
we refer to $S_{ij}=[\tau_{i\,j-1}, \tau_{ij}]$ as the $j$th panel
for the $i$th individual and denote the number of events in the
interval $S_{ij}$ by $N_{ij}=\#\{k \mid X_{ik} \in S_{ij}\}$.
Following the setup of \cite{hu2008gls}, we let
$${\cal T}=\cup_{i=1}^n {\cal T}_i=\{\tau_j \mid j= 0 \ldots J\}$$
and again
let ${\cal Q}=\{Q_j \mid j=1 \ldots J\}$ denote a partition of the data
where now $Q_j=[\tau_{j-1},\tau_j]$. For this setting the intensity, $\lambda(x)$, of the Poisson process is the object of central interest.
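For one subject, the panel counts $N_{ij}$ can be computed from the event times and visit times by binning; a sketch (the left-closed boundary convention is ours, and an event falling exactly at the final visit time is dropped here):

```python
import bisect

def panel_counts(events, visits):
    """Count the events N_j falling in each panel [tau_{j-1}, tau_j]
    for one subject, given sorted visit times `visits`.
    An event at a panel boundary is assigned to the later panel."""
    counts = [0] * (len(visits) - 1)
    for x in events:
        j = bisect.bisect_right(visits, x) - 1
        if 0 <= j < len(counts):  # events outside all panels are ignored
            counts[j] += 1
    return counts
```

For example, `panel_counts([0.5, 1.5, 1.7], [0, 1, 2])` returns `[1, 2]`.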

\subsection{Processes in space}
Rather than time, we may consider a series of inhomogeneous Poisson processes in space. Here
the $X_i=\{X_{ik} \mid k=1 \ldots N_i\}$ now represent sets of event locations for overlapping regions, or maps, ${\cal M}_i$. To accommodate nuisance sources of spatial variation we assume the intensity surface of $X_i$ takes the form
$$
\rho_i(s) = \off_i(s) \lambda(s).
$$
Here $\off_i(s)$ is an offset surface known \emph{a priori} and
$\lambda(s)$ is a relative risk surface that is assumed to be smooth. 

The challenge in this context is that the maps have differing tessellations. We denote a subregion of map ${\cal M}_i$ by $S_{ij}$ and the partition ${\cal Q}$ now results from overlaying the $n$ individual maps. Here the $X_{ik}$ are now assumed to be areal censored and hence they are only known to fall in some subregion $S_{ij}$. Finally, for reasons given in \S 6.3 it is reasonable to assume that the offset surface $\off_i$ is constant over all regions $S_{ij}$ such that
$$
\off_i(u)=\off_{ij}, \quad u\in S_{ij}.
$$

\section{Local-EM and the EMS algorithm}\label{sec:llems_alg}
In this section, we consider the use of the local likelihood for the flexible estimation of the function $\lambda$ in the settings considered in \S 2. The treatment is general, with a local likelihood of the following form:
\begin{eqnarray}\label{llge}
{\cal L}_x(\lambda)=\sum_{ik} K_h(X_{ik}-x) \log\lambda(X_{ik})
- \sum_{i}\int_{\cal M} \off_i(u)K_h(u-x)\lambda(u) \,\mathrm{d}u,
\end{eqnarray}
where $K_h(z)= K(z/h)/h$ is a positive kernel function with
$\int K(z) \,\mathrm{d}z = 1$. Following \cite{loader1999lra}, we consider
approximating $\log\lambda(u)$ in the neighbourhood of $x$ with a polynomial, i.e.
\begin{eqnarray}
\log\lambda(u) \approxeq {\cal P}(u-x)=\sum_{j=0}^p a_j(u - x)^j.
\label{approx_poly}
\end{eqnarray}
Let $\mathbf{a}=\{a_{0},{a}_{1},\ldots,{a}_{p}\}$ denote the collection of the polynomial coefficients.
Replacing $\lambda$ in (\ref{llge}) with the approximating polynomial in (\ref{approx_poly}) and maximizing the local likelihood with respect to $\mathbf{a}$ yields the estimate
$\hat\lambda(x) = \exp(\hat{a}_0)$.
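As a point of reference, in the uncensored density case ($\off_i \equiv 1$, ${\cal M}=\Re$, one event per subject) and with a locally constant fit ($p=0$), the second term of (\ref{llge}) is $n e^{a_0}$, so maximizing in $a_0$ gives $\hat\lambda(x)=n^{-1}\sum_i K_h(X_i-x)$, the familiar kernel density estimate. A minimal sketch under exactly those assumptions:

```python
import math

def kde(x, data, h):
    """Locally constant local-likelihood estimate with a Gaussian kernel:
    in the uncensored density case this is the kernel density estimate
    (1/n) * sum_i K_h(X_i - x)."""
    K = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return sum(K((xi - x) / h) for xi in data) / (len(data) * h)
```
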

\subsection{Local-EM}
When the data are interval or areal censored, $X_{ik} \in S_{ij}$.  In this case, one may consider
replacing (\ref{llge}) with
\begin{multline}\label{llicdp}
{\cal L}_x({\bf a})= E_\lambda \left[\sum_{ik}K_h(X_{ik}-x){\cal P}(X_{ik}-x)|X_{ik}\in S_{ij}\right]\\
- \sum_{i}\int_{\cal M} \off_i(u) K_h(u-x) \exp\{{\cal P}(u - x) \} \dee{u}
\end{multline}
leading to a local-EM algorithm that cycles through two steps at
each iteration:
\begin{quote}
\begin{description}
\item[E-step:] compute the relevant expectations using the current
estimate $\hat\lambda$;
\item[M-step:] maximize ${\cal L}_x({\bf a})$ to get updated estimates of
$\mathbf{a}$ and hence a new estimate for $\lambda$.
\end{description}
\end{quote}
The algorithm differs from a typical EM algorithm because, at the
E-step, expectation is computed with respect to an estimate of the
infinite dimensional parameter $\lambda$ while, at the M-step, we only
estimate this parameter locally at $x$. As such the typical
arguments concerning convergence of the EM algorithm cannot be
brought to bear. Furthermore, if the local-EM algorithm converges to
a fixed point $\hat{\lambda}$, it is not clear what criterion this fixed point
optimizes.

%Since the $X_i$ are assumed to be realizations of a Poisson process,
Assuming a Poisson process, the first term of (\ref{llicdp}) may be written as $\sum_{ij}N_{ij} \mbox{E}_\lambda [K_h(X-x){\cal P}(X-x) \mid X \in S_{ij}]$ and in general the local-EM algorithm can be written as
\begin{eqnarray}\label{localem}
\hat{\lambda}^{(r+1)}(x)=\sum_{ij}
N_{ij}\mbox{E}_{\hat{\lambda}^{(r)}}\left[K_h(X-x)\mid X\in S_{ij}\right]/\Psi_h(x;\hat{\bf a}^{(r+1)})
\end{eqnarray}
where
\begin{eqnarray*}
\Psi_h(x;{\bf a})&=&\sum_{i}\int_{{\cal M}_i}{\cal O}_i(u)K_h(u-x)\exp\left\{{\cal P}(u - x)-a_0\right\}\, \mathrm{d}u
\end{eqnarray*}
and ${\bf \hat{a}}^{(r+1)}$ solves the local likelihood equations based on
${\cal L}_x({\bf a})$ with $\lambda$ replaced by $\hat{\lambda}^{(r)}$. Note that, because the offset surface is assumed to be constant over the region $S_{ij}$, the expectation is computed with respect to the following conditional density at each iteration
\begin{equation} \label{conden}
\frac{\hat{\rho}^{(r)}(u) }{\int_{S_{ij}}{\hat{\rho}^{(r)}(x)} \, \mathrm{d}x}=
\frac{\hat{\lambda}^{(r)}(u) }{\int_{S_{ij}}{\hat{\lambda}^{(r)}(x)} \, \mathrm{d}x}.
%\lambda(t){\Bigg /}\int_{I_{ij}}\lambda(u)\,\mathrm{d}u.
\end{equation}

\subsection{Implementation as an EMS algorithm}
The above algorithm may be implemented using a discretization of
$\lambda$. This reduces an infinite dimensional
estimation problem to one of finite
dimension. To this end, given the partition $\mathcal{Q}$, we define a piecewise constant function as follows:
$$
g_{\phi}(x; \mathcal{Q}) = \area{Q_j}^{-1} \int_{Q_j} \phi(u)\dee{u} \quad \mbox{for $x \in Q_j$,}
$$ where $\phi$ is any function integrable over each $Q_{j}$. The function $g_{\phi}$ may formally be referred to as the $\mathcal{Q}$-approximant of $\phi$ (see \cite{royden1988ra} for details).
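As an illustration, the $\mathcal{Q}$-approximant is just the vector of cell averages of $\phi$; a sketch using midpoint quadrature (the quadrature choice is ours):

```python
def q_approximant(phi, breaks):
    """Cell averages of phi over the partition defined by `breaks`:
    the values taken by the piecewise constant Q-approximant g_phi."""
    m = 1000  # midpoint-quadrature points per cell
    out = []
    for a, b in zip(breaks, breaks[1:]):
        w = (b - a) / m
        # average of phi over [a, b] via the midpoint rule
        out.append(sum(phi(a + (k + 0.5) * w) for k in range(m)) * w / (b - a))
    return out
```

For example, `q_approximant(lambda u: u, [0, 0.5, 1])` returns approximately `[0.25, 0.75]`.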

Since $\lambda$ is unknown, the implementation of the local-EM algorithm is facilitated by replacing $g_{\lambda}$ with $g_{\hat{\lambda}^{(r)}}$ at the $r$th iteration. Here, let ${\cal I}_{ij\ell}$ indicate whether $Q_{\ell} \subseteq S_{ij}$ and denote the collection $\left\{ \Lambda_j \mid \Lambda_j=\int_{Q_j} \lambda(u)\dee{u} \right\}$ by $\boldsymbol{\Lambda}$.
%as a piecewise
%constant function where $\bar{\lambda}(x)=\Lambda_j/\area{Q_j}$ for $x\in Q_j$. Here
%We denote the collection of all $\Lambda_j=\int_{Q_j} \lambda(u)\dee{u}$ by the
%vector $\bf \Lambda$. 
%Note that formally $\bar{\lambda}$ is the ${\cal J}$-approximant of $\lambda$ (reference). 
%Of course, $\lambda$ is unknown and for the sake of implementation is approximated at the $r$th iteration by $g(x; \hat{}, \mathcal{Q})$ where $\hat{\Lambda}^r_j=\int_{J_j} \hat{\lambda}_r(u)\, \mathrm{d}u$. Finally, we denote the indicator
%function of the set $S_{ij}\cap Q_l$ by ${\cal I}_{ijl}$ .
Next, rather than using $\hat{\lambda}^{(r)}$ to compute the conditional
expectation in (\ref{conden}), consider simplifying the iteration by approximating the expectation with the piecewise constant function $\bar{\lambda}_r = g_{\hat{\lambda}^{(r)}}$; that is, 
% $$
% \mbox{E}_{\hat{\lambda}^{(r)}}\left[ K_h\left({X-x}\right)\mid X\in S_{ij}\right] \approxeq \mbox{E}_{\bar{\lambda}^{(r)}}\left[ K_h\left({X-x}\right)\mid X\in S_{ij}\right]
% $$
% It is equivalent to replace the conditional
% density (\ref{conden}) with
%$$
%%\bar{\lambda}^{(r)}_{S_{ij}}(x)=\frac{\hat{\Lambda}^{(r)}_{\ell}{\cal
%I}_{ij\ell}}{\area{Q_\ell} \sum_m \hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}}
%\text{ for $x\in Q_\ell$.}
%$$
%Consequently, $\mbox{E}_{\hat{\lambda}^{(r)}}\left[\left. K_h\left({T-t}\right)\right|X\in S_{ij}\right]$ becomes
\begin{align}
\mbox{E}_{\hat{\lambda}^{(r)}}\left[ K_h\left({X-x}\right)\mid X\in S_{ij}\right] &\approxeq \mbox{E}_{\bar{\lambda}^{(r)}}\left[ K_h\left({X-x}\right)\mid X\in S_{ij}\right] \notag \\
%
%\mbox{E}_{\hat{\bar{\lambda}}_{r}}\left[\left. K_h(T-t) \right| X \in S_{ij} \right]
&= \int_{S_{ij}} K_h(u-x) \frac{\bar{\lambda}^{(r)}(u)}{\int_{S_{ij}} \bar{\lambda}^{(r)}(u) \dee{u}} \dee{u} \notag\\
%
&=\sum_l \int_{Q_l}
K_h(u-x) \dee{u} \frac{\hat{\Lambda}^{(r)}_{l}{\cal I}_{ijl} }{\area{Q_l} \sum_m
\hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}} \label{Cev}
%
% &=\sum_l\frac{\hat{\Lambda}^{(r)}_{l} {\cal
% I}_{ijl}\int_{Q_l}K_h\left({u-t}\right)\,
% \mathrm{d}u}{||Q_l||\sum_m \hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}}.\label{Cev}
\end{align}
%
Having substituted the conditional expectation in (\ref{localem}) with the expression (\ref{Cev}), we must compute $\hat{\Lambda}^{(r+1)}_{s}$ at the next iteration. This leads to the simple iteration
\begin{eqnarray*}
\hat{\Lambda}^{(r+1)}_{s}&=&\int_{Q_s} \hat{\lambda}^{(r+1)}(x)\dee{x}\\
%
&=&\int_{Q_s} \left\{ \sum_{ijl} N_{ij}\frac{\hat{\Lambda}^{(r)}_{l}{\cal
I}_{ijl}}{\area{Q_l}\sum_m \hat{\Lambda}^{(r)}_{m} {\cal
I}_{ijm}} \frac{\int_{Q_l} K_h(u-x)\dee{u}}{\Psi_h(x; \hat{\bf a}^{(r+1)})}\right \} \, \mathrm{d}x\\
%
&=&\sum_{ijl}N_{ij}
\frac{\hat{\Lambda}^{(r)}_{l} {\cal
I}_{ijl}}{\area{Q_l}\sum_m \hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}}
\int_{Q_s} \frac{\int_{Q_l} K_h(u-x)\,\mathrm{d}u}{\Psi_h(x; \hat{\bf a}^{(r+1)})}\dee{x}.
\end{eqnarray*}
Note that $\hat{\bf a}^{(r+1)}$ is a function of $\hat{\boldsymbol{\Lambda}}^{(r)}$, so we may write $\hat{\bf a}^{(r+1)}=\mathbf{a}(\hat{\boldsymbol{\Lambda}}^{(r)})$.
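To illustrate, the iteration can be sketched in the simplest setting: density estimation with $\off_i \equiv 1$, one censoring interval per subject ($N_{ij}=1$), ${\cal M}=\Re$, and a locally constant fit, so that $\Psi_h(x;\mathbf{a}) = n$ and the smoothing weights do not change between iterations. The Gaussian kernel, midpoint quadrature, and boundary handling below are our choices, not part of the formal development:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ems_density(intervals, breaks, h, iters=60):
    """EMS iteration for interval-censored density estimation:
    locally constant case with offsets O_i = 1, so Psi_h(x) = n."""
    n, J = len(intervals), len(breaks) - 1
    Q = [(breaks[j], breaks[j + 1]) for j in range(J)]
    # indicator I[i][l]: is Q_l contained in the censoring interval S_i?
    I = [[int(a >= L and b <= R) for a, b in Q] for L, R in intervals]
    # smoothing weights K[l][s] = ||Q_l||^{-1} int_{Q_s} int_{Q_l} K_h(u-x) du dx / n
    m = 20  # midpoint-quadrature points for the outer integral
    K = [[0.0] * J for _ in range(J)]
    for l, (al, bl) in enumerate(Q):
        for s, (as_, bs) in enumerate(Q):
            w = (bs - as_) / m
            mass = sum(Phi((bl - (as_ + (k + 0.5) * w)) / h)
                       - Phi((al - (as_ + (k + 0.5) * w)) / h)
                       for k in range(m)) * w
            K[l][s] = mass / ((bl - al) * n)
    Lam = [(b - a) / (breaks[-1] - breaks[0]) for a, b in Q]  # uniform start
    for _ in range(iters):
        # E and M steps: expected share of each subject's event in cell l
        M = [0.0] * J
        for i in range(n):
            denom = sum(Lam[k] * I[i][k] for k in range(J))
            for l in range(J):
                if I[i][l]:
                    M[l] += Lam[l] / denom
        # S step: smooth the EM update
        Lam = [sum(M[l] * K[l][s] for l in range(J)) for s in range(J)]
    return Lam
```

Because the smoothing kernel sends a little mass beyond the range of the partition, the fixed point integrates to slightly less than one over that range; this boundary effect shrinks with $h$.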

Let $\tilde{\off}_l = \sum_{ij} \mathcal{I}_{ijl} \off_{ij}$.  The iteration above may be conveniently expressed in terms of matrices as follows:
\begin{equation}\label{EMSic}
\hat{\bf \Lambda}^{(r+1)}={\cal M}(\hat{\bf \Lambda}^{(r)}) {\cal K}_h(\hat{\bf \Lambda}^{(r)}).
\end{equation}
The weight ${\cal K}_h$ is a $J$-by-$J$ smoothing matrix with
entries
\begin{equation}\label{smoothstep}
[{\cal K}_h]_{ls}=
\frac{\tilde{\off}_l}{\area{Q_l}}\int_{Q_s}\frac{\int_{Q_l}K_h(u-x) \dee{u}}{\Psi_h(x; \hat{\bf a}^{(r+1)})}\dee{x},
\end{equation}
and ${\cal M}(\hat{\bf \Lambda}^{(r)})$ is a $J$-dimensional row vector with the $l$th entry equal to
\begin{equation}\label{EMstep}
[{\cal M}(\hat{\bf \Lambda}^{(r)})]_l=\sum_{ij} \dfrac{N_{ij}}{\tilde{\off}_l}
\frac{\hat{\Lambda}^{(r)}_{l} {\cal I}_{ijl}}{\sum_m \hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}}.
\end{equation}
The latter is recognized as a step in an EM algorithm (see \S 5); hence the iteration (\ref{EMSic}) is
seen to explicitly involve an expectation, maximization \emph{and} smoothing step.
That is, by discretizing $\lambda$, our implementation of the local-EM algorithm has resulted explicitly in an EMS algorithm.
The EMS algorithm was first proposed by \cite{silvermanems} as an ad hoc method for improving the behaviour of the EM algorithm by including a smoothing step. Here, the algorithm is seen to arise formally from local likelihood considerations when data are interval or areal censored. Further comparisons with \cite{silvermanems} are given in \S 6.3.

\section{Convergence and the Role of Local-EM}
Thus far we have exposed an interesting relationship between classes
of algorithms that demonstrates the EMS algorithm arises naturally
from local likelihood considerations. This occurs because of the way
we have chosen to implement the local-EM algorithm. However, we
could have instead chosen to implement this algorithm through
multiple imputation, or by using MCEM, or through some other
favorite techniques. So why EMS?

In this section, we exploit the relationship between local-EM and the EMS algorithm to gain insights into convergence issues and to expose the role of local-EM as being paired with penalized likelihood in a manner analogous to the pairing of EM and likelihood.

\subsection{An Upper Bound for the Convergence Rate}

Our objective is to allow the algorithm (\ref{EMSic}) to iterate until it converges to a fixed point
$\hat{\boldsymbol{\Lambda}}$ that solves
$$
\hat{\bf \Lambda}={\cal M}(\hat{\bf \Lambda}) {\cal K}_h(\hat{\bf \Lambda}).
$$
We rely on the results given in \cite{latham1995ems}, \cite{green1990uap} and \cite{silvermanems}  to demonstrate the uniqueness of $\hat{\boldsymbol{\Lambda}}$ and convergence of the algorithm to this solution. 
%In addition, we restrict developments to the locally constant case.

% For expression (\ref{smoothstep}) note that ${\bf \hat{a}}_{r}$ depends on $\hat{\bf
% \Lambda}_r$, and hence so does the smoothing step. Here we refer to
% algorithm as \emph{adaptive}. However, 
In the locally constant case in which the polynomial $\cal P$ is truncated at its leading term,
$\Psi_h$ simplifies to
$$
\Psi_h(x;\mathbf{a})=\sum_{i} \int_{M_i} \off_i(u) K_h(u-x) \dee{u} = \sum_{\ell} \tilde{\off}_\ell \int_{Q_\ell} K_h(u-x) \dee{u}.
$$
The smoothing matrix $\mathcal{K}_h$ then no longer depends on the parameter $\boldsymbol{\Lambda}$. In this case, \cite{latham1995ems} shows the uniqueness of
$\hat{\boldsymbol{\Lambda}}$ in the region where $\Lambda_k > 0$ for all $k$, which implies that
the iteration will converge to the unique solution if it converges
at all. Following \cite{latham1995ems}, we restrict developments to the locally constant case and demonstrate the convergence of (\ref{EMSic}) by showing that its
spectral radius at $\hat{\boldsymbol{\Lambda}}$ decreases toward zero as the bandwidth increases.

\cite{green1990uap} shows that the spectral radius of
$\mathcal{M}(\boldsymbol{\Lambda})$ at an EM solution is less than 1, and
\cite{silvermanems} claim that a smoothing matrix reduces the
spectral radius of $\mathcal{M}(\boldsymbol{\Lambda})$. Nonetheless, these two
properties are not sufficient for algorithmic
convergence because the spectral radii are evaluated at different
values of $\boldsymbol{\Lambda}$. Empirical evidence suggests that
(\ref{EMSic}) usually converges. It is also observed that (\ref{EMSic}) converges faster than EM when $h$ is sufficiently large, but
there exist values $h > 0$ for which the convergence rate of EMS is
lower than that of EM.

Without loss of generality, assume $\off_i(u)= 1$ for all $u \in \map_i$. Let
$\gamma$ be the spectral radius of the Jacobian of the iteration map at $\hat{\boldsymbol{\Lambda}}$, namely $\partial\mathcal{M}\,\mathcal{K}_h$ where $\partial
\mathcal{M}=\partial
\mathcal{M}(\hat{\boldsymbol{\Lambda}})/\partial
\boldsymbol{\Lambda}$. Here, we derive an upper bound for $\gamma$
and show that this upper bound increases with the
``roughness'' of $\boldsymbol{\Lambda}$ and decreases as the
bandwidth increases. By the Perron-Frobenius theorem,
$$
\gamma \leq \max_s \sum_t \left[ \partial \mathcal{M}\,
\mathcal{K}_h \right]_{ts},
$$
where $\partial \mathcal{M}$ is a $J \times J$ matrix with
\begin{align*}
\left[ \partial \mathcal{M} \right]_{tk} &\equiv \dfrac{\partial
\mathcal{M}_k}{\partial \Lambda_t} =\begin{cases} \sum_{ij} \dfrac{N_{ij}}{\tilde{\off}_k}
\dfrac{\sum_{\ell \ne k} \mathcal{I}_{ijk} \mathcal{I}_{ij\ell}
\Lambda_\ell}{\left(\sum_{\ell} \mathcal{I}_{ij\ell} \Lambda_\ell \right)^2} & \mbox{when $k=t$}\\
& \\
\sum_{ij} \dfrac{N_{ij}}{\tilde{\off}_k} \dfrac{-\mathcal{I}_{ijt} \mathcal{I}_{ijk}
\Lambda_k}{\left(\sum_{\ell} \mathcal{I}_{ij\ell} \Lambda_\ell \right)^2} & \mbox{otherwise}
\end{cases}
\end{align*}
After some algebraic manipulations, it can be shown that
\begin{align}
\gamma & \le \max_s \sum_k \left( \sum_{ij} \dfrac{N_{ij}}{\tilde{\off}_k} \dfrac{\sum_{t
\ne k} \mathcal{I}_{ijk} \mathcal{I}_{ijt} (\hat{\Lambda}_{t} -
\hat{\Lambda}_k) }{\left(\sum_{\ell} \mathcal{I}_{ij\ell}
\hat{\Lambda}_\ell \right)^2} \right) \mathcal{K}_{ks}.
\label{e:spectral_upper}
\end{align}
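The bound used here is the standard fact that the spectral radius of a nonnegative matrix is at most its largest column sum. A quick numerical illustration (the matrix entries are arbitrary):

```python
def spectral_radius(A, iters=500):
    """Estimate the spectral radius of a nonnegative square matrix
    by power iteration with max-norm scaling."""
    n = len(A)
    v = [1.0] * n
    r = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        r = max(abs(x) for x in w)
        v = [x / r for x in w]
    return r

# arbitrary nonnegative example: spectral radius 0.5732... vs column-sum bound 0.6
A = [[0.5, 0.2], [0.1, 0.3]]
rho = spectral_radius(A)
bound = max(sum(row[s] for row in A) for s in range(2))
```
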
%Note that $\hat{\boldsymbol{\Lambda}}$ is a function of the
%bandwidth value $h$.

\noindent Note that the partition $\cal Q$ can in fact be defined in an arbitrary way (see \S A.2 for details).

\noindent Increasing $h$ renders the smoothing matrix more uniform within rows; that is,
$$
\mathcal{K}_{h} \to \text{diag}\left( \dfrac{\tilde{\off}_{j}/\area{Q_{j}}}{\sum_{\ell} \tilde{\off}_{\ell}/\area{Q_{\ell}}} \right) 
\begin{bmatrix} 
1 & \cdots & 1 \\
  & \ddots &   \\
1 & \cdots & 1  
\end{bmatrix}
\quad \text{as $h \to \infty$.}
$$
This limiting smoothing matrix narrows the spread of $\hat{\boldsymbol{\Lambda}}$, i.e.\
$(\hat{\Lambda}_\ell - \hat{\Lambda}_k) \to 0$.
% If the $Q_j$'s are of equal length, then not only does increasing
% $h$ narrow the spread of $\hat{\boldsymbol{\Lambda}}$, but it also
% makes $\mathcal{K}_{\ell s}$ more uniform within rows. That is,
% $$b
% (\hat{\Lambda}_\ell - \hat{\Lambda}_k) \to 0 \ \forall \ \ell, k
% \quad \mbox{and}\quad \mathcal{K}_{\ell s} \to \frac{\tilde{\off}_{\ell}}{\sum_{\ell} \tilde{\off}_{\ell}} \ \forall\ s.
% $$
%$$
%(\hat{\Lambda}_\ell - \hat{\Lambda}_k) \to 0 \ \forall \ \ell, k
%\quad \mbox{and}\quad \mathcal{K}_{\ell s} \to \frac{\sum_i
%\off_{i\ell}}{\sum_{ik} \off_{ik}} \ \forall\ s.
%$$
As a result, increasing the bandwidth pushes the upper bound
toward 0, thus accelerating the algorithmic convergence.


\subsection{The Role of local-EM} \label{sec:llem_role}
Below we summarize results meant to strengthen the
suggestion that, for the context considered in this paper, local-EM
and penalized likelihood may be paired in a manner analogous to the
pairing of EM and likelihood. We do this in three steps. Details are given in the appendix.

\vspace{20pt}
%
\noindent {\bf Local-EM and the Modified EMS algorithm:}
\cite{nychka1990spa} identified a relationship between EMS and
penalized likelihood by demonstrating that a modified EMS algorithm
maximizes 
\begin{equation} \label{e:npllk}
\mathcal{L}(\boldsymbol{\theta}) 
+{\bf Pen}(\boldsymbol{\theta}, \mathcal{K}_h).
\end{equation} 
Here $\mathcal{L}(\boldsymbol{\theta})$ is the appropriate nonparametric likelihood, $\boldsymbol{\theta}$ is a vector with components $\theta_{ij}^2 ={\cal O}_{ij} \Lambda_{ij}$ for all $i,j$, and ${\bf Pen}(\boldsymbol{\theta}, \mathcal{K}_h)$ is a penalty function that depends on both $\boldsymbol{\theta}$ and the smoothing matrix $\mathcal{K}_h$.

%Moreover, this modified EMS algorithm was very much similar to the one step late algorithm of \cite{green1990uap}.
In Appendix \ref{appx_ems} we demonstrate that, with the choice of an equivalent kernel, the local-EM algorithm may be used to maximize a penalized likelihood function. This occurs because the equivalent kernel leads, under the discretization of $\lambda$, to Nychka's modification of the EMS algorithm.

\bigskip
\noindent {\bf $\mathbf{\mathcal{L}}^1$-Convergence of EMS to local-EM:} In
\S\ref{sec:llems_alg} the discretization of $\lambda$ over the partition ${\cal
Q}$ resulted in a local-EM algorithm collapsing explicitly into an
EMS algorithm. There $\cal Q$ was identified by overlaying maps, or combining sets of visit times, and so on. In Appendix \ref{appx_convergence}, we consider the
discretization of $\lambda$ for an arbitrary partition where we let the number of partition elements tend to infinity and $\max_k \area{Q_k} \downarrow
0$. We demonstrate that the EMS iteration, and the modified EMS iteration, both converge to their
local-EM counterparts in the $\mathcal{L}^1$ norm. This result suggests local-EM and EMS
techniques may be thought of synonymously.

\bigskip
\noindent {\bf Local-EM and Penalized Likelihood:} In Appendix A.3 we consider the penalized likelihood of Appendix \ref{appx_ems} under the limiting conditions of Appendix \ref{appx_convergence}. In particular, we interpret the penalty in terms of a class of functions that we then study as $\max_k \area{Q_k} \downarrow 0$.

This ultimately allows us to speculate that the role of local-EM is to penalize the usual nonparametric likelihood for departures of the target function from the class $${\cal Z}=\left\{f \ \Bigg | \ f^{1/2}(x)=\int_{\cal
M} K_h(u-x)f^{1/2}(u)\, \mathrm{d}u \ \mbox{for all $x \in {\cal S}$}
\right\}$$ identified by the eigenfunctions of the kernel.

\section{Examples}

\subsection{Density Estimation for Failure Time Data}

In studies of the AIDS epidemic the time of infection with the HIV
virus is often of central interest. As infections are only revealed
through repeated testing the event times are interval censored and
only known to fall between two consecutive clinic visits, one where
the patient tests negative for the presence of the virus and a
follow-up visit where they test positive. Density estimation in this
context may be facilitated by (\ref{llicdp}) which now simplifies to
\begin{align*}
{\cal L}_x({\bf a}) &= \sum_{i=1}^n E_\lambda[K_h(X_i-x){\cal
P}(X_i-x)\mid X_i\in S_i]-n \int_{\cal M} K_h(u-x)\exp\{{\cal P}(u-x)\}\,
\mathrm{d}u.
\end{align*}
\subsubsection{One dimension}
Here ${\cal M}=\Re$, $\mathcal{O}_i(u) = 1$ for all $u \in \Re$ and
$X_i$ is the event time for the $i$th individual, which is only known
to fall in the interval $S_i$. For the iteration
(\ref{EMSic}) we recognize ${\cal M}(\hat{\bf \Lambda}^{(r)})$ as a
step in the EM algorithm of \cite{turnbull1976edf}.
Also, in the case of a histogram where $S_i\in {\cal Q}$ for all $i$, the local-EM algorithm (\ref{localem})
iterates only once and reduces to the methods of Jones (1989) for smoothing histograms. Finally,
\cite{braun2005lld} propose a local-EM algorithm based on ${\cal
L}_x({\bf a})$ and, without being aware of it, develop an EMS
implementation.
\subsubsection{Two dimensions}

If both the time of infection with the HIV virus and the time of AIDS
onset are interval censored, density estimation for the joint
distribution can still be facilitated by ${\cal L}_x({\bf a})$. Here
$X_i$ would be bivariate and ${\cal M}=\Re^2$. Typically, for doubly
interval censored data, estimation of the NPMLE for the joint
distribution is complex and a treatment of some of the issues can be
found in \cite{maathuis2005ran}. In the rest of this example we
consider a simple case that suggests local-EM in this context may
ease these complexities and thus deserves further exploration. For
example, use of a local-EM algorithm does not require identifying
maximal sets (the analog of innermost intervals in the univariate
case) and thus does not require use of the height map algorithm of
\cite{maathuis2005ran}. Also, the solution is unique while the NPMLE
is not. A Bayesian interpretation of local-EM provides insight into
the latter.

We give a hypothetical example of bivariate interval-censored data
(Figure 1a) that consists
of eight observations, represented by four horizontal and four
vertical rectangles. Overlaying these observations forms a partition
of 81 unit squares, and the intersections of these rectangles form
the maximal sets. The NPMLE places all probability on these sets; however, it is not unique.
For example, a uniform weight of 1/16 on all 16 maximal intersections, a weight of
1/4 on the positive diagonal maximal intersections, and a weight of
1/4 on the negative diagonal maximal intersections all maximize the
nonparametric likelihood. Consequently, the EM iteration will
converge to one of the solutions depending on the initial value.
However, empirical evidence suggests otherwise for the local-EM
algorithm.

When a radially symmetric kernel is used, the EMS iteration
always converges to the solution with uniform weighting, regardless of the starting value. However, if the kernel bandwidths
are changed to 1.5 and 0.15 in the x- and y-directions so that the kernel is elliptical, and the kernel is rotated
by 45 degrees, the local-EM algorithm converges to a solution that
favours the positive diagonal. Similarly, if the kernel is instead rotated by $-45$
degrees, the local-EM algorithm converges to a solution that favours the
negative diagonal. This behaviour can be interpreted in terms of \S 3.2: the local-EM algorithm aims to maximize a penalized likelihood where the penalty depends on the choice of kernel, and the kernel leads us to favour one NPMLE over another a priori. Explicitly, when the kernel is radially symmetric, any deviations
from the maximal eigenfunction are equally penalized. However, as
the kernel becomes more elliptical, deviations in the direction of
the major axis of the elliptical contour are penalized less than
those in the direction of the minor axis.
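The rotated elliptical kernels described above can be sketched numerically. The following is a minimal illustration (our own construction, not code from the paper) of how the bandwidths 1.5 and 0.15 combined with a $\pm 45$ degree rotation yield a Gaussian kernel that weights displacements along one diagonal far more heavily than along the other:

```python
import numpy as np

def elliptical_kernel(h_major=1.5, h_minor=0.15, angle_deg=45.0):
    """Covariance of a rotated elliptical Gaussian kernel: Sigma = R D R^T."""
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    D = np.diag([h_major ** 2, h_minor ** 2])
    return R @ D @ R.T

def kernel_weight(z, Sigma):
    """Unnormalised Gaussian kernel weight at displacement z."""
    return float(np.exp(-0.5 * z @ np.linalg.solve(Sigma, z)))

Sigma_pos = elliptical_kernel(angle_deg=45.0)    # favours the positive diagonal
Sigma_neg = elliptical_kernel(angle_deg=-45.0)   # favours the negative diagonal
z = np.array([1.0, 1.0])  # a displacement along the positive diagonal
```

Here `kernel_weight(z, Sigma_pos)` greatly exceeds `kernel_weight(z, Sigma_neg)`, since the displacement lies along the major axis of the first kernel but the minor axis of the second, matching the penalization argument above.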

%\vspace{-1.5cm}
% \begin{center}
% \begin{figure} \centering
% \includegraphics[width=2.5in, height=2.5in, angle=-90]{bivariate_example.pdf}
% \includegraphics[scale=.6, angle=-90]{bivariate_interval_censored_30jan2009.pdf}
% \caption{The proposed local EM intensity estimate achieves the lowest
% overall MISE with a small bandwidth of 0.195, comparing to the smoothed EM
% estimate by placing expected increments at the centres of pixels.}
% \label{f: sim_mise}
% \end{figure}
% \end{center}
%\begin{figure} \centering
%\includegraphics[width=2in, height=2in, angle=0]{bivariate_example}
%\includegraphics[scale=.5]{bivariate_example_penality.png}
%\caption{??}
%\end{figure}

\begin{figure}
%\vspace{-.05in} 
\centering
\subfigure{\includegraphics[width=1.5in, height=1.5in, clip=true, trim=.8in 0.45in 1in .75in]{bivariate_example.pdf}}\\
\subfigure{\includegraphics[scale=.15, clip=true, trim=1.75in 0.45in
2in .55in]{ems_1_08mar2009.pdf}}
\subfigure{\includegraphics[scale=.15, clip=true, trim=2.25in 0.45in
2in .55in]{ems_2_08mar2009.pdf}}
\subfigure{\includegraphics[scale=.15, clip=true, trim=2.25in 0.45in
2in .55in]{ems_3_08mar2009.pdf}} \caption{EMS Estimates with
Different Kernels}
\end{figure}


\subsection{Intensity Estimation for Panel Count Data}
In this example we consider the situation described in \S 2.1.2 where individuals are monitored in time and events follow a non-homogeneous Poisson process. Monitoring involves periodic assessments and so events are interval censored. Individuals may drop out of the study at different time
points, and we use $Y_i(t)$ to indicate whether the $i$th individual is in
the study at time $t$. Since an
individual's event process is observable only before he/she drops
out, the intensity of this process equals
$Y_i(t) \lambda(t)$ (see \cite{andersen1993smb} for details). Finally, the number of
at-risk individuals is denoted by $Y(t)=\sum_i Y_i(t)$.

We assume that the event, assessment and drop-out processes are
independent of one another, and that the drop-out process is monotone (and
discrete?). Under this setting, the local-EM algorithm derives from
$$
\sum_{ij}n_{ij}Y_i(x)E_\lambda [K_h(X -x){\cal P}(X -x;\mathbf{a})|X \in S_{ij}]
- Y(u)\int_{\cal M} K_h(u-x) \exp\{{\cal P}(u - x;\mathbf{a}) \} \, \mathrm{d}u
$$
and the corresponding EMS implementation involves the mapping (\ref{EMSic}) where
$$
[\mathcal{M}(\hat{\boldsymbol{\Lambda}}_{r})]_{l} = \sum_{ij} n_{ij}
\frac{Y_i(\tau_l)\hat{\Lambda}_{rl} {\cal I}_{ijl}}{Y(\tau_l)\sum_m
\hat{\Lambda}_{rm} {\cal I}_{ijm}}
$$
and $$[\mathcal{K}_h]_{ls}= \frac{Y(\tau_l)}{|\!|Q_l|\!|}\int_{J_s}
\dfrac{\int_{J_l}K_h\left({u-t}\right)\,
\mathrm{d}u}{\Psi_h[\mathbf{a}(t; \mathbf{\hat{\Lambda}}_r)]}\,
\mathrm{d}t.$$
Note $\mathcal{M}(\hat{\boldsymbol{\Lambda}}_{r})$ is a step in the self-consistent
algorithm of \cite{hu2008gls}.
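As a minimal numerical sketch of the mapping $[\mathcal{M}(\hat{\boldsymbol{\Lambda}}_{r})]_l$ above (with made-up array shapes; this is an illustration, not the paper's implementation), the self-consistency step distributes each panel count $n_{ij}$ over the bins $Q_l$ contained in $S_{ij}$ in proportion to the current $\hat{\Lambda}_{rl}$:

```python
import numpy as np

def m_step(Lam, n, Y_ind, Y_tot, I):
    """One application of [M(Lam)]_l, a sketch with hypothetical inputs:
       n[i, j]     -- panel count for subject i over interval S_ij
       Y_ind[i, l] -- at-risk indicator Y_i(tau_l)
       Y_tot[l]    -- number at risk Y(tau_l)
       I[i, j, l]  -- indicator that bin Q_l lies within S_ij"""
    denom = np.einsum('ijl,l->ij', I, Lam)          # sum_m Lam_m I_ijm
    out = np.zeros_like(Lam)
    for l in range(len(Lam)):
        num = n * Y_ind[:, [l]] * I[:, :, l] * Lam[l]
        ratio = np.where(denom > 0, num / np.where(denom > 0, denom, 1.0), 0.0)
        out[l] = ratio.sum() / Y_tot[l]
    return out

# Toy check: one subject, one interval S_11 covering all three bins.
Lam0 = np.ones(3)
n = np.array([[3.0]])          # 3 events observed in S_11
Y_ind = np.ones((1, 3))        # the subject is at risk at every tau_l
Y_tot = np.ones(3)             # one subject at risk throughout
I = np.ones((1, 1, 3))         # Q_l subset of S_11 for all l
Lam1 = m_step(Lam0, n, Y_ind, Y_tot, I)
```

With a uniform starting value, the three events are split evenly over the three bins, so the total mass assigned equals the observed count.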

%\noindent Note that $\int_{J_l} K_h( u-t)\, \mathrm{d}u \rightarrow \mathcal{I}_{J_l}(t)$ and $\Psi_h[\mathbf{a}(t;\mathbf{\hat{\Lambda}}_r)] \rightarrow Y(t)$ as $h \searrow 0$. It follows
%$$\lim_{h\downarrow 0} [\mathcal{K}_h]_{ls} =
%\lim_{h\downarrow
%0}\frac{Y(\tau_l)}{||J_l||}\int_{J_s}\dfrac{\int_{J_l}K_h\left({u-t}\right)\,
%\mathrm{d}u}{\Psi_h[\mathbf{a}(t; \mathbf{\hat{\Lambda}}_r)]}\,
%\mathrm{d}t
%=\dfrac{Y(\tau_l)}{||J_l||}\int_{J_s}\frac{1_{J_l}(t)}{Y(t)}\,
%\mathrm{d}t
%=\delta_{sl}.$$ Consequently,
%\noindent As $h \downarrow 0$, %the smoothing matrix ${\cal K}_h$
%converges to the identity matrix and %
\subsubsection{A simulation study}

For this setting we performed a simulation study and examined the mean integrated
squared error (MISE) of the local-EM estimator as well as several alternatives. Event times follow
a Poisson process with intensity $\lambda(t)$ equal to a re-scaled
gamma density function (shape $=9$ and rate $=3/4$). A subject's event
times are simulated by thinning a unit-intensity Poisson process
where event times are either accepted or rejected with a probability
equal to $\lambda(t)$. Each subject is assumed to have a sequence of
predetermined observation times $t_1, t_2, \ldots, t_K$ where $t_i =
i$ and $K=20$. However, subjects miss a visit with increasing
probability; specifically, the probability of missing a visit equals
$(t_i/20)^{1/4} - 0.05$. Finally, a subject's panel counts are
obtained by aggregating event times between consecutive observed
visits. Note that each subject is assumed to have no event at time
0.
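The thinning construction and the aggregation into panel counts can be sketched as follows. This is an illustrative simulation of our own: we thin a dominating homogeneous process at rate $\lambda_{\max}$, and the rescaling constant `scale_up` is an assumption, since the paper does not state its rescaling of the gamma density.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def intensity(t, shape=9.0, rate=0.75, scale_up=20.0):
    """Rescaled gamma(shape=9, rate=3/4) density as the target intensity.
    The constant scale_up is hypothetical."""
    t = np.asarray(t, dtype=float)
    dens = rate ** shape * t ** (shape - 1.0) * np.exp(-rate * t) / math.gamma(shape)
    return scale_up * np.where(t > 0, dens, 0.0)

def simulate_events(T=20.0):
    """Thin a homogeneous Poisson process of rate lam_max down to
    a process with intensity lambda(t)."""
    lam_max = intensity(np.linspace(0.01, T, 2000)).max()
    n = rng.poisson(lam_max * T)
    cand = rng.uniform(0.0, T, size=n)
    keep = rng.uniform(0.0, lam_max, size=n) < intensity(cand)
    return np.sort(cand[keep])

def panel_counts(events, visits):
    """Aggregate event times between consecutive observed visit times."""
    return np.diff(np.searchsorted(events, visits))

events = simulate_events()
visits = np.arange(0.0, 21.0)   # t_0 = 0, t_i = i, K = 20 (no missed visits here)
counts = panel_counts(events, visits)
```

Dropping visits independently with probability $(t_i/20)^{1/4} - 0.05$ before calling `panel_counts` would reproduce the missed-visit mechanism described above.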

For each of $S$ samples, and for a fixed window size $h$, we compute several
estimators of the intensity using a Gaussian kernel. For each estimator, $\hat{\lambda}$, we approximate its MISE as $S^{-1} \sum_k
\int (\hat{\lambda}_k(u) - \lambda(u))^2 \, \mathrm{d}u$. This was performed for 40 different values of $h$
between $0.05$ and $3.95$ with $S=300$. The resulting MISEs for
each estimator are plotted in Figure 1.
The first estimator assumes no interval censoring has taken place
and uses the exact event times themselves, rather than the panel counts. This is the gold standard. For the panel counts we use the data-dependent partition and compute the local-EM
estimator in both the constant and linear cases. In addition, as an alternative to, and competitor of, the local-EM estimators we consider simply smoothing the self-consistent estimator of \cite{hu2008gls} after their
EM algorithm has converged. 
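The Monte Carlo approximation of the MISE used above can be sketched in a few lines; this is a generic numerical-integration sketch (our own, with a trapezoid rule on a fixed grid), not the paper's code:

```python
import numpy as np

def approx_mise(lambda_hats, lambda_true, grid):
    """Average over replicates of the trapezoid-rule integral of the
    squared error (lambda_hat - lambda)^2 on a fixed grid."""
    ises = []
    for lh in lambda_hats:
        err2 = (lh - lambda_true) ** 2
        ises.append(np.sum((err2[:-1] + err2[1:]) / 2.0 * np.diff(grid)))
    return float(np.mean(ises))

# Sanity check: a constant estimate of 1 against a true value of 0 on [0, 1]
# has integrated squared error 1.
grid = np.linspace(0.0, 1.0, 101)
mise_const = approx_mise([np.ones_like(grid)], np.zeros_like(grid), grid)
```

In the study itself each `lambda_hats` entry would be one of the $S=300$ replicate estimates evaluated on the grid.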

The results favour the local-EM estimator considerably. While the gold standard achieves the smallest MISE, the local-EM estimators track it quite closely and attain the next smallest MISE for a similar window size. Smoothing after the EM algorithm has converged performs worst, achieving a larger minimum MISE at a larger window size. This result is perhaps not all that surprising given that $\lambda$ is quite
non-linear. In cases where $\lambda$ is linear the improvements
in MISE for the local-EM estimator are not as dramatic. Another simulation was performed in the spatial context with similar results.

\begin{figure}\centering
\includegraphics[scale=.6, angle=0]{mise_intensity.pdf}
\caption{The proposed local-EM intensity estimate achieves the
lowest overall MISE with a small bandwidth of 0.195, compared to
the smoothed EM estimate obtained by placing expected increments at the
centres of pixels.} \label{f: sim_mise}
\end{figure}

\subsection{The Spatial Structure of Lupus in the City of Toronto}


 In disease mapping
applications, the $X_i$ are the residential locations of individual
cases during the $i$th census period, $O_i(s)$ is the population
density in the region during this period (which is not smooth), and
$\lambda(s)$ is the smoothly-varying risk surface due perhaps to
spatially varying social or environmental conditions. 

In this example, we consider the setting described in \S 2.2 and investigate the spatial structure of lupus in the City of Toronto. The lupus clinic at the Toronto Western Hospital has the census location of individuals with lupus for the period from 1965 to 2007 (Give reference). Lupus may have an environmental risk factor (cite some papers) which might be expected to result in lupus cases having a spatially structured intensity surface.

Disease incidence is assumed to arise from a non-homogeneous Poisson process in space and time where the intensity is given as $\rho_\ell(x,t) = \lambda(x) {\cal O}_\ell(x,t)$ with offset ${\cal O}_\ell$ given as
\[
{\cal O}_\ell(x,t) =  \beta(t) \theta_\ell P_\ell(x,t).
\]
Here the subscript denotes the $\ell$th age-sex group, $\theta_\ell$ is the incidence rate for this group, $P_\ell(x,t)$ is the population intensity (in persons per square km) and $\beta(t)$ is the time trend. The objective is to use regionally aggregated case counts to estimate the relative risk surface $\lambda(x)$. The main complication is that the census regions used to aggregate the data change repeatedly over the study period.

Census periods are defined as beginning and ending at the mid-points between census years before and after a given census. Period $i$ covers the years $t_{i-1}$ to $t_i$ for $i=1,\ldots,T$ where $T$ is the total number of census periods during the study. The $j$th census region for the $i$th census period is denoted as $S_{ij}$, and these regions have boundaries that vary between census periods.  For simplicity we assume $\beta(t)$ and the population $P_\ell(x,t)$ are constant within a census period, so that $\beta(t)=\beta_i$ when $t$ is in period $i$ and 
\[
P_\ell(x,t)=P_{i\ell}(x) = P_{ij\ell}/ |S_{ij}|, \quad x \in S_{ij},
\]
where $P_{ij\ell}$ is the population count for group $\ell$ in region $S_{ij}$. As a result of these simplifications the offset is also constant within a census period and region $S_{ij}$:
\[
{\cal O}_\ell(x,t) =  {\cal O}_{i\ell}(x) =\beta_i \theta_\ell P_{i\ell}(x).
\]
Finally, the data available are case counts $N_{ij\ell}$ of individuals in group $\ell$ who were diagnosed with lupus during census period $i$ while living in region $S_{ij}$.

The model is fit in two stages. At the first stage the spatial variation in $\lambda(x)$ is ignored so that case counts $N_{ij\ell}$ may be assumed to be distributed as
\[
N_{ij\ell} \sim \text{Poisson}(\theta_\ell \beta_i (t_{i}-t_{i-1}) P_{ij\ell}).
\]
This allows $\beta_i$ and $\theta_\ell$ to be estimated from a generalised linear model. At the second stage, $\beta_i$ and $\theta_\ell$ are set to the values estimated at the first stage and treated as known. They are then used to construct the offsets
$${\cal O}_i(x) = \sum_\ell {\cal O}_{i\ell}(x).$$
These are in turn used in the iteration (\ref{EMSic}) to estimate $\lambda(x)$.
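The stage-one fit can be sketched numerically. The paper fits a generalised linear model; the following illustration (our own, with hypothetical toy numbers) instead uses the equivalent alternating maximum-likelihood updates for the multiplicative Poisson model $N_{ij\ell} \sim \text{Poisson}(\theta_\ell \beta_i E_{ij\ell})$ with known exposure $E_{ij\ell} = (t_i - t_{i-1}) P_{ij\ell}$:

```python
import numpy as np

def stage_one(N, E, n_iter=200):
    """Alternating MLE updates for N[i, j, l] ~ Poisson(theta[l] beta[i] E[i, j, l]).
    theta and beta are identified only up to a common scale factor."""
    theta = np.ones(N.shape[2])
    beta = np.ones(N.shape[0])
    for _ in range(n_iter):
        theta = N.sum(axis=(0, 1)) / np.einsum('i,ijl->l', beta, E)
        beta = N.sum(axis=(1, 2)) / np.einsum('l,ijl->i', theta, E)
    return theta, beta

# Toy check with exactly multiplicative counts (hypothetical numbers):
# 2 periods, 3 regions, 2 age-sex groups, unit exposures.
E = np.full((2, 3, 2), 1.0)
N = np.einsum('l,i,ijl->ijl', np.array([1.0, 2.0]), np.array([0.5, 1.5]), E)
theta_hat, beta_hat = stage_one(N, E)
fitted = np.einsum('l,i,ijl->ijl', theta_hat, beta_hat, E)
```

Because $\theta_\ell$ and $\beta_i$ are only identified up to scale, the fitted means $\hat\theta_\ell \hat\beta_i E_{ij\ell}$, rather than the parameters themselves, are the quantities to check.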

\subsubsection{Results}

The $Q_m$ must be disjoint, the boundaries of the $Q_m$ must not cross those of the $S_{ij}$, and each of the maps $M_i$ must be entirely covered by ${\cal Q}$. One way of accomplishing this is for ${\cal Q}$ to be the regions within a partition obtained by overlaying the boundaries of each of the $S_{ij}$. When the boundaries of the $S_{ij}$ correspond to edges on a lattice (or the $S_{ij}$ are approximated by forcing them to lie on a lattice), the $Q_m$ can be the cells within the lattice.
\begin{figure}\centering
\includegraphics[scale=.6, angle=0]{Relative_Risk.pdf}
%\caption{} \label{f: sim_mise}
\end{figure}

\subsubsection{Comments}

We conclude this example by comparing the local-EM
algorithm in this context to methods in the literature. Note that if we only have a
single map ($i=1$) then the $Q_l$ and $S_{1l}$ coincide so that
$I_{ijl}=0$ for all $j\neq l$. As a result the kernel weight given in (\ref{Cev})
simplifies to $\int_{Q_l}K_h\left({u-t}\right)\,
\mathrm{d}u/||Q_l||$, the algorithm (\ref{localem}) iterates once,
and the local-EM estimator simply becomes the Nadaraya--Watson
estimator advocated by Brillinger (1990, 1991, 1994) in a series of
papers concerning spatial smoothing where data are aggregated to
regions within a map.

In addition, while we have already demonstrated that we may formally motivate the EMS algorithm through an EM-type strategy applied to local
likelihood, a detailed comparison of (\ref{EMSic}) to
\cite{silvermanems} permits further insight. In \cite{silvermanems} 
the quantities analogous to $S_{jl}$ and $Q_l$ are referred to as observation and
reconstruction bins respectively, and the context concerns image reconstruction involving a
single image rather than multiple maps. Nevertheless this example could well be thought of as an extension of
the image reconstruction techniques of \cite{silvermanems} to an
epidemiological setting. Furthermore the expression (2.2) of
\cite{silvermanems} and ${\cal M}(\hat{\bf \Lambda}^{r})$ are
related. For example, their weights $p_{st}$ simplify in our setting to the
indicator variables ${\cal I}_{ijl}$ because we have assumed the locations of events
have been measured without error. This observation provides
an avenue for extending the local-EM toolbox to settings where data
are mis-measured, but this is beyond the scope of this paper.

\vspace{15pt}
\noindent
{\it should we mention Vardi anywhere, or do we need to demonstrate that ${\cal M}(\hat{\bf \Lambda}^{r})$ is an EM step}

\bibliography{llems.bib}

\appendix
\section{Appendix}
\subsection{Local-EM and the Modified EMS algorithm}\label{appx_ems}
\cite{nychka1990spa} identified a relationship between EMS and a
penalized likelihood by demonstrating that a modified EMS algorithm
solves the system of score equations of the penalized likelihood. In
this section, we demonstrate that the local-EM algorithm may be used
to maximize a penalized likelihood with an appropriate choice of
kernel, which we refer to as an equivalent kernel. This
occurs because the equivalent kernel leads to Nychka's modification
of the EMS algorithm.

We begin by first considering the following nonparametric penalized
likelihood with $\off_i(Q_k) = \off_{k}$ and $\map_i = \map$:
%$$
%\mathcal{L}(\boldsymbol{\theta}) = \sum_{ij} Y_{ij} \log\left(
%\sum_k \mathcal{I}_{ijk} \theta_k^2 \right) - n \sum_{k} \theta_k^2.
%$$
%$$
%\mathcal{L}(\boldsymbol{\theta}) = \sum_{ij} Y_{ij} \log\left(
%\sum_{k;Q_k \subseteq S_{ij}} \theta_k^2 \right) - n \sum_{k}
%\theta_k^2.
%$$
%The inclusion of the roughness penalty yields
$$
\mathcal{L}_{p}(\boldsymbol{\Theta}) =
\mathcal{L}(\boldsymbol{\Theta}) - \boldsymbol{\Theta}^{T}
\mathbf{R} \boldsymbol{\Theta}.
$$ where $\mathcal{L}(\boldsymbol{\Theta})$ is the nonparametric likelihood in (\ref{e:npllk}), $\mathbf{R} = n (\mathbf{S}^{-1} - \mathbf{I})$ and $\mathbf{S}$ is
any symmetric smoothing matrix. Note that the nonparametric
likelihood for density estimation in \cite{turnbull1976edf} is
equivalent to the likelihood in (\ref{e:npllk}) with $\theta_k^2 = p_k$
and $\sum_k p_k = 1$. Likewise, the likelihood for intensity
estimation in \cite{wellner2000npmle} is also equivalent to
(\ref{e:npllk}) with $\off_k = 1$ for all $k$.

We explore the relationship between this penalized likelihood and
the local-EM algorithm in the locally constant case by first
considering the following function
\begin{equation} \label{e:equivalent_kernel}
(1/\rho(u))^{1/2} K_h(u-x),
\end{equation}
where $\rho$ is the true density or intensity, and $K_h(u-x)$ is any symmetric positive kernel with compact support. Renormalization of (\ref{e:equivalent_kernel}) gives
$$
K_{h}^{\ast}(u - x) = (\rho(x)/\rho(u))^{1/2} K_h(u-x),
$$
and $\int K_{h}^{\ast}(u - x) \, \mathrm{d}u = 1+o(h)$ for any
interior point, $x$. We refer to the kernel $K_h^{\ast}$ as an
equivalent kernel. Next, consider the use of the equivalent kernel
with the $\cal Q$-approximant $\bar{\lambda}_r$ in our local-EM
algorithm. This combination results in the conditional expectation
$\mbox{E}_{\hat{\lambda}_{r}}\left[ K_h^{\ast}(X-x) \mid X \in
S_{ij} \right]$ being approximated by
\begin{eqnarray*}
\mbox{E}_{\bar{\lambda}_{r}}\left[K_h^{\ast}(X-x) \mid S_{ij}
\right]&=& \sum_{k; Q_k \subseteq S_{ij}} \left( \dfrac{\off_\ell
\hat{\Lambda}_{\ell}^{(r)}}{\off_k \hat{\Lambda}_{k}^{(r)}}
\right)^{1/2} \int_{Q_{k}} K_h(u-x)\,\mathrm{d}u
\dfrac{\hat{\Lambda}_{k}^{(r)}}{ \sum_{m; Q_m \subseteq S_{ij}}
\hat{\Lambda}_{m}^{(r)}}
\end{eqnarray*}
for $x \in J_\ell$. This in turn gives the following iteration for
$\boldsymbol{\Lambda}$:
\begin{align}
\hat{\Lambda}_{\ell}^{(r+1)} &= n^{-1} \sum_{ij} \sum_{k; Q_k \subseteq S_{ij}} \left( \dfrac{\off_\ell
\hat{\Lambda}_{\ell}^{(r)}}{\off_k \hat{\Lambda}_{k}^{(r)}} \right)^{1/2} |\!| J |\!|^{-1} \int_{J_\ell} \dfrac{\int_{J_k} K_h(u-x) \, \mathrm{d}u}{\int_{\mathcal{M}} K_h^{\ast}(u-x) \, \mathrm{d}u} \, \mathrm{d}x \dfrac{ \mathcal{I}_{ijk} \hat{\Lambda}_{k}^{(r)}}{\sum_m \mathcal{I}_{ijm} \hat{\Lambda}_{m}^{(r)}} \notag \\
& \approxeq n^{-1} \sum_{ij} \sum_{k; Q_k \subseteq S_{ij}} \left( \dfrac{\off_\ell
\hat{\Lambda}_{\ell}^{(r)}}{\off_k \hat{\Lambda}_{k}^{(r)}}
\right)^{1/2} |\!| J |\!|^{-1} \int_{J_\ell} \int_{J_k} K_h(u-x) \,\mathrm{d}u \, \mathrm{d}x \dfrac{ \mathcal{I}_{ijk} \hat{\Lambda}_{k}^{(r)}}{\sum_m \mathcal{I}_{ijm} \hat{\Lambda}_{m}^{(r)}} \label{e:modified_EMS}
\end{align}
The expression (\ref{e:modified_EMS}) can be re-expressed in the following matrix form:
\begin{equation} \label{modified_EMSpl}
\hat{\boldsymbol{\Lambda}}^{(r+1)}={\cal
M}(\hat{\boldsymbol{\Lambda}}^{(r)}) {\cal
K}_h^{\ast}(\hat{\boldsymbol{\Lambda}}^{(r)}).
\end{equation}
Here ${\cal K}_h^{\ast}\left(\hat{\boldsymbol{\Lambda}}^{(r)} \right) =
\hat{\boldsymbol{\Theta}}^{(r)} \, \mathcal{K}_h \,
\left(\hat{\boldsymbol{\Theta}}^{(r)}\right)^{-1}$, where
$\hat{\boldsymbol{\Theta}}^{(r)} = \text{diag}(\hat{\theta}_{k}^{(r)})$. Note
that $\off_k = 1$ in the context considered by \cite{silvermanems}
and \cite{nychka1990spa}. The iteration (\ref{modified_EMSpl}) is
recognized as Nychka's modified EMS algorithm with ${\bf S}={\cal
K}_h$.

In summary, the use of the equivalent kernel in the local-EM algorithm
leads to an EMS iteration equivalent to Nychka's modified EMS algorithm,
which maximizes the penalized likelihood
$\mathcal{L}_{p}(\boldsymbol{\Theta})$.
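The conjugated smoothing matrix ${\cal K}_h^{\ast} = \hat{\boldsymbol{\Theta}} \, \mathcal{K}_h \, \hat{\boldsymbol{\Theta}}^{-1}$ appearing in (\ref{modified_EMSpl}) is straightforward to compute; the following is a small numerical sketch with a made-up smoothing matrix, not code from the paper:

```python
import numpy as np

def modified_smoother(K, theta):
    """K* = Theta K Theta^{-1}: conjugate the smoothing matrix K by
    diag(theta), as in the modified EMS smoothing step."""
    Theta = np.diag(theta)
    return Theta @ K @ np.linalg.inv(Theta)

# A hypothetical 3x3 smoothing matrix with rows summing to one.
K = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
theta = np.array([1.0, 2.0, 4.0])
K_star = modified_smoother(K, theta)
```

Being a similarity transform, the conjugation changes the matrix entries but not spectral quantities such as the trace, and it reduces to $K$ itself when the $\theta_k$ are constant.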

\subsection{Convergence of EMS to Local-EM}\label{appx_convergence}
%
Although the choice of the partition $\mathcal{Q}$ has so far depended on
the data in a natural way, it is quite arbitrary. For example, we
could consider a partition based on a set of $k$ equally spaced grid
points over a finite region $\mathcal{M}$. Without loss of
generality, we consider the partition whose elements are squares
centered at these grid points, and demonstrate that an EMS iteration
converges to its local-EM counterpart in $\mathcal{L}^1$
as $k \rightarrow \infty$. For the sake of clarity, we restrict
attention to the locally constant case.
%
% We begin by first considering the density estimation, a special case
% in which $\off_i(x)=1$ for all $x \in \mathcal{M} = \Re$.


Denote the EMS and local-EM iterates as $\hat{\lambda}_{k}^{(r)}$ and $\hat{\lambda}_{\infty}^{(r)}$, respectively. In addition, we assume $S_{ij} \subseteq \mathcal{M}$ for all $i, j$ and $|\!| \mathcal{M} |\!| < \infty$. $K(z)$ is a symmetric positive kernel with compact support and $\int K(z) \, \mathrm{d}z =1$. Finally, define the norm $|\!|\lambda|\!|_1 = \int_{\cal M} | \lambda(u) |\, \mathrm{d}u$ and interpret the convergence of the function $f$ to the function $g$ to mean that $|\!|f - g|\!|_1 \rightarrow 0$ as $k \to \infty$. This we denote as $f \xrightarrow{{\cal L}^1} g$. These details permit the statement of the following theorem:
%
\begin{theorem} \label{t:uniform_convergence} \end{theorem}
%\vspace{-5pt}
\begin{description}
\item{\bf I.} Define $\mathcal{F}_1 = \left\{ \lambda \in \mathcal{L}^1 \mid
\text{$\lambda$ is nonnegative with $\lambda(x) > 0$ for all $x \in
\mathcal{M}$} \right\}.$ For a common initial value $\hat{\lambda}_0
\in \mathcal{F}_1$, we have, for all $r = 1, 2, \ldots$,
\begin{description}
\item{A.} $\hat{\lambda}_{k}^{(r)} \stackrel{\mathcal{L}^1}{\longrightarrow}
\hat{\lambda}_{\infty}^{(r)}$, and
\item{B.} $\hat{\lambda}_{k}^{(r)}$, $\hat{\lambda}_{\infty}^{(r)} \in \mathcal{F}_1$.
\end{description} \label{e: ems_iterate_v1} %%
\item{\bf II.} When the equivalent kernel of \S\ref{appx_ems} is used, we
instead define
$$
\mathcal{F}_2 = \left\{ \lambda \in \mathcal{L}^1 {\Big |} \text{$\lambda$ is
nonnegative with $\lambda(x) > 0$ for all $x \in \mathcal{M}$ and
$\int_\mathcal{M} \lambda^{1/2} < \infty$}\right\}.
$$
For a common initial value $\hat{\lambda}_0 \in \mathcal{F}_2$,
results A and B still hold for all $r$, where ${\cal F}_1$ is
replaced with ${\cal F}_2$ in B. \label{e:ems_iterate_v2}
\end{description}

\noindent Let the square root of a function $\lambda$ be denoted by
$\lambda^{1/2}$ and let $K(\cdot)$ be a symmetric positive kernel with
$K_h(\cdot)=K(\cdot/h)/h$ for some $h>0$. Define $\mathcal{H}_x$ to be
%\begin{enumerate}
%\item
$\mathcal{H}_x: \mathcal{L}^1 \mapsto \mathcal{L}^1$ such that
$$\mathcal{H}_x(\lambda) = \int \dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u)
K_h(u-x)\,\mathrm{d}u} \lambda(u)\, \mathrm{d}u,\ \mbox{and}$$
%\item $\mathcal{G}_x: \mathcal{L}^1 \times \mathcal{L}^1 \mapsto \mathcal{L}^1$ such that $\mathcal{G}_x(\lambda, g) = \int g^{1/2}(x) K_h (u-x) \lambda^{1/2}(u) \, \mathrm{d}u$.
%\end{enumerate}
% Note that if an equivalent kernel is used, then
% \[
% \int K_h^{\ast}(u-x) \lambda(u) \, \mathrm{d}u = \int
% \lambda^{1/2}(t) K_h(u-x) \lambda^{1/2}(u) \, \mathrm{d}u =
% \mathcal{G}_x(\lambda, \lambda)
% \]
The proof relies on $\mathcal{H}_x$ being a bounded linear operator, as well as some other basic results in operator theory stated as lemmas below. These lemmas may be found in \cite{royden1988ra}.

\begin{lemma}\label{lemma:piece_approx}
Let $\lambda \in \mathcal{L}^1$. Then the
$\mathcal{Q}_k$-approximant $g$ of $\lambda$ converges in ${\cal
L}^1$ to $\lambda$ on $\mathcal{M}$ as $k \to \infty$; that is, $g
\stackrel{\mathcal{L}^1}{\longrightarrow} \lambda$.
\end{lemma}
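Lemma \ref{lemma:piece_approx} is easy to illustrate numerically. The following sketch (our own, on a fine evaluation grid) computes the $\mathcal{L}^1$ distance between a function on $[0,1]$ and its piecewise-constant cell-average approximant, which shrinks as the number of cells $k$ grows:

```python
import numpy as np

def l1_error_of_approximant(f, k, a=0.0, b=1.0, n_grid=20000):
    """L1 distance between f and its Q_k-approximant (the average of f over
    each of k equal cells of [a, b]), evaluated on a fine midpoint grid."""
    x = np.linspace(a, b, n_grid, endpoint=False) + (b - a) / (2 * n_grid)
    fx = f(x)
    cells = np.minimum(((x - a) / (b - a) * k).astype(int), k - 1)
    counts = np.bincount(cells, minlength=k)
    cell_means = np.bincount(cells, weights=fx, minlength=k) / counts
    return float(np.mean(np.abs(fx - cell_means[cells])) * (b - a))

err_coarse = l1_error_of_approximant(np.exp, k=4)
err_fine = l1_error_of_approximant(np.exp, k=64)
```

Refining the partition from 4 to 64 cells reduces the $\mathcal{L}^1$ error, consistent with $g \xrightarrow{{\cal L}^1} \lambda$ as $k \to \infty$.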
%
\begin{lemma}\label{lemma:bl_functional}
If $\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u \geq c > 0$, then
$\mathcal{H}_x$ is a bounded linear operator on
$\mathcal{L}^1$. That is, for all $x, a, b \in \Re$ and $f, g \in \mathcal{L}^1$,
$\mathcal{H}_x(af+bg) = a\mathcal{H}_x(f)+b\mathcal{H}_x(g)$, and there exists a real
number $M_h$ such that $\mathcal{H}_x (f) \leq M_h |\!| f |\!|_1$.
\end{lemma}

\begin{lemma} \label{lemma:integral_op1}
Let $\gamma_h(u, x) = g(x) K_h(u-x) f(u)$, where $f$, $g \in \mathcal{L}^1$ and $K_h \in \mathcal{L}^\infty$. Then $\gamma_h(u, x)$ is an
$\mathcal{L}^1$ function on $\Re^2$ with
\[
\iint \left| \gamma_h(u, x) \right| \, \mathrm{d}u \, \mathrm{d}x \leq
M_h \cdot \left(\int |g| \right) \cdot \left(\int |f| \right) .
\]
\end{lemma}


\noindent \textbf{Main Result:} Consider a fixed $h$ and $n$ throughout the entire proof.
\begin{enumerate}
\item[I.]
%\begin{enumerate}
%\item[A.]
%
Let $r=1$. Assume $\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u
\geq c > 0$. Note that $\int_{S_{ij}} \bar{\lambda}_{0}(u)
\,\mathrm{d}u = \int_{S_{ij}} \hat{\lambda}_0(u) \,\mathrm{d}u$ by
the definition of $\bar{\lambda}_{0}$. Repeated use of the triangle
inequality gives
\begin{equation*}
\begin{split}
\left|\!\left| \hat{\lambda}_{k}^{(1)} -
\hat{\lambda}_{\infty}^{(1)} \right|\!\right|_1
%
%= n^{-1} \int_{\mathcal{M}}\left | \sum_{ij}
%\dfrac{\int_{A_{ij}} \dfrac{K_h(u-x)}{\int O(u) K_h(u-x)\,\mathrm{d}u} (\bar{\lambda}_{0}(u) - \hat{\lambda}_0(u)) \,
%\mathrm{d}u}{\int_{A_{ij}} \hat{\lambda}_0(u) \,\mathrm{d}u}\right | \,\mathrm{d}x\\
%
%& \hspace{.25in}
&\leq n^{-1} \sum_{ij} \left( \int_{S_{ij}}
\hat{\lambda}_0(u)\,\mathrm{d}u \right)^{-1} \int_{\mathcal{M}}
\int_{S_{ij}} \dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u)
K_h(u-x)\,\mathrm{d}u}\left| \bar{\lambda}_{0}(u) -
\hat{\lambda}_0(u)\right|\, \mathrm{d}u \, \mathrm{d}x \\
%line 2
&\leq n^{-1} \sum_{ij} \left( \int_{S_{ij}}
\hat{\lambda}_0(u)\,\mathrm{d}u \right)^{-1} \int_{\mathcal{M}}
\int_{\mathcal{M}} \dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u)
K_h(u-x)\,\mathrm{d}u} \left| \bar{\lambda}_{0}(u) -
\hat{\lambda}_0(u)\right| \,\mathrm{d}u \,\mathrm{d}x \\
%line 3
%& \hspace{.25in} \leq n^{-1} \sum_{ij} \left( \int_{S_{ij}}
%\hat{\lambda}_0(u)\,\mathrm{d}u \right)^{-1} M_h |\!|
%\bar{\lambda}_{0} - \hat{\lambda}_0 |\!|_1 \int_{\mathcal{M}}\,\mathrm{d}x, \\
%line 4
&\leq n^{-1} M_h |\!| \mathcal{S}|\!| \sum_{ij} \left( \int_{S_{ij}}
\hat{\lambda}_0(u)\,\mathrm{d}u \right)^{-1} |\!| \bar{\lambda}_{0}
- \hat{\lambda}_0 |\!|_1.
\end{split}
\end{equation*}
Here the last inequality is due to the finiteness of $\mathcal{S}
\subset \mathcal{M}$ and Lemma \ref{lemma:bl_functional}. By Lemma
\ref{lemma:piece_approx}, $\bar{\lambda}_{0}\xrightarrow{{\cal L}^1}
\hat{\lambda}_0$, thus $\hat{\lambda}_{k}^{(1)}
\stackrel{\mathcal{L}^1}{\longrightarrow}
\hat{\lambda}_{\infty}^{(1)}$. Moreover, Lemma
\ref{lemma:bl_functional} also ensures that
$\hat{\lambda}_{k}^{(1)}$ and $\hat{\lambda}_{\infty}^{(1)}$ both
belong to the class
$\mathcal{F}_1$.\\[20pt]
%
\noindent {\it Induction Step:}
%
Let $b_{ij}^{(r)} = \int_{S_{ij}}
\hat{\lambda}_{\infty}^{(r)}(v)\,\mathrm{d}v/ \int_{S_{ij}}
\bar{\lambda}^{(r)}(v)\,\mathrm{d}v$. Assume that
$\hat{\lambda}_{k}^{(r)} \stackrel{\mathcal{L}^1}{\longrightarrow}
\hat{\lambda}_{\infty}^{(r)}$ and $\hat{\lambda}_{k}^{(r)}$,
$\hat{\lambda}_{\infty}^{(r)} \in \mathcal{F}_1$. With the repeated
use of the triangle inequality, we have
\begin{align*}
%line 1
& \left|\!\left| \hat{\lambda}_{k}^{(r+1)} - \hat{\lambda}_{\infty}^{(r+1)} \right|\!\right|_1 \\
%= n^{-1} \int_{\mathcal{S}}\left | \sum_{ij}
%\int_{A_{ij}} \dfrac{K_h(u-x)}{\int O(u) K_h(u-x)\,\mathrm{d}u} \left(\dfrac{\bar{\lambda}_{r}(u)}{\int_{A_{ij}}
%\bar{\lambda}_{r}(v)\,\mathrm{d}v}-\dfrac{\hat{\lambda}_{\infty\,
%r}(u)}{\int_{A_{ij}} \hat{\lambda}_{\infty\, r}(v) \,
%\mathrm{d}v}\,\mathrm{d}u\right)\right | \,\mathrm{d}x\\
%%line 2
&\hspace{.15in} \leq n^{-1} \sum_{ij}
\int_{\mathcal{M}}\int_{S_{ij}} \dfrac{K_h(u-x)}{\int_{\mathcal{M}}
O(u) K_h(u-x)\,\mathrm{d}u}\left|
\left(\dfrac{\bar{\lambda}^{(r)}(u)}{\int_{S_{ij}}
\bar{\lambda}^{(r)}(v)\,\mathrm{d}v}-\dfrac{\hat{\lambda}_{\infty}^{(r)}(u)}{\int_{S_{ij}}
\hat{\lambda}_{\infty}^{(r)}(v)\,\mathrm{d}v}\right)\right |\,\mathrm{d}u\,\mathrm{d}x\\
%line 3
&\hspace{.15in} \leq n^{-1} \sum_{ij} \left( \int_{S_{ij}}
\hat{\lambda}_{\infty}^{(r)}(v)\,\mathrm{d}v\right)^{-1}
\int_{\mathcal{M}}\int_{\mathcal{M}}
\dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u}
\left | b_{ij}^{(r)} \,
\bar{\lambda}^{(r)}(u)-\hat{\lambda}_{\infty}^{(r)}(u) \right
|\,\mathrm{d}u \,\mathrm{d}x \\
%line 4
%&\hspace{-.25in} \leq n^{-1} \sum_{ij} \left( \int_{A_{ij}}
%\hat{\lambda}_{\infty\, r}(v)\,\mathrm{d}v\right)^{-1} M_h
%\left|\!\left| b_{ri} \, \bar{\lambda}_{r} -\hat{\lambda}_{\infty\, r}
%\right|\!\right|_{1} \int_{\mathcal{M}} \,\mathrm{d}x \\
%line 5
&\hspace{.15in} \leq n^{-1} M_h |\!| \mathcal{S}|\!| \sum_{ij}
\left( \int_{S_{ij}} \hat{\lambda}_{\infty}^{(r)}(v)\,\mathrm{d}v
\right)^{-1} \left|\!\left| b_{ij}^{(r)} \, \bar{\lambda}^{(r)} -
\hat{\lambda}_{\infty}^{(r)} \right|\!\right|_{1}.
\end{align*}
The last inequality is again due to Lemma \ref{lemma:bl_functional}.
Now since the induction assumption implies $\bar{\lambda}^{(r)}
\xrightarrow{{\cal L}^1} \hat{\lambda}_{\infty}^{(r)}$ and
$b_{ij}^{(r)} \to 1$, we have, for all $i, j$,
\begin{equation*}
\Big|\!\Big|b_{ij}^{(r)} \, \bar{\lambda}^{(r)} -
\hat{\lambda}_{\infty}^{(r)} \Big|\!\Big|_1 \leq \Big|\!\Big|
b_{ij}^{(r)} \, \bar{\lambda}^{(r)} - \bar{\lambda}^{(r)}
\Big|\!\Big|_1 + \Big|\!\Big| \bar{\lambda}^{(r)} -
\hat{\lambda}_{\infty}^{(r)} \Big|\!\Big|_1 \to 0.
\end{equation*}%
In addition, it is evident that $\hat{\lambda}_{k}^{(r+1)}$ and
$\hat{\lambda}_{\infty}^{(r+1)}$ belong to $\mathcal{F}_1$ provided
that $\hat{\lambda}_{k}^{(r)}$, $\hat{\lambda}_{\infty}^{(r)} \in
\mathcal{F}_1$. Hence, we have (\textbf{A})
$\hat{\lambda}_{k}^{(r+1)} \stackrel{\mathcal{L}^1}{\longrightarrow}
\hat{\lambda}_{\infty}^{(r+1)}$ on $\mathcal{S}$, and ({\bf B})
$\hat{\lambda}_{k}^{(r+1)}$, $\hat{\lambda}_{\infty}^{(r+1)} \in
\mathcal{F}_1$ by induction.
%
%
% Equivalent Kernel
%
%
\item[II.]
%\begin{enumerate}
%\item[A.]
Let $r=1$. Assume $\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u
\geq c > 0$. By the triangle inequality and Lemma
\ref{lemma:integral_op1}, we have
%
\begin{align*}
&\left|\!\left| \hat{\lambda}_{k}^{(1)} -
\hat{\lambda}_{\infty}^{(1)}
\right|\!\right|_1 %= n^{-1} \int_{\mathcal{M}} \left| \sum_{ij}
%\dfrac{\int_{A_{ij}} K_h(u-t) \left( \bar{\lambda}_{0}^{1/2}(u) \,
%\bar{\lambda}_{0}^{1/2}(t) - \hat{\lambda}_{0}^{1/2}(u) \,
%\hat{\lambda}_{0}^{1/2}(t) \right) \, \mathrm{d}u }{\int_{A_{ij}}
%\hat{\lambda}_{0}(t) \, \mathrm{d}t} \,
%\right| \mathrm{d}t \\
%%
%% line2
\leq n^{-1} \sum_{ij} \int_{\mathcal{M}}\int_{S_{ij}}
\dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u}
\left| \dfrac{\bar{\lambda}_{0}^{1/2}(u) \,
\bar{\lambda}_{0}^{1/2}(x) - \hat{\lambda}_{0}^{1/2}(u) \,
\hat{\lambda}_{0}^{1/2}(x) }{\int_{S_{ij}} \hat{\lambda}_{0}(v) \,
\mathrm{d}v} \,\right| \,\mathrm{d}u \,\mathrm{d}x \\
%%
&\\
% line 3
&\hspace{.15in} \leq n^{-1} \sum_{ij} \left( \int_{S_{ij}}
\hat{\lambda}_{0}(v) \, \mathrm{d}v \right)^{-1} \left\{
\int_{\mathcal{M}} \int_{\mathcal{M}}
\dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u}
\bar{\lambda}_{0}^{1/2}(x) \left| \bar{\lambda}_{0}^{1/2}(u) -
\hat{\lambda}_{0}^{1/2}(u) \right| \, \mathrm{d}u \, \mathrm{d}x \right. \\
%
& \hspace{2in} + \left. \int_{\mathcal{M}} \int_{\mathcal{M}}
\dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u}
\hat{\lambda}_{0}^{1/2}(u) \left| \bar{\lambda}_{0}^{1/2}(x) -
\hat{\lambda}_{0}^{1/2}(x) \right| \, \mathrm{d}u \, \mathrm{d}x \right\} \\
%%
&\\
%%
&\hspace{.15in} \leq n^{-1} M_h \sum_{ij} \dfrac{ \left|\!\left|
\bar{\lambda}_{0}^{1/2} \right|\!\right|_{1} \cdot \left|\!\left|
\bar{\lambda}_{0}^{1/2} - \hat{\lambda}_{0}^{1/2}
\right|\!\right|_{1} + \left|\!\left| \hat{\lambda}_{0}^{1/2}
\right|\!\right|_{1} \cdot \left|\!\left| \bar{\lambda}_{0}^{1/2} -
\hat{\lambda}_{0}^{1/2} \right|\!\right|_{1} }{\int_{S_{ij}}
\hat{\lambda}_{0}(v) \, \mathrm{d}v}
\end{align*}
%
%Because $\bar{\lambda}_{0} \stackrel{\mathcal{L}^1}{\longrightarrow}
%\hat{\lambda}_0$ by Lemma \ref{lemma:piece_approx}, $\left|\!\left|
%\bar{\lambda}_{0}^{1/2} - \hat{\lambda}_{0}^{1/2}
%\right|\!\right|_{1} \to 0$ by the continuous mapping theorem
%(\textsc{cmt}).
By Lemma \ref{lemma:piece_approx} and the continuous mapping theorem
(\textsc{cmt}), $\left|\!\left| \bar{\lambda}_{0}^{1/2} -
\hat{\lambda}_{0}^{1/2} \right|\!\right|_{1} \to 0$. Therefore,
$\hat{\lambda}_{k}^{(1)} \stackrel{\mathcal{L}^1}{\longrightarrow}
\hat{\lambda}_{\infty}^{(1)}$. Furthermore, if we choose
$\hat{\lambda}_0^{1/2}$ to be bounded above, then
$\hat{\lambda}_{k}^{(1)}$ and $\hat{\lambda}_{\infty}^{(1)}$ will be
also bounded. This, in turn, ensures
that $\hat{\lambda}_{k}^{(1)}, \ \hat{\lambda}_{\infty}^{(1)} \in \mathcal{F}_2$.\\[20pt]
%
\noindent{\it Induction Step:}
%
Assume that $\hat{\lambda}_{k}^{(r)} \stackrel{\mathcal{L}^1}{\longrightarrow} \hat{\lambda}_{\infty}^{(r)}$, that $\hat{\lambda}_{k}^{(r)}$, $\hat{\lambda}_{\infty}^{(r)} \in \mathcal{F}_2$, and that both are bounded. %Let $c_{ri} = \int_{A_{ij}} \hat{\lambda}_{\infty\, r}(v) \, \mathrm{d}v /\int_{A_{ij}} \bar{\lambda}_{r}(v) \, \mathrm{d}v$.
Then the induction assumption immediately implies that
$$
c_{ij}^{(r)} = \frac{\int_{S_{ij}} \hat{\lambda}_{\infty}^{(r)}(v)
\, \mathrm{d}v}{\int_{S_{ij}} \hat{\lambda}_{k}^{(r)}(v) \,
\mathrm{d}v} \to 1\ \mbox{for all $i, j$ and }
(\bar{\lambda}^{(r)})^{1/2}\stackrel{\mathcal{L}^1}{\longrightarrow}
(\hat{\lambda}_{\infty}^{(r)})^{1/2}\ \mbox{on $\mathcal{M}$.}$$
As in part (I), Lemmas \ref{lemma:piece_approx} and
\ref{lemma:integral_op1} imply that
%
\begin{align*}
& \left|\!\left| \hat{\lambda}_{k}^{(r+1)} - \hat{\lambda}_{\infty}^{(r+1)} \right|\!\right|_1 \\
% line 2
&\hspace{.1in} \leq n^{-1} \sum_{ij} \left( \int_{S_{ij}} \hat{\lambda}_{\infty}^{(r)}(x) \, \mathrm{d}x \right)^{-1} \int_{\mathcal{M}} \int_{S_{ij}} \dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u} \left| c_{ij}^{(r)} \left[\bar{\lambda}^{(r)}(u) \, \bar{\lambda}^{(r)}(x)\right]^{1/2} \right.\\
& \hspace{4.5in}- \left.\left[\hat{\lambda}_{\infty}^{(r)}(u) \, \hat{\lambda}_{\infty}^{(r)}(x)\right]^{1/2} \right| \, \mathrm{d}u \, \mathrm{d}x \\
% line 3
&\hspace{.15in} \leq n^{-1} \sum_{ij} \left(\int_{S_{ij}} \hat{\lambda}_{\infty}^{(r)}(x) \, \mathrm{d}x \right)^{-1}
\int_{\mathcal{M}} \int_{\mathcal{M}} \dfrac{K_h(u-x)}{\int_{\mathcal{M}} O(u) K_h(u-x)\,\mathrm{d}u} \left\{ c_{ij}^{(r)}
\left( \bar{\lambda}^{(r)}(x) \left| \bar{\lambda}^{(r)}(u) - \hat{\lambda}_{\infty}^{(r)}(u) \right|\right)^{1/2} \right. \\
% line 4
& \hspace{.9in} + \left. \left| c_{ij}^{(r)} - 1 \right|
\left(\hat{\lambda}_{\infty}^{(r)}(u) \bar{\lambda}^{(r)}(x)
\right)^{1/2} + \left(\hat{\lambda}_{\infty}^{(r)}(u) \left|
\bar{\lambda}^{(r)}(x)
- \hat{\lambda}_{\infty}^{(r)}(x) \right| \right)^{1/2} \right\} \, \mathrm{d}u \, \mathrm{d}x \\
% line 5
&\hspace{.15in} \leq n^{-1} M_h \sum_{ij} \left( \int_{S_{ij}}
\hat{\lambda}_{\infty}^{(r)}(x) \, \mathrm{d}x\right)^{-1} \Bigg[
|c_{ij}^{(r)}| \left|\!\left| (\bar{\lambda}^{(r)})^{1/2}
\right|\!\right|_{1} \cdot \left|\!\left|
(\bar{\lambda}^{(r)})^{1/2} - (\hat{\lambda}_{\infty}^{(r)})^{1/2}
\right|\!\right|_{1} \\
% line 6
& \hspace{.9in} + |c_{ij}^{(r)} - 1| \left|\!\left|
(\bar{\lambda}^{(r)})^{1/2} \right|\!\right|_{1} \cdot
\left|\!\left| (\hat{\lambda}_{\infty}^{(r)})^{1/2}
\right|\!\right|_{1} + \left|\!\left|
(\hat{\lambda}_{\infty}^{(r)})^{1/2} \right|\!\right|_{1} \cdot
\left|\!\left| (\bar{\lambda}^{(r)})^{1/2} -
(\hat{\lambda}_{\infty}^{(r)})^{1/2} \right|\!\right|_{1}
\Bigg] \to 0.
\end{align*}
%
Provided that $\hat{\lambda}_{k}^{(r)}$ and $\hat{\lambda}_{\infty}^{(r)}$ are bounded, $\hat{\lambda}_{k}^{(r+1)}$ and $\hat{\lambda}_{\infty}^{(r+1)}$ are also bounded, implying that $\int_{\mathcal{M}} (\hat{\lambda}_{k}^{(r+1)})^{1/2} < \infty$ and $\int_{\mathcal{M}} (\hat{\lambda}_{\infty}^{(r+1)})^{1/2} < \infty$.
%
%
By induction, it follows that for all $r$, ({\bf A}) $\hat{\lambda}_{k}^{(r)} \stackrel{\mathcal{L}^1}{\longrightarrow} \hat{\lambda}_{\infty}^{(r)}$ and ({\bf B}) $\hat{\lambda}_{k}^{(r)}$, $\hat{\lambda}_{\infty}^{(r)} \in \mathcal{F}_2$.


\end{enumerate}




\subsection{Local-EM and the Penalized Likelihood} \label{appx_penalized}

In this section we study the penalized likelihood of
\S\ref{appx_ems} under the conditions of \S\ref{appx_convergence},
in which $k\rightarrow \infty$.
We begin by considering those values of $\boldsymbol{\theta}$ for which the penalty $\boldsymbol{\theta}^{T} \mathbf{R} \boldsymbol{\theta}$ is minimized. For such $\boldsymbol{\theta}$, we have
\begin{eqnarray*}
\mathbf{R} \boldsymbol{\theta}&=&(\mathbf{\cal K}_h^{-1}
-\mathbf{I})\boldsymbol{\theta}=0
\end{eqnarray*}
or rather
\begin{eqnarray}\label{eigen}
\boldsymbol{\theta}&=&\mathbf{\cal K}_h\ \boldsymbol{\theta}.
\end{eqnarray}
This permits an interpretation of ${\cal L}_p(\boldsymbol{\theta})$
as penalizing the nonparametric likelihood on the basis of the
proximity of $\boldsymbol{\theta}$ to the maximal eigenvector of
the smoothing matrix $\mathbf{\cal K}_h$. To see this, let
$\varrho_{(\ell)}$ denote the $\ell$th largest eigenvalue of
$\mathcal{K}_h$ with its corresponding eigenvector
$\boldsymbol{\gamma}_{(\ell)}$. Let
$\boldsymbol{\Gamma}=\begin{bmatrix} \boldsymbol{\gamma}_{(k)} &
\boldsymbol{\gamma}_{(k-1)}& \cdots & \boldsymbol{\gamma}_{(1)}
\end{bmatrix}$. Then the spectral decomposition of $\mathbf{R}$ is
$ \boldsymbol{\Gamma} \mathbf{D} \boldsymbol{\Gamma}^{T},$ where
$\mathbf{D}=\text{diag}\left( \varrho_{(k)}^{-1}, \ \varrho_{(k-1)}^{-1},
\ \ldots, \ \varrho_{(1)}^{-1}\right) - \mathbf{I}$.
%
Since $\varrho_{(1)} \leq 1$, $\mathbf{R}$ penalizes eigenvectors with small eigenvalues more
heavily than those with large ones.

We note that for $\boldsymbol{\theta}$ satisfying condition
(\ref{eigen}),
\begin{equation}
\boldsymbol{\theta}^T\boldsymbol{\theta}-\boldsymbol{\theta}^T{\cal
K}_h\ \boldsymbol{\theta}=0. \label{eigen_2}
\end{equation}
We may consider the limit of the left-hand side as $k
\rightarrow \infty$. Let $\Delta = \Delta_u = \Delta_x$ denote the
common cell width. Then ${\cal K}_{jk}$ and $\theta_j$ may be
approximated as
\begin{eqnarray*}
{\cal K}_{jk}&=&\Delta^{-1} \int_{J_j}\int_{J_k} K_h(u-x)\,
\mathrm{d}u\, \mathrm{d}x
\approx K_h(x_j-x_k) \, \Delta \\
\theta_j^2 &=& \int_{J_j} \rho(u)\, \mathrm{d}u \approx
\rho(x_j)\, \Delta
\end{eqnarray*}
and so the left-hand side of (\ref{eigen_2}) becomes
$$
\sum_j \rho(x_j)\Delta - \sum_{jk} \rho^{1/2}(x_j)K_h(x_j-x_k) \rho^{1/2}(x_k)\, \Delta^2.
$$
Letting $\Delta \downarrow 0$, the above expression converges to
$$
\int_{\cal T}\rho(u)\, \mathrm{d}u-\int_{\cal T}\int_{\cal
T} \rho^{1/2}(x) K_h(u-x) \rho^{1/2}(u)\, \mathrm{d}u\,\mathrm{d}x.
$$
As a result, the penalty equals 0 for any function $f$ belonging to
the class
$$
{\cal Z}=\left\{f \ \Bigg | \, f^{1/2}(x)=\int_{\cal
T}K_h(u-x)f^{1/2}(u)\, \mathrm{d}u \mbox{ for all $x \in {\cal T}$}
\right\}.
$$
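The vanishing of the penalty on ${\cal Z}$ can be checked numerically. The sketch below is our own illustration, assuming a periodic unit interval and a wrapped Gaussian kernel with rows normalised to integrate to one; under these assumptions the constant function belongs to ${\cal Z}$, so its discretised penalty is zero, while a non-member receives a strictly positive penalty.

```python
import numpy as np

# Riemann-sum version of the penalty
#   int f - int int f^{1/2}(x) K_h(u-x) f^{1/2}(u) du dx
# on a periodic grid (an illustrative setting of our own choosing).
n, h = 400, 0.05
x = np.arange(n) / n                        # grid on [0, 1)
dx = 1.0 / n
d = np.abs(x[:, None] - x[None, :])
d = np.minimum(d, 1.0 - d)                  # circular distance
K = np.exp(-0.5 * (d / h) ** 2)
K /= K.sum(axis=1, keepdims=True) * dx      # each row integrates to one

def penalty(f):
    s = np.sqrt(f)
    return f.sum() * dx - s @ (K * dx) @ s * dx

flat = np.ones(n)                           # in Z: s = K s exactly
bumpy = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # not in Z

assert abs(penalty(flat)) < 1e-10           # member of Z: zero penalty
assert penalty(bumpy) > 1e-5                # non-member: positive penalty
```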
Given this and the results of \S\ref{appx_convergence}, we speculate that the local-EM
algorithm maximizes the likelihood
\begin{eqnarray*}
\mathcal{L}(\rho)=\sum_{ij} n_{ij} \log\left(\int_{A_{ij}} \rho(u)\, \mathrm{d}u \right)
- \sum_i \int_{\mathcal{M}_i} \rho(u)\, \mathrm{d}u
\end{eqnarray*}
penalized on the basis of the proximity of the function $\rho$ to
the class $\cal Z$ of maximal eigenfunctions.


\vspace{20pt} \noindent {\bf General Remark:} \bigskip

\noindent As $h\downarrow 0$ the integral $
\int_{Q_l}K_h\left({u-t}\right)\, \mathrm{d}u$ becomes the indicator
function $1_{Q_l}(t)$ of $Q_l$, $\int_0^{\tau_{k_i}} K_h(u-t)\,
\mathrm{d}u \rightarrow Y_i(t)$ and $\int_0^{\infty}Y(u)K_h(u-t)\,
\mathrm{d}u\rightarrow Y(t)$. As a result we have
\[
\lim_{h\downarrow 0}{\cal K}_{ls}=
\lim_{h\downarrow0}\frac{Y(\tau_l)}{||J_l||}
\int_{J_s}\frac{\int_{J_l}K_h\left({u-t}\right)\,
\mathrm{d}u}{\int_0^{\infty}Y(u)K_h(u-t)\, \mathrm{d}u}\,
\mathrm{d}t =\frac{Y(\tau_l)}{||J_l||}\int_{J_s}
\frac{1_{J_l}(t)}{Y(t)}\, \mathrm{d}t =\delta_{sl}.
\]
Consequently ${\cal K}_h$ converges to the identity matrix and the
iteration (\ref{EMSic}) becomes the EM algorithm of
\cite{hu2008gls}.
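This limiting behaviour can also be seen numerically. The sketch below is illustrative (a periodic grid and Gaussian kernel of our own choosing, with the row-normalised matrix standing in for ${\cal K}_h$): as $h$ shrinks below the grid spacing, the smoothing matrix converges to the identity.

```python
import numpy as np

def smoother(n, h):
    # Row-normalised Gaussian smoothing matrix on a periodic grid
    # (an illustrative stand-in for the matrix K_h in the text).
    x = np.arange(n) / n
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 1.0 - d)              # circular distance
    K = np.exp(-0.5 * (d / h) ** 2)
    return K / K.sum(axis=1, keepdims=True)

# Maximum deviation from the identity for decreasing bandwidths.
errs = [np.abs(smoother(50, h) - np.eye(50)).max()
        for h in (0.1, 0.02, 0.002)]

assert errs[0] > errs[1] > errs[2]          # monotone approach to I
assert errs[-1] < 1e-6                      # essentially the identity
```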

\end{document}


\section*{Notes}

\begin{itemize}
\item $i = 1 \ldots n$ denotes the map or individual

\item $j = 1 \ldots J_i$ denotes cell $S_{ij}$ in a tessellation of map $i$ or panels for individual $i$.

\item $k = 1 \ldots K_i$ indexes the observed events $X_{ik}$ or $T_{ik}$ for individual or map $i$

\item $\ell$ is an age-sex group

\item $m = 1 \ldots M$ denotes cell $Q_m$ in a partition over all $i$

\item $\lambda(t)$ is the true intensity or density, and $\hat\lambda(t)$ is its estimate. $\Lambda(t)$ is the cumulative intensity or CDF.

\item $Y$ denotes a case count

\item $Z(t)$ is a dropout indicator
\end{itemize}

