\documentclass[12pt]{article}
\usepackage{layout,latexsym, array, enumerate, amsmath, amsthm,amssymb, amsfonts, natbib, subfigure, color}
\usepackage[mathscr]{eucal}
\usepackage{epsf,epsfig}
%\usepackage{epsf,epsfig,eufrak,dsfont}

\definecolor{grey}{RGB}{190,190,190}

\textwidth 6.5in \textheight 9.00in \oddsidemargin -0.15in
\evensidemargin -0.15in \topmargin -0.25in
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}{Corollary}[section]
\newtheorem{example}{Example}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{defn}{Definition}[section]
\newcommand{\lowtilde}[1]{\mathop{#1}\limits_{\textstyle\tilde{}}}
%\renewcommand{\baselinestretch}{1.4}

\newcommand{\off}{\mathcal{O}}
\newcommand{\poly}{{\cal P}(u-s;\mathbf{a})}
\newcommand{\coef}{\mathbf{a}}
\newcommand{\lik}{\ensuremath{\mathcal{L}}}
\newcommand{\map}{\ensuremath{\mathcal{M}}}
\newcommand{\area}[1]{\ensuremath{|\!|#1|\!|}}
\newcommand{\norm}[1]{\ensuremath{\left|\!\left|#1\right|\!\right|_1}}
\newcommand{\dee}[1]{\ensuremath{\,\mathrm{d}#1}}
\newcommand{\Steve}[1]{{\tt (#1)}}

\newcommand{\comment}[1]{}

\begin{document}
\title{Local-EM and the EMS Algorithm}
\author{Chun-Po Steve Fan\footnote{Chun-Po Steve Fan is a doctoral student in the Dalla Lana School of Public Health, University of Toronto}, Jamie Stafford\footnote{University of Toronto} and Patrick E. Brown\footnote{University of Toronto and Cancer Care Ontario} }
\thispagestyle{empty}
\date{} 
\maketitle
%\renewcommand{\baselinestretch}{1.4}
\begin{abstract}
The use of local likelihood methods (Hastie and Tibshirani 1987, Loader 1999) in the presence of data that are either interval or area censored leads naturally to the
consideration of EM-type strategies, or rather local-EM algorithms.
In this paper we consider a class of local-EM algorithms suitable for density or intensity estimation in the temporal or spatial context. We demonstrate that using a piecewise constant density function at the E-step results in the algorithm collapsing
explicitly into an EMS algorithm of the type considered by \citet{silvermanems}. 

This discovery has two advantages. Identifying a relationship between local likelihood and the EMS algorithm means the former provides a natural context for the latter, which is often referred to as {\it ad hoc} in the literature. In addition, the latter provides a set of tools to guide the use, and implementation, of local-EM algorithms. For example, we
expose a previously unknown connection between local-EM algorithms and penalized likelihood that is analogous to the more familiar pairing of EM and likelihood. Examples include exploring the spatial structure of the disease lupus in the City of Toronto.
\end{abstract}

\vspace{10pt} \small \it Keywords: density estimation; intensity
estimation; interval and area censoring; local-EM; panel counts; lupus; penalized
likelihood; self-consistency \rm \normalsize

\section{Introduction}
In this paper we consider the use of local likelihoods for density and intensity estimation when data are only partially observed. Here data may be interval censored,
they may be temporal and come in the form of panels counts, or they may be spatial and area censored. Whatever form the censoring takes, the use of local likelihood techniques naturally leads to the consideration of local-EM algorithms.

The use of local-EM algorithms has precedents in the literature. Examples include \cite{tanner1987mi}, \cite{betensky1999leh}, \cite{braun2005lld} and \cite{tolusso2008red}. The method of implementation can vary: the E-step might, for instance, be carried out using multiple imputation, Monte Carlo integration or numerical integration. Despite this, and the interesting developments given in these papers, one common challenge that remains is demonstrating convergence of the local-EM algorithm to a fixed point. Furthermore, it is not clear whether this fixed point maximizes any particular criterion. For example, while we know EM is a tool useful for maximizing likelihoods or posterior distributions, what is the analogue for local-EM?

In this paper we consider implementing the E-step of a local-EM algorithm by approximating conditional expectations using a piecewise constant density function. This results in the local-EM algorithm collapsing explicitly into an EMS algorithm of the type proposed in \cite{silvermanems}. There an EMS algorithm is constructed by simply adding a smoothing step to the expectation and maximization steps of the usual EM algorithm. \cite{silvermanems} refer to this method as being {\it ad hoc}. 

Identifying a relationship between local-EM and the EMS algorithm has two advantages. First, it embeds the EMS algorithm in the local likelihood context where it is seen to arise naturally as an implementation of a local-EM algorithm; thus it is not {\it ad hoc}%
\footnote[1]{\cite{nychka1990spa} demonstrates that a modified EMS algorithm is related to penalized likelihood. As a result he also suggests that the EMS algorithm is not {\it ad hoc}.}.
Secondly, the EMS algorithm has been extensively studied. Much is known about its convergence \citep{latham1995ems} and its relationship to penalized likelihood~\citep{nychka1990spa}. This can be used to inform the use of local-EM. For example, the latter suggests a previously unknown connection between local-EM and penalized likelihood that is analogous to the more familiar pairing of EM and likelihood.

This paper has the following structure. In \S 2, we summarize a common notation used for three contexts: failure time processes, and temporal and spatial inhomogeneous Poisson processes. We return to each context in the examples given in \S 5. Here we consider multivariate density estimation for failure time data, intensity estimation for panel count data, and finally, estimation of the spatial distribution of the disease lupus in the Greater Toronto Area. This last example may be viewed as extending the image reconstruction techniques of \cite{silvermanems} to an epidemiological setting. One common theme throughout is that the local-EM algorithm is seen to explicitly extend the self-consistency algorithms of \cite{turnbull1976edf}, \cite{hu2008gls} and \cite{vardi1985pet}. Another is that, in simple cases, the local-EM algorithm reduces to the methods of Jones (1989) for smoothing histograms and \cite{brillinger1990st, brillinger1991si, brillinger1994esp} for smoothing spatially aggregated data.

In \S 3 we expose the relationship between local-EM and the EMS algorithm in a way that captures each of the above contexts. We describe local-EM in \S 3.1 and in \S 3.2 demonstrate that EMS arises naturally as an implementation of the local-EM algorithm. In \S 3.3 we prove the resulting EMS iterate converges uniformly to its local-EM counterpart.

In \S 4 we exploit the relationship between local-EM and the EMS algorithm to gain insight into convergence issues and to expose the role of local-EM. In \S 4.1 we demonstrate that the use of an equivalent
kernel in a local-EM algorithm leads to the modification necessary
to maximize a penalized likelihood \citep{nychka1990spa}. This result suggests that, at least
for the contexts considered in this paper, local-EM and penalized
likelihood may be paired in a manner analogous to the pairing of EM
and likelihood. Furthermore, consideration of
the penalty under the limiting conditions of \S 3.3 suggests local-EM penalizes the usual nonparametric likelihood for departures of the target function from the maximal eigenfunction of the kernel.  The first example of \S 5 is useful in exemplifying this result.  

Concluding remarks may be found in \S 6 but it is the results of \S 3 \& 4 that constitute the major contributions of this paper. Important issues that concern consistency of local-EM estimators, asymptotic IMSE, and bandwidth selection are not treated. However, they are not ignored either. Simple cross validation is used to select an appropriate bandwidth in the third example, and the second example presents a simple simulation study that favours local-EM and suggests it has reasonable asymptotic properties. Rigorous treatment of these issues is beyond the scope of this paper.

\section{Methods}

\subsection{Point processes and intensity functions}


An inhomogeneous Poisson process with intensity function $\rho(s)$, $s \in \Re^d$, is a point process for which the number of events in a region $A$ is Poisson distributed with mean $\int_A \rho(u)\, du$, and the numbers of events in two disjoint regions are independent.  Here we are concerned with estimating the intensity from a series of Poisson processes $X = \{X_i; i=1 \ldots M\}$ where process $X_i$ has event locations $X_i = \{X_{ik}; k=1\ldots K_i\} \subset D_i$.  Further, the intensities differ by known ``offset'' values $\off_i(s)$ with
\[
X_i \sim \text{IPP}[\off_i(s)\lambda(s)].
\]
The term ``IPP'' denotes that the $X_i$ are inhomogeneous Poisson processes, and the unknown relative intensity $\lambda(s)$ is the quantity to be estimated.  
 
A spatial process has dimension $d=2$, and the notation above could correspond to the $X_{ik}$ being residential locations of individuals having contracted a given disease.  The realisations might have $i=1$ being male cases and $i=2$ being female cases.  The $\off_i(s)$ would be the expected intensity of cases given the population distribution and the disease incidence rate for each sex.  The $\lambda(s)$ surface would reflect spatially varying risk factors such as air pollution.  

A temporal process could result from $i$ denoting subjects in a cohort and the $X_{ik}$ being incidence times of a medical event.  The $\off_i(s)$ would relate to subject-specific characteristics such as smoking status, and the $\lambda(s)$ would model the tendency for the relative intensity of events to increase with age.  In both the spatial and temporal case, a value of $\off_i(s)=0$ would correspond to events at time or location $s$ being unobservable.
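Realisations of such a process are easy to generate for illustration. The sketch below simulates a one-dimensional inhomogeneous Poisson process by Lewis-Shedler thinning; it is not part of the estimation method of this paper, and the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ipp(rate, tmax, rate_max):
    """Simulate an inhomogeneous Poisson process on [0, tmax] by thinning.

    Candidate points from a homogeneous process of rate `rate_max` are
    kept with probability rate(t) / rate_max (Lewis-Shedler thinning);
    `rate` must satisfy rate(t) <= rate_max everywhere.
    """
    n = rng.poisson(rate_max * tmax)            # candidate count
    t = rng.uniform(0, tmax, n)                 # candidate locations
    keep = rng.uniform(0, rate_max, n) < rate(t)
    return np.sort(t[keep])
```

An intensity proportional to an offset times a relative intensity, as in the display above, is simulated by passing their product as `rate`.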

Given a realisation $\{x_{ik}\}$, the likelihood for a given intensity $\lambda$ is: 
\begin{equation}\label{eq:ippLikelihood}
L(\lambda;\{x_{ik}\}) =  \prod_{i=1}^M \left[\prod_{k=1}^{K_i} \off_i(x_{ik})\lambda(x_{ik})\right] \exp\left[ -  \int_{D_i} \off_i(u) \lambda(u) du\right].
\end{equation}
For further details, see for example \citet{illian2008statistical}.
Local likelihood produces a non-parametric estimate of $\lambda(s)$ by giving more weight to the data close to $s$ when estimating the intensity at $s$.  
Applying a kernel $K_h(u)$  with bandwidth $h$ to the log of the likelihood in (\ref{eq:ippLikelihood}), and bringing the kernel inside the sums and integral, yields
\begin{equation}\label{eq:ippLocalLikelihood}
\ell(\lambda;s) =  \sum_{i=1}^M \left\{\left[\sum_{k=1}^{K_i}K_h(x_{ik}-s) \log[\lambda(x_{ik})] \right]  -  \int_{D_i} K_h(u-s) \off_i(u) \lambda(u) du\right\} +C.
\end{equation}
The constant $C$ depends on the $\log\off_i(x_{ik})$, which will be finite since events cannot occur at locations where $\off_i(s)=0$.  

Maximizing (\ref{eq:ippLocalLikelihood}) without restrictions on $\lambda$ gives infinite spikes at the $x_{ik}$, which can be prevented by positing a parametric formula for $\lambda$.  
\citet{loader1999lra} suggests approximating $\lambda$ with exponentiated polynomials, with coefficients varying with $s$.  For a location $u$ in the vicinity of $s$, write $\lambda(u;\coef,s) = \exp[\poly]$, where ${\cal P}:\Re^d \rightarrow \Re$ is a polynomial of order $p$ and $\coef$ is a vector of polynomial coefficients.  Here we consider locally constant models (polynomials of degree zero), with $\coef$ being a single coefficient, and locally log-linear models, where $\coef$ is of length 2 in one dimension and length 4 in two dimensions.  At each location $s$ a different $\hat\coef(s)$ is estimated by maximising $\ell(\lambda;s)$.
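In the locally constant case with fully observed events, maximising (\ref{eq:ippLocalLikelihood}) over the single coefficient $a_1$ gives the closed form $\hat\lambda(s) = \sum_{ik} K_h(x_{ik}-s) \big/ \sum_i \int_{D_i} K_h(u-s)\off_i(u)\,du$. The following is a minimal numerical sketch of that formula for a single realisation, assuming a Gaussian kernel and simple Riemann quadrature; all function and variable names are ours, not the paper's.

```python
import numpy as np

def local_constant_intensity(events, offset, domain, h, s_grid):
    """Locally constant local-likelihood estimate of lambda(s).

    With a degree-zero polynomial, setting the derivative of the local
    log-likelihood to zero yields
        lambda_hat(s) = sum_k K_h(x_k - s) / int K_h(u - s) O(u) du.
    `events`: observed event locations (one realisation); `offset`: the
    vectorized offset function O(u); `domain`: (a, b) integration limits.
    """
    u = np.linspace(domain[0], domain[1], 2001)   # quadrature grid
    du = u[1] - u[0]
    K = lambda z: np.exp(-0.5 * (z / h) ** 2) / (h * np.sqrt(2 * np.pi))
    lam = np.empty(len(s_grid))
    for i, s in enumerate(s_grid):
        num = K(np.asarray(events) - s).sum()      # kernel-weighted event count
        den = (K(u - s) * offset(u)).sum() * du    # kernel-smoothed offset
        lam[i] = num / den
    return lam
```

With $h \to \infty$ the estimate flattens toward the overall rate; small $h$ tracks the events closely, previewing the spikes discussed above.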
  


\subsection{Aggregated data and Local EM}



\label{sec:llems_alg}


Area- or interval-censored data can be described as the observations from realisation $i$ being observed only in aggregate over a tessellation or map $\map_i = \{S_{ij}; j = 1 \ldots J_i\}$. For spatial data the $S_{ij}$ are geographical reporting regions, and for temporal data they could refer to intervals between clinic visits.  The observed data are the case counts $Y_{ij}$ in region $S_{ij}$ from realisation $i$, or more formally $Y_{ij} = ||\{k; X_{ik} \in S_{ij}\}||$.  When the censoring regions are the same across realisations ($\map_i = \map$), estimation of $\lambda(s)$ is well studied.  The central problem addressed by this paper is non-parametric estimation of $\lambda(s)$ when the censoring regions differ across realisations.    

Local EM maximises the expected local likelihood in place of (\ref{eq:ippLocalLikelihood}), to account for the locations $x_{ik}$ being unknown.  As the expectation depends on the intensity $\lambda$, the procedure is iterative.  At iteration $r$, the intensity $\hat\lambda^{(r-1)}(\cdot)$ is used to take the expectation of $\ell(\lambda;s)$ conditional on the $Y_{ij}$. By moving the expectation inside the sums and noting that the contributions from each $x_{ik}$ in a given  $S_{ij}$ are identical, the local-EM recursion at iteration $r$ becomes: 
\begin{multline}
\label{eq:formalEM}
\ell^{(r)}(\coef;s, Y) = \sum_{i=1}^M \Bigg[
\sum_{j=1}^{J_i} Y_{ij}\ \text{E}_{X} \left\{  K_h(X-s)\, {\cal P}(X-s;\coef) 
\ \Big|\ X \in S_{ij}\ ;\ \hat\lambda^{(r-1)}(\cdot)  \right\} 
\\  -  \int_{\map_i} K_h(u-s) \off_i(u) \exp[{\cal P}(u-s;\coef)]\, du \Bigg].
\end{multline}
Maximizing $\ell^{(r)}$ for different values of $s$ gives the updated coefficients 
\[
\hat\coef^{(r)}(s) = \text{argmax}_\coef\, \ell^{(r)}(\coef;s,Y)
\]
and resulting intensities $\hat\lambda^{(r)}(s) = \lambda[s;\hat\coef^{(r)}(s),s] = \exp[\hat a_1^{(r)}(s)]$.  Here $a_1$ is the intercept term in the coefficients $\coef$; the higher-order coefficients are multiplied by zero when the polynomial is evaluated at $s$.

The algorithm differs from a typical EM algorithm because, at the
E-step, expectation is computed with respect to an estimate of the
infinite dimensional parameter $\lambda(\cdot)$ while, at the M-step, we only
estimate this parameter locally at $s$. As such the typical
arguments concerning convergence of the EM algorithm cannot be
brought to bear. Furthermore, if the local-EM algorithm converges to
a fixed point $\hat{\lambda}$, it is not clear what criterion this fixed point
optimizes.

A more efficient implementation of local-EM is obtained by estimating the intercept parameter  $a_1$ separately from the higher-order coefficients.  Writing $\coef_{-1}$ for the vector of polynomial coefficients with the intercept replaced by zero, define
\[
\Psi(s;\coef) = \sum_{i}\int_{{\cal M}_i}{\cal O}_i(u)K_h(u-s)\exp[{\cal P}(u - s;\coef_{-1})]\, du.
\] 
Differentiating (\ref{eq:formalEM}) with respect to $a_1$ and setting the result to zero gives
\begin{equation}\label{eq:efficientEMS}
\hat{\lambda}^{(r)}(s)=\sum_{ij}
Y_{ij}\mbox{E}\left[
K_h(X-s) | X \in S_{ij};\hat\lambda^{(r-1)}(\cdot) \right]\left/\Psi\left(s;\hat{\coef}^{(r)}_{-1}(s)\right)\right. .
\end{equation}
Using (\ref{eq:efficientEMS}) requires estimating the higher-order coefficients, which is accomplished by solving the local likelihood equations.  Notice that for the locally constant case, the above gives a closed-form solution to the iterations. 
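For the locally constant case, one pass of (\ref{eq:efficientEMS}) can be sketched numerically in one dimension: $\Psi$ loses its dependence on the higher-order coefficients, and the conditional expectation is an integral of the kernel against the current iterate over each censoring region. The sketch below assumes a Gaussian kernel and Riemann quadrature on a common grid; the names (`local_em_step`, `regions`, and so on) are ours, introduced only for illustration.

```python
import numpy as np

def local_em_step(lam, u, regions, Y, offset, h):
    """One locally constant local-EM update on a quadrature grid `u`.

    `lam`: previous iterate evaluated on `u`; `regions`: boolean masks
    selecting the grid points of each censoring region S_ij; `Y`: the
    matching counts; `offset`: the total offset evaluated on `u`.
    """
    du = u[1] - u[0]
    K = lambda z: np.exp(-0.5 * (z / h) ** 2) / (h * np.sqrt(2 * np.pi))
    # Psi(s) = int O(v) K_h(v - s) dv, evaluated at every grid point s
    Psi = np.array([(K(u - s) * offset).sum() * du for s in u])
    new = np.zeros_like(lam)
    for mask, y in zip(regions, Y):
        denom = lam[mask].sum() * du               # int_S lam(v) dv
        # E[K_h(X - s) | X in S; lam] for every s on the grid
        num = np.array([(K(u[mask] - s) * lam[mask]).sum() * du for s in u])
        new += y * num / denom
    return new / Psi
```

Iterating this map from a positive starting value traces the local-EM fixed-point sequence discussed below.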



\subsection{Implementation as an EMS algorithm}


An EMS algorithm results from a local-EM algorithm by discretizing 
$\lambda$ as a piecewise constant function. This effectively reduces an infinite-dimensional
estimation problem to one of finite dimension.
We divide the study area (or time period)  $\map=\cup_{i=1}^M \map_i$ into a number of disjoint regions (or intervals) $Q_\ell$ with ${\cal Q}=\{Q_\ell; \ell=1, \ldots,L \}$, and further insist on the $Q_\ell$ being entirely contained within (and not overlapping) the censoring regions $S_{ij}$.  So for every $i$ and $\ell$, there exists a single $j$ such that $Q_\ell \subset S_{ij}$ (unless $Q_\ell$ is outside $\map_i$).  The $\cal Q$ with the smallest number of elements is obtained by overlaying all of the boundaries of the maps $\map_i$, which in one dimension is the set of time intervals formed by ordering all of the follow-up times for each subject.   Another possible $\cal Q$ is the cells of a pixelated grid sufficiently fine that no cell intersects more than one region on each map.
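In one dimension the smallest such partition is simply the set of intervals between the pooled breakpoints of all maps, which can be sketched in a few lines (function name ours):

```python
import numpy as np

def minimal_partition(maps):
    """Overlay the interval boundaries of several 1-d censoring maps.

    `maps` is a list of censoring maps, each an increasing array of
    breakpoints (e.g. follow-up times for one subject).  The smallest
    partition Q refining every map pools all breakpoints and takes
    consecutive pairs as the cells Q_l.
    """
    cuts = np.unique(np.concatenate(maps))     # sorted, deduplicated breaks
    return list(zip(cuts[:-1], cuts[1:]))      # cells Q_l as intervals
```

Each cell of the result lies inside exactly one interval of every input map, as required above.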



We approximate the risk surface $\lambda(s)$ by the piecewise constant function $\bar\lambda(s)$ formed from the integrated values 
\[
\bar\lambda_\ell = \int_{Q_\ell} \lambda(s) ds / ||Q_\ell||,
\]
with $\bar\lambda(s) = \bar\lambda_\ell; s \in Q_\ell$. 
Here $\bar\lambda$ may be formally referred to as the $\mathcal{Q}$-approximant of $\lambda$ (see \cite{royden1988ra} for details).  The estimate of $\bar\lambda$ is used in place of $\hat\lambda$ when calculating the conditional expectations in (\ref{eq:formalEM}), with the effect that a finite number of quantities ($\bar\lambda_\ell$) need be estimated rather than the infinite-dimensional $\lambda(s)$.

The condition required for local-EM to be equivalent to EMS is that the offsets $\off_{i}(s)$ must be constant within the censoring areas $S_{ij}$.  For example, population is assumed to be evenly distributed within census regions.  When this is the case, the relative intensity $\lambda$ is proportional to the intensity $\rho$ within each region $S_{ij}$.  As the conditional distribution $f(x|x\in S_{ij})$  of an event $x$ within a region is proportional to $\rho(s)$ (and must integrate to one), this distribution is also proportional to $\lambda(s)$ and we can write 
\[
f(x|x\in S_{ij}) = \lambda(x)\left/ \int_{S_{ij}} \lambda(u) du\right. .
\]


The notation which follows is somewhat simplified by working with $\Lambda_\ell=||Q_\ell||\bar\lambda_\ell$ in place of the $\bar\lambda_\ell$.  Writing ${\cal I}_{ij\ell}$ as an indicator function for $S_{ij}$ intersecting with $Q_\ell$, the conditional
density of an event known to be in region $S_{ij}$ is then
\[
f\left[x \mid x \in S_{ij};\bar\lambda\right] = \frac{{\cal I}_{ij\ell}\, \Lambda_\ell}{\area{Q_\ell} \sum_{m=1}^L \Lambda_m {\cal I}_{ijm}}\ ; \quad x \in Q_\ell .
\]
The conditional expectation from (\ref{eq:efficientEMS}), using $\bar\lambda$ in place of $\lambda$ is then
\begin{multline*}
\text{E}_{X} \left[  K_h(X-s)  
\ |X \in S_{ij}\ ;\ \bar\lambda(\cdot) \right] =  \\
\sum_\ell pr(X \in Q_\ell | X \in S_{ij};\bar\lambda)
 \int_{Q_\ell} K_h(x-s) f(x|x\in Q_\ell;\bar\lambda) dx
\end{multline*}
\begin{align}\nonumber
=& \sum_\ell
\frac{{\Lambda}_{\ell} {\cal
I}_{ij\ell} }{
\sum_m
\Lambda_{m} {\cal I}_{ijm} 
}
\int_{Q_\ell} K_h(x-s)  / ||Q_\ell||  dx
\\
=&\sum_{\ell} \frac{{\Lambda}_{\ell} {\cal
I}_{ij\ell} \int_{Q_\ell} K_h(u-s) \dee{u}}{\area{Q_{\ell}}\sum_m {\Lambda}_{m} {\cal I}_{ijm}}\cdot \label{Cev}
\end{align}
%



Substitution of (\ref{Cev}) into expression (\ref{eq:efficientEMS}) leads to the iteration
\begin{eqnarray*}
\hat{\Lambda}^{(r+1)}_\ell&=&\int_{Q_\ell} \hat{\lambda}^{(r+1)}(x)\, \mathrm{d}x\\
%
&=&\int_{Q_\ell} \left\{ \sum_{ijm}Y_{ij}\frac{\hat{\Lambda}^{(r)}_{m}{\cal
I}_{ijm}}{\area{Q_m} \sum_n \hat{\Lambda}^{(r)}_{n} {\cal
I}_{ijn}} \frac{\int_{Q_m} K_h({u-x})\,\mathrm{d}u}{\Psi(x; \hat\coef_{-1}^{(r+1)}(x))}\right \} \dee{x}\\
%
&=&\sum_{ijm}Y_{ij}
\frac{\hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}}{\area{Q_m}\sum_n \hat{\Lambda}^{(r)}_{n} {\cal I}_{ijn}} \int_{Q_\ell} \frac{\int_{Q_m}K_h(u-x)\dee{u}}{\Psi(x; \hat\coef_{-1}^{(r+1)}(x))}\dee{x}.
\end{eqnarray*}

 The iteration may be conveniently expressed in terms of matrices by writing $\bf\Lambda$  as the vector of $\Lambda_\ell$'s and
\begin{eqnarray}\label{EMSic}
\hat{\bf \Lambda}^{(r+1)}={\mathfrak M}(\hat{\bf \Lambda}^{(r)}) {\cal K}(\hat{\bf \coef}_{-1}^{(r)}(\cdot)).
\end{eqnarray}
Here ${\cal K}$ is an $L$-by-$L$ smoothing matrix with entries
\begin{equation}\label{smoothstep}
[{\cal K}]_{\ell m}=
\frac{\tilde{\off}_{\ell}}{\area{Q_{\ell}}}\int_{Q_m}\frac{\int_{Q_\ell}K_h(u-x)\dee{u}}{\Psi(x; \hat\coef_{-1}^{(r+1)}(x))}\dee{x}
\end{equation}
and ${\mathfrak M}(\hat{\bf \Lambda}^{(r)})$ is an $L$-dimensional row vector whose $\ell$th entry is
\begin{equation}\label{EMstep}
[{\mathfrak M}(\hat{\bf \Lambda}^{(r)})]_\ell=\sum_{ij} \frac{Y_{ij}}{\tilde{\off}_\ell}
\frac{\hat{\Lambda}^{(r)}_{\ell} {\cal
I}_{ij\ell}}{\sum_m \hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}},
\end{equation}
where $\tilde{\off}_\ell = \sum_{ij}\mathcal{I}_{ij\ell} \off_{ij}$ and $\off_{ij}$ denotes the constant value of the offset $\off_i$ on $S_{ij}$.
The latter is recognized as a step in an EM algorithm (\S 5) and hence the iteration (\ref{EMSic}) is seen to explicitly involve an expectation, maximization \emph{and} smoothing step.
That is, by discretizing $\lambda$ our implementation of the local-EM algorithm has resulted explicitly in an EMS algorithm. EMS algorithms were first proposed by \cite{silvermanems}  as an {\it ad hoc} method for improving the behaviour of the EM algorithm by including a smoothing step. Here they are seen to arise formally from local likelihood considerations when data are interval or area censored. Further comparisons with Silverman are given in \S 6.
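The discretized iteration is straightforward to code. The sketch below carries out one locally constant pass of the update derived above, with the kernel integrals $\int_{Q_\ell}\int_{Q_m}K_h(u-x)\,du/\Psi(x)\,dx$ assumed precomputed in a matrix we call `G`; all names (`ems_update`, `G`, `cell_len`) are ours, introduced for illustration only.

```python
import numpy as np

def ems_update(Lam, Y, I, G, cell_len):
    """One locally constant EMS iteration: E, M and S steps combined.

    `Lam` (L,): current cell masses Lambda_l.
    `Y` (R,):   aggregated counts, one per censoring region (i, j) pair.
    `I` (R, L): 0/1 incidence of cells Q_l in censoring regions.
    `G` (L, L): precomputed weights G[l, m] = int_{Q_m} of
                (int_{Q_l} K_h(u - x) du) / Psi(x) dx.
    `cell_len` (L,): cell sizes ||Q_l||.
    """
    denom = I @ Lam                                  # sum_n Lambda_n I_ijn per region
    # E and M steps: redistribute each region's count over its cells
    mass = (Y / denom)[:, None] * (I * Lam[None, :])  # (R, L)
    frac = mass.sum(axis=0) / cell_len                # per-cell EM estimate
    return frac @ G                                   # S step: smooth into new Lambda
```

With `G` the identity the smoothing step is a no-op and the update reduces to a pure EM redistribution step; a kernel-based `G` produces the smoothed iterate of the display above.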

\vspace{20pt}
\noindent {\bf Remark:} The discretization of $\lambda$ over the partition ${\cal
Q}$ resulted in a local-EM algorithm collapsing explicitly into an EMS algorithm. We draw a distinction between the local-EM iterate (\ref{eq:efficientEMS}) and its EMS counterpart (\ref{emsiterate}). At the $(r+1)$st iteration our estimate of $\lambda$ is given by
\begin{eqnarray}\label{emsiterate}
\hat\lambda^{(r+1)}(x) = \sum_{ij\ell}Y_{ij}\frac{\hat{\Lambda}^{(r)}_{\ell}{\cal
I}_{ij\ell}}{ \sum_m \hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}}\ \frac{\int_{Q_\ell}K_h\left({u-x}\right)\dee{u}}{\area{Q_{\ell}}\,\Psi(x; \hat\coef_{-1}^{(r+1)}(x))}.
\end{eqnarray}
This differs from (\ref{eq:efficientEMS}), and the relation between the two is discussed in the next section.




\subsection{Uniform Convergence of EMS to local-EM}

We consider this discretization for an arbitrary partition where we let the number of regions $L \to \infty$ and $\max_\ell \area{Q_\ell} \downarrow 0$. We demonstrate that the EMS iterates converge to their local-EM counterparts in the $\mathcal{L}^1$ norm as well as uniformly. This result suggests local-EM and EMS techniques may be thought of synonymously.

Consider a partition $\mathcal{Q}$ based on a set of $L$ equally spaced grid points over a finite region $\mathcal{M}$. Without loss of generality, we consider the partition whose elements are squares centred at these grid points. We demonstrate that the EMS iterate will converge in $\mathcal{L}^1$ to its local-EM counterpart as $L \rightarrow \infty$ and $\max_\ell |\!|Q_\ell|\!| \downarrow 0$. For the sake of clarity, we restrict attention to the locally constant case.

Without loss of generality, assume $\map_i = \map$ for all $i$. Denote the EMS and local-EM iterates by $\hat{\lambda}_{L}^{(r)}$ and $\hat{\lambda}_{\infty}^{(r)}$, respectively. In addition, assume $S_{ij} \subseteq \mathcal{M}$ for all $i, j$, that $\area{\map} < \infty$, and that $K(z)$ is a symmetric positive kernel with compact support and $\int K(z) \, \mathrm{d}z =1$. Finally, define a norm on $\mathcal{M}$ by $\norm{\lambda} = \int_{\cal M} | \lambda(u) |\, \mathrm{d}u$ and interpret convergence of the function $f$ to the function $g$ to mean that $\norm{f - g} \rightarrow 0$ as $L \to \infty$. This we denote by $f \stackrel{\mathcal{L}^1}{\longrightarrow} g$. These details permit the statement of the following theorem:
%
%
\begin{theorem} \label{t:L1_convergence} 
%
Define $\mathcal{F}_1 = \left\{ \lambda \in \mathcal{L}^1 \mid
\text{$\lambda$ is non-negative with $\lambda(x) > 0$ for all $x \in \mathcal{M}$} \right\}.$ For a common initial value $\hat{\lambda}_0 \in \mathcal{F}_1$, we have, for all $r = 1, 2, \ldots$,
\begin{description}
\item[A.] $\hat{\lambda}_{L}^{(r)} \stackrel{\mathcal{L}^1}{\longrightarrow}
\hat{\lambda}_{\infty}^{(r)}$, and
\item[B.] $\hat{\lambda}_{L}^{(r)}$, $\hat{\lambda}_{\infty}^{(r)} \in \mathcal{F}_1$.
\end{description} \label{e: ems_iterate_v1} %%

\end{theorem}

\noindent Theorem~\ref{t:L1_convergence} can be proved by induction using results in operator theory.   
Define $\mathcal{H}_x$ to be $\mathcal{H}_x: \mathcal{L}^1 \mapsto \mathcal{L}^1$ such that
$$\mathcal{H}_x(\lambda) = \int \dfrac{K_h(u-x)}{\int_{\map} \off(u)
K_h(u-x)\dee{u}} \lambda(u)\, \mathrm{d}u,$$ where $K_h(\cdot)=K(\cdot/h)/h$ for some $h>0$ and $\off = \sum_i \off_i$.
The proof relies on $\mathcal{H}_x$ being a bounded linear operator, together with some other basic results in operator theory stated as lemmas below. These lemmas may be found in \cite{royden1988ra}.

\begin{lemma}\label{lemma:piece_approx}
Let $\lambda \in \mathcal{L}^1$. Then the $\mathcal{Q}$-approximant of $\lambda$ converges in ${\cal L}^1$ to $\lambda$ on $\mathcal{M}$ as $L \to \infty$; that is, $\bar{\lambda} \stackrel{\mathcal{L}^1}{\longrightarrow} \lambda$.
\end{lemma}
%
\begin{lemma}\label{lemma:bl_functional}
If $\int_{\mathcal{M}} \off(u) K_h(u-x)\dee{u} \geq c > 0$, then
$\mathcal{H}_x$ is a bounded linear operator on $\mathcal{L}^1$. That is, for all $a, b \in \Re$ and $f, g \in \mathcal{L}^1$,
$\mathcal{H}_x(af+bg) = a\mathcal{H}_x(f)+b\mathcal{H}_x(g)$, and there exists a real
number $M_h$ such that $\mathcal{H}_x (f) \leq M_h \norm{f}$.
\end{lemma}

\vspace{20pt}
\noindent \textbf{Proof of Theorem~\ref{t:L1_convergence}:} Consider a fixed $h$ and $n$ throughout the entire proof. Let $r=1$. Assume $\int_{\mathcal{M}} \off(u) K_h(u-x)\dee{u} \geq c > 0$. Note that $\int_{S_{ij}} \bar{\lambda}_{0}(u) \dee{u} = \int_{S_{ij}} \hat{\lambda}_0(u) \dee{u}$ by the definition of $\bar{\lambda}_{0}$. Repeated use of the triangle
inequality gives
\begin{equation*}
\begin{split}
\norm{\hat{\lambda}_{L}^{(1)} - \hat{\lambda}_{\infty}^{(1)}} 
&\leq \sum_{ij} Y_{ij} \left( \int_{S_{ij}} \hat{\lambda}_0(u)\dee{u} \right)^{-1} \int_{\mathcal{M}} \int_{S_{ij}} \dfrac{K_h(u-x)}{\int_{\mathcal{M}} \off(u)
K_h(u-x)\dee{u}}\left| \bar{\lambda}_{0}(u) - \hat{\lambda}_0(u) \right| \dee{u} \dee{x} \\
%line 2
&\leq \sum_{ij} Y_{ij} \left( \int_{S_{ij}} \hat{\lambda}_0(u) \dee{u} \right)^{-1} \int_{\mathcal{M}} \int_{\mathcal{M}} \dfrac{K_h(u-x)}{\int_{\mathcal{M}} \off(u)
K_h(u-x)\dee{u}} \left| \bar{\lambda}_{0}(u) - \hat{\lambda}_0(u)\right| \dee{u} \dee{x} \\
&\leq \area{\map} \sum_{ij} Y_{ij} \left( \int_{S_{ij}} \hat{\lambda}_0(u) \dee{u} \right)^{-1} M_h \norm{\bar{\lambda}_{0} - \hat{\lambda}_0}.
\end{split}
\end{equation*}
Here the last inequality is due to $\area{\map} < \infty$ and Lemma \ref{lemma:bl_functional}. By Lemma \ref{lemma:piece_approx}, $\bar{\lambda}_{0} \xrightarrow{{\cal L}^1} \hat{\lambda}_0$, thus $\hat{\lambda}_{L}^{(1)} \stackrel{\mathcal{L}^1}{\longrightarrow} \hat{\lambda}_{\infty}^{(1)}$. Moreover, Lemma \ref{lemma:bl_functional} also ensures that $\hat{\lambda}_{L}^{(1)}$ and $\hat{\lambda}_{\infty}^{(1)}$ both belong to the class $\mathcal{F}_1$.\\[20pt]
%
\noindent {\it Induction Step:}
%
Assume that $\hat{\lambda}_{L}^{(r)} \stackrel{\mathcal{L}^1}{\longrightarrow}
\hat{\lambda}_{\infty}^{(r)}$ and $\hat{\lambda}_{L}^{(r)}$,
$\hat{\lambda}_{\infty}^{(r)} \in \mathcal{F}_1$. Let $b_{ij}^{(r)} = \dfrac{\int_{S_{ij}} \hat{\lambda}_{\infty}^{(r)}(v)\dee{v}}{\int_{S_{ij}} \bar{\lambda}_{L}^{(r)}(v) \dee{v}}.$ With the repeated use of the triangle inequality, we have
\begin{align*}
%line 1
& \norm{\hat{\lambda}_{L}^{(r+1)} - \hat{\lambda}_{\infty}^{(r+1)}} \\
&\hspace{.15in} \leq \sum_{ij} Y_{ij}
\int_{\mathcal{M}}\int_{S_{ij}} 
\dfrac{K_h(u-x)}{\int_{\mathcal{M}} \off(u) K_h(u-x)\dee{u}} 
\left| \dfrac{\bar{\lambda}_{L}^{(r)}(u)}{\int_{S_{ij}}
\bar{\lambda}_{L}^{(r)}(v)\dee{v}} - \dfrac{\hat{\lambda}_{\infty}^{(r)}(u)}{\int_{S_{ij}}
\hat{\lambda}_{\infty}^{(r)}(v)\,\mathrm{d}v}\right| \dee{u}\dee{x}\\
%line 3
&\hspace{.15in} \leq \sum_{ij} Y_{ij} 
\left( \int_{S_{ij}} \hat{\lambda}_{\infty}^{(r)}(v)\,\mathrm{d}v \right)^{-1}
\int_{\mathcal{M}} \int_{\mathcal{M}}
\dfrac{K_h(u-x)}{\int_{\mathcal{M}} \off(u) K_h(u-x)\dee{u}}
\left | b_{ij}^{(r)} \bar{\lambda}_{L}^{(r)}(u)-\hat{\lambda}_{\infty}^{(r)}(u) \right
|\dee{u} \,\mathrm{d}x \\
%line 5
&\hspace{.15in} \leq \area{\map} \sum_{ij} Y_{ij}
\left( \int_{S_{ij}} \hat{\lambda}_{\infty}^{(r)}(v)\,\mathrm{d}v \right)^{-1} 
M_h \norm{b_{ij}^{(r)} \bar{\lambda}_{L}^{(r)} - \hat{\lambda}_{\infty}^{(r)}}.
\end{align*}
The last inequality is again due to Lemma \ref{lemma:bl_functional}. Now since the induction assumption implies $\bar{\lambda}_{L}^{(r)} \xrightarrow{{\cal L}^1} \hat{\lambda}_{\infty}^{(r)}$ and
$b_{ij}^{(r)} \to 1$, we have, for all $i, j$,
\begin{equation*}
\Big|\!\Big| b_{ij}^{(r)} \bar{\lambda}_{L}^{(r)} -
\hat{\lambda}_{\infty}^{(r)} \Big|\!\Big|_1 \leq \Big|\!\Big|
b_{ij}^{(r)} \, \bar{\lambda}_{L}^{(r)} - \bar{\lambda}_{L}^{(r)}
\Big|\!\Big|_1 + \Big|\!\Big| \bar{\lambda}_{L}^{(r)} -
\hat{\lambda}_{\infty}^{(r)} \Big|\!\Big|_1 \to 0.
\end{equation*}%
In addition, it is evident that $\hat{\lambda}_{L}^{(r+1)}$ and $\hat{\lambda}_{\infty}^{(r+1)}$ belong to $\mathcal{F}_1$ provided that $\hat{\lambda}_{L}^{(r)}$, $\hat{\lambda}_{\infty}^{(r)} \in \mathcal{F}_1$. Hence, we have (\textbf{A}) $\hat{\lambda}_{L}^{(r+1)} \stackrel{\mathcal{L}^1}{\longrightarrow}
\hat{\lambda}_{\infty}^{(r+1)}$ on $\mathcal{M}$, and ({\bf B}) $\hat{\lambda}_{L}^{(r+1)}$, $\hat{\lambda}_{\infty}^{(r+1)} \in \mathcal{F}_1$ by induction.\\[20pt]


The mode of convergence can be further strengthened to uniform convergence should the kernel function be continuous and the initial estimate $\hat{\lambda}_0$ be bounded on $\mathcal{M}$.  To see this, note that the convergence in (\textbf{A}) then holds \emph{pointwise}, i.e.\  $\hat{\lambda}_{L}^{(r)}(x) \longrightarrow \hat{\lambda}_{\infty}^{(r)}(x)$ for all $x \in \mathcal{M}$. Furthermore, the functional class defined by $\mathcal{H}_x(\lambda)$, where $\lambda \in \mathcal{F}_1$, forms a class of \emph{equicontinuous} functions.  In this case, pointwise convergence implies uniform convergence by the Arzel\`{a}-Ascoli Theorem \cite[page 169]{royden1988ra}. We state this property as a corollary without proof since the proof closely parallels the one above.

\begin{corollary}
Assume the conditions in Theorem~\ref{t:L1_convergence}.  Furthermore, if $K$ is continuous and $\hat{\lambda}_0$ is bounded on $\mathcal{M}$, then $\hat{\lambda}_{L}^{(r)} \longrightarrow \hat{\lambda}_{\infty}^{(r)}$ uniformly.
\end{corollary}






\section{Role of Local-EM} \label{sec:llem_role}
Thus far we have exposed an interesting relationship between classes of algorithms that demonstrates the EMS algorithm arises naturally from local likelihood considerations. This occurs because of the way
we have chosen to implement the local-EM algorithm. However, we could have instead chosen to implement this algorithm through multiple imputation, or by using MCEM, or through some other technique. So why EMS?

In this section, we exploit the relationship between local-EM and the EMS algorithm to gain insight into convergence issues and to expose the role of local-EM as being paired with penalized likelihood in a manner analogous to the pairing of EM and likelihood.
 
Below we summarize results meant to strengthen the suggestion that, for the context considered in this paper, local-EM and penalized likelihood may be paired in a manner analogous to the pairing of EM and likelihood. We primarily focus on the locally constant case as details are easier to follow.

\subsection{Local-EM and the Modified EMS algorithm}
\cite{nychka1990spa} identified a relationship between the EMS algorithm and a penalized likelihood by demonstrating that a modified EMS algorithm maximizes this penalized likelihood. In this section, we demonstrate that the local-EM algorithm may be used to maximize the penalized likelihood with the appropriate choice of kernel, namely an equivalent kernel. This occurs because the equivalent kernel leads to Nychka's modification of the EMS algorithm.

We begin by considering the nonparametric likelihood with the following penalty:
$$
{\bf Pen}(\boldsymbol{\theta}, {\cal K})=\boldsymbol{\theta}^{T} \mathbf{R} \boldsymbol{\theta}.
$$ 
Here $\mathbf{R} = \mathcal{K}^{-1} - \tilde{\boldsymbol{\off}}$, where $\mathcal{K}$ is the smoothing matrix in (\ref{smoothstep}) and $\tilde{\boldsymbol{\off}}$ is the diagonal matrix with entries $\tilde{\off}_\ell$.
We explore the relationship between this penalized likelihood and the local-EM algorithm by considering the following function
\begin{equation} \label{e:equivalent_kernel}
(1/\lambda(u))^{1/2} K_h(u-x),
\end{equation}
where $\lambda$ is the smooth component of the true density or intensity, and $K_h(u-x)$ is any symmetric positive kernel with compact support. Renormalizing (\ref{e:equivalent_kernel}) using a first-order Taylor expansion gives
$$
K_{h}^{\ast}(u - x) \approxeq (\lambda(x)/\lambda(u))^{1/2} K_h(u-x)
$$
and $\int K_{h}^{\ast}(u - x) \dee{u} = 1+o(h)$ for any interior point, $x$. We refer to the kernel $K_h^{\ast}$ as an equivalent kernel. Next, consider the use of the equivalent kernel with the $\cal Q$-approximant $\bar{\lambda}^{(r)}$ in our local-EM algorithm while assuming $\area{Q_j} = \area{Q}$ for all $j$. This combination results in the conditional expectation $\mbox{E}_{\hat{\lambda}^{(r)}}\left[ K_h^{\ast}(X-x) \mid X \in S_{ij} \right]$ being approximated by
\begin{eqnarray*}
\mbox{E}_{\bar{\lambda}^{(r)}}\left[K_h^{\ast}(X-x) \mid S_{ij}\right] &=& 
\sum_{k} \left(
\dfrac{\hat{\Lambda}_{\ell}^{(r)}}{\hat{\Lambda}_{k}^{(r)}} 
\right)^{1/2} 
\area{Q}^{-1} \int_{Q_{k}} K_h(u-x)\dee{u}
\dfrac{\mathcal{I}_{ijk}\hat{\Lambda}_{k}^{(r)}}{\sum_{m} \mathcal{I}_{ijm} \hat{\Lambda}_{m}^{(r)}} 
\quad \text{for $x \in J_\ell$.}
\end{eqnarray*}
This in turn gives the following iteration for $\boldsymbol{\Lambda}$:
\begin{align}
\hat{\Lambda}_{\ell}^{(r+1)} &= n^{-1} \sum_{ijk} \left( \dfrac{\hat{\Lambda}_{\ell}^{(r)}}%
{\hat{\Lambda}_{k}^{(r)}} \right)^{1/2} \area{Q}^{-1} \int_{Q_\ell} \dfrac{\int_{Q_k} K_h(u-x) \dee{u}}{\sum_i \int_{\mathcal{M}} \off_i(u) K_h^{\ast}(u-x) \dee{u}} \dee{x} \, \dfrac{ \mathcal{I}_{ijk} \hat{\Lambda}_{k}^{(r)}}{\sum_m \mathcal{I}_{ijm} \hat{\Lambda}_{m}^{(r)}} \notag \\
%
& \approxeq n^{-1} \sum_{ijk} \left( \dfrac{\hat{\Lambda}_{\ell}^{(r)}}%
{\hat{\Lambda}_{k}^{(r)}} \right)^{1/2} \area{Q}^{-1} \int_{Q_\ell} \int_{Q_k} K_h(u-x) \dee{u} \dee{x} %
\frac{1}{\tilde{\off}_\ell} \dfrac{ \mathcal{I}_{ijk} \hat{\Lambda}_{k}^{(r)}}{\sum_m \mathcal{I}_{ijm} \hat{\Lambda}_{m}^{(r)}}. 
\label{e:modified_EMS}
\end{align}
The expression (\ref{e:modified_EMS}) can be re-expressed in the following matrix form:
\begin{equation} \label{modified_EMSpl}
\hat{\boldsymbol{\Lambda}}^{(r+1)}={\mathfrak M}(\hat{\boldsymbol{\Lambda}}^{(r)}) {\cal K}_h^{\ast}\left(\hat{\boldsymbol{\Lambda}}^{(r)}\right).
\end{equation}
${\cal K}_h^{\ast}\left(\hat{\boldsymbol{\Lambda}}^{(r)} \right) =
\left(\hat{\boldsymbol{\Theta}}^{(r)}\right)^{-1}\, \tilde{\boldsymbol{\off}} \, \mathcal{K}_h \, \tilde{\boldsymbol{\off}}^{-1} \, \hat{\boldsymbol{\Theta}}^{(r)}$, where $\hat{\boldsymbol{\Theta}}^{(r)} = \text{diag}(\hat{\theta}_{k}^{(r)})$. Note that $\off_k = 1$ in the context considered by \cite{silvermanems} and \cite{nychka1990spa}. The iteration (\ref{modified_EMSpl}) is recognized as Nychka's modified EMS algorithm with the smoothing matrix equal to ${\cal K}={\cal K}_h$.

In summary, a first-order approximation of the equivalent kernel in the local-EM algorithm leads to Nychka's modified EMS algorithm.
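Stripped of the equivalent-kernel refinement, the underlying EMS recursion is simple to state in code. The following is a minimal numpy sketch of a generic EMS iteration for interval-censored counts: an E-step that redistributes each interval's count over its cells in proportion to the current masses, followed by an S-step that multiplies by a smoothing matrix. The toy data, the periodic moving-average smoother, and the normalization are illustrative assumptions, not the exact setup of this paper.

```python
import numpy as np

def ems(N, I, K, lam0, iters=200):
    """Generic EMS iteration for interval-censored counts.

    N    : (J,) observed counts, one per censoring interval
    I    : (J, L) indicators, I[j, l] = 1 if cell l lies inside interval j
    K    : (L, L) doubly stochastic smoothing matrix (S-step)
    lam0 : (L,) initial cell masses, summing to one
    """
    lam, n = lam0.astype(float), N.sum()
    for _ in range(iters):
        # E-step: expected count in cell l given the interval data
        m = lam * (I.T @ (N / (I @ lam)))
        # M-step normalizes to total mass one; S-step smooths
        lam = K @ (m / n)
    return lam

# toy example: 6 cells, two overlapping censoring intervals
I = np.array([[1, 1, 1, 0, 0, 0],
              [0, 0, 1, 1, 1, 1]], dtype=float)
N = np.array([3.0, 5.0])
# periodic moving-average smoother (doubly stochastic, so total mass is preserved)
L = 6
K = np.zeros((L, L))
for l in range(L):
    K[l, l], K[l, (l - 1) % L], K[l, (l + 1) % L] = 0.5, 0.25, 0.25
lam = ems(N, I, K, np.full(L, 1.0 / L))
```

Because $K$ here is doubly stochastic, each iterate remains a probability vector, which is what makes the simple convergence check below possible.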

\bigskip
\noindent {\bf Remark:} %{\bf $\mathbf{\mathcal{L}}^1$-Convergence of EMS to local-EM:} 
The theoretical results on $\mathcal{L}^1$ and uniform convergence in \S3 also extend to the local-EM algorithm with the equivalent kernel.

\bigskip
\noindent {\bf Local-EM and Penalized Likelihood:} In Appendix~\ref{appx_penalized} we consider the penalized likelihood above under the limiting conditions of \S\ref{appx_convergence}. In particular, we interpret the penalty in terms of a class of functions that we then study as $\max_\ell |\!|Q_\ell|\!| \downarrow 0$.

This ultimately allows us to speculate that the role of local-EM is to penalize the usual nonparametric likelihood for departures of the target function from the class $${\cal Z}=\left\{f \ \Bigg | \, f^{1/2}(x)=\int_{\map} \dfrac{K_h(u-x)}{\int_{\map} K_h(u-x) \dee{u}}f^{1/2}(u)\dee{u} \ \mbox{for all $x \in {\cal S}$}
\right\}$$ identified by the maximal eigenfunctions of the kernel.


\section{Examples}
\comment{
\subsection{Density Estimation for Failure Time Data}

Consider a bivariate failure time process, such as time to HIV infection and AIDS onset, or in flu epidemics an infected individual's time to onset of symptoms and the first time they transmit the infection to another individual.  As infections are usually only revealed through repeated testing the event times are interval-censored and only known to fall between two consecutive clinic visits, one where the patient tests negative for the presence of the virus and a follow-up visit where they test positive. Multivariate density estimation in this context may be facilitated by (\ref{llicdp}), which now simplifies to
\begin{align*}
{\cal L}_x({\bf a}) &= \sum_{i=1}^n \mbox{E}_{\lambda}[K_h(X_i-x){\cal P}(X_i-x)|I_i]-n \int_{\cal M} K_h(u-x)\exp\{{\cal P}(u-x)\}\dee{u}
\end{align*}
where $\mathcal{O}_i(u) = 1$ for all $u \in {\cal M}$ .

\subsubsection{One dimension}

\subsubsection{Two dimensions}
When two events are being modelled, $X_i$ is bivariate and ${\cal M}=\Re^2$. Typically, when data are doubly interval-censored, the estimation of NPMLE for the joint distribution is complex and a treatment of some of the issues can be found in \cite{maathuis2005ran}. In the rest of this example, we consider a simple case that suggests local-EM in this context may ease these complexities and thus deserves further exploration. For example, the use of a local-EM algorithm does not require identifying maximal sets (the analog of inner most intervals in the univariate case) and thus does require use of the height map algorithm of \cite{maathuis2005ran}. Also the solution is unique while the NPMLE is not. A Bayesian interpretation of local-EM provides insight into the latter.

A hypothetical example of bivariate interval-censored data is shown in Figure \ref{fig:bivariate}a. These data consists of eights bivariate intervals, represented by four horizontal and four vertical rectangles. Overlaying these observations forms a partition
of 81 unit squares, and the intersections of these rectangles form the maximal sets. A NPMLE places all probability on these sets however it is not unique. For example, a uniform weight of 1/16 on all 16 sets, a weight of 1/4 on the positive diagonal sets, and a weight of 1/4 on the negative diagonal sets all maximize the nonparametric likelihood. Consequently, the EM iteration will converge to one of the solutions depending on the initial value. However, empirical evidence suggests otherwise for the local-EM algorithm.

Figure \ref{fig:bivariate}b shows the estimated intensity surface for a circular kernel.  In this case the EMS iteration will always converge to a solution that favours  uniform weighting regardless of the starting value. Figure \ref{fig:bivariate}c gives the intensity estimate using a kernel with an elliptical contour such that bandwidths are 1.5 and .15 in the x- and y-direction and rotated by 45 degrees.  In this case, the local-EM algorithm converges to a solution that favours the positive diagonal. Similarly, when this kernel is instead rotated by -45 degrees, Figure \ref{fig:bivariate}d shows the local-EM algorithm converges to a solution that favours the
negative diagonal. This behaviour can be interpreted in terms of the penalized likelihood of \S \ref{appx_ems}. Here the local-EM algorithm aims to maximize the penalized likelihood (\ref{e:npllk}) where the penalty depends on the choice of kernel. The kernel leads us to favour one NPMLE over another \emph{a priori}.  Explicitly when the kernel is radially symmetrical, any deviations from the maximal eigenfunction are equally penalized. However, as the kernel becomes more elliptical, deviations in the direction of the major axis of the elliptical contour are penalized less than those in the direction of the minor axis.
\vspace{-1.5cm}
\begin{figure}
\vspace{-.05in} 
\centering
\subfigure[Regions corresponding to eight bivariate interval-censored data points]{\includegraphics[width=0.45\textwidth, height=0.45\textwidth,  clip=true, trim=0.8in 0.45in 1in .75in]{bivariate_example.pdf}}\\
\subfigure[EMS intensity estimate from circular kernel]{\includegraphics[width=0.32\textwidth, clip=true, trim=1.75in 0.45in
2in .55in]{ems_1_08mar2009.pdf}}
\subfigure[
Elliptical kernel, positive-sloped major axis
]{\includegraphics[width=0.3\textwidth,clip=true, trim=2.25in 0.45in
2in .55in]{ems_2_08mar2009.pdf}}
\subfigure[Elliptical kernel, negative-sloped major axis]{\includegraphics[width=0.3\textwidth, clip=true, trim=2.25in 0.45in
2in .55in]{ems_3_08mar2009.pdf}} 
\caption{An artificial example of bivariate interval-censored data (a), with EMS estimates of the intensity using three different kernels (b-d).}
\label{fig:bivariate}
\end{figure}
\newpage

}

\subsection{Intensity Estimation for Panel Count Data}

Consider a (univariate) process in time, where each individual $i$ has event times $X_{ik}, k=1\ldots K_i$ and 
is observed at a  set of points
${\cal T}_i=\{\tau_{ij}, j = 0 \ldots J_i\}$.  These observation times are either prearranged or determined by a visit process assumed to be independent of the event process, and at each visit the number of events since the previous visit is recorded.  In other words, the data are interval censored.  A special case of this type of process is failure time data, where each individual is removed following their first event. 

Using the previous notation, 
$S_{ij}=[\tau_{ij-1},\tau_{ij}]$ is referred to as the $j$th panel
for the $i$th individual and we denote the number of events in the
interval $S_{ij}$ by $Y_{ij}=||\{ X_{ik} \in S_{ij} \}||$.  The observation period $\map_i$ is the interval between $\tau_{i0}$ and $\tau_{i J_i}$.  Following the setup of \cite{hu2008gls} we let
$${\cal T}=\cup_{i=1}^n {\cal T}_i=\{\tau_\ell; \ell = 0 \ldots L\}$$
and 
${\cal Q}$ denotes the induced partition, where now $Q_\ell=[\tau_{\ell-1},\tau_\ell]$. In this setting the intensity, $\lambda(s)$, of the Poisson process is the object of central interest.

Individuals may drop out of the study at different time points and we use $\off_i(s)=1$ to indicate if the $i$th individual is in the study at time $s$ and $\off_i(s)=0$ otherwise. Since an individual's event process is observable only before he/she drops out, the intensity of the observable process equals $\off_i(s) \lambda(s)$ (see \cite{andersen1993smb}). Finally, the number of at-risk individuals at any time $s$ is denoted by $\tilde\off(s)=\sum_i \off_i(s)$.

The local likelihood from (\ref{eq:formalEM}) becomes
\begin{multline*}
\ell^{(r)}(\coef;Y,s) = \sum_{ij} Y_{ij} \mbox{E}_{X}\left[K_h(X -s){\cal P}(X -s;\coef) \mid X \in S_{ij};\hat\lambda^{(r-1)}\right] \\ - \int_{\cal M} \tilde\off(u) K_h(u-s) \exp\{{\cal P}(u - s;\coef) \} \, \mathrm{d}u
\end{multline*}
and the corresponding EMS implementation (\ref{EMSic}) has
$$
[\mathfrak{M}(\hat{\boldsymbol{\Lambda}}^{(r)})]_{\ell} = \sum_{ij} Y_{ij}
\frac{\off_i(\tau_\ell)\hat{\Lambda}^{(r)}_{\ell} {\cal I}_{ij\ell}}{\tilde\off(\tau_\ell)\sum_m
\hat{\Lambda}^{(r)}_{m} {\cal I}_{ijm}}
$$
and 
\[
[\mathcal{K}_h(\mathbf{\hat{\Lambda}}^{(r)})]_{\ell m}= \frac{\tilde\off(\tau_\ell)}{|\!|Q_\ell|\!|}\int_{Q_m}
\frac{\int_{Q_\ell}K_h\left({u-x}\right)\ \mathrm{d}u}{\Psi_h \left(x;
\hat{\mathbf{a}}^{(r)}_{-1} 
\right) }\ \mathrm{d}x.
\]
Note $\mathfrak{M}(\hat{\boldsymbol{\Lambda}}^{(r)})$ is a step in the self-consistent algorithm of \cite{hu2008gls}.
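As a concrete illustration, the redistribution step $[\mathfrak{M}(\hat{\boldsymbol{\Lambda}}^{(r)})]_\ell$ above can be sketched in a few lines of numpy. The array shapes and the toy data are assumptions made for the example, not part of the paper's formulation.

```python
import numpy as np

def m_step(Y, I, O, Lam):
    """Redistribution [M(Lambda)]_l for panel counts.

    Y   : (n, J) panel counts Y_ij
    I   : (n, J, L) indicators, I[i, j, l] = 1 if cell l lies in panel S_ij
    O   : (n, L) at-risk indicators O_i(tau_l)
    Lam : (L,) current cell masses
    """
    O_tilde = O.sum(axis=0)                       # number at risk per cell
    denom = np.einsum('ijl,l->ij', I, Lam)        # sum_m Lam_m I_ijm
    w = np.divide(Y, denom, out=np.zeros_like(Y, dtype=float),
                  where=denom > 0)                # Y_ij / sum_m Lam_m I_ijm
    num = Lam * np.einsum('ij,il,ijl->l', w, O, I)
    return num / np.maximum(O_tilde, 1)

# one subject at risk everywhere, one panel covering both cells, 2 events:
# each cell receives half the panel count under equal current masses
Y = np.array([[2.0]])
I = np.array([[[1.0, 1.0]]])
O = np.array([[1.0, 1.0]])
out = m_step(Y, I, O, np.array([0.5, 0.5]))
```

The `np.divide(..., where=...)` guard simply skips empty panels, a convenience of the sketch rather than something required by the formula.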

%\noindent Note that $\int_{J_\ell} K_h( u-t)\, \mathrm{d}u \rightarrow \mathcal{I}_{J_\ell}(t)$ and $\Psi_h[\mathbf{a}(t;\mathbf{\hat{\Lambda}}_r)] \rightarrow Y(t)$ as $h \searrow 0$. It follows
%$$\lim_{h\downarrow 0} [\mathcal{K}_h]_{ls} =
%\lim_{h\downarrow
%0}\frac{Y(\tau_\ell)}{||J_\ell||}\int_{J_s}\dfrac{\int_{J_\ell}K_h\left({u-t}\right)\,
%\mathrm{d}u}{\Psi_h[\mathbf{a}(t; \mathbf{\hat{\Lambda}}_r)]}\,
%\mathrm{d}t
%=\dfrac{Y(\tau_\ell)}{||J_\ell||}\int_{J_s}\frac{1_{J_\ell}(t)}{Y(t)}\,
%\mathrm{d}t
%=\delta_{sl}.$$ Consequently,
%\noindent As $h \downarrow 0$, %the smoothing matrix ${\cal K}_h$
%converges to the identity matrix and %

\subsubsection{A simulation study}

A simulation study was carried out to examine the mean integrated squared error (MISE) of the local-EM estimator and several alternatives. Event times follow a Poisson process with intensity $\lambda(x)$ equal to a re-scaled
gamma density function (shape $=4.75$ and rate $=0.75$). Each subject is assumed to have a sequence of predetermined observation times $\tau_1, \tau_2, \ldots, \tau_J$ where $\tau_j = j$ and $J=20$. However, subjects miss a visit with increasing probability; specifically, the probability of missing a visit equals
$(\tau_j/20)^{0.25} - 0.05$. Finally, a subject's panel counts are obtained by aggregating event times between consecutive observed
visits. Note that each subject is assumed to have no event at time 0.
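The data-generating mechanism just described can be sketched with the standard library alone. The overall rescaling constant `SCALE` is an assumption (the paper does not state the factor), and the Poisson draws use simple inversion.

```python
import math
import random

SHAPE, RATE = 4.75, 0.75
SCALE = 5.0   # assumed total rescaling of the gamma density (not from the paper)

def lam(x):
    """Intensity: re-scaled gamma(shape=4.75, rate=0.75) density."""
    if x <= 0.0:
        return 0.0
    return (SCALE * RATE ** SHAPE * x ** (SHAPE - 1.0)
            * math.exp(-RATE * x) / math.gamma(SHAPE))

def integrated_intensity(a, b, ngrid=200):
    """Trapezoidal approximation of the integral of lam over [a, b]."""
    h = (b - a) / ngrid
    s = 0.5 * (lam(a) + lam(b)) + sum(lam(a + k * h) for k in range(1, ngrid))
    return s * h

def poisson_draw(rng, mu):
    """Poisson sample by inversion (adequate for the small means used here)."""
    u, k, p = rng.random(), 0, math.exp(-mu)
    c = p
    while u > c:
        k += 1
        p *= mu / k
        c += p
    return k

def simulate_subject(rng, J=20):
    # visit tau_j = j is missed independently with prob. (tau_j/20)^0.25 - 0.05
    visits = [0.0] + [float(j) for j in range(1, J + 1)
                      if rng.random() >= (j / 20.0) ** 0.25 - 0.05]
    # panel counts: Poisson with mean equal to the integrated intensity
    counts = [poisson_draw(rng, integrated_intensity(a, b))
              for a, b in zip(visits, visits[1:])]
    return visits, counts

rng = random.Random(1)
visits, counts = simulate_subject(rng)
```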

For each of $S$ samples, and for a fixed window size $h$, we compute several estimators of the intensity using a Gaussian kernel. For each estimator, $\hat{\lambda}$, we approximate its MISE by $S^{-1} \sum_k \int (\hat{\lambda}_k(u) - \lambda(u))^2 \, \mathrm{d}u$. This was performed for 49 different values of $h$ between $0.05$ and $2.45$ with $S=500$. The resulting MISEs for each estimator are plotted in Figure \ref{fig:pc:mise_bw}. The first estimator assumes no interval censoring has taken place and uses the exact event times themselves rather than the panel counts; this is the gold standard. For the panel counts we use the partition $\mathcal{Q}$ and compute the local-EM estimator in both the constant and linear cases, that is, where the polynomial is truncated at the first or second term. Finally, as an alternative to, and competitor of, the local-EM estimators, we consider simply smoothing the self-consistent estimator of \cite{hu2008gls} after their EM algorithm has converged. 

The results favour the local-EM estimator considerably. While the gold standard achieves the smallest MISE, the local-EM estimators track it quite closely and attain the next smallest MISE at a similar window size. Smoothing after the EM algorithm converges has the worst performance, attaining a larger minimum MISE at a larger window size. This result is perhaps not all that surprising given that $\lambda$ is quite non-linear; in cases where $\lambda$ is linear, the improvements in MISE for the local-EM estimator are not as dramatic. Another simulation was performed in the spatial context with similar results.

\begin{figure}\centering
\includegraphics[trim = 0mm 4mm 10mm 20mm, clip, width=0.75\textwidth]{mise_vs_bw_30sub_77prob.pdf}
\caption{Mean integrated squared error (MISE) as a function of bandwidth for the kernel intensity estimator using exact observations (---), the smoothed EM estimator using left-end points ({\color{red} $- -$}), the smoothed EM estimator using midpoints ({\color{green} $- -$}), the smoothed EM estimator using right-end points ({\color{blue} $- -$}), the local-EM estimator in the constant case ($\cdots$), and the local-EM estimator in the linear case ($- -$). The kernel intensity estimator has the lowest MISE of 0.0609 at a bandwidth of 1.00. The proposed local-EM estimator achieves the lowest MISE of 0.0725 at a bandwidth of 0.90 for panel count data.} 
\label{fig:pc:mise_bw}
\end{figure}

\begin{figure}\centering
\subfigure[Minimum MISE as a function of average probability of missing a visit.]{
\includegraphics[trim = 0mm 4mm 10mm 20mm, clip, width=0.45\textwidth]{mise_vs_prob_30sub.pdf}
\label{fig:pc:mise_prob}
}
\subfigure[Minimum MISE as a function of number of subjects.]{
\includegraphics[trim = 0mm 4mm 10mm 20mm, clip, width=0.45\textwidth]{mise_vs_sub_77prob.pdf}
\label{fig:pc:mise_sub}
}
\caption{Minimum MISE as a function of (a) the average probability of missing a visit and (b) the number of subjects.}
\end{figure}


\subsection{The Spatial Structure of Lupus in Toronto, Canada}

In this example, we consider the setting described in \S 2.2 and investigate the spatial structure of female lupus incidence in the Greater Toronto Area. The lupus clinic at the Toronto Western Hospital records the census tracts where individuals with lupus reside, and has data from 1965 to 2007. If lupus is affected by a spatially varying environmental or social risk factor, this should result in a spatially smooth relative risk surface $\lambda(s)$.

Population data are available from the censuses of 1971, 1981, 1991, 1996, and 2001. Female lupus case locations are assumed to arise from an inhomogeneous Poisson process in which the intensity for the $k$th age-sex group is $\rho_k(s,t) = \lambda(s)\, {\cal O}_k(s,t)$, with the offset surface ${\cal O}_k$ given as
$$
{\cal O}_k(x,t) =  \beta(t) \theta_k P_k(x,t).
$$
Here $\theta_k$ is the incidence rate for the $k$th age-sex group, $P_k(x,t)$ is the population intensity (in persons per square kilometre, with the population assumed to be uniformly distributed within census Dissemination Areas), and $\beta(t)$ is the time trend; $\theta_k$ and $\beta(t)$ are estimated from a Poisson regression and treated as fixed. The objective is to use regionally aggregated case counts to estimate the relative risk surface $\lambda(s)$. The main complication is that the boundaries of the census regions used to aggregate the data change repeatedly over the study period.


Census periods are defined as beginning and ending at the mid-points between the census years before and after a given census. Period $i$ covers the years $t_{i-1}$ to $t_i$ for $i=1\ldots T$, where $T$ is the total number of census periods during the study. The $j$th census region for the $i$th census period is denoted by $S_{ij}$; these regions have boundaries that vary between census periods.  For simplicity, we assume $\beta(t)$ and the population $P_k(s,t)$ are constant within a census period, so that $\beta(t)=\beta_i$ when $t$ is in period $i$ and 
$$
P_k(x,t) = P_{ik}(x) = P_{ijk}/ \area{S_{ij}}\quad \text{for $x \in S_{ij}$,}
$$
where $P_{ijk}$ is the population count for group $k$ in region $S_{ij}$. As a result of these simplifications, the offset is constant over each region $S_{ij}$ within a census period:
$$
{\cal O}_k(x,t) =  {\cal O}_{ik}(x) =\beta_i \theta_k P_{ik}(x).
$$
Finally, the available data are case counts of the form $N_{ijk}$ for individuals in group $k$ who were diagnosed with lupus during census period $i$ while living in region $S_{ij}$.

The model is fitted in two stages. At the first stage, the spatial variation in $\lambda(x)$ is ignored so that case counts $N_{ijk}$ may be assumed to be distributed as
$$
N_{ijk} \sim \text{Poisson}(\theta_k \beta_i (t_{i}-t_{i-1}) P_{ijk}).
$$
This allows $\beta_i$ and $\theta_k$ to be estimated from a generalized linear model. At the second stage, $\beta_i$ and $\theta_k$ are set to the values estimated at the first stage and treated as if they were known. They are then used to construct the offsets $${\cal O}_i(x) = \sum_k {\cal O}_{ik}(x).$$
These offsets are in turn used in the iteration (\ref{EMSic}) to estimate $\lambda(x)$.
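The first stage is an ordinary Poisson log-linear model with a known log-offset. The following self-contained numpy sketch fits such a model by iteratively reweighted least squares; the two-period, two-group design, the baseline parameterization, and the exposures are all assumptions of the example.

```python
import numpy as np

def poisson_irls(X, y, offset, iters=50):
    """Poisson regression with log link and a fixed log-offset, via IRLS."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta + offset
        mu = np.exp(eta)                     # fitted Poisson means
        z = eta - offset + (y - mu) / mu     # working response (offset removed)
        XtW = (X * mu[:, None]).T            # working weights W = mu
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# two census periods x two age groups; columns: intercept log(beta_1 theta_1),
# period-2 contrast log(beta_2/beta_1), group-2 contrast log(theta_2/theta_1)
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
true = np.log([2.0, 1.5, 0.7])
offset = np.log([10.0, 20.0, 10.0, 20.0])    # log((t_i - t_{i-1}) P_ijk), assumed
y = np.exp(X @ true + offset)                # counts set to their expectations,
fit = poisson_irls(X, y, offset)             # so the MLE recovers `true` exactly
```

Setting the responses equal to their expected values makes the fit exactly recoverable, which is a convenient way to check the IRLS loop.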


Figure \ref{f: lupus_map} shows the estimated intensity surface using the local-EM algorithm with locally constant risks within square grid cells and a bandwidth of 1.35km.  The offsets were computed by calculating empirical rates by age and sex group and applying these to population data from the 1971, 1981, 1991, and 2001 censuses.  The risk surface is fairly flat and near unity throughout most of the region, with an area of elevated risk near the centre of the downtown area of the city.  This could be due to a risk factor not accounted for, such as ethnicity, or to reporting bias due to the proximity of the clinic.  Detailed results and discussion of this application will be reported elsewhere.


\begin{figure}\centering
\includegraphics[height=0.75\textheight, angle=0]{Relative_Risk.pdf}
\caption{Estimated risk surface for lupus using EMS with a bandwidth of 1350 m.} 
\label{f: lupus_map}
\end{figure}


We conclude this example with a comparison between the local-EM
algorithm in this context and methods in the literature. Note that if we have only a
single map ($j=1$) then $Q_\ell$ and $S_{j\ell}$ coincide, so that $I_{ij\ell}=0$ for all $j\neq \ell$. As a result, the kernel weight given in (\ref{Cev}) simplifies to $\int_{Q_\ell}K_h\left({u-t}\right)\, \mathrm{d}u/\area{Q_\ell}$, the algorithm (\ref{localem}) iterates once, and the local-EM estimator becomes the Nadaraya-Watson estimator advocated by \cite{brillinger1990st, brillinger1991si, brillinger1994esp} in a series of papers concerning spatial smoothing where data are aggregated to regions within a map.
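In the single-map case the estimator therefore has a closed form. Below is a one-dimensional sketch with a Gaussian kernel, where the cell weights are computed with the error function; normalizing by the kernel mass on the observation window is an assumption of this illustration, not a detail taken from the paper.

```python
import math

def cell_weight(a, b, t, h):
    """Integral of a Gaussian kernel K_h(u - t) over the cell [a, b]."""
    Phi = lambda x: 0.5 * (1.0 + math.erf((x - t) / (h * math.sqrt(2.0))))
    return Phi(b) - Phi(a)

def nw_intensity(cells, counts, t, h):
    """One-pass (single-map) estimate: a Nadaraya-Watson-type smoother of
    counts aggregated to contiguous cells Q_l = (a_l, b_l)."""
    num = sum(n * cell_weight(a, b, t, h) / (b - a)
              for (a, b), n in zip(cells, counts))
    den = cell_weight(cells[0][0], cells[-1][1], t, h)  # kernel mass on the map
    return num / den

# counts proportional to cell widths reproduce a flat intensity exactly
cells = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
est = nw_intensity(cells, [2.0, 2.0, 2.0, 2.0], t=2.0, h=0.5)
```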

\subsection{Spatial Simulation Study}

This simulation was performed for 100 different values of $h$ between $0.01$ and $1.00$ with $S=500$; the resulting MISEs are plotted in Figure \ref{fig:spat:mise_bw}.

\begin{figure}\centering
\includegraphics[trim = 0mm 4mm 10mm 20mm, clip, width=0.75\textwidth]{mise_vs_bw_spatial.pdf}
\caption{Mean integrated squared error (MISE) as a function of bandwidth for the kernel intensity estimator using exact observations (---), the smoothed EM estimator using centroids ($\cdot\cdot\cdot$), and the local-EM estimator in the constant case ($- -$). The kernel intensity estimator has the lowest MISE of 0.00184 at a bandwidth of 0.36. The proposed local-EM estimator achieves the lowest MISE of 0.00226 at a bandwidth of 0.19 for the censored spatial Poisson point process data.} 
\label{fig:spat:mise_bw}
\end{figure}



\section{Discussion}


For univariate failure time data, ${\cal M}=\Re$ and $X_i$ is the event time for the $i$th individual, which is only known to fall in the interval $I_i$. For the iteration (\ref{EMSic}) we recognize ${\mathfrak M}(\hat{\bf \Lambda}^{(r)})$ as a step in the EM algorithm of \cite{turnbull1976edf}. Furthermore, in the case of a histogram, $\{I_i\} \equiv {\cal Q}$, the local-EM algorithm (\ref{localem}) iterates only once and reduces to the methods of Jones (1989) for smoothing histograms. Finally, \cite{braun2005lld} propose a local-EM algorithm based on ${\cal L}_x({\bf a})$ and, without being aware of it, develop an EMS implementation.


Local likelihood can be seen as a semi-parametric method, providing a compromise between the power and theoretical rigour of parametric methods and the flexibility of kernel smoothing algorithms.  Local-EM provides a method for applying local likelihood in situations involving interval or area censoring with irregular observation regions.  By demonstrating that local-EM and the EMS algorithm are related, it is hoped that the computational advantages offered by EMS will lead to greater adoption of local-EM methods.  Formulating EMS problems in the context of local likelihood allows for a natural and rigorous method of incorporating offsets.

A final comparison of local-EM to \cite{silvermanems} permits further insights beyond what has already been discussed in the paper. In \cite{silvermanems} quantities analogous to $S_{ij}$ and $Q_\ell$ are referred to as observation and reconstruction bins respectively, and the context concerns image reconstruction involving a single image rather than multiple maps. As a result, the example of \S 5.3 could well be thought of as an extension of the image reconstruction techniques of \cite{silvermanems} to an epidemiological setting. Furthermore, noting that there are no offsets in \cite{silvermanems}, the expression (2.2) given there and ${\mathfrak M}(\hat{\bf \Lambda}^{(r)})$ are related. For example, their weights $p_{st}$ simplify in our setting to the indicator variables ${\cal I}_{ij\ell}$ because we have assumed the locations of events are measured without error. This observation provides an avenue for extending the local-EM toolbox to settings where data are mismeasured, but that is beyond the scope of this paper. Finally, we note that in our context ${\mathfrak M}(\hat{\bf \Lambda}^{(r)})$ is an extension of \cite{vardi1985pet} to multiple maps where data are not mismeasured.



%\vspace{15pt}
%\noindent
%{\it should we mention Vardi anywhere, or do we need to demonstrate that ${\mathfrak M}(\hat{\bf \Lambda}^{(r)})$ is an EM step}
\section*{Acknowledgements}
We are grateful to Dr. Paul Fortin for the provision of the lupus data. We would also like to acknowledge the Natural Sciences and Engineering Research Council of Canada for supporting this research through individual operating grants.

\bibliographystyle{apalike}
\bibliography{llems}

\appendix
\section{Appendix}



\subsection{Local-EM and the Penalized Likelihood} \label{appx_penalized}

%In this section we study the penalized likelihood of \S\ref{appx_ems} under the conditions of \S\ref{appx_convergence} in which $k\rightarrow \infty$.
For clarity, assume $\map_i = \map$ for all $i$ and $\off_{ij}=1$. Consider those values of $\boldsymbol{\theta}$ for which the penalty $\boldsymbol{\theta}^{T} \mathbf{R} \boldsymbol{\theta}$ is minimized. For such $\boldsymbol{\theta}$, we have
\begin{eqnarray*}
\mathbf{R} \boldsymbol{\theta}&=&(\mathbf{\cal K}_h^{-1} - n\mathbf{I})\boldsymbol{\theta}=0
\end{eqnarray*}
or rather
\begin{eqnarray}\label{eigen}
\boldsymbol{\theta}&=& n\mathbf{\cal K}_h \boldsymbol{\theta}.
\end{eqnarray}
This permits an interpretation of ${\cal L}_p(\boldsymbol{\theta})$
as penalizing the nonparametric likelihood on the basis of the
proximity of $\boldsymbol{\theta}$ to the maximal eigenvector of
the smoothing matrix $\mathbf{\cal K}_h$. To see this, let
$\varrho_{(\ell)}$ denote the $\ell$th largest eigenvalue of
$\mathcal{K}_h$ with its corresponding eigenvector $\boldsymbol{\gamma}_{(\ell)}$. 
Let $\boldsymbol{\Gamma}=\begin{bmatrix} \boldsymbol{\gamma}_{(k)} &
\boldsymbol{\gamma}_{(k-1)}& \cdots & \boldsymbol{\gamma}_{(1)}
\end{bmatrix}$. Then the spectral decomposition of $\mathbf{R}$ is
$ \boldsymbol{\Gamma} \mathbf{D} \boldsymbol{\Gamma}^{T},$ where
$\mathbf{D}=\text{diag}\left( \varrho_{(k)}^{-1}, \ \varrho_{(k-1)}^{-1},
\ \ldots, \ \varrho_{(1)}^{-1}\right) - \mathbf{I}$.
%
Since $\varrho_{(1)} \leq 1$, $\mathbf{R}$ penalizes eigenvectors with small eigenvalues more than those with large ones.
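This interpretation is easy to check numerically. The sketch below builds a small symmetric, doubly stochastic smoothing matrix (so its maximal eigenvalue is one), takes $n=1$ for simplicity, and verifies that the quadratic penalty of each unit eigenvector equals $1/\varrho - n$: zero for the maximal eigenvector and growing as the eigenvalue shrinks. The particular matrix is an illustrative assumption.

```python
import numpy as np

L, n = 5, 1
# symmetric tridiagonal moving average; boundary diagonals adjusted so that
# every row (and column) sums to one, making K doubly stochastic
K = 0.5 * np.eye(L) + 0.25 * (np.eye(L, k=1) + np.eye(L, k=-1))
K[0, 0] = K[-1, -1] = 0.75

rho, G = np.linalg.eigh(K)            # eigenvalues ascending, eigenvectors in columns
R = np.linalg.inv(K) - n * np.eye(L)  # penalty matrix R = K^{-1} - n I

pen = np.array([g @ R @ g for g in G.T])   # penalty of each unit eigenvector
```

Since the eigenvalues are returned in ascending order, the penalties come out in descending order, with the last (maximal) eigenvector unpenalized.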

We note that for $\boldsymbol{\theta}$ satisfying the condition
(\ref{eigen}) we have
\begin{equation}
\boldsymbol{\theta}^T \boldsymbol{\theta}- n\boldsymbol{\theta}^T {\cal
K}_h \boldsymbol{\theta}=0 \label{eigen_2}.
\end{equation}
We may consider the limit of the left-hand side as $J \rightarrow \infty$ and $\max_j \area{Q_j} \downarrow 0$. Note $[{\cal K}_h]_{jk}$ and $\theta_j$ may be
approximated as
\begin{eqnarray*}
 [{\cal K}_h]_{jk}&=& \int_{Q_j} \int_{Q_k}
\dfrac{K_h(u-x)}%
{n \int_{\map} K_h(u'-x) \dee{u'}} \dee{u} \dee{x}
\approx \dfrac{K_h(x_k - x_j)}{n \int_{\map} K_h(u-x_j) \dee{u}}\, \Delta_u \Delta_x \\
\theta_j^2 &=& \left( \int_{Q_j} \rho(u) \dee{u} \right)^{1/2} \left( \int_{Q_j} \rho(x)\dee{x} \right)^{1/2} \approx \rho(x_j)\, \Delta_u \Delta_x.
\end{eqnarray*}
Let $\Delta = \Delta_u \Delta_x$ and note $\Delta \downarrow 0$ as $J \rightarrow \infty$. The left-hand side of (\ref{eigen_2}) may now be written as
$$
\sum_j \rho(x_j)\Delta - \sum_{jk} \rho^{1/2}(x_j) \dfrac{K_h(x_k-x_j)}{\int_{\map} K_h(u-x_j) \dee{u}} \rho^{1/2}(x_k)\, \Delta^2
$$
and as $J \rightarrow \infty$ this expression becomes
$$
\int_{\map}\rho(u)\, \mathrm{d}u-\int_{\map} \int_{\map} \rho^{1/2}(x) \dfrac{K_h(u-x)}{\int_{\map} K_h(u'-x) \dee{u'}} \rho^{1/2}(u)\, \mathrm{d}u\,\mathrm{d}x.
$$
As a result, the penalty equals zero for any function $f$ belonging to the class
$$
{\cal Z}=\left\{f \ \Bigg | \, f^{1/2}(x)=\int_{\map} \dfrac{K_h(u-x)}{\int_{\map} K_h(u'-x) \dee{u'}} f^{1/2}(u) \dee{u} \mbox{ for all $x \in \map$}
\right\}.
$$
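The class ${\cal Z}$ always contains the uniform density: because the kernel weight integrates to one over $\map$, a constant $f^{1/2}$ is reproduced exactly. A brief numerical sketch (a hypothetical one-dimensional setup of our own on $\map=[0,1]$ with a truncated Gaussian kernel) illustrates this fixed-point property.

```python
import numpy as np

# Hypothetical 1-D discretization of M = [0,1] with a truncated Gaussian
# kernel K_h; rows are normalized so that int_M K_h(u - x) du = 1 for all x.
J, h = 200, 0.1
x = (np.arange(J) + 0.5) / J             # midpoints of the cells Q_j
dx = 1.0 / J
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
K /= K.sum(axis=1, keepdims=True) * dx   # row-normalize over M

# A constant f^{1/2} is reproduced exactly, so the uniform density lies in Z.
f_sqrt = np.ones(J)
f_new = K @ f_sqrt * dx
```

Near the boundary the truncated kernel loses mass, which is exactly what the normalizing integral $\int_{\map} K_h(u-x)\dee{u}$ in the definition of ${\cal Z}$ corrects for.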
Given this and the results of \S\ref{appx_convergence}, we conclude that the local-EM
algorithm penalizes the nonparametric likelihood for departures of the target function from the class of
maximal eigenfunctions $\cal Z$.





\section{Old stuff}



Since the $X_i$ are assumed to be realizations of a Poisson process, the first term of (\ref{llicdp}) may be written as $\sum_{ij}N_{ij} E_\lambda [K_h(X-x){\cal P}(X-x;\mathbf{a})|X \in S_{ij}]$, and in general the local-EM algorithm can be written as
\begin{eqnarray}\label{localem}
\hat{\lambda}^{(r+1)}(x)=\sum_{ij}
N_{ij}\mbox{E}_{\hat{\lambda}^{(r)}}\left[\left.
K_h\left({X-x}\right)\right|X\in S_{ij}\right]\Big/\Psi_h(x;\hat{\bf a}^{(r+1)})
\end{eqnarray}
where
\begin{eqnarray*}
\Psi_h(x;{\bf a})&=&\sum_{i}\int_{\cal M}{\cal O}_i(u)K_h(u-x)\exp\left\{{\cal P}(u - x;\mathbf{a})-a_0\right\} \dee{u}
\end{eqnarray*}
and ${\bf \hat{a}}^{(r+1)}$ solves the local likelihood equations based on
${\cal L}_x({\bf a})$ with $\lambda$ replaced by $\hat{\lambda}^{(r)}$. Note that, since the offset surface is assumed to be constant over each region $S_{ij}$, the expectation is computed with respect to the conditional density
\begin{equation} \label{conden}
\frac{\hat{\rho}^{(r)}(u)}{\int_{S_{ij}}{\hat{\rho}^{(r)}(x)} \, \mathrm{d}x}=
\frac{\hat{\lambda}^{(r)}(u) }{\int_{S_{ij}}{\hat{\lambda}^{(r)}(x)} \, \mathrm{d}x}
%\lambda(t){\Bigg /}\int_{I_{ij}}\lambda(u)\,\mathrm{d}u
\end{equation}
at the $r$th iteration.
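As a concrete illustration of the role of (\ref{conden}), the conditional expectation in (\ref{localem}) can be evaluated by simple quadrature. The sketch below is a hypothetical one-dimensional setup with a Gaussian kernel; the function name \texttt{cond\_expect\_kernel} is ours, not part of the algorithm's notation.

```python
import numpy as np

def cond_expect_kernel(x, S, lam_r, h, m=400):
    """E[K_h(X - x) | X in S] under the current intensity lam_r,
    using the conditional density lam_r(u) / int_S lam_r (Riemann sum)."""
    a, b = S
    u = np.linspace(a, b, m)
    du = (b - a) / (m - 1)
    w = lam_r(u)
    w = w / (w.sum() * du)             # conditional density on S
    Kh = np.exp(-0.5 * ((u - x) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    return float((Kh * w).sum() * du)

# Under a flat current intensity the conditional density is uniform on S,
# so the result is just the average of the kernel over the censoring region.
val = cond_expect_kernel(0.5, (0.0, 1.0), lambda u: np.ones_like(u), h=0.2)
```

In the algorithm proper this quantity would be recomputed at each iteration $r$ with $\hat{\lambda}^{(r)}$ in place of the flat intensity, once for every censoring region $S_{ij}$.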
\end{document}