\documentclass[12pt,letterpaper]{article}

\usepackage{layout, latexsym, array, enumerate, amsmath, amsthm, amssymb, amsfonts}
\usepackage[mathscr]{eucal}
\usepackage{graphicx, subfigure}
\usepackage[margin=1in]{geometry}
\usepackage{natbib}
%\usepackage[numbers]{natbib}

\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}{Corollary}[section]
\newtheorem{example}{Example}[section]
\newtheorem{lemma}{Lemma}[section]
\newtheorem{defn}{Definition}[section]

\newcommand{\off}{\ensuremath{\mathcal{O}}}
\newcommand{\lik}{\ensuremath{\mathcal{L}}}
\newcommand{\map}{\ensuremath{\mathfrak{M}}}
\newcommand{\poly}{\ensuremath{\mathcal{P}}}
\newcommand{\area}[1]{\ensuremath{|\!|#1 |\!|}}
\newcommand{\norm}[1]{\ensuremath{\left|\!\left|#1\right|\!\right|_{1}}}
\newcommand{\normInf}[1]{\ensuremath{\left|\!\left|#1\right|\!\right|_{\infty}}}
\newcommand{\dee}[1]{\ensuremath{\,\mathrm{d}#1}}
\newcommand{\Steve}[1]{{\tt (#1)}}

\bibliographystyle{apalike} %\bibliographystyle{amsplain}

\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% To specify the title page
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\title{A local-EM algorithm for spatially aggregated data with time-varying boundaries}
\author{Patrick Brown, Chun-Po Steve Fan, Jamie Stafford}
\maketitle

\section{Introduction}

Disease mapping often requires working with spatially aggregated data, as data may be aggregated into regions to protect confidentiality or collected only at an area level such as the health region or postal code.  These regions often vary in size, being larger in rural areas.  Canadian six-digit postal codes locate an individual precisely in cities, but in rural and remote areas a single postal code can cover a large geographical extent.  Further, estimation of disease risk must take the underlying population distribution into account, and spatially aggregated census data are
often used for this purpose.

Estimating disease risk for small areas or rare diseases requires data collected over a long time period, in order to accumulate sufficient cases for accurate estimation.  As postal and census regions change over time, this creates the problem of combining maps with different tessellations.  Whereas a conventional cross-sectional analysis would model risks using the populations and case counts for each census area, a longitudinal analysis would have populations and case counts for each of the 1971, 1981, 1991, and 2001 census tracts.  Disease mapping on large, stable areas avoids these problems, though small-area analysis involves finer-scale census regions, which are more volatile.

This paper introduces a local-EM class of algorithms for spatial data with multiple types of aggregation, {\it motivated by locally constant Poisson regression}. The algorithms extend the methods of \citet{braun2005lld} and their implementation is shown to be related to the EMS algorithm introduced by \citet{silvermanems}.  A local-EM algorithm is used to estimate the risk surface on a tessellation in which each of the cumulative intersections of the various maps are distinct regions. At each iteration the expected number of cases in each of the regions of the fine tessellation are computed and the risk surface re-estimated.

\subsection{Application}

The method is illustrated with an application to mapping the risk of lupus in Toronto, Canada, over the period 1965 to 2007.  These data were collected from the lupus clinic at Toronto Western Hospital.  Lupus may have an environmental risk factor (cite some papers), so it would be expected to have a spatially structured risk pattern.

As the exact locations of the patients can be computed from the full street address, a kernel smoother can be used to evaluate the risk surface.  We treat this estimate as a gold standard against which we evaluate the proposed methodology by aggregating the data to the census dissemination area and census tract level.  Census
dissemination areas (DAs) are the finest level at which population data are released, with each region containing approximately 400 individuals and ??? DAs (perhaps) covering the greater Toronto area.  Census tracts (CTs) are larger, with ??? CTs contained in greater Toronto.

Cancer in Essex County may serve as a (possible) second application where exact locations are unavailable, partly because of confidentiality and partly because rural addresses do not allow for precise geocoding.

\section{Methods}

The development of local likelihood and kernel methods for spatial and spatially aggregated data may be found in the literature, where a sequence of papers by Brillinger (1990, 1991, 1994) and the methods of Silverman, Jones, Nychka and Wilson (1990) and Fan and Stafford (2008) are the most relevant to the efforts presented here. While Brillinger was concerned with spatially smoothing data that had been aggregated over regions of a single map, and Silverman \emph{et al.}\ focused on the reconstruction of a single image, our
methods permit multiple maps with differing tessellations to be
combined.

One view is to consider the methods developed here as an extension
of the EMS algorithm to an epidemiologic setting. However, the extension is motivated formally through the local-EM approach of Brown, Fan and Stafford (2009) rather than in the \emph{ad hoc} manner of Silverman \emph{et al}. This has the advantage of permitting
adjustments that account for spatial variations in population size,
age, sex, time, \emph{et~cetera}, to arise as a natural consequence
of the local likelihood construction. Points to emphasize about the
proposed methodology are that it is computationally undemanding and
able to fully integrate area censoring into the estimation
procedure.

In \S 2.1 we develop a local-EM algorithm that allows multiple disease maps to be combined over time while simultaneously being spatially smoothed. Its relationship to the work of Brillinger and Silverman \emph{et al.}\ is discussed. In \S 2.2 we extend this algorithm to include an offset that permits adjustment for other variables.

\subsection{The model}
Disease incidences, in this case lupus within the metropolitan boundaries of Toronto, are modeled as an inhomogeneous Poisson point process in space and time, where the $i$th occurrence of the disease has location $S_i$ and occurs at time $T_i$. Initially, we assume the
underlying intensity $\rho$ varies spatially but not temporally;
that is, $\rho(s,t)=\lambda(s)$, and interest centers on $\lambda$.
The likelihood function for $\lambda$ is given by
\begin{align}\label{lik}
\mathcal{L}(\lambda) &= \sum_{i} \log \rho(S_{i}, T_{i}) -
\int_{\mathcal{T}} \int_{\mathcal{M}} \rho(u,v) \, \mathrm{d}u \,
\mathrm{d}v \notag \\
&=\sum_i \log \lambda(S_i) - \sum_j |\mathcal{T}_j|
\int_{\mathcal{M}} \lambda(u) \, \mathrm{d}u,
\end{align}
where $j$ indexes the census periods in the study.
For later convenience, we have written the second term as a sum over
successive census periods of durations $|{\cal T}_j|$ and denoted
the entire observation period by $\mathcal{T} = \cup_j \mathcal{T}_j$.
The region of study, namely metropolitan Toronto, is denoted
by $\cal M$, and data are reported as counts $n_{j\ell}$ for regions
$R_{j\ell}$. Since there is only one map at each census period, the same index $j$ is used for both maps and census periods.  Here, let $R_{j\ell}$ denote the $\ell$th census tract of
the $j$th map $M_j$ for the region $\cal M$.

If the boundaries that define the $R_{j\ell}$'s within each map were the same, then it would be straightforward to combine these longitudinal maps. However, these boundaries vary in
time to account, for example, for the growth of a metropolitan area
in the case of the lupus study. As a result, it is difficult to
combine these regional counts over time.  To address this
difficulty, it is useful to construct a partition
$P=\dot{\cup}_\ell J_\ell$ of $\cal M$ by overlaying all available maps
$M_j$.
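The construction of the partition $P$ can be sketched numerically. The grid representation below is a hypothetical stand-in for a polygon overlay (a real application would intersect the map polygons with GIS tools); a partition element $J_\ell$ is identified by the tuple of region labels it receives from all maps.

```python
# Sketch: build the overlay partition P of several maps whose regions
# are represented as labels on a common fine grid (a discrete stand-in
# for polygon intersection).

def overlay_partition(*maps):
    """Each map is a 2-D list of region labels on the same grid.
    A cell of the partition P is identified by the tuple of labels
    it receives from all maps; distinct tuples are distinct J_ell."""
    nrow, ncol = len(maps[0]), len(maps[0][0])
    cells = {}          # label tuple -> list of grid cells
    for i in range(nrow):
        for j in range(ncol):
            key = tuple(m[i][j] for m in maps)
            cells.setdefault(key, []).append((i, j))
    return cells

# Two toy 4x4 maps: horizontal strips versus vertical strips.
map1 = [[r // 2 for _ in range(4)] for r in range(4)]   # regions 0, 1
map2 = [[c // 2 for c in range(4)] for _ in range(4)]   # regions 0, 1
P = overlay_partition(map1, map2)
```

Overlaying the two two-region maps yields four intersection regions, each corresponding to a distinct pair of labels.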

To flexibly estimate the intensity surface, we consider the local
likelihood methods of Hastie and Tibshirani (1988) and Loader
(1999). Here the likelihood function (\ref{lik}) is replaced with its local counterpart
\begin{align}\label{loclik}
\mathcal{L}_s(\lambda) &= \sum_i K_h(S_i-s)\log \lambda(S_i) -
\sum_j |\mathcal{T}_j| \int_{\mathcal{M}} K_h(u-s)\lambda(u) \,
\mathrm{d}u,
\end{align}
and the intensity $\lambda$ is locally approximated by a bivariate polynomial, namely $\mathcal{P}_{\bf a}$, with coefficients given by ${\bf a}^T=\{a_0,{\bf a}_1,\ldots\}$. That is,
\begin{align*}
\log \lambda(u) &=\mathcal{P}_{\bf a}(u-s)=a_0+{\bf a}_1^T({\bf u}-{\bf s})+\cdots
\end{align*}
%Here $\mathcal{P}_{\bf a}$ is a bivariate polynomial with coefficients given by ${\bf a}^T=\{a_0,{\bf a}_1,\ldots\}$ and
%
This leads to a local likelihood function, %(\ref{loclik})
which is a function of $\bf a$ at each point $s$, given by
\begin{align}\label{loclik2}
\mathcal{L}_s({\bf a}) &= \sum_i K_h(S_i-s)\mathcal{P}_{\bf a}(S_i-s) - \sum_j |\mathcal{T}_j| \int_{\mathcal{M}} K_h(u-s)\exp\left\{ \mathcal{P}_{\bf a}(u-s) \right\}\, \mathrm{d}u
\end{align}
where the locations $S_i$ are assumed to be observed exactly. The
estimate $\hat{\bf a}$ may be found by solving a set of score
equations based on ${\cal L}_s({\bf a})$, and an estimate of the
unknown intensity surface at the point $s$ is set to be
$\hat{\lambda}(s)=e^{\hat{a}_0}$.

In the setting under consideration, data are area censored and reported as the number of
occurrences $n_{j\ell}$ for the region $R_{j\ell}$. That is, the
location $S$ of any particular disease occurrence is only known to
be within some region of a map. For this situation, we mimic Fan and
Stafford (2008) and consider an EM-type strategy in which, given a nonhomogeneous Poisson
disease incidence, the E-step results in replacing (\ref{loclik2}) with
\begin{align}\label{loclike}
\mathcal{L}_s({\bf a}) &= \sum_{j\ell} n_{j\ell}E_\lambda \left
[K_h(S-s)\mathcal{P}_{\bf a}(S-s)|S\in R_{j\ell}\right ] \notag \\
&\hspace{.5in}- \sum_j |\mathcal{T}_j| \int_{\mathcal{M}} K_h(u-s)\exp\left\{
\mathcal{P}_{\bf a}(u-s) \right\} \, \mathrm{d}u.
\end{align}
Here expectation is computed with respect to the conditional density
\begin{align*}
\dfrac{\lambda(u)}{\int_{R_{j\ell}} \lambda(t) \,\mathrm{d}t}.
%{\lambda(u)\over{\int_{R_{j\ell}}{{{\lambda}}(t)} \, \mathrm{d}t}}\cdot
\end{align*}
The M-step requires solving equations based on (\ref{loclike}) to
get $\hat{\bf a}^r$ at the $r$th iteration. Combining the E and M
steps of this local-EM algorithm
%Now ${\cal M}=\dot{\cup} J_l$ \& $\lambda(S)=\hat{\Lambda}_{r\ell}/||J_\ell||$ for $t\in
%J_\ell$, where $\hat{\Lambda}_{r\ell}=\int_{J_\ell}
%\hat{\lambda}_r(u)\, \mathrm{d}u$.
leads to the iteration
\begin{eqnarray}\label{localEM}
\hat{\lambda}_{r+1}(s)=\sum_{j\ell} n_{j\ell}
\mbox{E}_{\hat{\lambda}_{r}}\left[\left.
K_h\left({S-s}\right)\right|S\in R_{j\ell}\right]{\Bigg /} \sum_j |\mathcal{T}_j| \int_{\cal
M}\tilde{K}_h(u-s)\, \mathrm{d}u,
\end{eqnarray}
where $\tilde{K}_h(u-s)={K}_h(u-s)\exp\left\{ \mathcal{P}_{\hat{\bf a}^r}(u-s)-\hat{a}_0^r\right\}$.
%where $\Psi_h$ is
%\begin{eqnarray*}
%\Psi_h(\bf a)&=&\int_0^{\infty}K_h(u-t)\exp\left\{\sum_{j=1}^pa_j(u-t)^j \right\}\, \mathrm{d}u
%\end{eqnarray*}
%and ${\bf \hat{a}}_r$ solves a set of local likelihood equations based on (\ref{loclike}).
Note the kernel weight for each observation is $
\mbox{E}_{\hat{\lambda}_{r}}\left[\left.K_h\left({S-s}\right)\right|S\in
R_{j\ell}\right] $ and, in a strategy analogous to Fan and Stafford
(2008), we simplify the computation of the kernel weight by
approximating $\hat{\lambda}_r$ with a piecewise constant function
$\hat{g}_r$, defined as
$\hat{g}_r(s)=\hat{\Lambda}_{r\ell}/||J_\ell||$ for $s\in J_\ell$,
where $\hat{\Lambda}_{r\ell}=\int_{J_\ell}\hat{\lambda}_r(u)\,
\mathrm{d}u$.  Then the conditional expectation in the local-EM algorithm (\ref{localEM}) can be approximated by
\begin{align*}
&\mbox{E}_{\hat{\lambda}_{r}}\left[\left.K_h\left({S-s}\right)\right|S\in R_{j\ell}\right]
\approx \mbox{E}_{\hat{g}_{r}}\left[\left.K_h\left({S-s}\right)\right|S\in R_{j\ell}\right] \\
&\hspace{.5in} =%
\sum_{k} \dfrac{\hat{\Lambda}_{rk} {\cal I}_{j\ell k}\int_{J_k}K_h\left({u-s}\right)\,\mathrm{d}u}
           {||J_{k}||\sum_m \hat{\Lambda}_{rm} {\cal I}_{j\ell m}},
\end{align*}
where ${\cal I}_{j\ell k}=\mathcal{I}(J_k \subseteq R_{j\ell})$ indicates whether the partition element $J_k$ lies within the region $R_{j\ell}$.
At the next iteration we are required to compute the integrals $\hat{\Lambda}_{r+1,s}$ over the partition elements $J_s$, which leads to the simple iteration
\begin{align}
\hat{\Lambda}_{r+1, s} &=
\sum_{kj\ell}  n_{j\ell} \dfrac{\hat{\Lambda}_{rk} {\cal I}_{j\ell k}}{||J_{k}|| \sum_m \hat{\Lambda}_{rm} {\cal I}_{j\ell m}}
\int_{J_s} \dfrac{\int_{J_{k}}K_h\left({u-t}\right)\,\mathrm{d}u}{\sum_p |\mathcal{T}_p| \int_{\cal M} \tilde{K}_h(u-t)\, \mathrm{d}u} \, \mathrm{d}t. \label{EMS}
\end{align}
The last two expressions permit a comparison between the local-EM
algorithm and other methods in the literature. Note that if we only
have a single map ($j=1$), then the $J_{k}$ and $R_{\ell}$ coincide, so that
${\cal I}_{\ell k}=0$ for any $k \neq \ell$. As a result, the kernel weight
simplifies to $$\dfrac{\int_{R_{\ell}}K_h\left({u-t}\right)\,
\mathrm{d}u/|| R_{\ell}||}{|\mathcal{T}| \int_{\mathcal{M}} K_h\left({u-t}\right)\,
\mathrm{d}u},$$ the algorithm (\ref{localEM}) iterates once,
and the local-EM estimator simply becomes the Nadaraya-Watson
estimator advocated by Brillinger (1990, 1991, 1994) in a series of
papers concerning spatial smoothing when data are
aggregated over regions within a map.
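This single-map special case can be sketched numerically. The toy setting below is an assumption for illustration: a one-dimensional study region $M=[0,1]$ with $|\mathcal{T}|=1$, two aggregation regions, a Gaussian kernel, and integrals replaced by Riemann sums.

```python
import math

def K(u, h):
    """Gaussian kernel with bandwidth h."""
    return math.exp(-u * u / (2 * h * h)) / (math.sqrt(2 * math.pi) * h)

def nw_intensity(s, regions, counts, h, T=1.0, grid=400):
    """Single-map special case: Nadaraya-Watson style smoother
    lambda_hat(s) = sum_l n_l (int_{R_l} K_h(u-s) du / |R_l|)
                    / ( |T| int_M K_h(u-s) du ),  M = [0, 1]."""
    step = 1.0 / grid
    us = [(i + 0.5) * step for i in range(grid)]
    denom = T * sum(K(u - s, h) for u in us) * step
    num = 0.0
    for (a, b), n in zip(regions, counts):
        kint = sum(K(u - s, h) for u in us if a <= u < b) * step
        num += n * kint / (b - a)
    return num / denom

regions = [(0.0, 0.5), (0.5, 1.0)]
counts = [10, 90]                      # hypothetical regional counts
est_low = nw_intensity(0.1, regions, counts, h=0.1)
est_high = nw_intensity(0.9, regions, counts, h=0.1)
```

With most cases in the second region, the estimated intensity is higher at $s=0.9$ than at $s=0.1$, as expected of a smoother of the regional counts.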

Also note that the iteration (\ref{EMS}) may be written as
\begin{eqnarray}\label{EMSic}
\hat{\bf \Lambda}_{r+1}={\cal M}(\hat{\bf \Lambda}_{r}) {\cal K}_h.
\end{eqnarray}
Here ${\cal K}_h$ is a smoothing matrix with
entries
$$
    [{\cal K}_h]_{ls}=
    \frac{1}{||J_l||} \int_{J_s}\dfrac{\int_{J_l}K_h\left({u-t}\right)\, \mathrm{d}u}
                      {\int_{\cal M}\tilde{K}_h(u-t)\, \mathrm{d}u} \, \mathrm{d}t,
$$ and ${\cal M}(\hat{\bf \Lambda}_{r})$ is a row vector whose $k$th entry equals
$$
\frac{1}{|\mathcal{T}|} \sum_{j\ell} n_{j\ell} \dfrac{\hat{\Lambda}_{rk} {\cal
I}_{j\ell k}}{\sum_m \hat{\Lambda}_{rm} {\cal I}_{j\ell m}}.
$$
The mapping $\mathcal{M}$ is, in fact, an EM update.  To see
this, assume $|\mathcal{T}_j| = 1$ for all $j$ for simplicity, which
also implies $|\mathcal{T}|=J$. Note that, for any subset $A
\subseteq \mathcal{M}$, $$N(A) \sim \mbox{Poisson}\left( \int_A
\lambda(s) \, \mathrm{d}s \right),$$ and, if $A\cap B = \emptyset$,
then $N(A)$ and $N(B)$ are independent. If counts on the partition $P$
were observed for all $J$ maps, the likelihood function would be given by
$$
\mathcal{L}(\boldsymbol{\Lambda}) = \sum_{jk}  N_{jk} \log \Lambda_{k} - J \sum_{k} \Lambda_{k},
%\sum_{\ell} N_{\ell} \log\left( \int_{J_{\ell}} \lambda(u) \, \mathrm{d}u  \right) - \int_{\mathcal{M}} \lambda(u) \, \mathrm{d}u.
$$ where $N_{jk}$ is the number of incidences on the area $J_k$ of the $j$th map.
With the observed counts $\{n_{j\ell}\}$, let $
Q(\boldsymbol{\Lambda} \mid \boldsymbol{\Lambda}') =
\mathbf{E}_{\boldsymbol{\Lambda}'} \left[
\mathcal{L}(\boldsymbol{\Lambda}) \mid \{n_{j \ell}\} \right]$.
Conditioning on $n_{j\ell}$, the $N_{jk}$ with $J_k \subseteq R_{j\ell}$ have a multinomial distribution
with $n=n_{j\ell}$ and $p_k = \mathcal{I}_{j\ell k}\Lambda_{k}/ \sum_m \mathcal{I}_{j\ell
m} \Lambda_{m}$.  It follows that the E-step of EM gives
\begin{align}\label{e_step}
 Q(\boldsymbol{\Lambda} \mid \hat{\boldsymbol{\Lambda}}_r) &= \sum_{jk} \mathbf{E}_{\hat{\boldsymbol{\Lambda}}_r} \left[ N_{jk} \mid \{n_{j \ell}\} \right] \log\Lambda_{k} - J \sum_{k} \Lambda_{k} \notag \\
 &= \sum_{j\ell k} n_{j\ell} \frac{\mathcal{I}_{j\ell k} \hat{\Lambda}_{rk}}{\sum_m \mathcal{I}_{j\ell m} \hat{\Lambda}_{rm}} \log\Lambda_{k} - J \sum_{k} \Lambda_{k}.
\end{align}
%
Upon differentiation with respect to $\boldsymbol{\Lambda}$, the M-step updates the $r$th estimate of $\boldsymbol{\Lambda}$ with
$$\hat{\Lambda}_{r+1, k} = \frac{1}{J} \sum_{j\ell} n_{j\ell} \frac{\mathcal{I}_{j\ell k} \hat{\Lambda}_{rk}}{\sum_m \mathcal{I}_{j\ell m} \hat{\Lambda}_{rm}},$$ which gives the $k$th entry of $\mathcal{M}(\hat{\boldsymbol{\Lambda}}_r)$.
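The EM mapping $\mathcal{M}$ admits a direct sketch: each observed count $n_{j\ell}$ is redistributed over the partition cells inside $R_{j\ell}$ in proportion to the current $\hat{\Lambda}_r$ (the multinomial E-step), and the redistributed counts are averaged over the $J$ maps (the M-step). The membership structure below is hypothetical, mimicking the overlay of two two-region maps.

```python
def em_mapping(Lam, counts, membership, J):
    """One EM update for area-censored Poisson counts:
    Lambda_{r+1,k} = (1/J) sum_{jl} n_{jl} I_{jlk} Lam_k
                     / sum_m I_{jlm} Lam_m."""
    new = [0.0] * len(Lam)
    for (j, l), n in counts.items():
        cells = membership[(j, l)]            # cells J_k inside R_{jl}
        total = sum(Lam[k] for k in cells)
        for k in cells:                       # multinomial E-step
            new[k] += n * Lam[k] / total
    return [v / J for v in new]               # M-step: average over maps

# Two maps (J = 2) over four partition cells: horizontal strips
# {0,1}, {2,3} on map 0 and vertical strips {0,2}, {1,3} on map 1.
membership = {(0, 0): {0, 1}, (0, 1): {2, 3},
              (1, 0): {0, 2}, (1, 1): {1, 3}}
counts = {(0, 0): 8, (0, 1): 12, (1, 0): 6, (1, 1): 14}
Lam = [1.0, 1.0, 1.0, 1.0]
for _ in range(50):
    Lam = em_mapping(Lam, counts, membership, J=2)
```

Each update preserves the total expected count: $J\sum_k \hat{\Lambda}_{r+1,k} = \sum_{j\ell} n_{j\ell}$, which provides a simple check on the implementation.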

In other words, (\ref{EMS}) may be written explicitly as an EMS
(expectation-maximization-smooth) algorithm of the type advocated by
Silverman \emph{et al.}, although here it is formally motivated by
an EM-type strategy applied to local likelihood. Some detailed
comparison of (\ref{EMS}) to Silverman \emph{et al.}\ provides
further insights. The latter refers to quantities analogous to
$R_{j\ell}$ and $J_k$ as observation and reconstruction bins
respectively. In particular, the context of Silverman \emph{et al.}\
concerns image reconstruction centered on a single image rather than
multiple maps. Nevertheless, what is proposed in this paper could
well be thought of as an extension of the image reconstruction
techniques of Silverman \emph{et al.}\ to an epidemiological setting.
Furthermore, the expression (2.2) of Silverman \emph{et al.}\ and
${\cal M}(\hat{\bf \Lambda}_{r})$ are related where, for example,
their weights $p_{st}$ simplify to our indicator variables ${\cal
I}_{j\ell k}$ because we assume the locations $S_i$ have been
measured without error. This observation provides an avenue for
extending the local-EM toolbox to settings where data are
mismeasured, but this is beyond the scope of this paper.

\subsection{Offsets}

In this section, we no longer assume $\rho(s,t)=\lambda(s)$ but
model the time trend multiplicatively. In addition, we account for
spatial and temporal variations in the underlying populations and
the average incidence rate of the disease within sex and age
groupings. As a result, we model the intensity surface as
$$ \rho_k(x,t) = \lambda(x) {\cal O}_k(x,t), $$
where ${\cal O}_k(x,t)= \beta(t) \theta_k P_k(x,t)$. In what follows
${\cal O}_k(x,t)$ is treated as known and piecewise constant. That
is, the intensity of this process is assumed to depend on the
following quantities: the population intensity $P_k(x,t)$; the
spatially varying relative risk $\lambda(x)$; the average incidence
rate $\theta_k$; and a time trend $\beta(t)$.

To allow variations in disease rates by age and gender, consider
$P_k(x,t)$ to be the population intensity of the $k$th
age-and-gender group and $\theta_k$ to be the corresponding disease rate.  The number of lupus incidences has a Poisson distribution, and effects are multiplicative.  That is, counts, now denoted
by $n_{kj\ell}$, are reported by region for age and sex groups and we have
$$
N_{kj\ell} \sim \text{Poisson}\left[\rho_{kj\ell} = \int_{\mathcal{T}_j}
\int_{R_{j\ell}} \lambda(x) {\cal O}_k(x,t) \, \mathrm{d}x \, \mathrm{d}t
\right].
$$

Denote the indicator function $\mathcal{I}(t \in \mathcal{T}_j)$ by $\mathcal{I}_j$ and the indicator function $\mathcal{I}(t \in
\mathcal{T}_j \text{ and } x \in R_{j\ell})$ by $\mathcal{I}_{j\ell}$. We model discrete time effects instead of considering continuous time.  Furthermore,
assume that $P_k(x, t)$ is a piecewise constant function over each census tract during each census period.  Then we have
\[\beta(t) = \sum_j \mathcal{I}_j \beta_j \quad
\text{and} \quad P_k(x,t) = \sum_{j\ell} \mathcal{I}_{j\ell} P_{kj\ell}/|\!| R_{j\ell} |\!|,\] where $\beta_j$ is the $j$th period effect, and $P_{kj\ell}$ and $|\!| R_{j\ell} |\!|$ are the total population size and the area of the $\ell$th census tract at the $j$th census period. As a result, we will also have
\[\quad {\cal O}_k(x,t) = \sum_{j\ell} \mathcal{I}_{j\ell} {\cal O}_{kj\ell}~~ \text{where} ~~{\cal O}_{kj\ell} = \theta_k \beta_j
P_{kj\ell}/ |\!|R_{j\ell}|\!|\] is the offset for the $k$th age-sex
group at the region $R_{j\ell}$.
The intensity $\rho_{kj\ell}$ can then be simplified to
\begin{align}
\rho_{kj\ell} & = \int_{\mathcal{T}_j} \int_{R_{j\ell}} \lambda(x) {\cal O}_k(x,t) \, \mathrm{d}x \, \mathrm{d}t
 = \left( \theta_k \int_{\mathcal{T}_j} \beta_j  \,\mathrm{d}t \right) \int_{R_{j\ell}} \lambda(x) P_{kj\ell} |\!|R_{j\ell}|\!|^{-1} \,\mathrm{d}x \notag \\
&= |\!|R_{j\ell}|\!|^{-1} \beta_j \theta_k |\mathcal{T}_j|
P_{kj\ell} \int_{R_{j\ell}} \lambda(x) \, \mathrm{d}x \notag \\
&=|\mathcal{T}_j| {\cal O}_{kj\ell} \int_{R_{j\ell}} \lambda(x) \, \mathrm{d}x\notag \\
&=
\tilde{\cal O}_{kj\ell} \int_{R_{j\ell}} \lambda(x) \, \mathrm{d}x, \label{e:
suff_stat}
\end{align}
where $\tilde{\cal O}_{kj\ell} = |\mathcal{T}_j| {\cal O}_{kj\ell}$.
Remarks:
\begin{enumerate}
 \item Censuses provide us with $$P_{kj\ell} = \int_{R_{j\ell}} P_{k}(x,t) \, \mathrm{d}x.$$
 \item The equation (\ref{e: suff_stat}) implies that $\sum_{k\ell} N_{kj\ell}$, $\sum_{j\ell} N_{kj\ell}$, and $\sum_{k} N_{kj\ell}$ are sufficient statistics for $\beta_j$, $\theta_k$, and $\int_{R_{j\ell}} \lambda(x) \, \mathrm{d}x$, respectively.
\end{enumerate}

Should the risk be spatially constant, i.e.\ no spatial variation in disease rates, we can further
simplify (\ref{e: suff_stat}) to $\beta_j \theta_k
|\mathcal{T}_j| P_{kj\ell}$, and thus estimate $\theta_{k}$ and
$\beta_{j}$ using a generalized linear model (\textsc{glm}) with an
offset equal to the person-years $| \mathcal{T}_j| P_{kj\ell}$.
Next, we treat the \textsc{glm} estimates $\hat{\theta}_k$ and $\hat{\beta}_j$ as known quantities.
To account for age and sex groupings, we denote the location and time of a disease occurrence by the pair $\{S_{ik},T_{ik}\}$. Given this, the likelihood function can now be written as
\begin{align}
\mathcal{L}(\lambda) &= \sum_{k} \left(\sum_{i} \log
\rho_k(S_{ik}, T_{ik}) - \int_{\mathcal{T}} \int_{\mathcal{M}} \rho_k(u, v) \,
\mathrm{d}u \, \mathrm{d}v \right) \notag\\
& =\sum_{k} \left(\sum_{i} \log
\lambda(S_{ik}) - \int_{\mathcal{T}} \int_{\mathcal{M}} \lambda(u){\cal O}_k(u, v) \,
\mathrm{d}u \, \mathrm{d}v \right) \notag\\
& =\sum_{ik} \log
\lambda(S_{ik}) - \sum_{kj\ell}|\mathcal{T}_j|{\cal O}_{kj\ell} \int_{R_{j\ell}} \lambda(u)\,
\mathrm{d}u  \notag\\
& =\sum_{ik} \log
\lambda(S_{ik}) - \sum_{\ell}\tilde{\cal O}_{\ell} \int_{J_{\ell}} \lambda(u)\,
\mathrm{d}u  \label{e:llk_1}
\end{align}
where $\tilde{\cal O}_{\ell} =\sum_{kjm} \mathcal{I}(J_{\ell} \subseteq R_{jm}) \tilde{\cal O}_{kjm}$.
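The \textsc{glm} step for the spatially constant case can be sketched as a coordinate-ascent Poisson fit: with $N_{kj} \sim \text{Poisson}(\theta_k \beta_j E_{kj})$, where $E_{kj}$ denotes person-years summed over regions, each rate update has a closed form. The toy data below are hypothetical, and the ascent is equivalent to fitting a Poisson \textsc{glm} with log link and offset $\log E_{kj}$.

```python
def fit_rates(n, E, iters=100):
    """Poisson MLE for the no-spatial-variation model
    N_{kj} ~ Poisson(theta_k * beta_j * E_{kj}) by coordinate ascent;
    the scale is fixed by setting the first period effect beta[0] = 1."""
    K, J = len(n), len(n[0])
    theta, beta = [1.0] * K, [1.0] * J
    for _ in range(iters):
        for k in range(K):   # closed-form update for each group rate
            theta[k] = sum(n[k]) / sum(beta[j] * E[k][j] for j in range(J))
        for j in range(J):   # closed-form update for each period effect
            beta[j] = (sum(n[k][j] for k in range(K))
                       / sum(theta[k] * E[k][j] for k in range(K)))
        s = beta[0]          # identifiability: rescale so beta[0] = 1
        beta = [b / s for b in beta]
        theta = [t * s for t in theta]
    return theta, beta

# Toy data: 2 age-sex groups (rows) by 2 census periods (columns).
n = [[10, 30], [20, 60]]          # observed counts
E = [[100, 150], [200, 300]]      # person-years |T_j| * P_{kj}
theta, beta = fit_rates(n, E)
```

These counts are exactly multiplicative (rates 0.1 in period one and 0.2 in period two for both groups), so the fit recovers $\theta = (0.1, 0.1)$ and $\beta = (1, 2)$.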

The remaining details are now analogous to \S 2.1 where we initially consider modelling $\lambda$ flexibly so that (\ref{e:llk_1}) becomes
\begin{align*}
\mathcal{L}_s({\bf a}) &= \sum_{ik} K_h(S_{ik}-s)\mathcal{P}_{\bf a}(S_{ik}-s) -   \sum_{\ell}\tilde{\cal O}_{\ell} \int_{J_{\ell}}K_h(u-s)\exp\left\{ \mathcal{P}_{\bf a}(u-s) \right\}\, \mathrm{d}u,
\end{align*}
which, given the $S_{ik}$ are area censored, is replaced by
%
\begin{align}
\mathcal{L}_s({\bf a}) & = \sum_{kj\ell} n_{kj\ell} \mathbf{E}_{\lambda} \left[ K_{h}(S - s) {\cal P}_{\bf a}(S-s) \,\Big|\, S \in R_{j\ell}\right] -\sum_{\ell}\tilde{\cal O}_{\ell} \int_{J_{\ell}}K_h(u-s)\exp\left\{ \mathcal{P}_{\bf a}(u-s) \right\}\, \mathrm{d}u \notag \\
%
& = \sum_{j\ell} n_{j\ell} \mathbf{E}_{\lambda} \left[ K_{h}(S - s) {\cal P}_{\bf a}(S-s) \,\Big|\, S \in R_{j\ell}\right] -\sum_{\ell}\tilde{\cal O}_{\ell} \int_{J_{\ell}}K_h(u-s)\exp\left\{ \mathcal{P}_{\bf a}(u-s) \right\}\, \mathrm{d}u
\end{align}
where $n_{j\ell} = \sum_k n_{kj\ell}$.  This ultimately leads to the following local-EM algorithm for estimating ${\Lambda}_\ell$:
$$
\hat\Lambda_\ell^{r+1}=\sum_p \int_{J_\ell} \dfrac{\tilde{\cal O}_p \int_{J_p} K_{h}(x - s) \, \mathrm{d}x/ |\!|J_p|\!|}{\sum_n \tilde{\cal O}_n \int_{J_n} \tilde{K}_h(x - s) \, \mathrm{d}x} \, \mathrm{d}s \times %
\left(\dfrac{1}{\tilde{\cal O}_p} \sum_{jm} n_{jm} \frac{\mathcal{I}(J_p
\subseteq R_{jm}) \hat{\Lambda}_{p}^{r}}{\sum_q \mathcal{I}(J_q
\subseteq R_{jm}) \hat{\Lambda}_{q}^{r}}\right)
$$
which may again be written explicitly as an EMS algorithm
\begin{equation}
\hat{\bf \Lambda}_{r+1}={\cal M}(\hat{\bf \Lambda}_r){\cal K}_h. 
\label{e:ems}
\end{equation}

Here multiplication and division by the quantity $\tilde{\cal O}_p$ ensures that ${\cal M}(\hat{\bf \Lambda}_r)$ is a step in an EM algorithm and also that ${\cal K}_h$ has row sums equal to one, a property that matters for the convergence of the iteration.
Finally, upon convergence, an EMS estimate for the relative risk $\lambda(x)$
is
\begin{equation}
\hat\lambda^{\ast}(x) = \sum_p \dfrac{\tilde{\cal O}_p \int_{J_p} K_{h}(x - s) \,
\mathrm{d}s/ \area{J_p}}{\sum_n \tilde{\cal O}_n \int_{J_n} \tilde{K}_h(x - s) \,
\mathrm{d}s} \times %
\left( \dfrac{1}{\tilde{\cal O}_p} \sum_{j\ell} \frac{\mathcal{I}(J_p \subseteq
R_{j\ell}) n_{j\ell} \hat{\Lambda}_{p}^{\ast}}{\sum_q
\mathcal{I}(J_q \subseteq R_{j\ell}) \hat{\Lambda}_{q}^{\ast}}
\right)
\end{equation}

Note that the matrix expression (\ref{e:ems}) is recognized as an
EMS algorithm, in which $\mathcal{M}$ is an EM
mapping and ${\cal K}_h$ represents an extra smoothing step.




\subsection{Simulation Study}

A simulation study of 100 samples is conducted to assess the performance of the proposed local-EM relative risk estimator in a simplified scenario that assumes only one age-and-sex group and no temporal variation in the spatial risk surface. Each simulated sample consists of two maps that represent two observations of the same geographical area at different time points. The outer boundaries of the two maps are identical, but their subregions have different shapes.  More specifically, there are five vertically stacked horizontal rectangles in the first map and five horizontally juxtaposed vertical rectangles in the second map.  All subregions have the same area of 5 square units. Overlaying these two maps forms 25 unit-square pixels over which kernel weights are computed.  A Gaussian kernel and a sequence of 201 equally spaced bandwidths, ranging from 0 to 2, are chosen to construct the kernel weight matrix shown in (\ref{e:ems}).

Once the boundaries of the subregions are defined, spatial Poisson point process data are simulated in the following steps.  First, regional populations are simulated using a Poisson process whose intensity function is uniform over each subregion.  The intensities for the five subregions in each map are set to be 18, 28, 38, 28, and 18 per square unit.  Next, we set the true relative risk $\rho(x)$ to be the product of two rescaled gamma density functions with shape and scale parameters of 1.5 and 0.5, respectively.  This relative risk attains its maximum of 1 at $(0.25, 0.25)$.  Then, a case label is randomly generated with probability $\rho(x_i)$ for each simulated location $x_i$.  Finally, the population and case data are both aggregated over the subregions where they are located and reported in the form of regional counts.
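The simulation steps above can be sketched as follows: homogeneous Poisson population points (shown here for a single subregion, an assumption for brevity) thinned into cases with retention probability $\rho(x)$; the gamma mode $(\text{shape}-1)\times\text{scale}=0.25$ places the risk maximum at $(0.25, 0.25)$.

```python
import math, random

def rho(x, y):
    """True relative risk: product of two gamma(shape=1.5, scale=0.5)
    densities, rescaled to attain a maximum of 1 at (0.25, 0.25)."""
    def g(u):
        return u ** 0.5 * math.exp(-u / 0.5)   # unnormalized gamma density
    peak = g(0.25)
    return (g(x) / peak) * (g(y) / peak)

def simulate(pop_intensity, width=5.0, height=5.0, seed=0):
    """Population: homogeneous Poisson points on one rectangular
    subregion; cases: thinning with retention probability rho(x, y)."""
    rng = random.Random(seed)
    # Poisson count via unit-rate exponential gaps (stdlib only)
    n, t, mean = 0, rng.expovariate(1.0), pop_intensity * width * height
    while t < mean:
        n += 1
        t += rng.expovariate(1.0)
    pop = [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]
    cases = [p for p in pop if rng.random() < rho(*p)]
    return pop, cases

pop, cases = simulate(18.0)   # intensity 18 per square unit, as in the study
```

Aggregating `pop` and `cases` over the subregions of each map would then give the regional counts supplied to the estimators.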

For each simulated sample and each bandwidth, we compute three relative risk estimates: the kernel estimate, the proposed local likelihood estimate, and a smooth nonparametric estimate.  We calculate the kernel estimate from data in which case locations are completely observed but populations are aggregated. When data are area-censored, we compute the proposed local likelihood estimate and a smooth nonparametric estimate (SNE).  The SNE is obtained by smoothing an NPMLE estimated using an EM algorithm. Since the NPMLE is only unique up to the equivalence class defined by $\mathcal{J}$, we smooth the NPMLE by placing masses of the estimated pixel risks at the centre of each pixel. Finally, we assess how the three estimators perform on the basis of the mean integrated squared error (MISE),
$$
\mbox{MISE} \equiv \mathbf{E} \left[\int \left(\hat{\rho}(u) - \rho(u) \right)^2 \dee{u} \right].
$$
Here, we estimate the MISE by averaging the integrated squared errors of the simulated samples for each bandwidth value.  The result is plotted in Figure \ref{f:sim_mise_spatial}.

As shown in Figure \ref{f:sim_mise_spatial}, the MISE of the proposed local likelihood estimator is consistently lower than that of the SNE.  Compared with the bandwidth at which the SNE attains its lowest MISE, the local likelihood estimator attains its lowest MISE at a smaller bandwidth of 0.19.  The kernel estimator has the lowest MISE ($1.8\times 10^{-3}$) among all three because it is based on the complete data.  It is not all that surprising that the bandwidth at which the kernel estimator attains its lowest MISE is greater than that of the local likelihood estimator, since aggregation, or area-censoring, can itself be regarded as a smoothing operation. Hence, the bandwidth required to achieve the lowest MISE for aggregated data is smaller than that for the complete data.

\begin{figure} \centering
\includegraphics[scale=.5, angle=0]{mise_simulation_v2_gaussian}
\caption{The proposed local likelihood risk estimate achieves the lowest overall MISE at a small bandwidth of 0.19, compared with the smooth nonparametric risk estimate obtained by smoothing the NPMLE at the centre of each pixel.}
\label{f:sim_mise_spatial}
\end{figure}



\section{Application}


\subsection{Cross Validation}

Leave-one-map-out cross validation is used to choose an optimal bandwidth.  Since excluding a map alters the offsets $O_{n} = \sum_{j\ell} \mathcal{I}(J_{n} \subseteq R_{j\ell}) O_{j\ell}$ in the $(p, m)$th entry of $\mathcal{K}_h$, the cross validation requires re-calculating the smoothing matrix each time a map is left out, which leads to four smoothing matrices of 206,957 by 206,957 for each bandwidth value. To make this computationally intensive task feasible, we limit the construction of the smoothing matrix to 14 equally spaced bandwidth values, ranging from 150 to 2100.  Moreover, the geographical areas that the four maps represent differ because the Greater Toronto Area has been expanding during this 40-year observation period.  We therefore restrict the estimation of the prediction error, the squared difference between observed and predicted values, to the census tracts that are common to all four maps; there are in total 2,689 common census tracts.  Specifically, $$ \mbox{PE}(h) = \frac{1}{4} \sum_{j\ell'} \left( N_{j\ell'} - O_{j\ell'}\hat{\Lambda}_{j\ell'}^{(-j)}(h) \right)^2,$$ where $\ell'$ indexes the common census tracts. A bandwidth of 1,350 is selected based on the cross validation, as shown in Table \ref{t: cross_val}.

\begin{table}
\centering
\begin{tabular}{ |l|*{7}{c|} }
\hline \multicolumn{8}{|c|}{Leave-One-Out Cross Validation} \\
\hline Selected Bandwidth Value  & 150   & 300   & 450   & 600   & 750 & 900 & 1050 \\
\hline Prediction Errors    & 30246 & 20622 & 17018 & 15205 & 14103 & 13312 & 12609 \\
\hline \hline
\hline Selected Bandwidth Value  & 1200 & 1350 & 1500 & 1650 & 1800 & 1950 & 2100 \\
\hline Prediction Errors    & 11900 & 6841 & 7935 & 7306 & 9010 & 8369 & 7779 \\
\hline
\end{tabular}
\caption{The bandwidth that gives the smallest prediction error is chosen as optimal. As shown above, the prediction error is smallest at the bandwidth of 1350.}
\label{t: cross_val}
\end{table}



\section{Discussion}

\section{Penalized Likelihood: Example}

Bivariate interval-censored data arise when times of two events of interest are both interval censored; for
example, events of interest could be HIV infection and the
manifestation of acquired immune deficiency syndrome.  Here, we are
interested in estimating the joint distribution of event times.

We give an example of hypothetical bivariate interval-censored data
to better illustrate the effect of this penalty functional on the
estimation procedure. This dataset consists of eights observations, represented by four
horizontal and four vertical rectangles.  Overlaying these
observations forms a partition of 81 unit squares.  Among these 81 unit squares, there are 16 maximal intersections, which is analogous to innermost intervals in the univariate case. These maximal intersection is shaded in grey in the figure.  Similarly, an density estimate that places positive weights on partition elements other than these maximal intersections cannot be a NPMLE, and there are multiple NPMLE's. For example, a uniform weight of 1/16 on all maximal intersections, a weight of 1/4 on the positive diagonal maximal intersections, and a weight of 1/4 on the negative diagonal maximal intersections all maximize the likelihood.  Consequently, the EM iteration will converges to one of the solutions depending on the initial value. However, the empirical evidence suggests otherwise for the EMS algorithm.  

When a radially symmetric kernel is used, the EMS iteration always converges to the solution with the uniform weight of 1/16 on all maximal intersections, regardless of the initial value.  However, if a kernel with bandwidths of 1 and 0.25 in the x- and y-directions is rotated by 45 degrees, the EMS iteration converges to the solution that places weights of 1/4 on the positive diagonal.  Likewise, the iteration converges to the solution that places the weights on the negative diagonal if the elliptical kernel is rotated by $-45$ degrees.  This phenomenon can be explained by the penalty induced by the kernel.  When the kernel is radially symmetric, any deviations from the maximal eigenfunction are equally penalized.  However, as the kernel becomes more elliptical, deviations in the direction of the major axis of the elliptical contour are penalized less than those in the direction of the minor axis.



\section{Computational Details}
With the data-driven partition $\mathcal{J}$, the $(\ell, s)$th entry of the smoothing matrix takes the following form:
$$
\left[\mathcal{K}_h \right]_{\ell s} = \frac{\tilde{\off}_\ell}{\area{J_\ell}}\int_{J_s}\frac{\int_{J_\ell}K_h(u-x) \dee{u}}{\varPsi_h(x; \hat{\bf a}^{r+1})}\dee{x}.
$$
However, computing these quadruple integrals is difficult when the partition $\mathcal{J}$ consists of regions of irregular shapes and sizes.  To proceed with the computation, we first tessellate all four maps; the tessellation, denoted by $\mathcal{C} = \{C_1, \ldots, C_K \}$, has ??? 100-by-100 square pixels\footnote{The unit is in meters.}.  Such a tessellation makes it an intimidating computational task to numerically evaluate ?? billion quadruple integrals!

We take the following two steps to complete this taxing task in a timely fashion.  First, we choose a radially symmetric kernel with \emph{compact} support.  With a relatively small bandwidth, this kernel yields a sparse smoothing matrix in which most entries equal zero. We can then apply specialized procedures for sparse matrices, for example \textsf{R}'s SparseM package~\citep{R:sparsem}, to perform the necessary algebraic operations in an \textsc{ems} iteration with minimal memory consumption.

Next, we have to numerically evaluate the quadruple integrals in $\mathcal{K}_h$.  A brute-force approach is to directly apply a quadrature rule, say Simpson's rule, to the four-dimensional function
$$
\area{C_p}^{-1} \dfrac{\tilde{\off}_p K_{h}(x - s)}{\sum_n \tilde{\off}_n \int_{C_n} K_h(u - s) \dee{u}}.
$$
Due to the large number of pixels, the brute-force approach is infeasible. Instead, we choose a kernel simple enough that the inner double integral has an analytical solution. Once the analytical form of the inner double integral is obtained, we apply a Gaussian quadrature rule to approximate the outer double integral, which significantly shortens the computation time. For this disease mapping problem, we choose the bivariate biweight kernel, given by
$$
K(u, v) = \begin{cases}
         \dfrac{3}{\pi} \left(1-(u^2 + v^2)\right)^2 & \mbox{where $u^2 + v^2 \le 1 $}\\
        0   & \mbox{otherwise.}
        \end{cases}
$$
For a fixed point $s$, we apply Green's Theorem to simplify the inner double integral~$\int_{C_p} K_h(x-s)\dee{x}$ to a line integral that has an analytical solution. Let $u=x_1 - s_1$ and $v=x_2 - s_2$, and, without loss of generality, set $h=1$ so that the support of the kernel is the unit circle.  Denote the intersection of the unit circle and the pixel $C_p$ by $D$ and the boundary of $D$, in a counter-clockwise orientation, by $\partial D$. Then the analytical solution to the inner double integral is given by
\begin{align}
\int_{D} K(u, v) \dee{u} \dee{v} %= \frac{3}{\pi} \int_{D} \left(1-(u^2 + v^2)\right)^2 \dee{u} \dee{v} \notag \\
 &= \frac{3}{\pi} \int_{\partial D} f(u, v) \dee{u} + g(u, v) \dee{v}, \label{e:line_int}
\end{align}
where $$f(u, v) = -(v^5/5 - 2v^3/3 + u^2v^3/3+v/2)$$ and $$g(u, v) = u^5/5 -2u^3/3 +u^3v^2/3 + u/2.$$  On an arc of the unit circle, the integrand of Equation~\ref{e:line_int} can be expressed in polar coordinates ($r=1$), giving
\begin{align}
 \frac{3}{\pi} \int_{\partial D} (5 - 4\cos 4\theta)/30 \dee{\theta}.
\end{align}
%
When $\partial D = \bigcup_i \partial D_i$, where $\partial D_i$ is either a segment of the pixel boundary or an arc of the unit circle,
$\int_{\partial D} f(u, v) \dee{u} + g(u, v) \dee{v} = \sum_i \int_{\partial D_i} f(u, v) \dee{u} + g(u, v) \dee{v}$ and
$$
\frac{3}{\pi} \int_{\partial D_i} f(u, v) \dee{u} + g(u, v) \dee{v} =
    \begin{cases}
    \frac{3}{\pi} \int_{u_R}^{u_L} f(u, v) \dee{u} & \mbox{if $\partial D_i \equiv (u_R, v) \rightarrow (u_L, v)$} \\
    \frac{3}{\pi} \int_{v_U}^{v_L} g(u, v)  \dee{v} & \mbox{if $\partial D_i \equiv (u, v_U) \rightarrow (u, v_L)$} \\
    \frac{1}{10\pi} \int_{\theta_L}^{\theta_U} 5 - 4\cos 4\theta \, \mathrm{d}\theta  & \mbox{if $\partial D_i \equiv \theta_L \rightarrow \theta_U$.}
    \end{cases}
$$
\begin{figure} \centering
\includegraphics[scale=.5, clip=true, trim=1.75in 1.35in 2.25in 1.5in]{kernel_cal_example.pdf}
\caption{The region $D$ is enclosed by three solid lines.  Let $\partial D_1$, $\partial D_2$ and $\partial D_3$ be the horizontal, the arc and the vertical solid lines, respectively.}
\label{f:kernel_cal_example}
\end{figure}
%
For example, we can use Equation~\ref{e:line_int} to calculate the volume over the region $D$ enclosed by the three solid lines in Figure~\ref{f:kernel_cal_example}. Let $\partial D_1$, $\partial D_2$ and $\partial D_3$ be the horizontal solid line, the solid arc and the vertical solid line, respectively, and assume a counter-clockwise orientation.
\begin{align*}
 & \int_{D} K(u, v) \dee{u} \dee{v} = \frac{3}{\pi} \int_{\partial D} f(u, v) \dee{u} + g(u, v) \dee{v} \\
& = \frac{3}{\pi} \int_{.5}^{-\sqrt{.75}} f(u, v) \dee{u} + \frac{1}{10\pi} \int_{\pi -\sin^{-1}(.5)}^{2\pi -\cos^{-1}(.5)} 5 - 4\cos 4\theta \dee{\theta} + \frac{3}{\pi} \int_{-\sqrt{.75}}^{.5} g(u, v)  \dee{v}.
\end{align*}

Once the inner double integral can be analytically evaluated for any fixed point $s$, the outer double integral is numerically evaluated using a Gaussian quadrature rule \citep[see][]{Abramowitz:1965handbook}. Here, we use the 25-point Gaussian quadrature rule (5 quadrature nodes in each dimension). Let $\alpha_i$ and $\omega_i$ denote the $i$th quadrature node in the square $[-1, 1] \times [-1, 1]$ and its corresponding quadrature weight, respectively, and let $s_{mi}$ be the $i$th quadrature point in $C_m$. Mapping the square $[-1, 1] \times [-1, 1]$ onto $C_m$ leads to the following linear transformation:
$$s_{mi} = \begin{bmatrix}
            \frac{d_m}{2} & 0 \\
        0 & \frac{d_m}{2}
           \end{bmatrix}
 \alpha_i + c_m,$$
where $c_m$ and $d_m$ are the centroid and edge length of $C_m$, respectively.
Since the Jacobian matrix of this transformation has determinant $\area{C_m}/4$, it follows that
$$
\area{C_m}^{-1} \int_{C_m} \dfrac{\tilde{\off}_p \int_{C_p} K_{h}(x - s) \dee{x}}{\sum_n \tilde{\off}_n \int_{C_n} K_h(x - s) \dee{x}} \dee{s} \approxeq %
\frac{1}{4} \sum_i \omega_i \dfrac{\tilde{\off}_p \int_{C_p} K_{h}(x - s_{mi}) \dee{x}}{\sum_n \tilde{\off}_n \int_{C_n} K_h(x - s_{mi}) \dee{x}}.
$$
Since the construction of the smoothing matrix is computationally intensive, we implement it in the programming language \textsf{C}.


\section{Concluding Remark}
A careful comparison of our local \textsc{em} to the \textsc{ems} algorithm of \citet{silvermanems} permits further insights beyond what has already been discussed. In \citet{silvermanems}, the authors' primary concern was image reconstruction involving a single image with unit offsets, and it was not clear how one could incorporate covariates into their methodological framework.  The local likelihood provides a natural framework to extend the applicability of the \textsc{ems} algorithm by allowing multi-scaled and misaligned maps or images and by permitting the analysis of covariate effects.  As a result, the disease mapping problem that we consider in this paper can be thought of as an extension of the image reconstruction technique of \citet{silvermanems} to an epidemiological setting. Furthermore, although the authors did not advocate any particular type of smoothing matrix in their 1990 paper, the smoothing matrix of integrated kernels arises naturally from the local likelihood setting.

\citet{silvermanems} suggested the \textsc{ems} algorithm as a method for indirectly observed data, in the sense that the data are contaminated according to the following integral equation:
\begin{equation}
 h(x) = \int K(x, y) f(y) \dee{y},
\end{equation}
where $K(x, y)$ is a known conditional density of $X$ given $y$ and $f$ is the parameter of interest. The authors' attempt to solve the integral equation by discretizing $f$ led to the \textsc{ems} algorithm.
Their version of the algorithm is related to ours; the two differ in the specification of the conditional density $K(x, y)$.

\citet{silvermanems} referred to quantities analogous to the $R_{ij}$'s and $J_\ell$'s as observation and reconstruction bins, respectively, and defined the weights $$w_{ijk} = \int_{R_{ij}} \int_{J_k} K(x, y) \dee{x} \dee{y}.$$ Here, we interpret this quantity as the overall influence that an event in $J_k$ exerts on the observation in $R_{ij}$. If we have expert knowledge, such as that an observed event is more likely to originate in one specific region than in others, we can incorporate it by exploiting the role that these weights play.
For example, consider the two pairs of observed counts in Figure \ref{f:concluding_example1}.  The \textsc{em} algorithm always results in an \textsc{npmle} that favours the distribution of unobserved counts shown in Figure \ref{f:concluding_example2}.  However, it may be that the unobserved counts are distributed as shown in Figure \ref{f:concluding_example3}, a solution that is not attainable without further information.  If we know the conditional density \emph{a priori}, for example that disease incidences observed on the left side of the first map are much more likely to come from the bottom area than the upper area, we may want to incorporate that information into the formulation of the \textsc{em} step so as to obtain a more sensible estimate. Note that the $w_{ijk}$'s here correspond to the $\mathcal{I}_{ijk}$'s, implying that $K(x, y)$ is a Dirac delta function in our setting. In other words, the interval- and area-censored data that we have considered so far are, in fact, directly observed.

\begin{figure} \centering
 \subfigure[Two pairs of aggregated counts of (10, 20) and (20, 10).]{\includegraphics[scale=.5, clip=true, trim=1.75in 4in 1.75in 1.25in]{conclusion_example.pdf} \label{f:concluding_example1}}\\
\subfigure[One possible distribution of unobserved quantities. An \textsc{npmle} will always favour this distribution using the \textsc{ems} algorithm presented here.]{\includegraphics[scale=.5, clip=true, trim=4in .5in 4in 4in]{conclusion_example2.pdf} \label{f:concluding_example2}}
\hspace{.5in}
\subfigure[Another possible distribution of unobserved quantities.  This distribution does not maximize the nonparametric likelihood.]{\includegraphics[scale=.5, clip=true, trim=4in .5in 4in 4in]{conclusion_example3.pdf} \label{f:concluding_example3}}
\caption{An \textsc{npmle} always favours the distribution in Figure~\ref{f:concluding_example2}; however, it may be the case that the unobserved quantities are really distributed in the manner of the map shown in Figure~\ref{f:concluding_example3}.}
\end{figure}

The above observation provides insight into extending local \textsc{em} techniques to indirectly observed data.  We are particularly interested in the analysis of \emph{mismeasured data}, in which $K(x, y)$ may be a known conditional density of $X$ or a more general function that weighs $x$ relative to $y$.
For instance, an \textsc{hiv} patient is often subject to periodic medical testing to determine the progression of \textsc{aids}, and the sequence of test results is recorded.  Under the setting considered in Chapter \ref{chapter:intensity}, a negative test followed by a positive test indicates that the patient developed \textsc{aids} between the two test dates.  However, that is only the case if the medical test is perfect.  In reality, each test has imperfect sensitivity and specificity, which cause the results to be mismeasured.  Not only is the time of full \textsc{aids} development interval-censored, but it is also indirectly observed.  In this setting, $w_{ijk}$ is determined by the sensitivity and specificity of the test and is interpreted as the probability of the observed results conditional on the time interval in which \textsc{aids} fully developed.

In the spatial context, residential history data involving disease incidence provide an example of mismeasured data.
Suppose that the primary cause of a certain type of disease is the cumulative exposure to risk associated with the physical environment.
Since a subject moves from one area to another over time, his or her cumulative risk exposure varies, depending on when and where he or she resides.
When it is difficult to directly measure each subject's space- and time-varying risk exposure, we can use residential history as a proxy. For example, if a subject is diagnosed with the disease while living at location $x$, it is not appropriate to attribute the detrimental event \emph{solely} to the current residence; it is probably more sensible to attribute the event to the locations where the subject had lived. In this case, we may think of movement in space and time as a major cause of mismeasured risk exposure.
Moreover, $K(x, y)$ in this situation may be considered a more general weight function that determines how relevant a subject's past risk exposure while living at $y$ is to an incidence while the subject is living at $x$.  The functional form of $K(x, y)$ may be based on a model or specified by expert knowledge of the nature of the disease.
Even if the past and present residences are area-censored, we are still able to specify the relevance of past risk exposure to the disease incidence through the $w_{ijk}$'s.
In either example, the \textsc{ems} algorithm of \citet{silvermanems} offers an avenue for incorporating such information into the local \textsc{em} setting.

Finally, the local likelihood can be seen as a semi-parametric method, providing a compromise between the power and theoretical rigour of parametric methods and the flexibility of kernel-based methods.  Local \textsc{em} provides a way to apply the local likelihood when data are coarsened by interval- or area-censoring.  By connecting local \textsc{em} with the \textsc{ems} algorithm, we hope that the computational advantages offered by the \textsc{ems} will lead to greater adoption of local \textsc{em} methods.



\bibliography{spatial}
\end{document}
