|
arXiv:1001.0044v3 [math.PR] 31 Mar 2011

A law of large numbers approximation for Markov population processes with countably many types

A. D. Barbour* and M. J. Luczak†
Universität Zürich and London School of Economics

Abstract
|
When modelling metapopulation dynamics, the influence of a single patch on the metapopulation depends on the number of individuals in the patch. Since the population size has no natural upper limit, this leads to systems in which there are countably infinitely many possible types of individual. Analogous considerations apply in the transmission of parasitic diseases. In this paper, we prove a law of large numbers for quite general systems of this kind, together with a rather sharp bound on the rate of convergence in an appropriately chosen weighted $\ell_1$ norm.

Keywords: Epidemic models, metapopulation processes, countably many types, quantitative law of large numbers, Markov population processes

AMS subject classification: 92D30, 60J27, 60B12

Running head: A law of large numbers approximation
|
1 Introduction |
|
There are many biological systems that consist of entities that differ in their influence according to the number of active elements associated with them, and can be divided into types accordingly. In parasitic diseases (Barbour & Kafetzaki 1993, Luchsinger 2001a,b, Kretzschmar 1993), the infectivity of a host depends on the number of parasites that it carries; in metapopulations, the migration pressure exerted by a patch is related to the number of its inhabitants (Arrigoni 2003); the behaviour of a cell may depend on the number of copies of a particular gene that it contains (Kimmel & Axelrod 2002, Chapter 7); and so on. In none of these examples is there a natural upper limit to the number of associated elements, so that the natural setting for a mathematical model is one in which there are countably infinitely many possible types of individual. In addition, transition rates typically increase with the number of associated elements in the system — for instance, each parasite has an individual death rate, so that the overall death rate of parasites grows at least as fast as the number of parasites — and this leads to processes with unbounded transition rates. This paper is concerned with approximations to density dependent Markov models of this kind, when the typical population size $N$ becomes large.

*Angewandte Mathematik, Universität Zürich, Winterthurerstrasse 190, CH-8057 Zürich; ADB was supported in part by Schweizerischer Nationalfonds Projekt Nr. 20–107935/1.
†London School of Economics; MJL was supported in part by a STICERD grant.
|
In density dependent Markov population processes with only finitely many types of individual, a law of large numbers approximation, in the form of a system of ordinary differential equations, was established by Kurtz (1970), together with a diffusion approximation (Kurtz, 1971). In the infinite dimensional case, the law of large numbers was proved for some specific models (Barbour & Kafetzaki 1993, Luchsinger 2001b, Arrigoni 2003; see also Léonard 1990), using individually tailored methods. A more general result was then given by Eibeck & Wagner (2003). In Barbour & Luczak (2008), the law of large numbers was strengthened by the addition of an error bound in $\ell_1$ that is close to optimal order in $N$. Their argument makes use of an intermediate approximation involving an independent particles process, for which the law of large numbers is relatively easy to analyse. This process is then shown to be sufficiently close to the interacting process of actual interest, by means of a coupling argument. However, the generality of the results obtained is limited by the simple structure of the intermediate process, and the model of Arrigoni (2003), for instance, lies outside their scope.
|
In this paper, we develop an entirely different approach, which circumvents the need for an intermediate approximation, enabling a much wider class of models to be addressed. The setting is that of families of Markov population processes $X_N := (X_N(t),\ t \ge 0)$, $N \ge 1$, taking values in the countable space $\mathcal{X}_+ := \{X \in \mathbb{Z}_+^{\mathbb{Z}_+}\colon \sum_{m\ge 0} X_m < \infty\}$. Each component represents the number of individuals of a particular type, and there are countably many types possible; however, at any given time, there are only finitely many individuals in the system. The process evolves as a Markov process with state-dependent transitions

    X \to X + J \quad\text{at rate}\quad N\alpha_J(N^{-1}X), \qquad X \in \mathcal{X}_+,\ J \in \mathcal{J},    (1.1)

where each jump is of bounded influence, in the sense that

    \mathcal{J} \subset \Big\{X \in \mathbb{Z}^{\mathbb{Z}_+}\colon \sum_{m\ge 0} |X_m| \le J_*\Big\}, \quad\text{for some fixed } J_* < \infty,    (1.2)

so that the number of individuals affected is uniformly bounded. Density dependence is reflected in the fact that the arguments of the functions $\alpha_J$ are counts normalised by the 'typical size' $N$. Writing $\mathcal{R} := \mathbb{R}_+^{\mathbb{Z}_+}$, the functions $\alpha_J\colon \mathcal{R} \to \mathbb{R}_+$ are assumed to satisfy

    \sum_{J \in \mathcal{J}} \alpha_J(\xi) < \infty, \qquad \xi \in \mathcal{R}_0,    (1.3)

where $\mathcal{R}_0 := \{\xi \in \mathcal{R}\colon \xi_i = 0 \text{ for all but finitely many } i\}$; this assumption implies that the processes $X_N$ are pure jump processes, at least for some non-zero length of time. To prevent the paths leaving $\mathcal{X}_+$, we also assume that $J_l \ge -1$ for each $l$, and that $\alpha_J(\xi) = 0$ if $\xi_l = 0$ for any $J \in \mathcal{J}$ such that $J_l = -1$. Some remarks on the consequences of allowing transitions $J$ with $J_l \le -2$ for some $l$ are made at the end of Section 4.
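To make the setting concrete, here is a minimal simulation sketch of a process of the form (1.1), using the standard Gillespie algorithm. The toy per-parasite birth and death rates in it are illustrative assumptions only (they are not one of the models analysed in Section 5), but they show how the normalised rates $\alpha_J$ and the bounded jumps $J$ enter.

    import random

    # Sketch of a density dependent Markov population process of the form (1.1):
    # X -> X + J at rate N * alpha_J(X / N).  The toy model below (per-parasite
    # birth and death within hosts) and its constants are assumptions made only
    # for illustration.

    N = 100                      # 'typical size' used to normalise the rates
    birth, death = 1.0, 1.2      # hypothetical per-parasite birth/death rates

    def transitions(X):
        """Yield (J, alpha_J(X/N)) pairs for the current state X (dict j -> X_j).

        Each jump moves one host between parasite loads j and j +/- 1, so it
        changes at most two coordinates (J_* = 2 in the notation of (1.2))."""
        for j, Xj in X.items():
            if Xj <= 0:
                continue
            xj = Xj / N
            yield ({j: -1, j + 1: +1}, birth * j * xj)      # parasite birth
            if j >= 1:
                yield ({j: -1, j - 1: +1}, death * j * xj)  # parasite death

    def gillespie(X0, t_max, rng=random.Random(0)):
        """Simulate X_N and return the path of x_N = X_N / N at jump times."""
        X, t, path = dict(X0), 0.0, []
        while t < t_max:
            moves = [(J, N * a) for J, a in transitions(X)]
            total = sum(a for _, a in moves)
            if total == 0.0:
                break                                   # no further jumps possible
            t += rng.expovariate(total)                 # exponential waiting time
            u, acc = rng.random() * total, 0.0
            for J, a in moves:                          # choose a jump proportionally
                acc += a
                if u <= acc:
                    for l, Jl in J.items():             # apply the jump X -> X + J
                        X[l] = X.get(l, 0) + Jl
                    break
            path.append((t, {j: Xj / N for j, Xj in X.items() if Xj > 0}))
        return path

    # all N hosts initially carry 3 parasites each
    print(gillespie({3: N}, t_max=1.0)[-1])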
|
The law of large numbers is then formally expressed in terms of the system of deterministic equations

    \frac{d\xi}{dt} = \sum_{J \in \mathcal{J}} J\,\alpha_J(\xi) =: F_0(\xi),    (1.4)

to be understood componentwise for those $\xi \in \mathcal{R}$ such that

    \sum_{J \in \mathcal{J}} |J_l|\,\alpha_J(\xi) < \infty \quad\text{for all } l \ge 0,

thus by assumption including $\mathcal{R}_0$. Here, the quantity $F_0$ represents the infinitesimal average drift of the components of the random process. However, in this generality, it is not even immediately clear that equations (1.4) have a solution.

In order to make progress, it is assumed that the unbounded components in the transition rates can be assimilated into a linear part, in the sense that $F_0$ can be written in the form

    F_0(\xi) = A\xi + F(\xi),    (1.5)

again to be understood componentwise, where $A$ is a constant $\mathbb{Z}_+ \times \mathbb{Z}_+$ matrix. These equations are then treated as a perturbed linear system (Pazy 1983, Chapter 6). Under suitable assumptions on $A$, there exists a measure $\mu$ on $\mathbb{Z}_+$, defining a weighted $\ell_1$ norm $\|\cdot\|_\mu$ on $\mathcal{R}$, and a strongly $\|\cdot\|_\mu$-continuous semigroup $\{R(t),\ t \ge 0\}$ of transition matrices having pointwise derivative $R'(0) = A$. If $F$ is locally $\|\cdot\|_\mu$-Lipschitz and $\|x(0)\|_\mu < \infty$, this suggests using the solution $x$ of the integral equation

    x(t) = R(t)x(0) + \int_0^t R(t-s)F(x(s))\,ds    (1.6)

as an approximation to $x_N := N^{-1}X_N$, instead of solving the deterministic equations (1.4) directly. We go on to show that the solution $X_N$ of the stochastic system can be expressed using a formula similar to (1.6), which has an additional stochastic component in the perturbation:

    x_N(t) = R(t)x_N(0) + \int_0^t R(t-s)F(x_N(s))\,ds + \tilde m_N(t),    (1.7)

where

    \tilde m_N(t) := \int_0^t R(t-s)\,dm_N(s),    (1.8)

and $m_N$ is the local martingale given by

    m_N(t) := x_N(t) - x_N(0) - \int_0^t F_0(x_N(s))\,ds.    (1.9)

The quantity $m_N$ can be expected to be small, at least componentwise, under reasonable conditions.
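As a companion to the simulation sketch above, the deterministic drift $F_0$ of (1.4) can be integrated directly on a truncation of the type space, giving a rough numerical approximation to the law of large numbers limit. The drift below corresponds to the same illustrative toy model; the truncation level, step size and rate constants are assumptions made purely for the sketch.

    # Forward-Euler integration of d(xi)/dt = F0(xi), equation (1.4), for the
    # illustrative toy birth/death model, with the type space truncated to
    # {0, ..., K}.  K, dt and the rates are assumptions for the sketch only.

    birth, death, K, dt = 1.0, 1.2, 60, 1e-3

    def F0(xi):
        """Componentwise drift F0(xi) = sum_J J * alpha_J(xi) for the toy model."""
        out = [0.0] * (K + 1)
        for j in range(K + 1):
            if j + 1 <= K:                       # parasite birth: j -> j + 1
                out[j] -= birth * j * xi[j]
                out[j + 1] += birth * j * xi[j]
            if j >= 1:                           # parasite death: j -> j - 1
                out[j] -= death * j * xi[j]
                out[j - 1] += death * j * xi[j]
        return out

    def solve(xi0, t_max):
        xi, t = list(xi0), 0.0
        while t < t_max:
            drift = F0(xi)
            xi = [x + dt * d for x, d in zip(xi, drift)]
            t += dt
        return xi

    xi0 = [0.0] * (K + 1)
    xi0[3] = 1.0                                 # all mass initially on type 3
    print(round(sum(solve(xi0, 1.0)), 6))        # total host 'mass' is conserved here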
|
To obtain tight control over $\tilde m_N$ in all components simultaneously, sufficient to ensure that $\sup_{0 \le s \le t}\|\tilde m_N(s)\|_\mu$ is small, we derive Chernoff-like bounds on the deviations of the most significant components, with the help of a family of exponential martingales. The remaining components are treated using some general a priori bounds on the behaviour of the stochastic system. This allows us to take the difference between the stochastic and deterministic equations (1.7) and (1.6), after which a Gronwall argument can be carried through, leading to the desired approximation.
|
The main result, Theorem 4.7, guarantees an approximation error of order $O(N^{-1/2}\sqrt{\log N})$ in the weighted $\ell_1$ metric $\|\cdot\|_\mu$, except on an event of probability of order $O(N^{-1}\log N)$. More precisely, for each $T > 0$, there exist constants $K^{(1)}_T, K^{(2)}_T, K^{(3)}_T$ such that, for $N$ large enough, if

    \|N^{-1}X_N(0) - x(0)\|_\mu \le K^{(1)}_T\sqrt{\frac{\log N}{N}},

then

    \mathbb{P}\Big(\sup_{0 \le t \le T}\|N^{-1}X_N(t) - x(t)\|_\mu > K^{(2)}_T\sqrt{\frac{\log N}{N}}\Big) \le K^{(3)}_T\frac{\log N}{N}.    (1.10)

The error bound is sharper, by a factor of $\log N$, than that given in Barbour & Luczak (2008), and the theorem is applicable to a much wider class of models. However, the method of proof involves moment arguments, which require somewhat stronger assumptions on the initial state of the system, and, in models such as that of Barbour & Kafetzaki (1993), on the choice of infection distributions allowed. The conditions under which the theorem holds can be divided into three categories: growth conditions on the transition rates, so that the a priori bounds, which have the character of moment bounds, can be established; conditions on the matrix $A$, sufficient to limit the growth of the semigroup $R$, and (together with the properties of $F$) to determine the weights defining the metric in which the approximation is to be carried out; and conditions on the initial state of the system. The a priori bounds are derived in Section 2, the semigroup analysis is conducted in Section 3, and the approximation proper is carried out in Section 4. The paper concludes in Section 5 with some examples.
|
The form (1.8) of the stochastic component $\tilde m_N(t)$ in (1.7) is very similar to that of a key element in the analysis of stochastic partial differential equations; see, for example, Chow (2007, Section 6.6). The SPDE arguments used for its control are however typically conducted in a Hilbert space context. Our setting is quite different in nature, and it does not seem clear how to translate the SPDE methods into our context.

2 A priori bounds
|
We begin by imposing further conditions on the transition rates of the process $X_N$, sufficient to constrain its paths to bounded subsets of $\mathcal{X}_+$ during finite time intervals, and in particular to ensure that only finitely many jumps can occur in finite time. The conditions that follow have the flavour of moment conditions on the jump distributions. Since the index $j \in \mathbb{Z}_+$ is symbolic in nature, we start by fixing a $\nu \in \mathcal{R}$, such that $\nu(j)$ reflects in some sense the 'size' of $j$, with most indices being 'large':

    \nu(j) \ge 1 \text{ for all } j \ge 0 \quad\text{and}\quad \lim_{j\to\infty}\nu(j) = \infty.    (2.1)

We then define the analogues of higher empirical moments using the quantities $\nu_r \in \mathcal{R}$, defined by $\nu_r(j) := \nu(j)^r$, $r \ge 0$, setting

    S_r(x) := \sum_{j \ge 0} \nu_r(j)x_j = x^T\nu_r, \qquad x \in \mathcal{R}_0,    (2.2)

where, for $x \in \mathcal{R}_0$ and $y \in \mathcal{R}$, $x^Ty := \sum_{l \ge 0} x_ly_l$. In particular, for $X \in \mathcal{X}_+$, $S_0(X) = \|X\|_1$. Note that, because of (2.1), for any $r \ge 1$,

    \#\{X \in \mathcal{X}_+\colon S_r(X) \le K\} < \infty \quad\text{for all } K > 0.    (2.3)

To formulate the conditions that limit the growth of the empirical moments of $X_N(t)$ with $t$, we also define

    U_r(x) := \sum_{J \in \mathcal{J}} \alpha_J(x)\,J^T\nu_r; \qquad V_r(x) := \sum_{J \in \mathcal{J}} \alpha_J(x)\,(J^T\nu_r)^2, \qquad x \in \mathcal{R}.    (2.4)
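For orientation, the functionals $S_r$, $U_r$ and $V_r$ of (2.2) and (2.4) can be written out explicitly for the illustrative toy model used in Section 1, with $\nu(j) = j + 1$ (so that $S_1$ counts hosts plus parasites). The rate functions below are the same assumed ones as before.

    # Empirical moments and drift/variance functionals for the toy model,
    # with nu(j) = j + 1.  Rates are illustrative assumptions, not from the paper.

    birth, death = 1.0, 1.2
    nu = lambda j, r=1: (j + 1) ** r

    def toy_rates(x):
        for j, xj in x.items():
            if xj > 0:
                yield ({j: -1, j + 1: +1}, birth * j * xj)
                if j >= 1:
                    yield ({j: -1, j - 1: +1}, death * j * xj)

    def S(r, x):                     # S_r(x) = sum_j nu(j)^r x_j
        return sum(nu(j, r) * xj for j, xj in x.items())

    def U(r, x):                     # U_r(x) = sum_J alpha_J(x) * (J^T nu_r)
        return sum(a * sum(Jl * nu(l, r) for l, Jl in J.items())
                   for J, a in toy_rates(x))

    def V(r, x):                     # V_r(x) = sum_J alpha_J(x) * (J^T nu_r)^2
        return sum(a * sum(Jl * nu(l, r) for l, Jl in J.items()) ** 2
                   for J, a in toy_rates(x))

    x = {3: 1.0}
    print(S(1, x), U(1, x), V(1, x))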
|
The assumptions that we shall need are then as follows. |
|
Assumption 2.1 There exists a $\nu$ satisfying (2.1) and $r^{(1)}_{\max}, r^{(2)}_{\max} \ge 1$ such that, for all $X \in \mathcal{X}_+$,

    \sum_{J \in \mathcal{J}} \alpha_J(N^{-1}X)\,|J^T\nu_r| < \infty, \qquad 0 \le r \le r^{(1)}_{\max},    (2.5)

the case $r = 0$ following from (1.2) and (1.3); furthermore, for some non-negative constants $k_{rl}$, the inequalities

    U_0(x) \le k_{01}S_0(x) + k_{04},
    U_1(x) \le k_{11}S_1(x) + k_{14},    (2.6)
    U_r(x) \le \{k_{r1} + k_{r2}S_0(x)\}S_r(x) + k_{r4}, \qquad 2 \le r \le r^{(1)}_{\max};

and

    V_0(x) \le k_{03}S_1(x) + k_{05},
    V_r(x) \le k_{r3}S_{p(r)}(x) + k_{r5}, \qquad 1 \le r \le r^{(2)}_{\max},    (2.7)

are satisfied, where $1 \le p(r) \le r^{(1)}_{\max}$ for $1 \le r \le r^{(2)}_{\max}$.

The quantities $r^{(1)}_{\max}$ and $r^{(2)}_{\max}$ usually need to be reasonably large, if Assumption 4.2 below is to be satisfied.
|
Now, for $X_N$ as in the introduction, we let $t^{X_N}_n$ denote the time of its $n$-th jump, with $t^{X_N}_0 = 0$, and set $t^{X_N}_\infty := \lim_{n\to\infty} t^{X_N}_n$, possibly infinite. For $0 \le t < t^{X_N}_\infty$, we define

    S^{(N)}_r(t) := S_r(X_N(t)); \qquad U^{(N)}_r(t) := U_r(x_N(t)); \qquad V^{(N)}_r(t) := V_r(x_N(t)),    (2.8)

once again with $x_N(t) := N^{-1}X_N(t)$, and also

    \tau^{(N)}_r(C) := \inf\{t < t^{X_N}_\infty\colon S^{(N)}_r(t) \ge NC\}, \qquad r \ge 0,    (2.9)

where the infimum of the empty set is taken to be $\infty$. Our first result shows that $t^{X_N}_\infty = \infty$ a.s., and limits the expectations of $S^{(N)}_0(t)$ and $S^{(N)}_1(t)$ for any fixed $t$.

In what follows, we shall write $\mathcal{F}^{(N)}_s = \sigma(X_N(u),\ 0 \le u \le s)$, so that $(\mathcal{F}^{(N)}_s\colon s \ge 0)$ is the natural filtration of the process $X_N$.
|
Lemma 2.2 Under Assumptions 2.1, $t^{X_N}_\infty = \infty$ a.s. Furthermore, for any $t \ge 0$,

    \mathbb{E}\{S^{(N)}_0(t)\} \le (S^{(N)}_0(0) + Nk_{04}t)e^{k_{01}t};
    \mathbb{E}\{S^{(N)}_1(t)\} \le (S^{(N)}_1(0) + Nk_{14}t)e^{k_{11}t}.
|
Proof. Introducing the formal generator $\mathcal{A}_N$ associated with (1.1),

    \mathcal{A}_N f(X) := N\sum_{J \in \mathcal{J}} \alpha_J(N^{-1}X)\{f(X+J) - f(X)\}, \qquad X \in \mathcal{X}_+,    (2.10)

we note that $NU_l(x) = \mathcal{A}_N S_l(Nx)$. Hence, if we define $M^{(N)}_l$ by

    M^{(N)}_l(t) := S^{(N)}_l(t) - S^{(N)}_l(0) - N\int_0^t U^{(N)}_l(u)\,du, \qquad t \ge 0,    (2.11)

for $0 \le l \le r^{(1)}_{\max}$, it is immediate from (2.3), (2.5) and (2.6) that the process $(M^{(N)}_l(t \wedge \tau^{(N)}_1(C)),\ t \ge 0)$ is a zero mean $\mathcal{F}^{(N)}$-martingale for each $C > 0$. In particular, considering $M^{(N)}_1(t \wedge \tau^{(N)}_1(C))$, it follows in view of (2.6) that

    \mathbb{E}\{S^{(N)}_1(t \wedge \tau^{(N)}_1(C))\} \le S^{(N)}_1(0) + \mathbb{E}\Big\{\int_0^{t \wedge \tau^{(N)}_1(C)}\{k_{11}S^{(N)}_1(u) + Nk_{14}\}\,du\Big\}
        \le S^{(N)}_1(0) + \int_0^t\big(k_{11}\mathbb{E}\{S^{(N)}_1(u \wedge \tau^{(N)}_1(C))\} + Nk_{14}\big)\,du.

Using Gronwall's inequality, we deduce that

    \mathbb{E}\{S^{(N)}_1(t \wedge \tau^{(N)}_1(C))\} \le (S^{(N)}_1(0) + Nk_{14}t)e^{k_{11}t},    (2.12)

uniformly in $C > 0$, and hence that

    \mathbb{P}\Big[\sup_{0 \le s \le t} S_1(X_N(s)) \ge NC\Big] \le C^{-1}(S_1(x_N(0)) + k_{14}t)e^{k_{11}t}    (2.13)

also. Hence $\sup_{0 \le s \le t} S_1(X_N(s)) < \infty$ a.s. for any $t$, $\lim_{C\to\infty}\tau^{(N)}_1(C) = \infty$ a.s., and, from (2.3) and (1.3), it thus follows that $t^{X_N}_\infty = \infty$ a.s. The bound on $\mathbb{E}\{S^{(N)}_1(t)\}$ is now immediate, and that on $\mathbb{E}\{S^{(N)}_0(t)\}$ follows by applying the same Gronwall argument to $M^{(N)}_0(t \wedge \tau^{(N)}_1(C))$.
|
The next lemma shows that, if any $T > 0$ is fixed and $C$ is chosen large enough, then, with high probability, $N^{-1}S^{(N)}_0(t) \le C$ holds for all $0 \le t \le T$.

Lemma 2.3 Assume that Assumptions 2.1 are satisfied, and that $S^{(N)}_0(0) \le NC_0$ and $S^{(N)}_1(0) \le NC_1$. Then, for any $C \ge 2(C_0 + k_{04}T)e^{k_{01}T}$, we have

    \mathbb{P}[\{\tau^{(N)}_0(C) \le T\}] \le (C_1 \vee 1)K_{00}/(NC^2),

where $K_{00}$ depends on $T$ and the parameters of the model.
|
Proof. It is immediate from (2.11) and (2.6) that

    S^{(N)}_0(t) = S^{(N)}_0(0) + N\int_0^t U^{(N)}_0(u)\,du + M^{(N)}_0(t)
        \le S^{(N)}_0(0) + \int_0^t(k_{01}S^{(N)}_0(u) + Nk_{04})\,du + \sup_{0 \le u \le t} M^{(N)}_0(u).    (2.14)

Hence, from Gronwall's inequality, if $S^{(N)}_0(0) \le NC_0$, then

    S^{(N)}_0(t) \le \Big\{N(C_0 + k_{04}T) + \sup_{0 \le u \le t} M^{(N)}_0(u)\Big\}e^{k_{01}t}.    (2.15)

Now, considering the quadratic variation of $M^{(N)}_0$, we have

    \mathbb{E}\Big\{\{M^{(N)}_0(t \wedge \tau^{(N)}_1(C'))\}^2 - N\int_0^{t \wedge \tau^{(N)}_1(C')} V^{(N)}_0(u)\,du\Big\} = 0    (2.16)

for any $C' > 0$, from which it follows, much as above, that

    \mathbb{E}\big(\{M^{(N)}_0(t \wedge \tau^{(N)}_1(C'))\}^2\big) \le \mathbb{E}\Big\{N\int_0^t V^{(N)}_0(u \wedge \tau^{(N)}_1(C'))\,du\Big\}
        \le \int_0^t\{k_{03}\mathbb{E}S^{(N)}_1(u \wedge \tau^{(N)}_1(C')) + Nk_{05}\}\,du.

Using (2.12), we thus find that

    \mathbb{E}\big(\{M^{(N)}_0(t \wedge \tau^{(N)}_1(C'))\}^2\big) \le \frac{k_{03}}{k_{11}}N(C_1 + k_{14}T)(e^{k_{11}t} - 1) + Nk_{05}t,    (2.17)

uniformly for all $C'$. Doob's maximal inequality applied to $M^{(N)}_0(t \wedge \tau^{(N)}_1(C'))$ now allows us to deduce that, for any $C', a > 0$,

    \mathbb{P}\Big[\sup_{0 \le u \le T} M^{(N)}_0(u \wedge \tau^{(N)}_1(C')) > aN\Big] \le \frac{1}{Na^2}\Big\{\frac{k_{03}}{k_{11}}(C_1 + k_{14}T)\{e^{k_{11}T} - 1\} + k_{05}T\Big\} =: \frac{C_1K_{01} + K_{02}}{Na^2},

say, so that, letting $C' \to \infty$,

    \mathbb{P}\Big[\sup_{0 \le u \le T} M^{(N)}_0(u) > aN\Big] \le \frac{C_1K_{01} + K_{02}}{Na^2}

also. Taking $a = \tfrac{1}{2}Ce^{-k_{01}T}$ and putting the result into (2.15), the lemma follows.
|
In the next theorem, we control the 'higher $\nu$-moments' $S^{(N)}_r(t)$ of $X_N(t)$.

Theorem 2.4 Assume that Assumptions 2.1 are satisfied, and that $S^{(N)}_1(0) \le NC_1$ and $S^{(N)}_{p(1)}(0) \le NC'_1$. Then, for $2 \le r \le r^{(1)}_{\max}$ and for any $C > 0$, we have

    \mathbb{E}\{S^{(N)}_r(t \wedge \tau^{(N)}_0(C))\} \le (S^{(N)}_r(0) + Nk_{r4}t)e^{(k_{r1} + Ck_{r2})t}, \qquad 0 \le t \le T.    (2.18)

Furthermore, if, for $1 \le r \le r^{(2)}_{\max}$, $S^{(N)}_r(0) \le NC_r$ and $S^{(N)}_{p(r)}(0) \le NC'_r$, then, for any $\gamma \ge 1$,

    \mathbb{P}\Big[\sup_{0 \le t \le T} S^{(N)}_r(t \wedge \tau^{(N)}_0(C)) \ge N\gamma C''_{rT}\Big] \le K_{r0}\gamma^{-2}N^{-1},    (2.19)

where

    C''_{rT} := \big(C_r + k_{r4}T + \sqrt{C'_r \vee 1}\big)e^{(k_{r1} + Ck_{r2})T}

and $K_{r0}$ depends on $C$, $T$ and the parameters of the model.
|
Proof. Recalling (2.11), use the argument leading to (2.12) with the martingales $M^{(N)}_r(t \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C))$, for any $C' > 0$, to deduce that

    \mathbb{E}S^{(N)}_r(t \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C))
        \le S^{(N)}_r(0) + \int_0^t\Big(\{k_{r1} + Ck_{r2}\}\mathbb{E}\big\{S^{(N)}_r(u \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C))\big\} + Nk_{r4}\Big)\,du,

for $1 \le r \le r^{(1)}_{\max}$, since $N^{-1}S^{(N)}_0(u) \le C$ when $u \le \tau^{(N)}_0(C)$: define $k_{12} = 0$. Gronwall's inequality now implies that

    \mathbb{E}S^{(N)}_r(t \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C)) \le (S^{(N)}_r(0) + Nk_{r4}t)e^{(k_{r1} + Ck_{r2})t},    (2.20)

for $1 \le r \le r^{(1)}_{\max}$, and (2.18) follows by Fatou's lemma, on letting $C' \to \infty$.
|
Now, also from (2.11) and (2.6), we have, for $t \ge 0$ and each $r \le r^{(1)}_{\max}$,

    S^{(N)}_r(t \wedge \tau^{(N)}_0(C))
        = S^{(N)}_r(0) + N\int_0^{t \wedge \tau^{(N)}_0(C)} U^{(N)}_r(u)\,du + M^{(N)}_r(t \wedge \tau^{(N)}_0(C))
        \le S^{(N)}_r(0) + \int_0^t\Big(\{k_{r1} + Ck_{r2}\}S^{(N)}_r(u \wedge \tau^{(N)}_0(C)) + Nk_{r4}\Big)\,du + \sup_{0 \le u \le t} M^{(N)}_r(u \wedge \tau^{(N)}_0(C)).

Hence, from Gronwall's inequality, for all $t \ge 0$ and $r \le r^{(1)}_{\max}$,

    S^{(N)}_r(t \wedge \tau^{(N)}_0(C)) \le \Big\{N(C_r + k_{r4}t) + \sup_{0 \le u \le t} M^{(N)}_r(u \wedge \tau^{(N)}_0(C))\Big\}e^{(k_{r1} + Ck_{r2})t}.    (2.21)
|
Now, as in (2.16), we have

    \mathbb{E}\Big\{\{M^{(N)}_r(t \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C))\}^2 - N\int_0^{t \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C)} V^{(N)}_r(u)\,du\Big\} = 0,    (2.22)

from which it follows, using (2.7), that, for $1 \le r \le r^{(2)}_{\max}$,

    \mathbb{E}\big(\{M^{(N)}_r(t \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C))\}^2\big)
        \le \mathbb{E}\Big\{N\int_0^{t \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C)} V^{(N)}_r(u)\,du\Big\}
        \le \int_0^t\{k_{r3}\mathbb{E}S^{(N)}_{p(r)}(u \wedge \tau^{(N)}_1(C') \wedge \tau^{(N)}_0(C)) + Nk_{r5}\}\,du
        \le \frac{N(C'_r + k_{p(r),4}T)k_{r3}}{k_{p(r),1} + Ck_{p(r),2}}\big(e^{(k_{p(r),1} + Ck_{p(r),2})t} - 1\big) + Nk_{r5}T,

this last by (2.20), since $p(r) \le r^{(1)}_{\max}$ for $1 \le r \le r^{(2)}_{\max}$. Using Doob's inequality, it follows that, for any $a > 0$,

    \mathbb{P}\Big[\sup_{0 \le u \le T} M^{(N)}_r(u \wedge \tau^{(N)}_0(C)) > aN\Big]
        \le \frac{1}{Na^2}\Big\{\frac{k_{r3}(C'_r + k_{p(r),4}T)}{k_{p(r),1} + Ck_{p(r),2}}\big(e^{(k_{p(r),1} + Ck_{p(r),2})T} - 1\big) + k_{r5}T\Big\} =: \frac{C'_rK_{r1} + K_{r2}}{Na^2}.

Taking $a = \gamma\sqrt{C'_r \vee 1}$ and putting the result into (2.21) gives (2.19), with $K_{r0} = (C'_rK_{r1} + K_{r2})/(C'_r \vee 1)$.
|
Note also that $\sup_{0 \le t \le T} S^{(N)}_r(t) < \infty$ a.s. for all $0 \le r \le r^{(2)}_{\max}$, in view of Lemma 2.3 and Theorem 2.4.
|
In what follows, we shall particularly need to control quantities of the form $\sum_{J \in \mathcal{J}}\alpha_J(x_N(s))\,d(J,\zeta)$, where $x_N := N^{-1}X_N$ and

    d(J,\zeta) := \sum_{j \ge 0} |J_j|\,\zeta(j),    (2.23)

for $\zeta \in \mathcal{R}$ chosen such that $\zeta(j) \ge 1$ grows fast enough with $j$: see (4.12). Defining

    \tau^{(N)}(a,\zeta) := \inf\Big\{s\colon \sum_{J \in \mathcal{J}}\alpha_J(x_N(s))\,d(J,\zeta) \ge a\Big\},    (2.24)

infinite if there is no such $s$, we show in the following corollary that, under suitable assumptions, $\tau^{(N)}(a,\zeta)$ is rarely less than $T$.
|
Corollary 2.5 Assume that Assumptions 2.1 hold, and that $\zeta$ is such that

    \sum_{J \in \mathcal{J}}\alpha_J(N^{-1}X)\,d(J,\zeta) \le \{k_1N^{-1}S_r(X) + k_2\}^b    (2.25)

for some $1 \le r := r(\zeta) \le r^{(2)}_{\max}$ and some $b = b(\zeta) \ge 1$. For this value of $r$, assume that $S^{(N)}_r(0) \le NC_r$ and $S^{(N)}_{p(r)}(0) \le NC'_r$ for some constants $C_r$ and $C'_r$. Assume further that $S^{(N)}_0(0) \le NC_0$, $S^{(N)}_1(0) \le NC_1$ for some constants $C_0, C_1$, and define $C := 2(C_0 + k_{04}T)e^{k_{01}T}$. Then

    \mathbb{P}[\tau^{(N)}(a,\zeta) \le T] \le N^{-1}\{K_{r0}\gamma_a^{-2} + K_{00}(C_1 \vee 1)C^{-2}\},

for any $a \ge \{k_2 + k_1C''_{rT}\}^b$, where $\gamma_a := (a^{1/b} - k_2)/\{k_1C''_{rT}\}$, $K_{r0}$ and $C''_{rT}$ are as in Theorem 2.4, and $K_{00}$ is as in Lemma 2.3.
|
Proof. In view of (2.25), it is enough to bound the probability

    \mathbb{P}\Big[\sup_{0 \le t \le T} S^{(N)}_r(t) \ge N(a^{1/b} - k_2)/k_1\Big].

However, Lemma 2.3 and Theorem 2.4 together bound this probability by

    N^{-1}\big\{K_{r0}\gamma_a^{-2} + K_{00}(C_1 \vee 1)C^{-2}\big\},

where $\gamma_a$ is as defined above, as long as $a^{1/b} - k_2 \ge k_1C''_{rT}$.

If (2.25) is satisfied, $\sum_{J \in \mathcal{J}}\alpha_J(x_N(s))\,d(J,\zeta)$ is a.s. bounded on $0 \le s \le T$, because $S^{(N)}_r(s)$ is. The corollary shows that the sum is then bounded by $\{k_2 + k_1C''_{rT}\}^b$, except on an event of probability of order $O(N^{-1})$. Usually, one can choose $b = 1$.
|
3 Semigroup properties
|
We make the following initial assumptions about the matrix $A$: first, that

    A_{ij} \ge 0 \text{ for all } i \ne j \ge 0; \qquad \sum_{j \ne i} A_{ji} < \infty \text{ for all } i \ge 0,    (3.1)

and then that, for some $\mu \in \mathbb{R}_+^{\mathbb{Z}_+}$ such that $\mu(m) \ge 1$ for each $m \ge 0$, and for some $w \ge 0$,

    A^T\mu \le w\mu.    (3.2)

We then use $\mu$ to define the $\mu$-norm

    \|\xi\|_\mu := \sum_{m \ge 0} \mu(m)|\xi_m| \quad\text{on}\quad \mathcal{R}_\mu := \{\xi \in \mathcal{R}\colon \|\xi\|_\mu < \infty\}.    (3.3)

Note that there may be many possible choices for $\mu$. In what follows, it is important that $F$ be a Lipschitz operator with respect to the $\mu$-norm, and this has to be borne in mind when choosing $\mu$.

Setting

    Q_{ij} := A^T_{ij}\mu(j)/\mu(i) - w\delta_{ij},    (3.4)

where $\delta$ is the Kronecker delta, we note that $Q_{ij} \ge 0$ for $i \ne j$, and that

    0 \le \sum_{j \ne i} Q_{ij} = \sum_{j \ne i} A^T_{ij}\mu(j)/\mu(i) \le w - A_{ii} = -Q_{ii},

using (3.2) for the inequality, so that $Q_{ii} \le 0$. Hence $Q$ can be augmented to a conservative $Q$-matrix, in the sense of Markov jump processes, by adding a coffin state $\partial$, and setting $Q_{i\partial} := -\sum_{j \ge 0} Q_{ij} \ge 0$. Let $P(\cdot)$ denote the semigroup of Markov transition matrices corresponding to the minimal process associated with $Q$; then, in particular,

    Q = P'(0) \quad\text{and}\quad P'(t) = QP(t) \text{ for all } t \ge 0    (3.5)

(Reuter 1957, Theorem 3). Set

    R^T_{ij}(t) := e^{wt}\mu(i)P_{ij}(t)/\mu(j).    (3.6)
|
Theorem 3.1 Let $A$ satisfy Assumptions (3.1) and (3.2). Then, with the above definitions, $R$ is a strongly continuous semigroup on $\mathcal{R}_\mu$, and

    \sum_{i \ge 0} \mu(i)R_{ij}(t) \le \mu(j)e^{wt} \quad\text{for all } j \text{ and } t.    (3.7)

Furthermore, the sums $\sum_{j \ge 0} R_{ij}(t)A_{jk} = (R(t)A)_{ik}$ are well defined for all $i, k$, and

    A = R'(0) \quad\text{and}\quad R'(t) = R(t)A \quad\text{for all } t \ge 0.    (3.8)
|
Proof. We note first that, for $x \in \mathcal{R}_\mu$,

    \|R(t)x\|_\mu \le \sum_{i \ge 0}\mu(i)\sum_{j \ge 0} R_{ij}(t)|x_j| = e^{wt}\sum_{i \ge 0}\sum_{j \ge 0}\mu(j)P_{ji}(t)|x_j| \le e^{wt}\sum_{j \ge 0}\mu(j)|x_j| = e^{wt}\|x\|_\mu,    (3.9)

since $P(t)$ is substochastic on $\mathbb{Z}_+$; hence $R\colon \mathcal{R}_\mu \to \mathcal{R}_\mu$. To show strong continuity, we take $x \in \mathcal{R}_\mu$, and consider

    \|R(t)x - x\|_\mu = \sum_{i \ge 0}\mu(i)\Big|\sum_{j \ge 0} R_{ij}(t)x_j - x_i\Big| = \sum_{i \ge 0}\Big|e^{wt}\sum_{j \ge 0}\mu(j)P_{ji}(t)x_j - \mu(i)x_i\Big|
        \le (e^{wt} - 1)\sum_{i \ge 0}\sum_{j \ge 0}\mu(j)P_{ji}(t)x_j + \sum_{i \ge 0}\sum_{j \ne i}\mu(j)P_{ji}(t)x_j + \sum_{i \ge 0}\mu(i)x_i(1 - P_{ii}(t))
        \le (e^{wt} - 1)\sum_{j \ge 0}\mu(j)x_j + 2\sum_{i \ge 0}\mu(i)x_i(1 - P_{ii}(t)),

from which it follows that $\lim_{t\to 0}\|R(t)x - x\|_\mu = 0$, by dominated convergence, since $\lim_{t\to 0} P_{ii}(t) = 1$ for each $i \ge 0$.

The inequality (3.7) follows from the definition of $R$ and the fact that $P$ is substochastic on $\mathbb{Z}_+$. Then

    (A^TR^T(t))_{ij} = \sum_{k \ne i} Q_{ik}\frac{\mu(i)}{\mu(k)}e^{wt}\frac{\mu(k)}{\mu(j)}P_{kj}(t) + (Q_{ii} + w)e^{wt}\frac{\mu(i)}{\mu(j)}P_{ij}(t)
        = \frac{\mu(i)}{\mu(j)}[(QP(t))_{ij} + wP_{ij}(t)]e^{wt},

with $(QP(t))_{ij} = \sum_{k \ge 0} Q_{ik}P_{kj}(t)$ well defined because $P(t)$ is sub-stochastic and $Q$ is conservative. Using (3.5), this gives

    (A^TR^T(t))_{ij} = \frac{\mu(i)}{\mu(j)}\frac{d}{dt}[P_{ij}(t)e^{wt}] = \frac{d}{dt}R^T_{ij}(t),

and this establishes (3.8).
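The construction of $R$ from $A$, $\mu$ and $w$ can be checked numerically on a finite truncation of the type space, where the minimal semigroup reduces to a matrix exponential. The sketch below uses an assumed pure-death matrix $A$ and the weights $\mu(j) = j + 1$, chosen only to make the check concrete; it verifies the $Q$-matrix properties of (3.4), the bound (3.7), and the fact that on the truncation $R(t)$ coincides with $e^{tA}$.

    import numpy as np
    from scipy.linalg import expm

    # Finite truncation (indices 0..K) of the Section 3 construction; the matrix
    # A below is an assumed pure-death example, not one of the paper's models.

    K, death, w = 20, 1.0, 0.0
    mu = np.array([j + 1.0 for j in range(K + 1)])        # weights mu(j) = j + 1

    A = np.zeros((K + 1, K + 1))
    for i in range(1, K + 1):
        A[i, i] = -i * death        # an i-host loses a parasite at rate i*death ...
        A[i - 1, i] = i * death     # ... becoming an (i-1)-host (this is A^T_{i,i-1})

    assert np.all(A.T @ mu <= w * mu + 1e-12)             # condition (3.2)

    Q = np.diag(1 / mu) @ A.T @ np.diag(mu) - w * np.eye(K + 1)    # as in (3.4)
    assert np.all(Q - np.diag(np.diag(Q)) >= -1e-12)      # off-diagonal entries >= 0
    assert np.all(Q.sum(axis=1) <= 1e-12)                 # row sums <= 0 (substochastic)

    t = 0.7
    P = expm(t * Q)                                       # minimal semigroup P(t)
    R = np.exp(w * t) * np.diag(1 / mu) @ P.T @ np.diag(mu)        # R(t) from (3.6)

    print(np.allclose(R, expm(t * A)))                    # in finite dimensions, R(t) = e^{tA}
    print(np.all(mu @ R <= mu * np.exp(w * t) + 1e-9))    # the bound (3.7)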
|
4 Main approximation |
|
Let $X_N$, $N \ge 1$, be a sequence of pure jump Markov processes as in Section 1, with $A$ and $F$ defined as in (1.4) and (1.5), and suppose that $F\colon \mathcal{R}_\mu \to \mathcal{R}_\mu$, with $\mathcal{R}_\mu$ as defined in (3.3), for some $\mu$ such that Assumption (3.2) holds. Suppose also that $F$ is locally Lipschitz in the $\mu$-norm: for any $z > 0$,

    \sup_{x \ne y\colon \|x\|_\mu, \|y\|_\mu \le z}\|F(x) - F(y)\|_\mu/\|x - y\|_\mu \le K(\mu,F;z) < \infty.    (4.1)

Then, for $x(0) \in \mathcal{R}_\mu$ and $R$ as in (3.6), the integral equation

    x(t) = R(t)x(0) + \int_0^t R(t-s)F(x(s))\,ds    (4.2)

has a unique continuous solution $x$ in $\mathcal{R}_\mu$ on some non-empty time interval $[0, t_{\max})$, such that, if $t_{\max} < \infty$, then $\|x(t)\|_\mu \to \infty$ as $t \to t_{\max}$ (Pazy 1983, Theorem 1.4, Chapter 6). Thus, if $A$ were the generator of $R$, the function $x$ would be a mild solution of the deterministic equations (1.4). We now wish to show that the process $x_N := N^{-1}X_N$ is close to $x$. To do so, we need a corresponding representation for $X_N$.
|
To find such a representation, let $W(t)$, $t \ge 0$, be a pure jump path on $\mathcal{X}_+$ that has only finitely many jumps up to time $T$. Then we can write

    W(t) = W(0) + \sum_{j\colon \sigma_j \le t}\Delta W(\sigma_j), \qquad 0 \le t \le T,    (4.3)

where $\Delta W(s) := W(s) - W(s-)$ and $\sigma_j$, $j \ge 1$, denote the times when $W$ has its jumps. Now let $A$ satisfy (3.1) and (3.2), and let $R(\cdot)$ be the associated semigroup, as defined in (3.6). Define the path $W^*(t)$, $0 \le t \le T$, from the equation

    W^*(t) := R(t)W(0) + \sum_{j\colon \sigma_j \le t} R(t - \sigma_j)\Delta_j - \int_0^t R(t-s)AW(s)\,ds,    (4.4)

where $\Delta_j := \Delta W(\sigma_j)$. Note that the latter integral makes sense, because each of the sums $\sum_{j \ge 0} R_{ij}(t)A_{jk}$ is well defined, from Theorem 3.1, and because only finitely many of the coordinates of $W$ are non-zero.
|
Lemma 4.1 $W^*(t) = W(t)$ for all $0 \le t \le T$.
|
Proof. Fix any $t$, and suppose that $W^*(s) = W(s)$ for all $s \le t$. This is clearly the case for $t = 0$. Let $\sigma(t) > t$ denote the time of the first jump of $W$ after $t$. Then, for any $0 < h < \sigma(t) - t$, using the semigroup property for $R$ and (4.4),

    W^*(t+h) - W^*(t)
        = (R(h) - I)R(t)W(0) + \sum_{j\colon \sigma_j \le t}(R(h) - I)R(t - \sigma_j)\Delta_j    (4.5)
          \quad - \int_0^t (R(h) - I)R(t-s)AW(s)\,ds - \int_t^{t+h} R(t+h-s)AW(t)\,ds,

where, in the last integral, we use the fact that there are no jumps of $W$ between $t$ and $t+h$. Thus we have

    W^*(t+h) - W^*(t)
        = (R(h) - I)\Big\{R(t)W(0) + \sum_{j\colon \sigma_j \le t} R(t - \sigma_j)\Delta_j - \int_0^t R(t-s)AW(s)\,ds\Big\} - \int_t^{t+h} R(t+h-s)AW(t)\,ds
        = (R(h) - I)W(t) - \int_t^{t+h} R(t+h-s)AW(t)\,ds.    (4.6)

But now, for $x \in \mathcal{X}_+$,

    \int_t^{t+h} R(t+h-s)Ax\,ds = (R(h) - I)x,

from (3.8), so that $W^*(t+h) = W^*(t)$ for all $t + h < \sigma(t)$, implying that $W^*(s) = W(s)$ for all $s < \sigma(t)$. On the other hand, from (4.4), we have $W^*(\sigma(t)) - W^*(\sigma(t)-) = \Delta W(\sigma(t))$, so that $W^*(s) = W(s)$ for all $s \le \sigma(t)$. Thus we can prove equality over the interval $[0, \sigma_1]$, and then successively over the intervals $[\sigma_j, \sigma_{j+1}]$, until $[0,T]$ is covered.
|
Now suppose that $W$ arises as a realization of $X_N$. Then $X_N$ has transition rates such that

    M_N(t) := \sum_{j\colon \sigma_j \le t}\Delta X_N(\sigma_j) - \int_0^t AX_N(s)\,ds - \int_0^t NF(x_N(s))\,ds    (4.7)

is a zero mean local martingale. In view of Lemma 4.1, we can use (4.4) to write

    X_N(t) = R(t)X_N(0) + \tilde M_N(t) + N\int_0^t R(t-s)F(x_N(s))\,ds,    (4.8)

where

    \tilde M_N(t) := \sum_{j\colon \sigma_j \le t} R(t - \sigma_j)\Delta X_N(\sigma_j) - \int_0^t R(t-s)AX_N(s)\,ds - \int_0^t R(t-s)NF(x_N(s))\,ds.    (4.9)

Thus, comparing (4.8) and (4.2), we expect $x_N$ and $x$ to be close, for $0 \le t \le T < t_{\max}$, provided that we can show that $\sup_{t \le T}\|\tilde m_N(t)\|_\mu$ is small, where $\tilde m_N(t) := N^{-1}\tilde M_N(t)$. Indeed, if $x_N(0)$ and $x(0)$ are close, then

    \|x_N(t) - x(t)\|_\mu
        \le \|R(t)(x_N(0) - x(0))\|_\mu + \int_0^t\|R(t-s)[F(x_N(s)) - F(x(s))]\|_\mu\,ds + \|\tilde m_N(t)\|_\mu
        \le e^{wt}\|x_N(0) - x(0)\|_\mu + \int_0^t e^{w(t-s)}K(\mu,F;2\Xi_T)\|x_N(s) - x(s)\|_\mu\,ds + \|\tilde m_N(t)\|_\mu,    (4.10)

by (3.9), with the stage apparently set for Gronwall's inequality, assuming that $\|x_N(0) - x(0)\|_\mu$ and $\sup_{0 \le t \le T}\|\tilde m_N(t)\|_\mu$ are small enough that then $\|x_N(t)\|_\mu \le 2\Xi_T$ for $0 \le t \le T$, where $\Xi_T := \sup_{0 \le t \le T}\|x(t)\|_\mu$.
|
Bounding $\sup_{0 \le t \le T}\|\tilde m_N(t)\|_\mu$ is, however, not so easy. Since $\tilde M_N$ is not itself a martingale, we cannot directly apply martingale inequalities to control its fluctuations. However, since

    \tilde M_N(t) = \int_0^t R(t-s)\,dM_N(s),    (4.11)

we can hope to use control over the local martingale $M_N$ instead. For this and the subsequent argument, we introduce some further assumptions.
|
Assumption 4.2

1. There exists $r = r_\mu \le r^{(2)}_{\max}$ such that $\sup_{j \ge 0}\{\mu(j)/\nu_r(j)\} < \infty$.

2. There exists $\zeta \in \mathcal{R}$ with $\zeta(j) \ge 1$ for all $j$ such that (2.25) is satisfied for some $b = b(\zeta) \ge 1$ and $r = r(\zeta)$ such that $1 \le r(\zeta) \le r^{(2)}_{\max}$, and that

    Z := \sum_{k \ge 0}\mu(k)(|A_{kk}| + 1)/\sqrt{\zeta(k)} < \infty.    (4.12)

The requirement that $\zeta$ satisfies (4.12) as well as satisfying (2.25) for some $r \le r^{(2)}_{\max}$ implies in practice that it must be possible to take $r^{(1)}_{\max}$ and $r^{(2)}_{\max}$ to be quite large in Assumption 2.1; see the examples in Section 5.
|
Note that part 1 of Assumption 4.2 implies that $\lim_{j\to\infty}\{\mu(j)/\nu_r(j)\} = 0$ for some $r = \tilde r_\mu \le r_\mu + 1$. We define

    \rho(\zeta,\mu) := \max\{r(\zeta),\ p(r(\zeta)),\ \tilde r_\mu\},    (4.13)

where $p(\cdot)$ is as in Assumptions 2.1. We can now prove the following lemma, which enables us to control the paths of $\tilde M_N$ by using fluctuation bounds for the martingale $M_N$.
|
Lemma 4.3 Under Assumption 4.2,

    \tilde M_N(t) = M_N(t) + \int_0^t R(t-s)AM_N(s)\,ds.
|
Proof. From (3.8), we have

    R(t-s) = I + \int_0^{t-s} R(v)A\,dv.

Substituting this into (4.11), we obtain

    \tilde M_N(t) = \int_0^t R(t-s)\,dM_N(s)
        = M_N(t) + \int_0^t\Big\{\int_0^t R(v)A\,\mathbf{1}_{[0,t-s]}(v)\,dv\Big\}\,dM_N(s)
        = M_N(t) + \int_0^t\Big\{\int_0^t R(v)A\,\mathbf{1}_{[0,t-s]}(v)\,dv\Big\}\,dX_N(s)
          \quad - \int_0^t\Big\{\int_0^t R(v)A\,\mathbf{1}_{[0,t-s]}(v)\,dv\Big\}F_0(x_N(s))\,ds.

It remains to change the order of integration in the double integrals, for which we use Fubini's theorem.

In the first, the outer integral is almost surely a finite sum, and at each jump time $t^{X_N}_l$ we have $dX_N(t^{X_N}_l) \in \mathcal{J}$. Hence it is enough that, for each $i$, $m$ and $t$, $\sum_{j \ge 0} R_{ij}(t)A_{jm}$ is absolutely summable, which follows from Theorem 3.1. Thus we have

    \int_0^t\Big\{\int_0^t R(v)A\,\mathbf{1}_{[0,t-s]}(v)\,dv\Big\}\,dX_N(s) = \int_0^t R(v)A\{X_N(t-v) - X_N(0)\}\,dv.    (4.14)

For the second, the $k$-th component of $R(v)AF_0(x_N(s))$ is just

    \sum_{j \ge 0} R_{kj}(v)\sum_{l \ge 0} A_{jl}\sum_{J \in \mathcal{J}} J_l\,\alpha_J(x_N(s)).    (4.15)

Now, from (3.7), we have $0 \le R_{kj}(v) \le \mu(j)e^{wv}/\mu(k)$, and

    \sum_{j \ge 0}\mu(j)|A_{jl}| \le \mu(l)(2|A_{ll}| + w),    (4.16)

because $A^T\mu \le w\mu$. Hence, putting absolute values in the summands in (4.15) yields at most

    \frac{e^{wv}}{\mu(k)}\sum_{J \in \mathcal{J}}\alpha_J(x_N(s))\sum_{l \ge 0}|J_l|\,\mu(l)(2|A_{ll}| + w).

Now, in view of (4.12) and since $\zeta(j) \ge 1$ for all $j$, there is a constant $K < \infty$ such that $\mu(l)(2|A_{ll}| + w) \le K\zeta(l)$. Furthermore, $\zeta$ satisfies (2.25), so that, by Corollary 2.5, $\sum_{J \in \mathcal{J}}\alpha_J(x_N(s))\sum_{l \ge 0}|J_l|\zeta(l)$ is a.s. uniformly bounded in $0 \le s \le T$. Hence we can apply Fubini's theorem, obtaining

    \int_0^t\Big\{\int_0^t R(v)A\,\mathbf{1}_{[0,t-s]}(v)\,dv\Big\}F_0(x_N(s))\,ds = \int_0^t R(v)A\Big\{\int_0^{t-v} F_0(x_N(s))\,ds\Big\}\,dv,

and combining this with (4.14) proves the lemma.
|
We now introduce the exponential martingales that we use to bound the fluctuations of $M_N$. For $\theta \in \mathbb{R}^{\mathbb{Z}_+}$ bounded and $x \in \mathcal{R}_\mu$,

    Z_{N,\theta}(t) := \exp\Big\{\theta^T(x_N(t) - x_N(0)) - \int_0^t g_{N\theta}(x_N(s-))\,ds\Big\}, \qquad t \ge 0,

is a non-negative finite variation local martingale, where

    g_{N\theta}(\xi) := \sum_{J \in \mathcal{J}} N\alpha_J(\xi)\big(e^{N^{-1}\theta^TJ} - 1\big).

For $t \ge 0$, we have

    \log Z_{N,\theta}(t) = \theta^T(x_N(t) - x_N(0)) - \int_0^t g_{N\theta}(x_N(s-))\,ds = \theta^T m_N(t) - \int_0^t \varphi_{N,\theta}(x_N(s-))\,ds,    (4.17)

where

    \varphi_{N,\theta}(\xi) := \sum_{J \in \mathcal{J}} N\alpha_J(\xi)\big(e^{N^{-1}\theta^TJ} - 1 - N^{-1}\theta^TJ\big),    (4.18)

and $m_N(t) := N^{-1}M_N(t)$. Note also that we can write

    \varphi_{N,\theta}(\xi) = N\int_0^1 (1-r)D^2v_N(\xi,r\theta)[\theta,\theta]\,dr,    (4.19)

where

    v_N(\xi,\theta') := \sum_{J \in \mathcal{J}}\alpha_J(\xi)e^{N^{-1}(\theta')^TJ},

and $D^2v_N$ denotes the matrix of second derivatives with respect to the second argument:

    D^2v_N(\xi,\theta')[\zeta_1,\zeta_2] := N^{-2}\sum_{J \in \mathcal{J}}\alpha_J(\xi)e^{N^{-1}(\theta')^TJ}\zeta_1^TJJ^T\zeta_2    (4.20)

for any $\zeta_1, \zeta_2 \in \mathcal{R}_\mu$.

Now choose any $B := (B_k,\ k \ge 0) \in \mathcal{R}$, and define $\tilde\tau^{(N)}_k(B)$ by

    \tilde\tau^{(N)}_k(B) := \inf\Big\{t \ge 0\colon \sum_{J\colon J_k \ne 0}\alpha_J(x_N(t-)) > B_k\Big\}.

Our exponential bound is as follows.
|
Lemma 4.4 For any $k \ge 0$,

    \mathbb{P}\Big[\sup_{0 \le t \le T \wedge \tilde\tau^{(N)}_k(B)}|m^k_N(t)| \ge \delta\Big] \le 2\exp(-\delta^2N/2B_kK_*T)

for all $0 < \delta \le B_kK_*T$, where $K_* := J_*^2e^{J_*}$, and $J_*$ is as in (1.2).
|
Proof. Take $\theta = e^{(k)}\beta$, for $\beta$ to be chosen later. We shall argue by stopping the local martingale $Z_{N,\theta}$ at time $\sigma^{(N)}(k,\delta)$, where

    \sigma^{(N)}(k,\delta) := T \wedge \tilde\tau^{(N)}_k(B) \wedge \inf\{t\colon m^k_N(t) \ge \delta\}.

Note that $e^{N^{-1}\theta^TJ} \le e^{J_*}$, so long as $|\beta| \le N$, so that

    D^2v_N(\xi,r\theta)[\theta,\theta] \le N^{-2}\Big(\sum_{J\colon J_k \ne 0}\alpha_J(\xi)\Big)\beta^2K_*.

Thus, from (4.19), we have

    \varphi_{N,\theta}(x_N(u-)) \le \tfrac{1}{2}N^{-1}B_k\beta^2K_*, \qquad u \le \tilde\tau^{(N)}_k(B),

and hence, on the event that $\sigma^{(N)}(k,\delta) = \inf\{t\colon m^k_N(t) \ge \delta\} \le (T \wedge \tilde\tau^{(N)}_k(B))$, we have

    Z_{N,\theta}(\sigma^{(N)}(k,\delta)) \ge \exp\{\beta\delta - \tfrac{1}{2}N^{-1}B_k\beta^2K_*T\}.

But since $Z_{N,\theta}(0) = 1$, it now follows from the optional stopping theorem and Fatou's lemma that

    1 \ge \mathbb{E}\{Z_{N,\theta}(\sigma^{(N)}(k,\delta))\} \ge \mathbb{P}\Big[\sup_{0 \le t \le T \wedge \tilde\tau^{(N)}_k(B)} m^k_N(t) \ge \delta\Big]\exp\{\beta\delta - \tfrac{1}{2}N^{-1}B_k\beta^2K_*T\}.

We can choose $\beta = \delta N/B_kK_*T$, as long as $\delta/B_kK_*T \le 1$, obtaining

    \mathbb{P}\Big[\sup_{0 \le t \le T \wedge \tilde\tau^{(N)}_k(B)} m^k_N(t) \ge \delta\Big] \le \exp(-\delta^2N/2B_kK_*T).

Repeating the argument with $\theta = -e^{(k)}\beta$ and with

    \tilde\sigma^{(N)}(k,\delta) := T \wedge \tilde\tau^{(N)}_k(B) \wedge \inf\{t\colon -m^k_N(t) \ge \delta\},

again choosing $\beta = \delta N/B_kK_*T$, gives the lemma.
|
The preceding lemma gives a bound for each individual component of $M_N$. We need first to translate this into a statement for all components simultaneously. For $\zeta$ as in Assumption 4.2, we start by writing

    Z^{(1)}_* := \max_{k \ge 1} k^{-1}\#\{m\colon \zeta(m) \le k\}; \qquad Z^{(2)}_* := \sup_{k \ge 0}\mu(k)(|A_{kk}| + 1)/\sqrt{\zeta(k)}.    (4.21)

$Z^{(2)}_*$ is clearly finite, because of Assumption 4.2, and the same is true for $Z^{(1)}_*$ also, since $Z$ of Assumption 4.2 is at least $\#\{m\colon \zeta(m) \le k\}/\sqrt{k}$, for each $k$. Then, using the definition (2.24) of $\tau^{(N)}(a,\zeta)$, note that, for every $k$,

    \sum_{J\colon J_k \ne 0}\alpha_J(x_N(t))h(k) \le \sum_{J\colon J_k \ne 0}\alpha_J(x_N(t))\frac{h(k)\,d(J,\zeta)}{|J_k|\zeta(k)} \le \frac{ah(k)}{\zeta(k)},    (4.22)

for any $t < \tau^{(N)}(a,\zeta)$ and any $h \in \mathcal{R}$, and that, for any $\mathcal{K} \subseteq \mathbb{Z}_+$,

    \sum_{k \in \mathcal{K}}\sum_{J\colon J_k \ne 0}\alpha_J(x_N(t))h(k) \le \sum_{k \in \mathcal{K}}\sum_{J\colon J_k \ne 0}\alpha_J(x_N(t))\frac{h(k)\,d(J,\zeta)}{|J_k|\zeta(k)} \le \frac{a}{\min_{k \in \mathcal{K}}(\zeta(k)/h(k))}.    (4.23)

From (4.22) with $h(k) = 1$ for all $k$, if we choose $B := (a/\zeta(k),\ k \ge 0)$, then $\tau^{(N)}(a,\zeta) \le \tilde\tau^{(N)}_k(B)$ for all $k$. For this choice of $B$, we can take

    \delta_k^2 := \delta_k^2(a) := \frac{4aK_*T\log N}{N\zeta(k)} = \frac{4B_kK_*T\log N}{N}    (4.24)

in Lemma 4.4 for $k \in \kappa_N(a)$, where

    \kappa_N(a) := \big\{k\colon \zeta(k) \le \tfrac{1}{4}aK_*TN/\log N\big\} = \{k\colon B_k \ge 4\log N/K_*TN\},    (4.25)

since then $\delta_k(a) \le B_kK_*T$. Note that then, from (4.12),

    \sum_{k \in \kappa_N(a)}\mu(k)\delta_k(a) \le 2Z\sqrt{aK_*TN^{-1}\log N},    (4.26)

with $Z$ as defined in Assumption 4.2, and that

    |\kappa_N(a)| \le \tfrac{1}{4}aZ^{(1)}_*K_*TN/\log N.    (4.27)
|
Lemma 4.5 If Assumptions 4.2 are satisfied, taking $\delta_k(a)$ and $\kappa_N(a)$ as defined in (4.24) and (4.25), and for any $\eta \in \mathcal{R}$, we have

    1. \quad \mathbb{P}\Big[\bigcup_{k \in \kappa_N(a)}\Big\{\sup_{0 \le t \le T \wedge \tau^{(N)}(a,\zeta)}|m^k_N(t)| \ge \delta_k(a)\Big\}\Big] \le \frac{aZ^{(1)}_*K_*T}{2N\log N};

    2. \quad \mathbb{P}\Big[\sum_{k \notin \kappa_N(a)} X^k_N(t) = 0 \text{ for all } 0 \le t \le T \wedge \tau^{(N)}(a,\zeta)\Big] \ge 1 - \frac{4\log N}{K_*N};

    3. \quad \sup_{0 \le t \le T \wedge \tau^{(N)}(a,\zeta)}\Big\{\sum_{k \notin \kappa_N(a)}\eta(k)|F_k(x_N(t))|\Big\} \le \frac{aJ_*}{\min_{k \notin \kappa_N(a)}(\zeta(k)/\eta(k))}.
|
Proof. For part 1, use Lemma 4.4 together with (4.24) and (4.27) to give the bound. For part 2, the total rate of jumps into coordinates with indices $k \notin \kappa_N(a)$ is

    \sum_{k \notin \kappa_N(a)}\sum_{J\colon J_k \ne 0}\alpha_J(x_N(t)) \le \frac{a}{\min_{k \notin \kappa_N(a)}\zeta(k)},

if $t \le \tau^{(N)}(a,\zeta)$, using (4.23) with $\mathcal{K} = (\kappa_N(a))^c$, which, combined with (4.25), proves the claim. For the final part, if $t \le \tau^{(N)}(a,\zeta)$,

    \sum_{k \notin \kappa_N(a)}\eta(k)|F_k(x_N(t))| \le \sum_{k \notin \kappa_N(a)}\eta(k)\sum_{J\colon J_k \ne 0}\alpha_J(x_N(t))J_*,

and the inequality follows once more from (4.23).
|
Let $B^{(1)}_N(a)$ and $B^{(2)}_N(a)$ denote the events

    B^{(1)}_N(a) := \Big\{\sum_{k \notin \kappa_N(a)} X^k_N(t) = 0 \text{ for all } 0 \le t \le T \wedge \tau^{(N)}(a,\zeta)\Big\};
    B^{(2)}_N(a) := \bigcap_{k \in \kappa_N(a)}\Big\{\sup_{0 \le t \le T \wedge \tau^{(N)}(a,\zeta)}|m^k_N(t)| \le \delta_k(a)\Big\},    (4.28)

and set $B_N(a) := B^{(1)}_N(a) \cap B^{(2)}_N(a)$. Then, by Lemma 4.5, we deduce that

    \mathbb{P}[B_N(a)^c] \le \frac{aZ^{(1)}_*K_*T}{2N\log N} + \frac{4\log N}{K_*N},    (4.29)

of order $O(N^{-1}\log N)$ for each fixed $a$. Thus we have all the components of $M_N$ simultaneously controlled, except on a set of small probability. We now translate this into the desired assertion about the fluctuations of $\tilde m_N$.
|
Lemma 4.6 If Assumptions 4.2 are satisfied, then, on the event $B_N(a)$,

    \sup_{0 \le t \le T \wedge \tau^{(N)}(a,\zeta)}\|\tilde m_N(t)\|_\mu \le \sqrt{a}\,K_{4.6}\sqrt{\frac{\log N}{N}},

where the constant $K_{4.6}$ depends on $T$ and the parameters of the process.
|
Proof. From Lemma 4.3, it follows that

    \sup_{0 \le t \le T \wedge \tau^{(N)}(a,\zeta)}\|\tilde m_N(t)\|_\mu
        \le \sup_{0 \le t \le T \wedge \tau^{(N)}(a,\zeta)}\|m_N(t)\|_\mu + \sup_{0 \le t \le T \wedge \tau^{(N)}(a,\zeta)}\int_0^t\|R(t-s)Am_N(s)\|_\mu\,ds.    (4.30)

For the first term, on $B_N(a)$ and for $0 \le t \le T \wedge \tau^{(N)}(a,\zeta)$, we have

    \|m_N(t)\|_\mu \le \sum_{k \in \kappa_N(a)}\mu(k)\delta_k(a) + \int_0^t\sum_{k \notin \kappa_N(a)}\mu(k)|F_k(x_N(u))|\,du.

The first sum is bounded using (4.26) by $2Z\sqrt{aK_*T}\,N^{-1/2}\sqrt{\log N}$, the second, from Lemma 4.5 and (4.25), by

    \frac{TaJ_*}{\min_{k \notin \kappa_N(a)}(\zeta(k)/\mu(k))} \le Z^{(2)}_*2J_*\sqrt{\frac{Ta}{K_*}}\sqrt{\frac{\log N}{N}}.

For the second term in (4.30), from (3.7) and (4.16), we note that

    \|R(t-s)Am_N(s)\|_\mu \le \sum_{k \ge 0}\mu(k)\sum_{l \ge 0} R_{kl}(t-s)\sum_{r \ge 0}|A_{lr}||m^r_N(s)|
        \le e^{w(t-s)}\sum_{l \ge 0}\mu(l)\sum_{r \ge 0}|A_{lr}||m^r_N(s)|
        \le e^{w(t-s)}\sum_{r \ge 0}\mu(r)\{2|A_{rr}| + w\}|m^r_N(s)|.

On $B_N(a)$ and for $0 \le s \le T \wedge \tau^{(N)}(a,\zeta)$, from (4.12), the sum for $r \in \kappa_N(a)$ is bounded using

    \sum_{r \in \kappa_N(a)}\mu(r)\{2|A_{rr}| + w\}|m^r_N(s)| \le \sum_{r \in \kappa_N(a)}\mu(r)\{2|A_{rr}| + w\}\delta_r(a)
        \le \sum_{r \in \kappa_N(a)}\mu(r)\{2|A_{rr}| + w\}\sqrt{\frac{4aK_*T\log N}{N\zeta(r)}}
        \le (2 \vee w)Z\sqrt{4aK_*T}\sqrt{\frac{\log N}{N}}.

The remaining sum is then bounded by Lemma 4.5, on the set $B_N(a)$ and for $0 \le s \le T \wedge \tau^{(N)}(a,\zeta)$, giving at most

    \sum_{r \notin \kappa_N(a)}\mu(r)\{2|A_{rr}| + w\}|m^r_N(s)|
        \le \sum_{r \notin \kappa_N(a)}\mu(r)\{2|A_{rr}| + w\}\int_0^s|F_r(x_N(t))|\,dt
        \le \frac{(2 \vee w)saJ_*}{\min_{k \notin \kappa_N(a)}(\zeta(k)/\mu(k)\{|A_{kk}| + 1\})}
        \le (2 \vee w)Z^{(2)}_*2J_*\sqrt{\frac{Ta}{K_*}}\sqrt{\frac{\log N}{N}}.

Integrating, it follows that

    \sup_{0 \le t \le T \wedge \tau^{(N)}(a,\zeta)}\int_0^t\|R(t-s)Am_N(s)\|_\mu\,ds
        \le (2T \vee 1)e^{wT}\Big\{\sqrt{4aK_*T}\,Z + Z^{(2)}_*2J_*\sqrt{\frac{Ta}{K_*}}\Big\}\sqrt{\frac{\log N}{N}},

and the lemma follows.
|
This has now established the control on $\sup_{0 \le t \le T}\|\tilde m_N(t)\|_\mu$ that we need, in order to translate (4.10) into a proof of the main theorem.
|
Theorem 4.7 Suppose that (1.2), (1.3), (3.1), (3.2) and (4.1) are all satisfied, and that Assumptions 2.1 and 4.2 hold. Recalling the definition (4.13) of $\rho(\zeta,\mu)$, for $\zeta$ as given in Assumption 4.2, suppose that $S^{(N)}_{\rho(\zeta,\mu)}(0) \le NC^*$ for some $C^* < \infty$.

Let $x$ denote the solution to (4.2) with initial condition $x(0)$ satisfying $S_{\rho(\zeta,\mu)}(x(0)) < \infty$. Then $t_{\max} = \infty$.

Fix any $T$, and define $\Xi_T := \sup_{0 \le t \le T}\|x(t)\|_\mu$. If $\|x_N(0) - x(0)\|_\mu \le \tfrac{1}{2}\Xi_Te^{-(w + k^*)T}$, where $k^* := e^{wT}K(\mu,F;2\Xi_T)$, then there exist constants $c_1, c_2$, depending on $C^*$, $T$ and the parameters of the process, such that, for all $N$ large enough,

    \mathbb{P}\bigg(\sup_{0 \le t \le T}\|x_N(t) - x(t)\|_\mu > \Big(e^{wT}\|x_N(0) - x(0)\|_\mu + c_1\sqrt{\frac{\log N}{N}}\Big)e^{k^*T}\bigg) \le c_2\frac{\log N}{N}.    (4.31)
|
Proof. As $S^{(N)}_{\rho(\zeta,\mu)}(0) \le NC^*$, it follows also that $S^{(N)}_r(0) \le NC^*$ for all $0 \le r \le \rho(\zeta,\mu)$. Fix any $T < t_{\max}$, take $C := 2(C^* + k_{04}T)e^{k_{01}T}$, and observe that, for $r \le \rho(\zeta,\mu) \wedge r^{(2)}_{\max}$, and such that $p(r) \le \rho(\zeta,\mu)$, we can take

    C''_{rT} \le \tilde C_{rT} := \{2(C^* \vee 1) + k_{r4}T\}e^{(k_{r1} + Ck_{r2})T},    (4.32)

in Theorem 2.4, since we can take $C^*$ to bound $C_r$ and $C'_r$. In particular, $r = r(\zeta)$ as defined in Assumption 4.2 satisfies both the conditions on $r$ for (4.32) to hold. Then, taking $a := \{k_2 + k_1\tilde C_{r(\zeta)T}\}^{b(\zeta)}$ in Corollary 2.5, it follows that, for some constant $c_3 > 0$, on the event $B_N(a)$,

    \mathbb{P}[\tau^{(N)}(a,\zeta) \le T] \le c_3N^{-1}.

Then, from (4.29), for some constant $c_4$, $\mathbb{P}[B_N(a)^c] \le c_4N^{-1}\log N$. Here, the constants $c_3, c_4$ depend on $C^*$, $T$ and the parameters of the process.

We now use Lemma 4.6 to bound the martingale term in (4.10). It follows that, on the event $B_N(a) \cap \{\tau^{(N)}(a,\zeta) > T\}$ and on the event that $\|x_N(s) - x(s)\|_\mu \le \Xi_T$ for all $0 \le s \le t$,

    \|x_N(t) - x(t)\|_\mu \le \Big(e^{wT}\|x_N(0) - x(0)\|_\mu + \sqrt{a}\,K_{4.6}\sqrt{\frac{\log N}{N}}\Big) + k^*\int_0^t\|x_N(s) - x(s)\|_\mu\,ds,

where $k^* := e^{wT}K(\mu,F;2\Xi_T)$. Then from Gronwall's inequality, on the event $B_N(a) \cap \{\tau^{(N)}(a,\zeta) > T\}$,

    \|x_N(t) - x(t)\|_\mu \le \Big(e^{wT}\|x_N(0) - x(0)\|_\mu + \sqrt{a}\,K_{4.6}\sqrt{\frac{\log N}{N}}\Big)e^{k^*t},    (4.33)

for all $0 \le t \le T$, provided that

    \Big(e^{wT}\|x_N(0) - x(0)\|_\mu + \sqrt{a}\,K_{4.6}\sqrt{\frac{\log N}{N}}\Big) \le \Xi_Te^{-k^*T}.

This is true for all $N$ sufficiently large, if $\|x_N(0) - x(0)\|_\mu \le \tfrac{1}{2}\Xi_Te^{-(w + k^*)T}$, which we have assumed. We have thus proved (4.31), since, as shown above, $\mathbb{P}(B_N(a)^c \cup \{\tau^{(N)}(a,\zeta) > T\}^c) = O(N^{-1}\log N)$.

We now use this to show that in fact $t_{\max} = \infty$. For $x(0)$ as above, we can take $x^j_N(0) := N^{-1}\lfloor Nx^j(0)\rfloor \le x^j(0)$, so that $S^{(N)}_{\rho(\zeta,\mu)}(0) \le NC^*$ for $C^* := S_{\rho(\zeta,\mu)}(x(0)) < \infty$. Then, by (4.13), $\lim_{j\to\infty}\{\mu(j)/\nu_{\rho(\zeta,\mu)}(j)\} = 0$, so it follows easily using bounded convergence that $\|x_N(0) - x(0)\|_\mu \to 0$ as $N \to \infty$. Hence, for any $T < t_{\max}$, it follows from (4.31) that $\|x_N(t) - x(t)\|_\mu \to_D 0$ as $N \to \infty$, for $t \le T$, with uniform bounds over the interval, where '$\to_D$' denotes convergence in distribution. Also, by Assumption 4.2, there is a constant $c_5$ such that $\|x_N(t)\|_\mu \le c_5N^{-1}S^{(N)}_{r_\mu}(t)$ for each $t$, where $r_\mu \le r^{(2)}_{\max}$ and $r_\mu \le \rho(\zeta,\mu)$. Hence, using Lemma 2.3 and Theorem 2.4, $\sup_{0 \le t \le 2T}\|x_N(t)\|_\mu$ remains bounded in probability as $N \to \infty$. Hence it is impossible that $\|x(t)\|_\mu \to \infty$ as $T \to t_{\max} < \infty$, implying that in fact $t_{\max} = \infty$ for such $x(0)$.
|
Remark. The dependence on the initial conditions is considerably complicated by the way the constant $C$ appears in the exponent, for instance in the expression for $\tilde C_{rT}$ in the proof of Theorem 4.7. However, if $k_{r2}$ in Assumptions 2.1 can be chosen to be zero, as for instance in the examples below, the dependence simplifies correspondingly.
|
There are biologically plausible models in which the restriction to $J_l \ge -1$ is irksome. In populations in which members of a given type $l$ can fight one another, a natural possibility is to have a transition $J = -2e^{(l)}$ at a rate proportional to $X_l(X_l - 1)$, which translates to $\alpha_J = \alpha^{(N)}_J = \gamma x_l(x_l - N^{-1})$, a function depending on $N$. Replacing this with $\alpha_J = \gamma(x_l)^2$ removes the $N$-dependence, but yields a process that can jump to negative values of $X_l$. For this reason, it is useful to be able to allow the transition rates $\alpha_J$ to depend on $N$.

Since the arguments in this paper are not limiting arguments for $N \to \infty$, it does not require many changes to derive the corresponding results. Quantities such as $A$, $F$, $U_r(x)$ and $V_r(x)$ now depend on $N$; however, Theorem 4.7 continues to hold with constants $c_1$ and $c_2$ that do not depend on $N$, provided that $\mu$, $w$, $\nu$, the $k_{lm}$ from Assumption 2.1 and $\zeta$ from Assumption 4.2 can be chosen to be independent of $N$, and that the quantities $Z^{(l)}_*$ from (4.21) can be bounded uniformly in $N$. On the other hand, the solution $x = x^{(N)}$ of (4.2) that acts as approximation to $x_N$ in Theorem 4.7 now itself depends on $N$, through $R = R^{(N)}$ and $F = F^{(N)}$. If $A$ (and hence $R$) can be taken to be independent of $N$, and $\lim_{N\to\infty}\|F^{(N)} - F\|_\mu = 0$ for some fixed $\mu$-Lipschitz function $F$, a Gronwall argument can be used to derive a bound for the difference between $x^{(N)}$ and the (fixed) solution $x$ to equation (4.2) with $N$-independent $R$ and $F$. If $A$ has to depend on $N$, the situation is more delicate.
|
5 Examples |
|
We begin with some general remarks, to show that the assumptions are satisfied in many practical contexts. We then discuss two particular examples, those of Kretzschmar (1993) and of Arrigoni (2003), that fitted poorly or not at all into the general setting of Barbour & Luczak (2008), though the other systems referred to in the introduction could also be treated similarly. In both of our chosen examples, the index $j$ represents a number of individuals — parasites in a host in the first, animals in a patch in the second — and we shall for now use the former terminology for the preliminary, general discussion.
|
Transitions that can typically be envisaged are: births of a few parasites, which may occur either in the same host, or in another, if infection is being represented; births and immigration of hosts, with or without parasites; migration of parasites between hosts; deaths of parasites; deaths of hosts; and treatment of hosts, leading to the deaths of many of the host's parasites. For births of parasites, there is a transition $X \to X + J$, where $J$ takes the form

    J_l = 1; \quad J_m = -1; \quad J_j = 0,\ j \ne l, m,    (5.1)

indicating that one $m$-host has become an $l$-host. For births of parasites within a host, a transition rate of the form $b_{l-m}mX_m$ could be envisaged, with $l > m$, the interpretation being that there are $X_m$ hosts with parasite burden $m$, each of which gives birth to $s$ offspring at rate $b_s$, for some small values of $s$. For infection of an $m$-host, a possible transition rate would be of the form

    X_m\sum_{j \ge 0} N^{-1}X_j\lambda p_{j,l-m},

since an $m$-host comes into contact with $j$-hosts at a rate proportional to their density in the host population, and $p_{jr}$ represents the probability of a $j$-host transferring $r$ parasites to the infected host during the contact. The probability distributions $p_{j\cdot}$ can be expected to be stochastically increasing in $j$. Deaths of parasites also give rise to transitions of the form (5.1), but now with $l < m$, the simplest form of rate being just $dmX_m$ for $l = m - 1$, though $d = d_m$ could also be chosen to increase with parasite burden. Treatment of a host would lead to values of $l$ much smaller than $m$, and a rate of the form $\kappa X_m$ for the transition with $l = 0$ would represent fully successful treatment of randomly chosen individuals. Births and deaths of hosts and immigration all lead to transitions of the form

    J_l = \pm 1; \quad J_j = 0,\ j \ne l.    (5.2)

For deaths, $J_l = -1$, and a typical rate would be $d'X_l$. For births, $J_l = 1$, and a possible rate would be $\sum_{j \ge 0} X_jb'_{jl}$ (with $l = 0$ only, if new-born individuals are free of parasites). For immigration, constant rates $\lambda_l$ could be supposed. Finally, for migration of individual parasites between hosts, transitions are of the form

    J_l = J_m = -1; \quad J_{l+1} = 1; \quad J_{m-1} = 1; \quad J_j = 0,\ j \ne l, m, l+1, m-1,    (5.3)

a possible rate being $\gamma mX_mN^{-1}X_l$.
|
For all the above transitions, we can take $J_* = 2$ in (1.2), and (1.3) is satisfied in biologically sensible models. (3.1) and (3.2) depend on the way in which the matrix $A$ can be defined, which is more model specific; in practice, (3.1) is very simple to check. The choice of $\mu$ in (3.2) is influenced by the need to have (4.1) satisfied. For Assumptions 2.1, a possible choice of $\nu$ is to take $\nu(j) = (j+1)$ for each $j \ge 0$, with $S_1(X)$ then representing the number of hosts plus the number of parasites. Satisfying (2.5) is then easy for transitions only involving the movement of a single parasite, but in general requires assumptions as to the existence of the $r$-th moments of the distributions of the numbers of parasites introduced at birth, immigration and infection events. For (2.6), in which transitions involving a net reduction in the total number of parasites and hosts can be disregarded, the parasite birth events are those in which the rates typically have a factor $mX_m$ for transitions with $J_m = -1$, with $m$ in principle unbounded. However, at such events, an $m$-individual changes to an $(m+s)$-individual, with the number $s$ of offspring of the parasite being typically small, so that the value of $J^T\nu_r$ associated with this rate has magnitude $m^{r-1}$; the product $mX_mm^{r-1}$, when summed over $m$, then yields a contribution of magnitude $S_r(X)$, which is allowable in (2.6). Similar considerations show that the terms $N^{-1}S_0(X)S_r(X)$ accommodate the migration rates suggested above. Finally, in order to have Assumptions 4.2 satisfied, it is in practice necessary that Assumptions 2.1 are satisfied for large values of $r$, thereby imposing restrictions on the distributions of the numbers of parasites introduced at birth, immigration and infection events, as above.
|
5.1 Kretzschmar’s model |
|
Kretzschmar (1993) introduced a model of a parasitic infection, in which the transitions from state $X$ are as follows:

    J = e^{(i-1)} - e^{(i)} \quad\text{at rate } Ni\mu x_i,\ i \ge 1;
    J = -e^{(i)} \quad\text{at rate } N(\kappa + i\alpha)x_i,\ i \ge 0;
    J = e^{(0)} \quad\text{at rate } N\beta\sum_{i \ge 0} x_i\theta^i;
    J = e^{(i+1)} - e^{(i)} \quad\text{at rate } N\lambda x_i\varphi(x),\ i \ge 0,

where $x := N^{-1}X$, $\varphi(x) := \|x\|_{11}\{c + \|x\|_1\}^{-1}$ with $c > 0$, and $\|x\|_{11} := \sum_{j \ge 1} j|x_j|$; here, $0 \le \theta \le 1$, and $\theta^i$ denotes its $i$-th power (our $\theta$ corresponds to the constant $\xi$ in [7]). Both (1.2) and (1.3) are obviously satisfied. For Assumptions (3.1), (3.2) and (4.1), we note that the equation corresponding to (1.5) has

    A_{ii} = -\{\kappa + i(\alpha + \mu)\}; \quad A^T_{i,i-1} = i\mu \quad\text{and}\quad A^T_{i0} = \beta\theta^i,\ i \ge 2;
    A_{11} = -\{\kappa + \alpha + \mu\}; \quad A^T_{10} = \mu + \beta\theta;
    A_{00} = -\kappa + \beta,

with all other elements of the matrix equal to zero, and

    F_i(x) = \lambda(x_{i-1} - x_i)\varphi(x),\ i \ge 1; \qquad F_0(x) = -\lambda x_0\varphi(x).

Hence Assumption (3.1) is immediate, and Assumption (3.2) holds for $\mu(j) = (j+1)^s$, for any $s \ge 0$, with $w = (\beta - \kappa)_+$. For the choice $\mu(j) = j+1$, $F$ maps elements of $\mathcal{R}_\mu$ to $\mathcal{R}_\mu$, and is also locally Lipschitz in the $\mu$-norm, with $K(\mu,F;\Xi) = c^{-2}\lambda\Xi(2c + \Xi)$.
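For illustration, the transition rates listed above can be coded directly as pairs $(J, \alpha_J(x))$. The structure follows the model as stated; the numerical parameter values below are placeholders only, not estimates taken from Kretzschmar (1993).

    # Kretzschmar's model (Section 5.1): yield (J, alpha_J(x)) for a normalised
    # state x given as a dict i -> x_i.  The process jumps X -> X + J at rate
    # N * alpha_J(x).  Parameter values are placeholders; mu_ is the per-parasite
    # death rate of the model (not the weight function mu of Section 3).

    mu_, kappa, alpha, beta, lam, theta, c = 1.0, 0.2, 0.3, 0.5, 2.0, 0.6, 1.0

    def phi(x):
        norm1 = sum(x.values())                          # ||x||_1
        norm11 = sum(i * xi for i, xi in x.items())      # ||x||_{11} = sum_i i x_i
        return norm11 / (c + norm1)

    def kretzschmar_rates(x):
        p = phi(x)
        for i, xi in x.items():
            if xi <= 0:
                continue
            if i >= 1:
                yield ({i - 1: +1, i: -1}, i * mu_ * xi)          # parasite death
            yield ({i: -1}, (kappa + i * alpha) * xi)             # host death
            yield ({i + 1: +1, i: -1}, lam * xi * p)              # infection
        yield ({0: +1}, beta * sum(xi * theta ** i for i, xi in x.items()))  # host birth

    for J, rate in kretzschmar_rates({0: 0.5, 2: 0.5}):
        print(J, round(rate, 4))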
|
For Assumptions 2.1, choose $\nu = \mu$; then (2.5) is a finite sum for each $r \ge 0$. Turning to (2.6), it is immediate that $U_0(x) \le \beta S_0(x)$. Then, for $r \ge 1$,

    \sum_{i \ge 0}\lambda\varphi(N^{-1}X)X_i\{(i+2)^r - (i+1)^r\} \le \lambda\frac{S_1(X)}{S_0(X)}\sum_{i \ge 0} rX_i(i+2)^{r-1} \le r2^{r-1}\lambda S_r(X),

since, by Jensen's inequality, $S_1(X)S_{r-1}(X) \le S_0(X)S_r(X)$. Hence we can take $k_{r2} = k_{r4} = 0$ and $k_{r1} = \beta + r2^{r-1}\lambda$ in (2.6), for any $r \ge 1$, so that $r^{(1)}_{\max} = \infty$. Finally, for (2.7),

    V_0(x) \le (\kappa + \beta)S_0(x) + \alpha S_1(x),

so that $k_{03} = \kappa + \beta + \alpha$ and $k_{05} = 0$, and

    V_r(x) \le r^2\big(\kappa S_{2r}(x) + \alpha S_{2r+1}(x) + \mu S_{2r-1}(x) + 2^{2(r-1)}\lambda S_{2r-1}(x)\big) + \beta S_0(x),

so that we can take $p(r) = 2r+1$, $k_{r3} = \beta + r^2\{\kappa + \alpha + \mu + 2^{2(r-1)}\lambda\}$, and $k_{r5} = 0$ for any $r \ge 1$, and so $r^{(2)}_{\max} = \infty$. In Assumptions 4.2, we can clearly take $r_\mu = 1$ and $\zeta(k) = (k+1)^7$, giving $r(\zeta) = 8$, $b(\zeta) = 1$ and $\rho(\zeta,\mu) = 17$.
|
5.2 Arrigoni’s model |
|
In the metapopulation model of Arrigoni (2003), the transitions from state $X$ are as follows:

    J = e^{(i-1)} - e^{(i)} \quad\text{at rate } Nix_i(d_i + \gamma(1-\rho)),\ i \ge 2;
    J = e^{(0)} - e^{(1)} \quad\text{at rate } Nx_1(d_1 + \gamma(1-\rho) + \kappa);
    J = e^{(i+1)} - e^{(i)} \quad\text{at rate } Nib_ix_i,\ i \ge 1;
    J = e^{(0)} - e^{(i)} \quad\text{at rate } Nx_i\kappa,\ i \ge 2;
    J = e^{(k+1)} - e^{(k)} + e^{(i-1)} - e^{(i)} \quad\text{at rate } Nix_ix_k\rho\gamma,\ k \ge 0,\ i \ge 1;

as before, $x := N^{-1}X$. Here, the total number $N = \sum_{j \ge 0} X_j = S_0(X)$ of patches remains constant throughout, and the number of animals in any one patch changes by at most one at each transition; in the final (migration) transition, however, the numbers in two patches change simultaneously. In the above transitions, $\gamma, \rho, \kappa$ are non-negative, and $(d_i), (b_i)$ are sequences of non-negative numbers.

Once again, both (1.2) and (1.3) are obviously satisfied. The equation corresponding to (1.4) can now be expressed by taking

    A_{ii} = -\{\kappa + i(b_i + d_i + \gamma)\}; \quad A^T_{i,i-1} = i(d_i + \gamma); \quad A^T_{i,i+1} = ib_i,\ i \ge 1;
    A_{00} = -\kappa,

with all other elements of $A$ equal to zero, and

    F_i(x) = \rho\gamma\|x\|_{11}(x_{i-1} - x_i),\ i \ge 1; \qquad F_0(x) = -\rho\gamma x_0\|x\|_{11} + \kappa,

where we have used the fact that $N^{-1}\sum_{j \ge 0} X_j = 1$. Hence Assumption (3.1) is again immediate, and Assumption (3.2) holds for $\mu(j) = 1$ with $w = 0$, for $\mu(j) = j+1$ with $w = \max_i(b_i - d_i - \gamma - \kappa)_+$ (assuming $(b_i)$ and $(d_i)$ to be such that this is finite), or indeed for $\mu(j) = (j+1)^s$ with any $s \ge 2$, with appropriate choice of $w$. With the choice $\mu(j) = j+1$, $F$ again maps elements of $\mathcal{R}_\mu$ to $\mathcal{R}_\mu$, and is also locally Lipschitz in the $\mu$-norm, with $K(\mu,F;\Xi) = 3\rho\gamma\Xi$.
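The same can be done for Arrigoni's transitions; the coding below follows the rates as listed, while the parameter values and the sequences $(b_i)$, $(d_i)$ are placeholders chosen only for the sketch.

    # Arrigoni's metapopulation model (Section 5.2): yield (J, alpha_J(x)) for a
    # normalised state x (dict i -> x_i, with sum_i x_i = 1).  The process jumps
    # X -> X + J at rate N * alpha_J(x).  Parameters are placeholders.

    gamma, rho, kappa = 0.8, 0.4, 0.1
    def b(i): return 1.0          # hypothetical per-animal birth rates b_i
    def d(i): return 0.5          # hypothetical per-animal death rates d_i

    def arrigoni_rates(x):
        for i, xi in x.items():
            if xi <= 0 or i < 1:
                continue
            if i >= 2:
                yield ({i - 1: +1, i: -1}, i * xi * (d(i) + gamma * (1 - rho)))  # death/emigration loss
                yield ({0: +1, i: -1}, kappa * xi)                               # catastrophe empties the patch
            else:  # i == 1: death/emigration loss and catastrophe coincide
                yield ({0: +1, 1: -1}, xi * (d(1) + gamma * (1 - rho) + kappa))
            yield ({i + 1: +1, i: -1}, i * b(i) * xi)                            # birth within the patch
            for k, xk in x.items():                                              # migration from an i- to a k-patch
                if xk <= 0:
                    continue
                J = {i: -1, i - 1: +1}
                J[k + 1] = J.get(k + 1, 0) + 1
                J[k] = J.get(k, 0) - 1
                J = {l: v for l, v in J.items() if v != 0}   # empty when k = i - 1 (sizes just swap)
                yield (J, i * xi * xk * rho * gamma)

    for J, rate in arrigoni_rates({0: 0.3, 1: 0.4, 3: 0.3}):
        print(J, round(rate, 4))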
|
To check Assumptions 2.1, take $\nu = \mu$; once again, (2.5) is a finite sum for each $r$. Then, for (2.6), it is immediate that $U_0(x) = 0$. For any $r \ge 1$, using arguments from the previous example,

    U_r(x) \le r2^{r-1}\Big\{\sum_{i \ge 1} ib_ix_i(i+1)^{r-1} + \sum_{i \ge 1}\sum_{k \ge 0} i\rho\gamma x_ix_k(k+1)^{r-1}\Big\}
        \le r2^{r-1}\{\max_i b_i\,S_r(x) + \rho\gamma S_1(x)S_{r-1}(x)\}
        \le r2^{r-1}\{\max_i b_i\,S_r(x) + \rho\gamma S_0(x)S_r(x)\},

so that, since $S_0(x) = 1$, we can take $k_{r1} = r2^{r-1}(\max_i b_i + \rho\gamma)$ and $k_{r2} = k_{r4} = 0$ in (2.6), and $r^{(1)}_{\max} = \infty$. Finally, for (2.7), $V_0(x) = 0$ and, for $r \ge 1$,

    V_r(x) \le r^2\Big\{2^{2(r-1)}\max_i b_i\,S_{2r-1}(x) + \max_i(i^{-1}d_i)S_{2r}(x) + \gamma(1-\rho)S_{2r-1}(x)
        + \rho\gamma\big(2^{2(r-1)}S_1(x)S_{2r-2}(x) + S_0(x)S_{2r-1}(x)\big)\Big\} + \kappa S_{2r}(x),

so that we can take $p(r) = 2r$, and (assuming $i^{-1}d_i$ to be finite)

    k_{r3} = \kappa + r^2\{2^{2(r-1)}(\max_i b_i + \rho\gamma) + \max_i(i^{-1}d_i) + \gamma\},

and $k_{r5} = 0$ for any $r \ge 1$, and $r^{(2)}_{\max} = \infty$. In Assumptions 4.2, we can again take $r_\mu = 1$ and $\zeta(k) = (k+1)^7$, giving $r(\zeta) = 8$, $b(\zeta) = 1$ and $\rho(\zeta,\mu) = 16$.
|
Acknowledgement

We wish to thank a referee for recommendations that have substantially streamlined our arguments. ADB wishes to thank both the Institute for Mathematical Sciences of the National University of Singapore and the Mittag-Leffler Institute for providing a welcoming environment while part of this work was accomplished. MJL thanks the University of Zürich for their hospitality on a number of visits.
|
References

[1] Arrigoni, F. (2003). Deterministic approximation of a stochastic metapopulation model. Adv. Appl. Prob. 35, 691–720.

[2] Barbour, A. D. and Kafetzaki, M. (1993). A host–parasite model yielding heterogeneous parasite loads. J. Math. Biology 31, 157–176.

[3] Barbour, A. D. and Luczak, M. J. (2008). Laws of large numbers for epidemic models with countably many types. Ann. Appl. Probab. 18, 2208–2238.

[4] Chow, P.-L. (2007). Stochastic Partial Differential Equations. Chapman and Hall, Boca Raton.

[5] Eibeck, A. and Wagner, W. (2003). Stochastic interacting particle systems and non-linear kinetic equations. Ann. Appl. Probab. 13, 845–889.

[6] Kimmel, M. and Axelrod, D. E. (2002). Branching Processes in Biology. Springer, Berlin.

[7] Kretzschmar, M. (1993). Comparison of an infinite dimensional model for parasitic diseases with a related 2-dimensional system. J. Math. Analysis Applics 176, 235–260.

[8] Kurtz, T. G. (1970). Solutions of ordinary differential equations as limits of pure jump Markov processes. J. Appl. Probab. 7, 49–58.

[9] Kurtz, T. G. (1971). Limit theorems for sequences of jump Markov processes approximating ordinary differential processes. J. Appl. Probab. 8, 344–356.

[10] Léonard, C. (1990). Some epidemic systems are long range interacting particle systems. In: Stochastic Processes in Epidemic Theory, Eds J.-P. Gabriel, C. Lefèvre & P. Picard, Lecture Notes in Biomathematics 86, 170–183. Springer, New York.

[11] Luchsinger, C. J. (1999). Mathematical Models of a Parasitic Disease. Ph.D. thesis, University of Zürich.

[12] Luchsinger, C. J. (2001a). Stochastic models of a parasitic infection, exhibiting three basic reproduction ratios. J. Math. Biol. 42, 532–554.

[13] Luchsinger, C. J. (2001b). Approximating the long term behaviour of a model for parasitic infection. J. Math. Biol. 42, 555–581.

[14] Pazy, A. (1983). Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, Berlin.

[15] Reuter, G. E. H. (1957). Denumerable Markov processes and the associated contraction semigroups on l. Acta Math. 97, 1–46.
|